Associations of serum calcium levels and dietary calcium intake with incident type 2 diabetes over 10 years: the Korean Genome and Epidemiology Study (KoGES) Kyoung-Nam Kim1,2, Se-Young Oh3 & Yun-Chul Hong2,4,5 Previous evidence regarding the associations between serum calcium concentrations, dietary calcium intake, and type 2 diabetes (T2D) is limited. We investigated the longitudinal associations of serum calcium levels and dietary calcium intake with T2D development. This study used data from the Ansung–Ansan cohort, a community-based, prospective cohort that was followed up for 10 years. Cox regression models adjusted for potential confounders were used to evaluate the associations of serum calcium levels (mean, 9.41 mg/dL) and dietary calcium intake (median, 389.59 mg/day) with T2D incidence. The association between dietary calcium intake and serum calcium levels was assessed using linear regression models. Albumin-adjusted serum calcium levels were not associated with T2D risk (hazard ratio [HR] = 1.07, 95% confidence interval [CI] 0.96, 1.19, p-value = 0.2333). A one-unit increase in log-transformed, energy-adjusted dietary calcium intake was associated with a decreased risk of T2D (HR = 0.88, 95% CI 0.77, 1.00, p-value = 0.0460) and lower albumin-adjusted serum calcium levels (β = − 0.04, 95% CI − 0.07, − 0.02, p-value = 0.0014). The associations did not differ according to sex (all p-values for interaction > 0.10). Serum calcium levels were not associated with T2D risk, while higher dietary calcium intake was associated with a decreased risk of T2D development. These results have public health implications for predicting and preventing T2D development, as well as for providing guidelines for diet and calcium supplementation. Type 2 diabetes (T2D) is one of the most important public health issues worldwide, with the past few decades witnessing a rapid increase in the global number of affected patients [1]. Recently, a limited number of studies have reported that higher serum calcium levels are associated with an increased risk of T2D development [2,3,4,5]. However, the association between serum calcium levels and T2D development was not observed in another epidemiological study [6]. In addition, the positive associations between serum calcium levels and T2D were unexpected, considering that higher calcium intake has been associated with lower T2D risk [7, 8]. Because previous evidence on the associations between serum calcium concentrations, dietary calcium intake, and T2D development is inconsistent and limited, further investigation into these complex relationships is needed for predicting and preventing T2D development and for providing scientific guidelines for diet and/or calcium supplementation in the general population. Therefore, in the present study, we evaluated the longitudinal associations of serum calcium levels and dietary calcium intake with the incidence of T2D in a community-based, prospective cohort followed up for 10 years. We also assessed the association between dietary calcium intake and serum calcium levels at the baseline survey. Study design and population The present study was conducted using data from the Ansung–Ansan cohort, an ongoing community-based, prospective cohort. Detailed information on the Ansung–Ansan cohort is presented elsewhere [9]. In brief, 10,038 participants aged 40–69 years who resided in the Ansung or Ansan regions of the Republic of Korea were recruited between 2001 and 2003 using a two-stage cluster sampling method.
Follow-up surveys were conducted biennially. During each survey, participants took part in interviews using structured questionnaires, health examinations, and laboratory tests. Among the 10,038 participants in the Ansung–Ansan cohort, 676 were excluded from analysis because they reported at enrollment that they had been previously diagnosed with T2D. In addition, 560 participants were excluded from analysis because they had prevalent T2D during the baseline survey, as determined by predefined criteria based on fasting glucose concentration (≥ 126 mg/dL), post-load glucose concentration (≥ 200 mg/dL), and antidiabetic medication use. Two participants who lacked information on serum calcium levels were also excluded, resulting in the inclusion of 8800 participants in the final analysis. In summary, the exclusion criteria of the present study were prevalent T2D and missing serum calcium measurements; therefore, individuals with hypertension, dyslipidemia, or chronic kidney disease were included in the analysis. The present study used a sequential modeling approach and conducted several analyses with different covariate sets. Therefore, the number of participants included in each analysis differed between models; this information is presented along with the results. The Ansung–Ansan study protocol was reviewed and approved by the Institutional Review Board of the Korea Centers for Disease Control and Prevention, and all study participants provided written informed consent. The present study was approved by the Ethical Review Board of Seoul National University Hospital (C-1306-046-495). Serum and dietary calcium assessment Serum calcium and albumin were enzymatically measured at the baseline survey using a 747 Chemistry Analyzer (Hitachi, Tokyo, Japan). Since total calcium levels may vary with serum albumin levels due to calcium-albumin binding, serum calcium concentrations were adjusted for albumin concentrations [2] using the following equation: $$\text{Albumin-adjusted serum calcium (mg/dL)} = \text{serum calcium (mg/dL)} + 0.8 \times \left(4 - \text{albumin (g/dL)}\right).$$ The adjusted level was then used as an explanatory variable in the analysis. Dietary data were collected during the baseline survey by trained interviewers using a validated semi-quantitative food frequency questionnaire [10]. The study participants reported the average consumption frequency and portion size of 103 food items during the previous year, and daily nutrient and total energy intake were calculated using the dietary habit information, as well as the nutrient and energy content of each food item. To control for potential confounding by energy intake, daily calcium intake was adjusted for total energy intake using the residual method [11]. Since the distribution of energy-adjusted calcium intake was highly skewed, log-transformed values were used in the analysis. The same analyses were also conducted using log-transformed calcium intake not adjusted for total energy intake, to assess the robustness of the results.
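The two adjustments described above are straightforward to reproduce. The sketch below is a minimal illustration, not the authors' code: the column names and values are hypothetical, and it follows the order described in the text (energy adjustment by the residual method, then log transformation).

```python
import numpy as np
import pandas as pd

# Hypothetical baseline records; column names and values are illustrative only.
df = pd.DataFrame({
    "serum_ca_mg_dl":  [9.2, 9.6, 9.0, 9.8],
    "albumin_g_dl":    [4.1, 3.7, 4.4, 3.9],
    "calcium_mg_day":  [310.0, 540.0, 265.0, 820.0],
    "energy_kcal_day": [1750.0, 2400.0, 1600.0, 2900.0],
})

# Albumin-adjusted serum calcium, per the equation above:
# adjusted Ca = measured Ca + 0.8 * (4 - albumin)
df["ca_albumin_adj"] = df["serum_ca_mg_dl"] + 0.8 * (4.0 - df["albumin_g_dl"])

# Residual-method energy adjustment: regress calcium on total energy, keep each
# person's residual, and add back the predicted intake at the mean energy level
# so the adjusted values stay on the mg/day scale.
slope, intercept = np.polyfit(df["energy_kcal_day"], df["calcium_mg_day"], deg=1)
predicted = intercept + slope * df["energy_kcal_day"]
at_mean_energy = intercept + slope * df["energy_kcal_day"].mean()
df["ca_energy_adj"] = df["calcium_mg_day"] - predicted + at_mean_energy

# The energy-adjusted intake is then log-transformed because of its skewness.
df["log_ca_energy_adj"] = np.log(df["ca_energy_adj"])
print(df[["ca_albumin_adj", "log_ca_energy_adj"]])
```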
Definition of T2D During each survey, a 75-g oral glucose tolerance test was performed for participants who were not known to have T2D and had not started diabetes treatment since the last survey. T2D was defined as any of the following conditions: fasting glucose concentration ≥ 126 mg/dL, post-load 2-h glucose concentration ≥ 200 mg/dL, or antidiabetic medication use. In the present analysis, data for T2D detection were available for every survey from the baseline survey conducted in 2001–2003 up to the fifth follow-up survey conducted in 2011–2012. The associations of albumin-adjusted serum calcium levels and energy-adjusted dietary calcium intake with T2D development were assessed using Cox proportional hazards models with age as the time scale. The potential nonlinearity of the association was assessed through nonparametric analysis that applied spline smoothing methods [12]. We constructed analytical models by sequentially including covariates assessed at the baseline survey, to confirm the robustness of the results. We selected the covariates based on previous literature and assumed biological pathways [2,3,4,5, 13]: (1) adjusted for age and sex; (2) further adjusted for residential area (Ansung or Ansan), monthly family income (< $869, $869–1738, $1738–3475, or ≥ $3475), tobacco smoking (non-smoker, ex-smoker, or current smoker), alcohol intake (non-drinker, ex-drinker, or current drinker), physical activity (none, < 3, or ≥ 3 episodes/week, with each episode defined as exercising for more than 30 min), and body mass index (< 18.5, 18.5–25, 25–30, or ≥ 30 kg/m²; weight divided by height squared); (3) further adjusted for systolic blood pressure (mmHg), diastolic blood pressure (mmHg), and serum creatinine level (mg/dL). Finally, we constructed an analytical model mutually adjusted for serum calcium levels and dietary calcium intake, by including terms for serum calcium levels, dietary calcium intake, and the above-mentioned covariates. To approximate the normal distribution, log-transformed values of dietary calcium intake and serum creatinine level were used in the analysis. We assessed the cross-sectional association between dietary calcium intake and serum calcium levels at the baseline survey using linear regression models adjusted for the same covariate sets. Potential heterogeneity of the associations of serum calcium levels and dietary calcium intake with T2D according to sex was explored because many of the previous studies only examined the association of calcium intake with T2D in women [7, 8, 14]. Interactions with sex were evaluated with the likelihood ratio test, by testing product terms between sex and serum calcium levels or dietary calcium intake added to the main models. We evaluated the robustness of the results by excluding participants who had previously used or were currently using diuretics, because diuretics can also affect serum calcium levels. All analyses were conducted using SAS version 9.4 (SAS Institute Inc., Cary, NC) and R version 3.2.5 (The Comprehensive R Archive Network, Vienna, Austria: http://cran.r-project.org).
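To illustrate the general shape of such an analysis, the sketch below fits a Cox model to a small synthetic dataset using the Python lifelines package. This is not the authors' SAS/R code: the variable names, effect sizes, and simulated data are invented, only three covariates are included, and follow-up time rather than attained age is used as the time scale for simplicity.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumed available: pip install lifelines

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-in for the analysis file; names and effect sizes are made up.
log_ca = rng.normal(6.0, 0.5, n)          # log energy-adjusted calcium (log mg/day)
age = rng.uniform(40, 70, n)
sex_male = rng.integers(0, 2, n)

# Simulate event times with a protective calcium effect (true log HR = -0.13/unit).
rate = 0.02 * np.exp(-0.13 * (log_ca - 6.0) + 0.03 * (age - 55))
event_time = rng.exponential(1.0 / rate)
time = np.minimum(event_time, 10.0)       # administrative censoring at 10 years
event = (event_time <= 10.0).astype(int)

df = pd.DataFrame({"time": time, "t2d": event, "log_ca": log_ca,
                   "age": age, "sex_male": sex_male})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="t2d")
cph.print_summary()  # exp(coef) for log_ca is the HR per 1-unit increase in log calcium
```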
Baseline participant characteristics and associated serum calcium levels are presented in Table 1. The mean age at enrollment was 51.8 years, and there was a slightly higher proportion of women (53%) than men (47%). The majority of the participants were non-smokers (59%), current drinkers (47%), and did not exercise regularly (75%). The mean body mass index was 24.5 kg/m². The mean (standard deviation) albumin-adjusted serum calcium level was 9.41 (0.52) mg/dL (first quartile, 9.12 mg/dL; median, 9.48 mg/dL; third quartile, 9.76 mg/dL). The median (interquartile range) dietary calcium intake was 389.59 (283.71) mg/day (first quartile, 268.94 mg/day; third quartile, 552.65 mg/day). Table 1 Baseline characteristics and albumin-adjusted serum calcium concentrations of the study participants (n = 8800) The associations of serum calcium levels and dietary calcium intake with T2D were robust in penalized regression spline models (Fig. 1). In the fully adjusted Cox proportional hazards models, we found no evidence of an association between serum calcium levels and the risk of T2D (Table 2). However, a one-unit increase in log-transformed, energy-adjusted dietary calcium intake was associated with a lower risk of T2D (Table 3). The association between dietary calcium intake and T2D did not change appreciably between models not adjusted for serum calcium levels (hazard ratio [HR] = 0.87, 95% confidence interval [CI] 0.77, 0.99) and models adjusted for serum calcium levels (HR = 0.88, 95% CI 0.77, 1.00). These results were also robust in analyses using dietary calcium intake not adjusted for total energy intake (data not shown). Penalized regression spline model for the associations of albumin-adjusted serum calcium levels and energy-adjusted dietary calcium intake with incidence of type 2 diabetes. a Association between albumin-adjusted serum calcium levels and risk of type 2 diabetes. b Association between energy-adjusted dietary calcium intake and risk of type 2 diabetes. Solid line, spline curve; shaded area, 95% confidence interval. The models were adjusted for age, sex, residential area, monthly family income, tobacco smoking, alcohol intake, physical activity, body mass index, systolic blood pressure, diastolic blood pressure, and serum creatinine level Table 2 Association between albumin-adjusted serum calcium levels and type 2 diabetes in the Ansung–Ansan cohort of the Korean Genome and Epidemiology Study (KoGES) Table 3 Association between dietary calcium intake and type 2 diabetes in the Ansung–Ansan cohort of the Korean Genome and Epidemiology Study (KoGES) When we assessed the association between dietary calcium intake and serum calcium levels at the baseline survey using linear regression models, dietary calcium intake was inversely associated with serum calcium levels after adjusting for potential confounders (β = − 0.04, 95% CI − 0.07, − 0.02). The associations of serum calcium levels and dietary calcium intake with T2D did not differ according to sex (p > 0.10 for all interactions). After excluding participants who had previously used diuretics (14 participants, 0.16%) or were using diuretics at the baseline survey (8 participants, 0.09%), the results were essentially unchanged.
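Because the exposure is modeled on the log scale, the reported HR of 0.88 applies to a one-unit increase in log(calcium intake), i.e., roughly an e-fold (about 2.7-fold) increase in intake. Assuming natural logarithms were used (the base is not stated explicitly), the estimate can be rescaled to a more interpretable contrast such as a doubling of intake:

```python
import numpy as np

hr_per_log_unit = 0.88           # HR per 1-unit increase in log(energy-adjusted calcium)
ci_per_log_unit = (0.77, 1.00)   # 95% CI from the fully adjusted model

# A doubling of intake corresponds to a change of ln(2) on the natural-log scale,
# so the HR is raised to the power ln(2).
hr_doubling = hr_per_log_unit ** np.log(2)
ci_doubling = tuple(b ** np.log(2) for b in ci_per_log_unit)

print(f"HR for doubling intake: {hr_doubling:.2f} "
      f"(95% CI {ci_doubling[0]:.2f}, {ci_doubling[1]:.2f})")
# -> approximately 0.92 (95% CI 0.83, 1.00)
```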
In the present study of a community-based cohort followed up for 10 years, an association between serum calcium levels and the risk of incident T2D was not evident, while higher dietary calcium intake was associated with a decreased risk of incident T2D. Previous studies on the association between serum calcium levels and T2D have demonstrated inconsistent results. In a population-based prospective cohort study conducted in Norway, higher serum calcium concentrations were associated with an increased risk of incident T2D [3]. In a multicenter epidemiological study in the United States, in which participants were surveyed twice at an average interval of 5.2 years, serum calcium levels from the first survey predicted an increased detection of T2D in the second survey [4]. In another prospective cohort study conducted among individuals with high cardiovascular risk in Spain, higher serum calcium levels were associated with T2D risk [2]. In a retrospective cohort study in China, elevated serum calcium levels were also associated with T2D risk [5]. However, in a prospective cohort study in Finland with a median follow-up of 23.1 years, levels of ionized calcium, a direct measurement of active serum calcium, were not associated with T2D risk [6]. The reason for the inconsistency in the results is not clear. However, differences in population characteristics such as race, genetic background, and vitamin D, parathyroid hormone, and phosphorus levels might be responsible for the inconsistency. In addition, differences in measurement error in the exposure and outcome and in the covariates adjusted for in the analytical models might also contribute to the inconsistency. Because the number of studies exploring this association is limited, further studies, especially those using direct measures of active calcium, are warranted. Four large cohort studies have investigated the association between calcium intake and T2D in the United States (n = 83,779 [7]; n = 41,186 [14]), China (n = 64,191) [8], and Japan (n = 59,796) [15]. These studies reported inverse associations between dietary or total calcium intake and T2D risk among women in the United States, women in China, and individuals with higher vitamin D intake in Japan. Another cohort study conducted in rural areas of Korea (n = 8313) also reported an inverse association between total and vegetable calcium intake and T2D risk among women [16]. However, a relatively small study (n = 5200) in Australia found no association between dietary calcium and T2D [17]. Additionally, studies have provided mixed results when investigating the association between dairy products (a key dietary calcium source) and T2D. One confounder could be the high fat content of some dairy products, which may mitigate the protective effects of calcium [8, 18]. Calcium intake may also depend on non-dairy foods, including tofu, fish, rice, vegetables, and legumes, so the main source of dietary calcium may differ across populations and cultures [8]. Thus, geographically diverse studies may help to evaluate the association between calcium intake and T2D by reducing the effects of this potential confounder. However, the possibility remains that the observed inverse association between dietary calcium intake and T2D risk is attributable to the dietary source of the calcium. In the present study, serum calcium levels were not associated with T2D risk, while dietary calcium intake was inversely associated with T2D risk. This might be because serum calcium levels reflect not only exogenous calcium intake but also the ability to maintain calcium homeostasis. Dietary calcium intake was inversely associated with serum calcium levels. Although vitamin D insufficiency is highly prevalent in the Republic of Korea [19] and lower calcium intake has been reported to be associated with increased parathyroid hormone levels in individuals with low serum vitamin D levels [20], to our knowledge, few studies have directly investigated this inverse association and its underlying mechanisms. Further studies using information on vitamin D or parathyroid hormone are warranted. Calcium reportedly functions in intracellular processes mediated by insulin in skeletal muscle and adipose tissue, and potentially affects insulin sensitivity in these insulin-responsive tissues [21, 22].
Calcium is also essential for insulin secretion from pancreatic β-cells in response to elevated blood glucose levels, acting via voltage-gated calcium channels or large transient receptor potential channels [23]. Increased calcium intake may also decrease the risk of osteoporosis and reduce the release of environmental pollutants such as lead, which are sequestered in bone and can increase insulin resistance and T2D risk [24]. In addition, an inverse association between dietary calcium intake and the risk of T2D may be indirectly attributable to changes in gastrointestinal hormones or intestinal microbiota and integrity [25]. There are several limitations to the present study. First, although ionized calcium is physiologically active and a measurable biomarker of calcium homeostasis, the pertinent data were not available. Instead, the present study used albumin-adjusted serum calcium levels, which are highly correlated with ionized calcium in large samples [26], to account for variation in total calcium levels according to serum albumin [2, 4, 5]. Second, although vitamin D or parathyroid hormone may affect the association between serum calcium levels and T2D, we had no information available regarding these biomarkers. This limited our ability to comprehensively evaluate the associations between calcium, vitamin D, parathyroid hormone, and T2D. Third, calcium supplement information was lacking, which could result in misclassification of total exogenous calcium intake. However, the present study also has considerable strengths. The study used data from a well-designed, community-based prospective cohort that was followed up for 10 years. The data included various clinical, dietary, and epidemiological traits; they provided a unique opportunity to investigate the complex associations between serum calcium levels, dietary calcium intake, and incident T2D. Serum calcium levels were not associated with T2D risk, but higher dietary calcium intake was associated with a decreased T2D risk. The present results have potential public health implications for predicting and preventing T2D development, as well as for providing guidelines for diet and calcium supplementation. Because previous evidence regarding the associations between serum calcium levels, dietary calcium intake, and T2D is insufficient, further studies conducted among different populations are warranted to confirm the present results. Abbreviations: CI: confidence interval; HR: hazard ratio. Danaei G, Finucane MM, Lu Y, Singh GM, Cowan MJ, Paciorek CJ, et al. National, regional, and global trends in fasting plasma glucose and diabetes prevalence since 1980: systematic analysis of health examination surveys and epidemiological studies with 370 country-years and 2·7 million participants. Lancet. 2011;378:31–40. Becerra-Tomás N, Estruch R, Bulló M, Casas R, Díaz-López A, Basora J, et al. Increased serum calcium levels and risk of type 2 diabetes in individuals at high cardiovascular risk. Diabetes Care. 2014;37:3084–91. Jorde R, Schirmer H, Njølstad I, Løchen M-L, Bøgeberg Mathiesen E, Kamycheva E, et al. Serum calcium and the calcium-sensing receptor polymorphism rs17251221 in relation to coronary heart disease, type 2 diabetes, cancer and mortality: the Tromsø Study. Eur J Epidemiol. 2013;28:569–78. Lorenzo C, Hanley AJ, Rewers MJ, Haffner SM. Calcium and phosphate concentrations and future development of type 2 diabetes: the insulin resistance atherosclerosis study. Diabetologia. 2014;57:1366–74. Sing CW, Cheng VKF, Ho DKC, Kung AWC, Cheung BMY, Wong ICK, et al.
Serum calcium and incident diabetes: an observational study and meta-analysis. Osteoporos Int. 2016;27:1747–54. Zaccardi F, Webb DR, Carter P, Pitocco D, Khunti K, Davies MJ, et al. Association between direct measurement of active serum calcium and risk of type 2 diabetes mellitus: a prospective study. Nutr Metab Cardiovasc Dis. 2015;25:562–8. Pittas AG, Dawson-Hughes B, Li T, Van Dam RM, Willett WC, Manson JE, et al. Vitamin D and calcium intake in relation to type 2 diabetes in women. Diabetes Care. 2006;29:650–6. Villegas R, Gao Y-T, Dai Q, Yang G, Cai H, Li H, et al. Dietary calcium and magnesium intakes and the risk of type 2 diabetes: the Shanghai Women's Health Study. Am J Clin Nutr. 2009;89:1059–67. Shin C, Abbott RD, Lee H, Kim J, Kimm K. Prevalence and correlates of orthostatic hypotension in middle-aged men and women in Korea: the Korean Health and Genome Study. J Hum Hypertens. 2004;18:717–23. Ahn Y, Kwon E, Shim JE, Park MK, Joo Y, Kimm K, et al. Validation and reproducibility of food frequency questionnaire for Korean genome epidemiologic study. Eur J Clin Nutr. 2007;61:1435–41. Hu FB, Stampfer MJ, Rimm E, Ascherio A, Rosner BA, Spiegelman D, et al. Dietary fat and coronary heart disease: a comparison of approaches for adjusting for total energy intake and modeling repeated dietary measurements. Am J Epidemiol. 1999;149:531–40. Meira-Machado L, Cadarso-Suárez C, Gude F, Araújo A. smoothHR: an R package for pointwise nonparametric estimation of hazard ratio curves of continuous predictors. Comput Math Methods Med. 2013;2013:745742. Cho NH, Kim KM, Choi SH, Park KS, Jang HC, Kim SS, et al. High blood pressure and its association with incident diabetes over 10 years in the Korean Genome and Epidemiology Study (KoGES). Diabetes Care. 2015;38:1333–8. van Dam RM, Hu FB, Rosenberg L, Krishnan S, Palmer JR. Dietary calcium and magnesium, major food sources, and risk of type 2 diabetes in U.S. black women. Diabetes Care. 2006;29:2238–43. Kirii K, Mizoue T, Iso H, Takahashi Y, Kato M, Inoue M, et al. Calcium, vitamin D and dairy intake in relation to type 2 diabetes risk in a Japanese cohort. Diabetologia. 2009;52:2542–50. Oh JM, Woo HW, Kim MK, Lee Y-H, Shin DH, Shin M-H, et al. Dietary total, animal, vegetable calcium and type 2 diabetes incidence among Korean adults: the Korean Multi-Rural Communities Cohort (MRCohort). Nutr Metab Cardiovasc Dis. 2017;27:1152–64. Gagnon C, Lu ZX, Magliano DJ, Dunstan DW, Shaw JE, Zimmet PZ, et al. Serum 25-hydroxyvitamin D, calcium intake, and risk of type 2 diabetes after 5 years: results from a national, population-based prospective study (the Australian Diabetes, Obesity and Lifestyle study). Diabetes Care. 2011;34:1133–8. Aune D, Norat T, Romundstad P, Vatten LJ. Dairy products and the risk of type 2 diabetes: a systematic review and dose-response meta-analysis of cohort studies. Am J Clin Nutr. 2013;98:1066–83. Choi HS, Oh HJ, Choi H, Choi WH, Kim JG, Kim KM, et al. Vitamin D insufficiency in Korea—a greater threat to younger generation: the Korea National Health and Nutrition Examination Survey (KNHANES) 2008. J Clin Endocrinol Metab. 2011;96:643–51. Steingrimsdottir L, Gunnarsson O, Indridason OS, Franzson L, Sigurdsson G. Relationship between serum parathyroid hormone levels, vitamin D sufficiency, and calcium intake. JAMA. 2005;294:2336–41. Wright DC, Hucker KA, Holloszy JO, Han DH. Ca2+ and AMPK both mediate stimulation of glucose transport by muscle contractions. Diabetes. 2004;53:330–5. Zemel MB. 
Nutritional and endocrine modulation of intracellular calcium: implications in obesity, insulin resistance and hypertension. Mol Cell Biochem. 1998;188:129–36. Cheng H, Beck A, Launay P, Gross SA, Stokes AJ, Kinet J-P, et al. TRPM4 controls insulin secretion in pancreatic beta-cells. Cell Calcium. 2007;41:51–61. Hong H, Kim E-K, Lee J-S. Effects of calcium intake, milk and dairy product intake, and blood vitamin D level on osteoporosis risk in Korean adults: analysis of the 2008 and 2009 Korea National Health and Nutrition Examination Survey. Nutr Res Pract. 2013;7:409–17. Gomes JMG, Costa JA, Alfenas RC. Could the beneficial effects of dietary calcium on obesity and diabetes control be mediated by changes in intestinal microbiota and integrity? Br J Nutr. 2015;114:1756–65. Baird GS. Ionized calcium. Clin Chim Acta. 2011;412:696–701. KNK and YCH developed the study concept and design. KNK, SYO, and YCH analyzed and interpreted data, and drafted the manuscript. KNK performed the statistical analysis. KNK had full access to all data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors read and approved the final manuscript. We thank the participants and survey staff of the Korean Genome and Epidemiology Study (KoGES) for their contributions to the present study. The authors declare they have no competing interests. The dataset used in this study (Ansung–Ansan cohort) can be provided after review and evaluation of research plan by the Korea Centers for Disease Control and Prevention (http://www.cdc.go.kr/CDC/eng/main.jsp). This study was supported by grants from the Korea Centers for Disease Control, Republic of Korea (4845-301, 4851-302, and 4851-307). This study was also supported in part by the R&D Program for Society of the National Research Foundation funded by the Ministry of Science, ICT & Future Planning, Republic of Korea (2014M3C8A5030619). The sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication. Division of Public Health and Preventive Medicine, Seoul National University Hospital, 101 Daehak-Ro Jongno-Gu, Seoul, Republic of Korea Kyoung-Nam Kim Department of Preventive Medicine, Seoul National University College of Medicine, 28 Yongon-Dong, Chongno-Gu, Seoul, 110-799, Republic of Korea & Yun-Chul Hong Department of Food and Nutrition, Research Center for Human Ecology, College of Human Ecology, Kyung Hee University, 26 Kyungheedae-Ro Dongdaemun-Gu, Seoul, Republic of Korea Se-Young Oh Institute of Environmental Medicine, Seoul National University Medical Research Center, 103 Daehak-Ro Jongno-Gu, Seoul, Republic of Korea Yun-Chul Hong Environmental Health Center, Seoul National University College of Medicine, 103 Daehak-Ro Jongno-Gu, Seoul, Republic of Korea Correspondence to Yun-Chul Hong. Kim, K., Oh, S. & Hong, Y. Associations of serum calcium levels and dietary calcium intake with incident type 2 diabetes over 10 years: the Korean Genome and Epidemiology Study (KoGES). Diabetol Metab Syndr 10, 50 (2018). https://doi.org/10.1186/s13098-018-0349-y
What intuitive explanation is there for the central limit theorem? In several different contexts we invoke the central limit theorem to justify whatever statistical method we want to adopt (e.g., approximate the binomial distribution by a normal distribution). I understand the technical details as to why the theorem is true but it just now occurred to me that I do not really understand the intuition behind the central limit theorem. So, what is the intuition behind the central limit theorem? Layman explanations would be ideal. If some technical detail is needed please assume that I understand the concepts of a pdf, cdf, random variable etc but have no knowledge of convergence concepts, characteristic functions or anything to do with measure theory. intuition central-limit-theorem Good question, although my immediate reaction, backed up by my limited experience of teaching this, is that the CLT isn't initially at all intuitive to most people. If anything, it's counter-intuitive! – onestop Oct 19 '10 at 2:39 @onestop AMEN! staring at the binomial distribution with p = 1/2 as n increases does show the CLT is lurking - but the intuition for it has always escaped me. – ronaf Oct 19 '10 at 3:18 Similar question with some nice ideas: stats.stackexchange.com/questions/643/… – user88 Oct 19 '10 at 6:42 Not an explanation but this simulation can be helpful understanding it. – David Lane May 12 '17 at 18:12 Wow, y'all sailed right past intuition on into a wonderful tutorial about the central limit theorem, convergence, long tails, and the lot. Definitely on the "D" side of "If you can't dazzle them with details, then baffle them with bull$hit.: – Mike Anderson Apr 9 '20 at 15:16 I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. But here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the CLT for further elaboration in responses of your own. Most attempts at "explaining" the CLT are illustrations or just restatements that assert it is true. A really penetrating, correct explanation would have to explain an awful lot of things. Before looking at this further, let's be clear about what the CLT says. As you all know, there are versions that vary in their generality. The common context is a sequence of random variables, which are certain kinds of functions on a common probability space. For intuitive explanations that hold up rigorously I find it helpful to think of a probability space as a box with distinguishable objects. It doesn't matter what those objects are but I will call them "tickets." We make one "observation" of a box by thoroughly mixing up the tickets and drawing one out; that ticket constitutes the observation. After recording it for later analysis we return the ticket to the box so that its contents remain unchanged. A "random variable" basically is a number written on each ticket.
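The box-of-tickets model is easy to play with directly. Below is a minimal sketch of the procedure just described (not part of the answer itself): the box contents are arbitrary, each observation is a draw with replacement, and the whole procedure is repeated many times to see how the sums behave.

```python
import numpy as np

rng = np.random.default_rng(42)

# A "box" is just an array of numbered tickets; these particular numbers are arbitrary.
box = np.array([0, 0, 0, 1, 1, 1, 1, 2, 5, 5])

def observe_sum(box, n, rng):
    """Mix the box, draw n tickets with replacement, and return the sum of their numbers."""
    draws = rng.choice(box, size=n, replace=True)
    return draws.sum()

# Repeat the whole procedure many times; the sums y_n are themselves random.
n = 30
sums = np.array([observe_sum(box, n, rng) for _ in range(10_000)])
print(sums.mean(), sums.std())  # close to n * mean(box) and sqrt(n) * std(box)
```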
In 1733, Abraham de Moivre considered the case of a single box where the numbers on the tickets are only zeros and ones ("Bernoulli trials"), with some of each number present. He imagined making $n$ physically independent observations, yielding a sequence of values $x_1, x_2, \ldots, x_n$, all of which are zero or one. The sum of those values, $y_n = x_1 + x_2 + \ldots + x_n$, is random because the terms in the sum are. Therefore, if we could repeat this procedure many times, various sums (whole numbers ranging from $0$ through $n$) would appear with various frequencies--proportions of the total. (See the histograms below.) Now one would expect--and it's true--that for very large values of $n$, all the frequencies would be quite small. If we were to be so bold (or foolish) as to attempt to "take a limit" or "let $n$ go to $\infty$", we would conclude correctly that all frequencies reduce to $0$. But if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $n$ all begin to look the same: in some sense, these histograms approach a limit even though the frequencies themselves all go to zero. These histograms depict the results of repeating the procedure of obtaining $y_n$ many times. $n$ is the "number of trials" in the titles. The insight here is to draw the histogram first and label its axes later. With large $n$ the histogram covers a large range of values centered around $n/2$ (on the horizontal axis) and a vanishingly small interval of values (on the vertical axis), because the individual frequencies grow quite small. Fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. The mathematical description of this is that for each $n$ we can choose some central value $m_n$ (not necessarily unique!) to position the histogram and some scale value $s_n$ (not necessarily unique!) to make it fit within the axes. This can be done mathematically by changing $y_n$ to $z_n = (y_n - m_n) / s_n$. Remember that a histogram represents frequencies by areas between it and the horizontal axis. The eventual stability of these histograms for large values of $n$ should therefore be stated in terms of area. So, pick any interval of values you like, say from $a$ to $b \gt a$ and, as $n$ increases, track the area of the part of the histogram of $z_n$ that horizontally spans the interval $(a, b]$. The CLT asserts several things: No matter what $a$ and $b$ are, if we choose the sequences $m_n$ and $s_n$ appropriately (in a way that does not depend on $a$ or $b$ at all), this area indeed approaches a limit as $n$ gets large. The sequences $m_n$ and $s_n$ can be chosen in a way that depends only on $n$, the average of values in the box, and some measure of spread of those values--but on nothing else--so that regardless of what is in the box, the limit is always the same. (This universality property is amazing.) Specifically, that limiting area is the area under the curve $y = \exp(-z^2/2) / \sqrt{2 \pi}$ between $a$ and $b$: this is the formula of that universal limiting histogram. The first generalization of the CLT adds, When the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold (provided that the proportions of extremely large or small numbers in the box are not "too great," a criterion that has a precise and simple quantitative statement). 
The next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. Each box can have different numbers on its tickets in different proportions. The observation $x_1$ is made by drawing a ticket from the first box, $x_2$ comes from the second box, and so on. Exactly the same conclusions hold provided the contents of the boxes are "not too different" (there are several precise, but different, quantitative characterizations of what "not too different" has to mean; they allow an astonishing amount of latitude). These five assertions, at a minimum, need explaining. There's more. Several intriguing aspects of the setup are implicit in all the statements. For example, What is special about the sum? Why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? (It turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the CLT.) The sequences of $m_n$ and $s_n$ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $n$ tickets and the standard deviation of the sum, respectively (which, in the first two statements of the CLT, equals $\sqrt{n}$ times the standard deviation of the box). The standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most "natural," either historically or for many applications. (Many people would choose something like a median absolute deviation from the median, for instance.) Why does the SD appear in such an essential way? Consider the formula for the limiting histogram: who would have expected it to take such a form? It says the logarithm of the probability density is a quadratic function. Why? Is there some intuitive or clear, compelling explanation for this? I confess I am unable to reach the ultimate goal of supplying answers that are simple enough to meet Srikant's challenging criteria for intuitiveness and simplicity, but I have sketched this background in the hope that others might be inspired to fill in some of the many gaps. I think a good demonstration will ultimately have to rely on an elementary analysis of how values between $\alpha_n = a s_n + m_n$ and $\beta_n = b s_n + m_n$ can arise in forming the sum $x_1 + x_2 + \ldots + x_n$. Going back to the single-box version of the CLT, the case of a symmetric distribution is simpler to handle: its median equals its mean, so there's a 50% chance that $x_i$ will be less than the box's mean and a 50% chance that $x_i$ will be greater than its mean. Moreover, when $n$ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. (This requires some careful justification, not just hand waving.) Thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. (Of all the things I have written here, this might be the most useful at providing some intuition about why the CLT works. Indeed, the technical assumptions needed to make the generalizations of the CLT true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising.) 
This shows, to some degree anyway, why the first generalization of the CLT does not really uncover anything that was not in de Moivre's original Bernoulli trial version. At this point it looks like there is nothing for it but to do a little math: we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations by any predetermined value $k$, where evidently $k$ is one of $-n, -n+2, \ldots, n-2, n$. But because vanishingly small errors will disappear in the limit, we don't have to count precisely; we only need to approximate the counts. To this end it suffices to know that $$\text{The number of ways to obtain } k \text{ positive and } n-k \text{ negative values out of } n$$ $$\text{equals } \frac{n-k+1}{k}$$ $$\text{times the number of ways to get } k-1 \text{ positive and } n-k+1 \text { negative values.}$$ (That's a perfectly elementary result so I won't bother to write down the justification.) Now we approximate wholesale. The maximum frequency occurs when $k$ is as close to $n/2$ as possible (also elementary). Let's write $m = n/2$. Then, relative to the maximum frequency, the frequency of $m+j+1$ positive deviations ($j \ge 0$) is estimated by the product $$\frac{m+1}{m+1} \frac{m}{m+2} \cdots \frac{m-j+1}{m+j+1}$$ $$=\frac{1 - 1/(m+1)}{1 + 1/(m+1)} \frac{1-2/(m+1)}{1+2/(m+1)} \cdots \frac{1-j/(m+1)}{1+j/(m+1)}.$$ 135 years before de Moivre was writing, John Napier invented logarithms to simplify multiplication, so let's take advantage of this. Using the approximation $$\log\left(\frac{1-x}{1+x}\right) = -2x - \frac{2x^3}{3} + O(x^5),$$ we find that the log of the relative frequency is approximately $$-\frac{2}{m+1}\left(1 + 2 + \cdots + j\right) - \frac{2}{(m+1)^3}\left(1^3+2^3+\cdots+j^3\right) = -\frac{j^2}{m} + O\left(\frac{j^4}{m^3}\right).$$ Because the error in approximating this sum by $-j^2/m$ is on the order of $j^4/m^3$, the approximation ought to work well provided $j^4$ is small relative to $m^3$. That covers a greater range of values of $j$ than is needed. (It suffices for the approximation to work for $j$ only on the order of $\sqrt{m}$ which asymptotically is much smaller than $m^{3/4}$.) Consequently, writing $$z = \sqrt{2}\,\frac{j}{\sqrt{m}} = \frac{j/n}{1 / \sqrt{4n}}$$ for the standardized deviation, the relative frequency of deviations of size given by $z$ must be proportional to $\exp(-z^2/2)$ for large $m.$ Thus appears the Gaussian law of #3 above. Obviously much more analysis of this sort should be presented to justify the other assertions in the CLT, but I'm running out of time, space, and energy and I've probably lost 90% of the people who started reading this anyway. This simple approximation, though, suggests how de Moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $s_n$ must be proportional to $\sqrt{n}$ (as shown by the denominator of the preceding formula). It is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning; anything less would leave the precise shape of the limiting curve a complete mystery. whuber♦whuber $\begingroup$ +1 It will take me some time to digest your answer. I admit that asking for an intuition for the CLT within the constraints I imposed may be nearly impossible. 
– user28 Oct 23 '10 at 1:42 Thank you for taking the time to write this, it's the most helpful exposition of the CLT I've seen that is also very accessible mathematically. – jeremy radcliff Jul 11 '16 at 3:25 Yes, quite dense.... so many questions. How does the first histogram have 2 bars (there was only 1 trial!) ; can I just ignore that? And the convention is usually to avoid horizontal gaps between bars of a histogram, right? (because , as you say, area is important, and the area will eventually be calculated over a continuous (i.e. no gaps) domain) ? So I'll ignore the gaps, too...? Even I had gaps when I first tried to understand it :) – The Red Pea May 12 '17 at 19:15 @TheRed Thank you for your questions. I have edited the first part of this post to make these points a little clearer. – whuber♦ May 12 '17 at 19:41 Ah, yes, I confused "number of trials= $n$ = "observations"" with "number of times (this entire procedure) is repeated". So if a ticket can only have the a value of the two values, 0 or 1, and you only observe one ticket, the sum of those tickets' values can only be one of two things: 0, or 1. Hence your first histogram has two bars. Moreover, these bars are roughly equal in height because we expect 0 and 1 to occur in equal proportions. – The Red Pea May 12 '17 at 20:27 The nicest animation I know: http://www.ms.uky.edu/~mai/java/stat/GaltonMachine.html The simplest words I have read: http://elonen.iki.fi/articles/centrallimit/index.en.html If you sum the results of these ten throws, what you get is likely to be closer to 30-40 than the maximum, 60 (all sixes) or, on the other hand, the minimum, 10 (all ones). The reason for this is that you can get the middle values in many more different ways than the extremes. Example: when throwing two dice: 1+6 = 2+5 = 3+4 = 7, but only 1+1 = 2 and only 6+6 = 12. That is: even though you get any of the six numbers equally likely when throwing one die, the extremes are less probable than middle values in sums of several dice. – glassy that first diagram (dropping balls) only works because we drop the balls above the middle. if we dropped them from the corner, they'd stack up towards the corner. – d-_-b Jul 17 '20 at 10:20
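The dice arithmetic above is easy to verify exhaustively. The sketch below (an illustration, not part of the linked article) counts the combinations for two dice and then builds the exact distribution for ten dice by repeated convolution; the middle totals dominate.

```python
import numpy as np
from collections import Counter
from itertools import product

# Exact counts for the sum of two dice: middle totals arise in many more ways
# than the extremes (only one way each to roll 2 or 12, six ways to roll 7).
two_dice = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(sorted(two_dice.items()))  # [(2, 1), (3, 2), ..., (7, 6), ..., (12, 1)]

# The same idea for ten dice, via repeated convolution of one die's distribution.
die = np.ones(6) / 6.0
dist = np.array([1.0])
for _ in range(10):
    dist = np.convolve(dist, die)
totals = np.arange(10, 61)       # possible sums of ten dice run from 10 to 60
print(totals[dist.argmax()])     # the most likely total is 35, right in the middle
```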
An observation concerning the CLT may be the following. When you have a sum $$ S = X_1 + X_2 + \ldots + X_n $$ of a lot of random components, if one is "smaller than usual" then this is mostly compensated for by some of the other components being "larger than usual". In other words, negative deviations and positive deviations from the component means cancel each other out in the summation. Personally, I have no clear-cut intuition why exactly the remaining deviations form a distribution that looks more and more normal the more terms you have. There are many versions of the CLT, some stronger than others, some with relaxed conditions such as a moderate dependence between the terms and/or non-identical distributions for the terms. In the simplest-to-prove versions of the CLT, the proof is usually based on the moment-generating function (or Laplace-Stieltjes transform or some other appropriate transform of the density) of the sum $S$. Writing this as a Taylor expansion and keeping only the most dominant term gives you the moment-generating function of the normal distribution. So for me personally, the normality is something that follows from a bunch of equations and I cannot provide any further intuition than that. It should be noted, however, that the sum's distribution never really is normally distributed, nor does the CLT claim that it is. If $n$ is finite, there is still some distance to the normal distribution, and if $n=\infty$ both the mean and the variance are infinite as well. In the latter case you could take the mean of the infinite sum, but then you get a deterministic number without any variance at all, which could hardly be labelled as "normally distributed". This may pose problems with practical applications of the CLT. Usually, if you are interested in the distribution of $S/n$ close to its center, the CLT works fine. However, convergence to the normal is not uniform everywhere and the further you get away from the center, the more terms you need to have a reasonable approximation. With all the "sanctity" of the Central Limit Theorem in statistics, its limitations are often overlooked all too easily. Below I give two slides from my course making the point that the CLT utterly fails in the tails, in any practical use case. Unfortunately, a lot of people specifically use the CLT to estimate tail probabilities, knowingly or otherwise. – StijnDeVuyst This is great material and wise advice. I cannot upvote it, unfortunately, because the assertions in "This normality is a mathematical artifact and I think it is not useful to search for any deeper truth or intuition behind it" are deeply troubling. They seem to suggest that (1) we shouldn't rely on mathematics to help us theoretically and (2) there is no point to understanding the math in the first place. I hope that other posts in this thread already go a long way towards disproving the second assertion. The first is so self-inconsistent it hardly bears further analysis. – whuber♦ Mar 22 '15 at 17:56 @whuber. You are right, I am out of my league perhaps. I'll edit. – StijnDeVuyst Mar 22 '15 at 20:32 Thank you for reconsidering the problematic part, and a big +1 for the rest. – whuber♦ Mar 22 '15 at 21:17 Intuition is a tricky thing. It's even trickier with theory in our hands tied behind our back. The CLT is all about sums of tiny, independent disturbances. "Sums" in the sense of the sample mean, "tiny" in the sense of finite variance (of the population), and "disturbances" in the sense of plus/minus around a central (population) value. For me, the device that appeals most directly to intuition is the quincunx, or 'Galton box', see Wikipedia (for 'bean machine'?) The idea is to roll a tiny little ball down the face of a board adorned by a lattice of equally spaced pins. On its way down the ball diverts right and left (...randomly, independently) and collects at the bottom. Over time, we see a nice bell-shaped mound form right before our eyes. The CLT says the same thing. It is a mathematical description of this phenomenon (more precisely, the quincunx is physical evidence for the normal approximation to the binomial distribution).
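A quincunx is also trivial to simulate: each ball makes a sequence of independent left/right decisions, and its final bin is just the number of rightward bounces. The text histogram below is a minimal sketch of that idea (the parameters are arbitrary); the bell-shaped mound appears already with a dozen rows of pins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each ball hits `rows` pins and bounces right with probability 1/2 at each pin;
# its final bin is the number of rightward bounces, i.e. a Binomial(rows, 1/2) draw.
rows, balls = 12, 100_000
bins = rng.binomial(n=rows, p=0.5, size=balls)

counts = np.bincount(bins, minlength=rows + 1)
for k, c in enumerate(counts):
    print(f"bin {k:2d} {'#' * (60 * c // counts.max())}")  # crude text histogram
```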
Loosely speaking, the CLT says that as long as our population is not overly misbehaved (that is, if the tails of the PDF are sufficiently thin), then the sample mean (properly scaled) behaves just like that little ball bouncing down the face of the quincunx: sometimes it falls off to the left, sometimes it falls off to the right, but most of the time it lands right around the middle, in a nice bell shape. The majesty of the CLT (to me) is that the shape of the underlying population is irrelevant. Shape only plays a role insofar as it dictates the length of time we need to wait (in the sense of sample size). This answer hopes to give an intuitive meaning of the central limit theorem, using simple calculus techniques (a Taylor expansion of order 3). Here is the outline: 1. What the CLT says. 2. An intuitive proof of the CLT using simple calculus. 3. Why the normal distribution? We will mention the normal distribution at the very end, because the fact that the normal distribution eventually comes up does not bear much intuition. 1. What the central limit theorem says Several versions of the CLT There are several equivalent versions of the CLT. The textbook statement of the CLT says that for any real $x$ and any sequence of independent random variables $X_1,\cdots,X_n$ with zero mean and variance 1, \[P\left(\frac{X_1+\cdots+X_n}{\sqrt n} \le x\right) \to_{n\to+\infty} \int_{-\infty}^x \frac{e^{-t^2/2}}{\sqrt{2\pi}} dt.\] To understand what is universal and intuitive about the CLT, let's forget the limit for a moment. The above statement says that if $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ are two sequences of independent random variables each with zero mean and variance 1, then \[E \left[ f\left(\tfrac{X_1+\cdots+X_n}{\sqrt n}\right) \right] - E \left[ f\left(\tfrac{Z_1+\cdots+Z_n}{\sqrt n}\right) \right] \to_{n\to+\infty} 0 \] for every indicator function $f$ of the form, for some fixed real $x$, \begin{equation} f(t) = \begin{cases} 1 \text{ if } t < x \\ 0 \text{ if } t\ge x.\end{cases} \end{equation} The previous display embodies the fact that the limit is the same no matter what the particular distributions of $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ are, provided that the random variables are independent with mean zero and variance one. Some other versions of the CLT mention the class of Lipschitz functions that are bounded by 1; some other versions mention the class of smooth functions with bounded derivatives up to order $k$. Consider two sequences $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ as above and, for some function $f$, the convergence result (CONV): \[E \left[ f\left(\tfrac{X_1+\cdots+X_n}{\sqrt n}\right) \right] - E \left[ f\left(\tfrac{Z_1+\cdots+Z_n}{\sqrt n}\right) \right] \to_{n\to+\infty} 0 \tag{CONV}\] It is possible to establish the equivalence ("if and only if") between the following statements: 1. (CONV) above holds for every indicator function $f$ of the form $f(t)=1$ for $t < x$ and $f(t)=0$ for $t\ge x$, for some fixed real $x$. 2. (CONV) holds for every bounded Lipschitz function $f:R\to R$. 3. (CONV) holds for every smooth (i.e., $C^{\infty}$) function with compact support. 4. (CONV) holds for every function $f$ that is three times continuously differentiable with $\sup_{x\in R} |f'''(x)| \le 1$. Each of the 4 points above says that the convergence holds for a large class of functions. By a technical approximation argument, one can show that the four points above are equivalent; we refer the reader to Chapter 7, page 77, of David Pollard's book A User's Guide to Measure Theoretic Probability, by which this answer is heavily inspired.
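As a quick sanity check of (CONV) for the indicator functions in point 1, the sketch below (a simulation with arbitrary choices of $x$ and $n$, not part of the answer itself) estimates $E[f((X_1+\cdots+X_n)/\sqrt n)]$ for two very different mean-0, variance-1 distributions; both estimates sit near $\Phi(x)$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x, reps, n = 0.5, 20_000, 400          # f(t) = 1{t < x}; sample sizes are arbitrary

# Two very different distributions, both with mean 0 and variance 1.
draws = {
    "Rademacher": lambda size: rng.choice([-1.0, 1.0], size=size),
    "Uniform":    lambda size: rng.uniform(-np.sqrt(3), np.sqrt(3), size=size),
}

for name, draw in draws.items():
    s = draw((reps, n)).sum(axis=1) / np.sqrt(n)   # standardized sums
    estimate = np.mean(s < x)                      # Monte Carlo E[f(S_n / sqrt(n))]
    print(f"{name:10s} {estimate:.3f}   limit Phi(x) = {norm.cdf(x):.3f}")
```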
Our assumption for the remainder of this answer: we will assume that $\sup_{x\in R} |f'''(x)| \le C$ for some constant $C>0$, which corresponds to point 4 above. We will also assume that the random variables have finite, bounded third moments: $E[|X_i|^3]$ and $E[|Z_i|^3]$ are finite. 2. The value of $E\left[ f\left( \tfrac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ is universal: it does not depend on the distribution of $X_1,...,X_n$ Let us show that this quantity is universal (up to a small error term), in the sense that it does not depend on which collection of independent random variables was provided. Take $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$, two sequences of independent random variables, each with mean 0 and variance 1, and with finite third moments. The idea is to iteratively replace $X_i$ by $Z_i$ in one of the quantities and control the difference by basic calculus (the idea, I believe, is due to Lindeberg). By a Taylor expansion, if $W = Z_1+\cdots+Z_{n-1}$ and $h(x)=f(x/\sqrt n)$, then \begin{align} h(Z_1+\cdots+Z_{n-1}+X_n) &= h(W) + X_n h'(W) + \frac{X_n^2 h''(W)}{2} + \frac{X_n^3 h'''(M_n)}{6} \\ h(Z_1+\cdots+Z_{n-1}+Z_n) &= h(W) + Z_n h'(W) + \frac{Z_n^2 h''(W)}{2} + \frac{Z_n^3 h'''(M_n')}{6} \end{align} where $M_n$ and $M_n'$ are midpoints given by the mean-value theorem. Taking expectations on both lines, the zeroth order terms are the same, and the first order terms are equal in expectation because, by independence of $X_n$ and $W$, $E[X_n h'(W)]= E[X_n] E[h'(W)] =0$, and similarly for the second line. Again by independence, and because $E[X_n^2]=E[Z_n^2]=1$, the second order terms are the same in expectation. The only remaining terms are the third order ones, and in expectation the difference between the two lines is at most $$ \frac{(C/6)E[ |X_n|^3 + |Z_n|^3 ]}{(\sqrt n)^3}. $$ Here $C$ is an upper bound on $|f'''|$. The denominator $(\sqrt{n})^3$ appears because $h'''(t) = f'''(t/\sqrt n)/(\sqrt n)^3$. By independence, the contribution of $X_n$ to the sum is immaterial because it could be replaced by $Z_n$ without incurring an error larger than the above display! We now reiterate to replace $X_{n-1}$ by $Z_{n-1}$. If $\tilde W= Z_1+Z_2+\cdots+Z_{n-2} + X_n$, then \begin{align} h(Z_1+\cdots+Z_{n-2}+X_{n-1}+X_n) &= h(\tilde W) + X_{n-1} h'(\tilde W) + \frac{X_{n-1}^2 h''(\tilde W)}{2} + \frac{X_{n-1}^3 h'''(\tilde M_n)}{6}\\ h(Z_1+\cdots+Z_{n-2}+Z_{n-1}+X_n) &= h(\tilde W) + Z_{n-1} h'(\tilde W) + \frac{Z_{n-1}^2 h''(\tilde W)}{2} + \frac{Z_{n-1}^3 h'''(\tilde M_n')}{6}. \end{align} By independence of $Z_{n-1}$ and $\tilde W$, and by independence of $X_{n-1}$ and $\tilde W$, again the zeroth, first and second order terms are equal in expectation for both lines. The difference in expectation between the two lines is again at most $$ \frac{(C/6)E[ |X_{n-1}|^3 + |Z_{n-1}|^3 ]}{(\sqrt n)^3}. $$ We keep iterating until we have replaced all of the $X_i$'s with $Z_i$'s. By adding the errors made at each of the $n$ steps, we obtain $$ \Big| E\left[ f\left( \tfrac{X_1+\cdots+X_n}{\sqrt n} \right) \right]-E\left[ f\left( \tfrac{Z_1+\cdots+Z_n}{\sqrt n} \right) \right] \Big| \le n \frac{(C/6)\max_{i=1,\ldots,n} E[ |X_i|^3 + |Z_i|^3 ]}{(\sqrt n)^3}. $$
As $n$ increases, the right-hand side converges to 0 because the third moments of our random variables are finite (we have assumed that this is the case). This means that the expectations on the left become arbitrarily close to each other, even if the distribution of $X_1,\ldots,X_n$ is far from that of $Z_1,\ldots,Z_n$. By independence, the contribution of each $X_i$ to the sum is immaterial because it could be replaced by $Z_i$ without incurring an error larger than $O(1/(\sqrt n)^3)$. And replacing all of the $X_i$'s by the $Z_i$'s does not change the quantity by more than $O(1/\sqrt n)$. The expectation $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ is thus universal: it does not depend on the distribution of $X_1,\ldots,X_n$. On the other hand, independence and the conditions $E[X_i]=E[Z_i]=0$, $E[Z_i^2]=E[X_i^2]=1$ were of utmost importance for the above bounds. 3. Why the normal distribution? We have seen that the expectation $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ will be the same no matter what the distribution of $X_i$ is, up to a small error of order $O(1/\sqrt n)$. But for applications, it would be useful to compute such a quantity. It would also be useful to get a simpler expression for this quantity $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$. Since this quantity is the same for any collection $X_1,\ldots,X_n$, we can simply pick one specific collection such that the distribution of $(X_1+\cdots+X_n)/\sqrt n$ is easy to compute or easy to remember. For the normal distribution $N(0,1)$, it happens that this quantity becomes really simple. Indeed, if $Z_1,\ldots,Z_n$ are iid $N(0,1)$, then $\frac{Z_1+\cdots+Z_n}{\sqrt n}$ also has the $N(0,1)$ distribution, and this does not depend on $n$! Hence if $Z\sim N(0,1)$, then \[ E\left[ f\left( \frac{Z_1+\cdots+Z_n}{\sqrt n} \right) \right] = E[ f(Z)], \] and by the above argument, for any collection of independent random variables $X_1,\ldots,X_n$ with $E[X_i]=0,E[X_i^2]=1$, \[ \left| E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right] -E[f(Z)] \right| \le \frac{\sup_{x\in R} |f'''(x)| \max_{i=1,\ldots,n} E[|X_i|^3 + |Z|^3]}{6\sqrt n}. \] – jlewk You seem to be asserting a law of large numbers rather than the CLT. – whuber♦ Mar 14 '19 at 14:59 I am not sure why you would say this, @whuber. The above give an intuitive proof that $E[f((X_1+...+X_n)/\sqrt n)]$ converges to $E[f(Z)]$ where $Z\sim N(0,1)$ for a large class of functions $f$. This is the CLT. – jlewk Mar 14 '19 at 16:09 I see what you mean. What gives me pause is that your assertion concerns only expectations and not distributions, whereas the CLT draws conclusions about a limiting distribution. The equivalence between the two might not immediately be evident to many. Might I suggest, then, that you provide an explicit connection between your statement and the usual statements of the CLT in terms of limiting distributions? (+1 by the way: thank you for elaborating this argument.) – whuber♦ Mar 14 '19 at 18:23 Really great answer. I find this much more intuitive than characteristic function kung-fu. – Eric Auld Mar 21 '20 at 23:49
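The $1/\sqrt n$ rate in the final bound can also be seen numerically. The sketch below (an illustration with arbitrary choices, not part of the answer above) takes $f=\sin$, so $|f'''|\le 1$ and $E[f(Z)]=0$ exactly by symmetry, and uses centered exponential variables, whose standardized sum can be sampled directly from a gamma distribution; the observed gap times $\sqrt n$ stays roughly constant.

```python
import numpy as np

rng = np.random.default_rng(3)
reps = 2_000_000
# f = sin is three times continuously differentiable with |f'''| <= 1,
# and E[sin(Z)] = 0 exactly for Z ~ N(0, 1) by symmetry.

for n in (25, 100, 400, 1600):
    # X_i = Exp(1) - 1 has mean 0 and variance 1. The sum of n Exp(1) draws is
    # Gamma(n, 1), so the standardized sum (X_1 + ... + X_n)/sqrt(n) is sampled directly.
    s = (rng.gamma(shape=n, scale=1.0, size=reps) - n) / np.sqrt(n)
    gap = abs(np.sin(s).mean())          # |E f(S_n/sqrt(n)) - E f(Z)|
    # gap * sqrt(n) is roughly constant, consistent with the O(1/sqrt(n)) bound.
    print(f"n = {n:5d}   gap = {gap:.4f}   gap * sqrt(n) = {gap * np.sqrt(n):.3f}")
```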
There is a deep connection between independent random variables and orthogonal vectors. When random variables are independent, that basically means that they are orthogonal vectors in a vector space of functions. (The function space I refer to is $L^2$, and the variance of a random variable $X$ is just $\|X - \mu\|_{L^2}^2$. So no wonder the variance is additive over independent random variables. Just like $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ when $x \perp y$.)** One thing that really confused me for a while, and which I think lies at the heart of the matter, is the following question: Why is it that the sum $\frac{X_1 + \dotsb + X_n} {\sqrt{n}}$ ($n$ large) doesn't care anything about the $X_i$ except their mean and their variance? (Moments 1 and 2.) This is similar to the law of large numbers phenomenon: $\frac{X_1 + \dotsb + X_n} {n}$ ($n$ large) only cares about moment 1 (the mean). (Both of these have their hypotheses that I'm suppressing (see the footnote), but the most important thing, of course, is that the $X_i$ be independent.) A more elucidating way to express this phenomenon is: in the sum $\frac{X_1 + \dotsb + X_n}{\sqrt{n}}$, I can replace any or all of the $X_i$ with some other RV's, mixing and matching between all kinds of various distributions, as long as they have the same first and second moments. And it won't matter as long as $n$ is large, relative to the moments. If we understand why that's true, then we understand the central limit theorem. Because then we may as well take $X_i$ to be normal with the same first and second moment, and in that case we know $\frac{X_1 + \dotsb + X_n}{\sqrt{n}}$ is just normal again for any $n$, including super-large $n$. Because the normal distribution has this special property ("stability") where you can add two independent normals together and get another normal. Voilà. The explanation of the first-and-second-moment phenomenon is ultimately just some arithmetic. There are several lenses through which one can choose to view this arithmetic. The most common one people use is the Fourier transform (AKA characteristic function), which has the feel of "I follow the steps, but how and why would anyone ever think of that?" Similarly with cumulants, where we find that the normal distribution is unique in that all its higher cumulants vanish. I'll show here a more elementary approach. As the sum $Z_n \overset{\text{(def)}}{=} \frac{X_1 + \dotsb + X_n}{\sqrt{n}}$ gets longer and longer, I'll show that all of the moments of $Z_n$ are functions only of the variances $\operatorname{Var}(X_i)$ and the means $\mathbb{E}X_i$, and nothing else. Now the moments of $Z_n$ determine the distribution of $Z_n$ (that's true not just for long independent sums, but for any nice distribution, by the Carleman continuity theorem). To restate, we're claiming that as $n$ gets large, $Z_n$ depends only on the $\mathbb{E}X_i$ and the $\operatorname{Var}X_i$. And to show that, we're going to show that $\mathbb{E}((Z_n - \mathbb{E}Z_n)^k)$ depends only on the $\mathbb{E}X_i$ and the $\operatorname{Var}X_i$. That suffices, by the Carleman continuity theorem. For convenience, let's require that the $X_i$ have mean zero and variance $\sigma^2$. Assume all their moments exist and are uniformly bounded. (But nevertheless, the $X_i$ can be all different independent distributions.)
Claim: Under the stated assumptions, the $k$th moment $$\mathbb{E} \left[ \left(\frac{X_1 + \dotsb + X_n}{\sqrt{n}}\right)^k \right]$$ has a limit as $n \to \infty$, and that limit is a function only of $\sigma^2$. (It disregards all other information.) (Specifically, the values of those limits of moments are just the moments of the normal distribution $\mathcal{N}(0, \sigma^2)$: zero for $k$ odd, and $|\sigma|^k \frac{k!}{(k/2)!2^{k/2}}$ when $k$ is even. This is equation (1) below.) Proof: Consider $\mathbb{E} \left[ \left(\frac{X_1 + \dotsb + X_n}{\sqrt{n}}\right)^k \right]$. When you expand it, you get a factor of $n^{-k/2}$ times a big fat multinomial sum. $$n^{-k/2} \sum_{|\boldsymbol{\alpha}| = k} \binom{k}{\alpha_1, \dotsc, \alpha_n}\prod_{i=1}^n \mathbb{E}(X_i^{\alpha_i})$$ $$\alpha_1 + \dotsb + \alpha_n = k$$ $$(\alpha_i \geq 0)$$ (Remember you can distribute the expectation over independent random variables. $\mathbb{E}(X^a Y^b) = \mathbb{E}(X^a)\mathbb{E}(Y^b)$.) Now if ever I have as one of my factors a plain old $\mathbb{E}(X_i)$, with exponent $\alpha_i =1$, then that whole term is zero, because $\mathbb{E}(X_i) = 0$ by assumption. So I need all the exponents $\alpha_i \neq 1$ in order for that term to survive. That pushes me toward using fewer of the $X_i$ in each term, because each term has $\sum \alpha_i = k$, and I have to have each $\alpha_i >1$ if it is $>0$. In fact, some simple arithmetic shows that at most $k/2$ of the $\alpha_i$ can be nonzero, and that's only when $k$ is even, and when I use only twos and zeros as my $\alpha_i$. This pattern where I use only twos and zeros turns out to be very important...in fact, any term where I don't do that will vanish as the sum grows larger. Lemma: The sum $$n^{-k/2} \sum_{|\boldsymbol{\alpha}| = k}\binom{k}{\alpha_1, \dotsc, \alpha_n}\prod_{i=1}^n \mathbb{E}(X_i^{\alpha_i})$$ breaks up like $$n^{-k/2} \left( \underbrace{\left( \text{terms where some } \alpha_i = 1 \right)}_{\text{These are zero because $\mathbb{E}X_i = 0$}} + \underbrace{\left( \text{terms where }\alpha_i\text{'s are twos and zeros}\right)}_{\text{This part is } O(n^{k/2}) \text{ if $k$ is even, otherwise no such terms}} + \underbrace{\left( \text{rest of terms}\right)}_{o(n^{k/2})} \right)$$ In other words, in the limit, all terms become irrelevant except $$ n^{-k/2}\sum\limits_{\binom{n}{k/2}} \underbrace{\binom{k}{2,\dotsc, 2}}_{k/2 \text{ twos}} \prod\limits_{j=1}^{k/2}\mathbb{E}(X_{i_j}^2) \tag{1}$$ Proof: The main points are to split up the sum by which (strong) composition of $k$ is represented by the multinomial $\boldsymbol{\alpha}$. There are only $2^{k-1}$ possibilities for strong compositions of $k$, so the number of those can't explode as $n \to \infty$. Then there is the choice of which of the $X_1, \dotsc, X_n$ will receive the positive exponents, and the number of such choices is $\binom{n}{\text{# positive terms in }\boldsymbol{\alpha}} = O(n^{\text{# positive terms in }\boldsymbol{\alpha}})$. (Remember the number of positive terms in $\boldsymbol{\alpha}$ can't be bigger than $k/2$ without killing the term.) That's basically it. You can find a more thorough description here on my website, or in section 2.2.3 of Tao's Topics in Random Matrix Theory, where I first read this argument. And that concludes the whole proof. We've shown that all moments of $\frac{X_1 + … + X_n}{\sqrt{n}}$ forget everything but $\mathbb{E}X_i$ and $\mathbb{E}(X_i^2)$ as $n \to \infty$. 
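As a quick numerical illustration of the claim (a sketch only, assuming Python with NumPy; none of this is from the original answer), one can compare the empirical moments of $Z_n$ for two very different mean-zero, variance-one distributions against the Gaussian moments $\frac{k!}{(k/2)!\,2^{k/2}}$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
n, reps = 300, 20_000

# Two very different distributions, both with mean 0 and variance 1
x_unif = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(reps, n))
x_sign = rng.choice([-1.0, 1.0], size=(reps, n))

for name, x in [("uniform on [-sqrt(3), sqrt(3)]", x_unif), ("fair +/-1 coin", x_sign)]:
    z = x.sum(axis=1) / np.sqrt(n)          # Z_n for this choice of X_i
    moments = {k: (z**k).mean() for k in (2, 3, 4, 6)}
    print(name, {k: round(float(m), 2) for k, m in moments.items()})

# Gaussian moments E[Z^k] = k! / ((k/2)! 2^(k/2)) for even k (0 for odd k)
gauss = {k: factorial(k) // (factorial(k // 2) * 2 ** (k // 2)) for k in (2, 4, 6)}
print("N(0,1) moments:", gauss)   # {2: 1, 4: 3, 6: 15}
```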
And therefore swapping out the $X_i$ with any variables with the same first and second moments wouldn't have made any difference in the limit. And so we may as well have taken them to be $\sim \mathcal{N}(\mu, \sigma^2)$ to begin with; it wouldn't have made any difference. **(If one wants to pursue more deeply the question of why $n^{1/2}$ is the magic number here for vectors and for functions, and why the variance (square $L^2$ norm) is the important statistic, one might read about why $L^2$ is the only $L^p$ space that can be an inner product space. Because $2$ is the only number that is its own Hölder conjugate.) Another valid view is that $n^{1/2}$ is not the only denominator that can appear. There are different "basins of attraction" for random variables, and so there are infinitely many central limit theorems. There are random variables for which $\frac{X_1 + \dotsb + X_n}{n} \Rightarrow X$, and for which $\frac{X_1 + \dotsb + X_n}{1} \Rightarrow X$! But these random variables necessarily have infinite variance. These are called "stable laws". It's also enlightening to look at the normal distribution from a calculus of variations standpoint: the normal distribution $\mathcal{N}(\mu, \sigma^2)$ maximizes the Shannon entropy among distributions with a given mean and variance, and which are absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$ (or $\mathbb{R}^d$, for the multivariate case). This is proven here, for example. – Eric Auld
+1 I like this for its insight and elementary nature. In spirit it looks the same as the answer by jlewk. – whuber♦ Mar 23 '20 at 11:00
I gave up on trying to come up with an intuitive version and came up with some simulations. I have one that presents a simulation of a Quincunx and some others that do things like show how even a skewed raw reaction time distribution will become normal if you collect enough RT's per subject. I think they help, but they're new in my class this year and I haven't graded the first test yet. One thing that I thought was good was being able to show the law of large numbers as well. I could show how variable things are with small sample sizes and then show how they stabilize with large ones. I do a bunch of other large number demos as well. I can show the interaction in the Quincunx between the numbers of random processes and the numbers of samples. (turns out not being able to use a chalk or white board in my class may have been a blessing)
Hi John: nice to see you back with this post after almost nine years! It would be interesting to read about the experiences you have had in the meantime with your use of simulations to teach the idea of the CLT and the LLNs. – whuber♦ Jun 26 '19 at 14:36
I stopped teaching that class a year later but the subsequent instructor picked up on the simulation idea. In fact, he carries it much farther and has developed a sequence of shiny apps and has students play with simulations for loads of things in the 250 person class. As near as I can tell from teaching the upper class the students seem to get a lot out of it. The difference between his students and those from equivalent feeder classes is noticeable. (but, of course, there are lots of uncontrolled variables there) – John Jul 2 '19 at 9:59
Thank you, John. It is so unusual to get even anecdotal feedback about lasting student performance after a class has finished that I find even this limited information of interest.
– whuber♦ Jul 2 '19 at 13:34
I think it's important to realize that a histogram of coin flips already approximates the normal distribution, that's just a fact of reality (in fact this is probably why the normal distribution is called the normal distribution): a simple plot of a 1/2 bar in the middle, 1/4 bars on either side of that middle, 1/8 bars next to the 1/4 bars and so on (a histogram of the probabilities of getting heads or tails x number of times in a row) is already pretty close to the normal distribution, but here's the thing: when you have a loaded coin with 2/3 probability of getting tails, the histogram is also normally distributed. When you add a lot of histograms of random distributions together you either maintain the normal distribution shape because all of the individual histograms already have that shape, or you get that shape because fluctuations in the individual histograms tend to cancel each other out if you add a large number of histograms. A histogram of a random distribution of one variable is already approximately distributed in a way that people have started calling the normal distribution because it's so common, and that's a microcosm of the central limit theorem. This is not the whole story but I think it's as intuitive as it gets. – Poelmo
Your description of a "normal distribution" sounds instead like a discrete version of the double exponential, which is not even remotely like a Gaussian normal distribution (except insofar both are unimodal and symmetric). The histogram of coin flips does not have bars that decrease by a factor of $2$ with each step! That suggests there may be some difficulties lurking in this explanation that have been papered over by an appeal to "intuition." – whuber♦ Aug 23 '13 at 18:18
This answer is mostly nonsense. No number of flips of a fair coin will result in a distribution of number of heads that has probabilities $\frac 18, \frac 14, \frac 12, \frac 14, \frac 18$; indeed that is not even a probability mass function! Nor does the number of heads in a row have anything to do with the question. – Dilip Sarwate Aug 23 '13 at 22:32
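For readers who want to experiment with the simulation-based intuition discussed in the answers above, here is a minimal Galton-board (quincunx) sketch; it assumes Python with NumPy and is not the code any of the answerers refer to:

```python
import numpy as np

rng = np.random.default_rng(42)

def quincunx(n_rows=12, n_balls=10_000, p_right=0.5):
    """Drop n_balls through n_rows of pegs; each peg sends a ball
    right with probability p_right. Returns the counts per final bin."""
    steps = rng.random((n_balls, n_rows)) < p_right
    bins = steps.sum(axis=1)                     # final bin index, 0..n_rows
    return np.bincount(bins, minlength=n_rows + 1)

counts = quincunx()
for i, c in enumerate(counts):
    print(f"{i:2d} {'#' * int(60 * c // counts.max())}")
# Even with a biased peg (p_right != 0.5) the histogram stays roughly
# bell-shaped, only shifted -- the point debated in the answer above.
```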
September 2011, 3(3): 337-362. doi: 10.3934/jgm.2011.3.337
Integrable Euler top and nonholonomic Chaplygin ball
Andrey Tsiganov, St. Petersburg State University, St. Petersburg, Russian Federation
Received April 2011; Revised July 2011; Published November 2011
We discuss the Poisson structures, Lax matrices, $r$-matrices, bi-hamiltonian structures, the variables of separation and other attributes of the modern theory of dynamical systems in application to the integrable Euler top and to the nonholonomic Chaplygin ball.
Keywords: bi-Hamiltonian geometry, integrable nonholonomic systems.
Mathematics Subject Classification: Primary: 34D20; Secondary: 70E40, 37J3.
Citation: Andrey Tsiganov. Integrable Euler top and nonholonomic Chaplygin ball. Journal of Geometric Mechanics, 2011, 3 (3): 337-362. doi: 10.3934/jgm.2011.3.337
Elementary theory
A collection of closed formulas of first-order predicate logic. The elementary theory $ \mathop{\rm Th} ( K) $ of a class $ K $ of algebraic systems (cf. [[Algebraic system|Algebraic system]]) of signature $ \Omega $ is defined to be the collection of all closed formulas of the first-order predicate logic of signature $ \Omega $ that are true in all systems of $ K $. If $ K $ consists of a single system $ A $, then the elementary theory of the class $ K $ is the elementary theory of the system $ A $. Two algebraic systems of the same signature are said to be elementarily equivalent if their elementary theories are the same. An algebraic system $ A $ of signature $ \Omega $ is called a model of an elementary theory $ T $ of signature $ \Omega $ if all formulas of $ T $ are true in $ A $. An elementary theory is called consistent if it has models. A consistent elementary theory is called complete if any two models of it are elementarily equivalent. The class of all models of an elementary theory $ T $ is denoted by $ \mathop{\rm Mod} ( T) $. An elementary theory $ T $ is called solvable (or decidable) if the set of formulas $ \textrm{ Th Mod } ( T) $ (that is, the set of all logical consequences of $ T $) is recursive. A class $ K $ of algebraic systems of signature $ \Omega $ is called axiomatizable if there exists an elementary theory $ T $ of signature $ \Omega $ such that $ K = \mathop{\rm Mod} ( T) $. In this case $ T $ is called a collection of axioms for $ K $. A class $ K $ is axiomatizable if and only if $ K = \textrm{ Mod Th } ( K) $. For example, the class of dense linear orders without a smallest or largest element is axiomatizable, its elementary theory is solvable and any two systems of this class are elementarily equivalent, since the elementary theory of this class is complete; moreover, its elementary theory is finitely axiomatizable. The class of finite cyclic groups is not axiomatizable; however, its elementary theory is solvable, and hence recursively axiomatizable. There are examples of finitely-axiomatizable unsolvable elementary theories. They include those of groups, rings, fields, and others. However, a complete recursively-axiomatizable elementary theory is necessarily solvable. Therefore, to prove the solvability of a recursively-axiomatizable elementary theory it is sufficient to observe that it is complete. Several methods for proving completeness are known. An elementary theory is called categorical in cardinality $ \alpha $ (cf. [[Categoricity in cardinality|Categoricity in cardinality]]) if all its models of cardinality $ \alpha $ are isomorphic. An elementary theory that is categorical in some infinite cardinality and has no finite models is necessarily complete. For example, the elementary theory of algebraically closed fields of a given characteristic is recursively axiomatizable and categorical in every uncountable cardinality; it has no finite models, and therefore it is complete and solvable. In particular, the elementary theory of the field of complex numbers is solvable.
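For concreteness, the finitely many axioms of the theory of dense linear orders without a smallest or largest element mentioned above can be written out explicitly (this worked example is added for illustration and is not part of the original article): $ \forall x \ \neg ( x < x ) $, $ \forall x \forall y \forall z \ ( ( x < y ) \wedge ( y < z ) \rightarrow ( x < z ) ) $, $ \forall x \forall y \ ( ( x < y ) \vee ( x = y ) \vee ( y < x ) ) $, $ \forall x \forall y \ ( ( x < y ) \rightarrow \exists z \ ( ( x < z ) \wedge ( z < y ) ) ) $, $ \forall x \exists y \ ( y < x ) $, and $ \forall x \exists y \ ( x < y ) $. Any two countable models of these axioms are isomorphic (Cantor), so the theory is categorical in cardinality $ \aleph _ {0} $ and, having no finite models, complete.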
Two formulas in the same signature as that of a theory $ T $ are equivalent in the theory $ T $ if they contain the same variables and if, for any model $ A $ of $ T $ and any assignment of elements of $ A $ to their free variables, the formulas are either both true or both false. A complete elementary theory $ T $ of finite or countable signature is countably categorical if and only if for every $ n $ there are finitely many formulas with $ n $ free variables $ v _ {1} \dots v _ {n} $ such that every formula of the appropriate signature with $ v _ {1} \dots v _ {n} $ as free variables is equivalent in $ T $ to one of those formulas. A complete theory of finite or countable signature that is categorical in one uncountable cardinality is also categorical in every other uncountable cardinality. A system $ A $ of signature $ \Omega $ is called an elementary subsystem of a system $ B $ of the same signature if $ A $ is a subsystem of $ B $ and if for every formula $ \Phi ( v _ {1} \dots v _ {n} ) $ of the first-order predicate logic of $ \Omega $ with free variables $ v _ {1} \dots v _ {n} $ and all $ a _ {1} \dots a _ {n} \in A $, the truth of $ \Phi ( a _ {1} \dots a _ {n} ) $ in $ A $ implies its truth in $ B $. An elementary theory $ T $ is called model complete if for any two models $ A $ and $ B $ of it the fact that $ A $ is a subsystem of $ B $ implies that it is an elementary subsystem. It turns out that a model-complete theory having a model that can be isomorphically imbedded in every model of the theory is complete. Two systems of the same signature which satisfy the same prenex formulas without existential quantifiers are called universally equivalent. A model-complete elementary theory all models of which are universally equivalent is complete. Using the technique of model completeness one can prove that real-closed fields, in particular, the field of real numbers, have a complete and solvable elementary theory. Among the other solvable elementary theories are those of addition of natural numbers and integers, of Abelian groups, of $ p $- adic number fields, of finite fields, of residue class fields, of ordered Abelian groups, and of Boolean algebras. The general study of unsolvable elementary theories was initiated by A. Tarski in the 1940s, but even earlier, in 1936, A. Church had proved the unsolvability of first-order predicate logic and J. Rosser, also in 1936, had proved the unsolvability of the arithmetic of the natural numbers. The elementary theory $ \mathop{\rm Th} ( K) $ of a class $ K $ of algebraic systems of the same signature $ \Omega $ is said to be inseparable if there is no recursive set of formulas containing $ \mathop{\rm Th} ( K) $ and not containing any closed formula which is false in all systems in $ K $. 
The elementary theory of a class $ K _ {1} $ of systems of signature $ \langle P ^ {(2)} \rangle $ consisting of a single two-place predicate is called relatively definable in the elementary theory of a class $ K _ {2} $ of systems of signature $ \Omega _ {2} $ if there exist formulas $ \Phi ( v _ {0} ; u _ {1} \dots u _ {s} ) $ and $ \Psi ( v _ {1} , v _ {2} ; u _ {1} \dots u _ {s} ) $ of $ \Omega _ {2} $ such that for every system $ A _ {1} $ in $ K _ {1} $ one can find a system $ A _ {2} $ in $ K _ {2} $ and elements $ b _ {1} \dots b _ {s} $ in $ A _ {2} $ for which the set $ X = \{ {x \in A _ {2} } : {\Phi ( x ; b _ {1} \dots b _ {s} ) \textrm{ is true in } A _ {2} } \} $ together with the predicate $ P ^ {(2)} $, defined on $ X $ so that $ P ^ {(2)} ( x , y ) $ is true if and only if $ \Psi ( x , y ; b _ {1} \dots b _ {s} ) $ is true in $ A _ {2} $, forms an algebraic system isomorphic to $ A _ {1} $. This definition extends naturally to the theory of classes $ K _ {1} $ of arbitrary signature. If the elementary theory of a class $ K _ {1} $ is inseparable and relatively definable in the elementary theory of a class $ K _ {2} $, then that of $ K _ {2} $ is also inseparable. This makes it possible to prove that the elementary theories of many classes of algebraic systems are inseparable. Here it is convenient to take as $ \mathop{\rm Th} ( K _ {1} ) $ the elementary theory of all finite binary relations or that of all finite symmetric relations, or similar elementary theories. Inseparable elementary theories are unsolvable. So are those of the field of rational numbers and of many classes of rings and fields. The unsolvability of the elementary theory of finite groups is an important result of A.I. Mal'tsev.
====Comments====
Somewhat more generally one defines recursive separability and inseparability for pairs of theories rather than for a single theory. Thus, two disjoint sets of natural numbers $ A $ and $ B $ are said to be recursively separable if there exists a recursive set $ A _ {1} $ containing $ A $ which is disjoint from $ B $. This is a symmetric notion. The sets $ A $ and $ B $ are (recursively) inseparable if they are not recursively separable. The single theory definition of inseparability results if this is applied to the sets $ R $ and $ T $ of refutable and provable formulas, or rather their associated sets of Gödel numbers $ R ^ { \star } $ and $ T ^ { \star } $.
2018, 25: 36-47. doi: 10.3934/era.2018.25.005
On the norm continuity of the HK-Fourier transform
Juan H. Arredondo 1, Francisco J. Mendoza 2 and Alfredo Reyes 1
1. Departamento de Matemáticas, Universidad Autónoma Metropolitana - Iztapalapa, Av. San Rafael Atlixco 186, CDMX, 09340, Mexico
2. Facultad de Ciencias Físico Matemáticas, Benemérita Universidad Autónoma de Puebla, Av. San Claudio y 18 Sur S/N, Puebla, 72570, Mexico
Received February 13, 2018; Published May 2018
Fund Project: This work is partially supported by CONACyT-SNI and VIEP-BUAP (Puebla, Mexico).
In this work we study the Cosine Transform operator and the Sine Transform operator in the setting of Henstock-Kurzweil integration theory. We show that these related transformation operators have a very different behavior in the context of Henstock-Kurzweil functions. In fact, while one of them is a bounded operator, the other one is not. This is a generalization of a result of E. Liflyand in the setting of Lebesgue integration.
Keywords: Fourier transform, Henstock-Kurzweil integral, integrability, bounded variation function, Banach spaces.
Mathematics Subject Classification: Primary: 26A39, 43A32; Secondary: 26A42, 26A45.
Citation: Juan H. Arredondo, Francisco J. Mendoza, Alfredo Reyes. On the norm continuity of the HK-Fourier transform. Electronic Research Announcements, 2018, 25: 36-47. doi: 10.3934/era.2018.25.005
OSA Continuum Vol. 2, Issue 3, https://doi.org/10.1364/OSAC.2.000703
Continuous amplified digital optical phase conjugator for focusing through thick, heavy scattering medium
Yeh-Wei Yu,1,2,3 Ching-Cherng Sun,1,2,* Xing-Chen Liu,2 Wei-Hsin Chen,2 Szu-Yu Chen,2 Yu-Heng Chen,2 Chih-Shun Ho,2 Che-Chu Lin,2 Tsung-Hsun Yang,1,2 and Po-Kai Hsieh2
1 Optical Sciences Center, National Central University, Chung-Li, Taoyuan City 32001, Taiwan
2 Department of Optics and Photonics, National Central University, Chung-Li, Taoyuan City 32001, Taiwan
3 Department of Photonics, Feng Chia University, Taichung 407, Taiwan
*Corresponding author: [email protected]
Citation: Yeh-Wei Yu, Ching-Cherng Sun, Xing-Chen Liu, Wei-Hsin Chen, Szu-Yu Chen, Yu-Heng Chen, Chih-Shun Ho, Che-Chu Lin, Tsung-Hsun Yang, and Po-Kai Hsieh, "Continuous amplified digital optical phase conjugator for focusing through thick, heavy scattering medium," OSA Continuum 2, 703-714 (2019)
Table of Contents Category: Fourier Optics and Optical Processing
Original Manuscript: November 29, 2018; Revised Manuscript: January 11, 2019; Manuscript Accepted: January 13, 2019
Digital optical phase conjugation (DOPC) is a well-known technique for generating a counter-propagating wavefront and reversing multiple scattering effects. Until now, implementations of DOPC have mostly been based on a switching geometry. For some applications such as optical tweezers in turbid media, however, switching-based DOPC could fail to grab fast-moving particles. Besides, a DOPC modality with temporally-continuous gain is required. In this paper, a continuous amplified digital optical phase conjugator (CA-DOPC) is introduced to form a focusing point after passing through a heavily scattering medium. To achieve high-precision alignment between the CMOS image sensor and the spatial light modulator (SLM) in the CA-DOPC, an optical phase conjugator along with a specially designed alignment pattern was used. In this research, the CA-DOPC showed its ability to form a focus point in 2-mm-thick chicken muscle tissue. In addition, a continuous gain of 166 and a peak-to-background ratio (PBR) of 3×10^5 were observed in the case of 0.5-mm chicken muscle tissue.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Point focusing plays an important role in optical-imaging and optical-manipulation techniques, such as the microscope and optical tweezers [1–6]. In the field of microscopy, optical phase conjugation (OPC) has shown excellent turbidity suppression capability in a variety of applications [7–11]. Various applications, including the opposite virtual objective [12], harmonic generation imaging [13–16], fluorescence imaging [17–19], 4-Pi microscopy [20,21] and endoscopic imaging [22,23], have been proposed. Moreover, along with the demonstration of in vivo applications [24–26], OPC has emerged as a candidate for the next-generation biomedical imaging technique. Instead of traditional OPC, the digital optical phase conjugator (DOPC) has been proposed to provide higher light sensitivity, higher stability and wider wavelength fidelity [27–29]. It has shown its ability to produce point light sources inside specimens with the aid of ultrasound-encoded images [30–35] or time-differential images [36–38]. In the field of optical tweezers, OPC has been applied to perform multiple trapping by counter-propagating structured light [39]. However, most DOPC techniques are based on a switching geometry and have to temporally switch between recording and reading modes [13,24,40–46]. During the acquisition step, the spatial light modulator (SLM) switches to the off state and creates a half-period non-modulation window, which is generally on the order of milliseconds to sub-milliseconds. Due to this non-modulation window, Brownian motion of particles or in vivo dynamic activity could lead to non-negligible noise or failure in particle trapping. Therefore, in order to continuously trap particles, a temporally-continuous gain is desired. Besides, a continuous DOPC system was also desired and proposed to apply continuous scanning of time-reversed ultrasonically encoded optical focusing [47]. In this research, a continuous amplified DOPC (CA-DOPC) is proposed to provide temporally-continuous gain.
Because the reading beam is separated from the reference beam in the CA-DOPC, the high-power reading beam can continuously illuminate the phase-only spatial light modulator (PSLM) to produce temporally-continuous gain. Based on a continuous refreshing mechanism, the CA-DOPC has no non-modulation window, so the energy waste during this window and the possible failure in particle trapping can be avoided. However, the CA-DOPC requires a reference beam separated from the reading beam, so typical alignment methods are no longer applicable [18,36,37,48–52]. Even the powerful iterative fine-tune alignment method cannot be applied to the CA-DOPC directly, because it needs an optical system to evaluate the phase difference between the reading beam and the reference beam [40,41]. To successfully generate the phase-conjugate wave, two significant steps have to be accomplished. One is the alignment between the PSLM and CMOS-IS, and the other is to record the interferogram formed by the reference beam and the conjugate reading beam. In this paper, we use a novel approach to achieve six-dimensional alignment by using a Kitty-type self-pumped phase-conjugate mirror (Kitty-SPPCM). With the Kitty-SPPCM's capability of quickly generating a conjugate wave with high sensitivity and phase fidelity [52,53], the alignment can be achieved with high precision. Besides, owing to the elimination of the phase difference and of the SLM surface undulation, aided by the Kitty-SPPCM, the system peak-to-background ratio (PBR) approaches 38% of the theoretical limit without an iterative fine-tuning process. With the aid of the Kitty-SPPCM-based alignment, the CA-DOPC was constructed and applied to chicken muscle tissues with different thicknesses.
2. CA-DOP system
Figure 1 shows the system configuration of the CA-DOPC, including the acquisition step and the reconstruction step. A 532-nm continuous-wave laser (Verdi-V5, Coherent Inc.) is used as the light source. Figure 1(a) shows the acquisition step. The light originating from the light source was divided into a high-power beam and a low-power beam. The high-power beam included the reading and signal beams, whereas the low-power beam was the reference beam. All three beams were TE-polarized. The signal beam was focused by the objective lens Obj3 in front of the specimen. After passing through the specimen, the divergent signal beam was collected by the lens L3 and was directed to the CMOS-IS1 (CMOS image sensor of a Canon 650D; 5184×3456 pixels; 4.3-μm pixel size). The off-axis interferogram of the signal beam and reference beam was captured by the CMOS-IS1, and is expressed as
$$|R_{ref} + O|^2 = |R_{ref}|^2 + |O|^2 + R_{ref}^{\ast} \cdot O + R_{ref} \cdot O^{\ast}, \tag{1}$$
where O is the signal beam and Rref is the reference beam. The tilting angle was 1.32˚, which made the first-order signal deviate from the DC peak but overlap with a part of the zero-order signal in the Fourier domain. The noise caused by the zero-order signal can be ignored if the PBR of the system is high enough. In order to produce a continuous phase-conjugate signal, the reading beam was always turned on, even in the acquisition step. To obtain the input signals for the PSLM (HOLOEYE PLUTO; 1920×1080 pixels; 8-μm pixel size), a band-pass filter was applied to the Fourier spectrum of $|R_{ref} + O|^2$ to extract (Rref · O*)bp. An aperture Ap1 was attached to the lens L3 in order to block the zero-order reading light reflected by the PSLM in the reconstruction step.
Since the reading beam is independent of the reference beam in the CA-DOPC, simply applying (Rref · O*)bp to the PSLM and illuminating it with the reading beam P would lead to a phase error of (Rref)bp · P in the phase-conjugate wave. Therefore, the phase relation between the reference beam and the reading beam has to be acquired. As shown in Fig. 2, we use the Kitty-SPPCM to generate an optical phase-conjugate reading beam. By recording the interferogram formed by the reference beam and the optical phase-conjugate reading beam, (2)$$|R_{ref} + P^{\ast}|^2 = |R_{ref}|^2 + |P|^2 + R_{ref}^{\ast}\cdot P^{\ast} + R_{ref}\cdot P,$$ the information of (Rref · P)bp is obtained by applying a band-pass filter to the Fourier spectrum of $|R_{ref} + P^{\ast}|^2$. As shown in Fig. 1(b), only the reading beam was turned on and illuminated the PSLM in the reconstruction step. Taking the product of (Rref · O*)bp and the phase conjugate of (Rref · P)bp, the input signal for the PSLM is obtained as (3)$$S_{PSLM} = [(R_{ref}\cdot O^{\ast})_{bp}]\cdot[(R_{ref}\cdot P)_{bp}]^{\ast} = P_{bp}^{\ast}\cdot O_{bp}^{\ast}.$$ When SPSLM is applied to the PSLM, the phase-conjugate wave is obtained by illuminating it with the reading light, (4)$$P\cdot S_{PSLM} = (P_{bp}^{\ast}\cdot P)\cdot O_{bp}^{\ast} \approx O_{bp}^{\ast}.$$ Fig. 1. CA-DOPC system with temporally-continuous gain: (a) the acquisition step for collecting the wavefront passing through the specimen; (b) the reconstruction step for producing the phase-conjugate signal. Obj: objective lens; PH: pinhole; L: lens; CL: cylindrical lens; M: mirror; BS: beam splitter; PBS: polarization beam splitter; HWP: half-wave plate; PL: linear polarizer; PSLM: phase-only spatial light modulator; CMOS-IS: CMOS image sensor; BK: block. Fig. 2. (a) The Kitty-SPPCM is used to generate an optical phase-conjugate reading beam. (b) The interferogram formed by the reference beam and the optical phase-conjugate reading beam is recorded by the CMOS-IS1. The phase error between the reference beam and the reading beam, as well as the phase error caused by the SLM surface undulation, can thus be eliminated. The Kitty-SPPCM-based optical phase conjugation is used to align the PSLM and the CMOS-IS1. The system setup is shown in Fig. 3. Before the Kitty-SPPCM can be formed, a Cat-SPPCM has to be created in a photorefractive crystal [54], which is the BaTiO3 crystal in Fig. 3(a). The Cat-SPPCM is a self-induced phase conjugator based on two-wave mixing incorporated with the crystal geometry. When a pumping beam is incident on the BaTiO3 at an appropriate angle and position, a clear fanning loop is formed (Fig. 3(b)). Because two counter-propagating waves exist in the fanning loop, another incident wave ("Kitty" in Fig. 3(b)) passing through the loop forms two four-wave-mixing regions and is then diffracted into a phase-conjugate wave. This is the mechanism of the Kitty-SPPCM. There are three advantages of using the Kitty-SPPCM in the system alignment: (1) since the formation of the Kitty-SPPCM occurs at the same speed as recording an interference grating in BaTiO3, it avoids the time delay needed to build the fanning loop in a Cat-SPPCM; (2) the Kitty-SPPCM can achieve higher fidelity in phase reconstruction because it has a larger acceptable numerical aperture than most SPPCMs; (3) the adoption of the Kitty-SPPCM makes the alignment free of lens distortion.
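The phase-error cancellation of Eqs. (2)-(4) can be checked numerically: once both band-passed terms are available, the PSLM pattern is just the argument of their product with one factor conjugated, and the reference phase drops out exactly. A minimal sketch with synthetic unit-amplitude fields (all field names here are assumptions for illustration, not the authors' code):

```python
import numpy as np

def pslm_phase(term_signal, term_reading):
    """S_PSLM = (R_ref . O*)_bp . [(R_ref . P)_bp]*  =  P_bp* . O_bp*   (Eq. 3),
    so that reading it out with P gives P . S_PSLM ~= O_bp*             (Eq. 4).
    Only the argument is written to the phase-only SLM."""
    return np.angle(term_signal * np.conj(term_reading))

# toy check with random unit-amplitude fields: the reference phase drops out
rng = np.random.default_rng(1)
shape = (256, 256)
O = np.exp(1j * 2 * np.pi * rng.random(shape))    # band-passed signal field (assumed)
P = np.exp(1j * 2 * np.pi * rng.random(shape))    # band-passed reading field (assumed)
R = np.exp(1j * 2 * np.pi * rng.random(shape))    # reference field (assumed)

phase = pslm_phase(R * np.conj(O), R * P)         # built only from the two interferogram terms
reconstructed = P * np.exp(1j * phase)            # field leaving the PSLM under the reading beam
assert np.allclose(reconstructed, np.conj(O), atol=1e-6)   # equals O*, as stated by Eq. (4)
```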
For the alignment shown in Fig. 3(a), the mirror M4 directs a laser beam into the BaTiO3 crystal to form a Cat-SPPCM. Subsequently, the other laser beam is reflected by a polarizing beam splitter (PBS) and expanded by Obj1 and L7–L10 to form a convergent wave. This wave is then reflected by a beam splitter (BS3) and impinges on the PSLM. During the alignment, the PSLM is switched to amplitude modulation by changing the polarization of the incident wave with the linear polarizer LP, and it is loaded with a specially designed pattern (Fig. 3(c)), so that the light reflected by the PSLM carries the information of this pattern. After reflection from the PSLM, the light is focused into the BaTiO3 crystal, and the Kitty-SPPCM is then formed to generate a phase-conjugate wave. The phase-conjugate wave is reflected by the beam splitter (BS2) and shines onto the CMOS-IS1, so that an image of the designed pattern (Fig. 3(d)) can be captured. By aligning the pattern given to the PSLM with the pattern captured by the CMOS-IS, six-axis geometrical alignment, including lateral shifts (x and y), axial shift (z), in-plane rotation (Φz) and out-of-plane rotations (Φx and Φy), can be accomplished. To improve the alignment in the axial shift (z) and the out-of-plane rotations (Φx and Φy), the lenses L7–L10 were used to produce a convergent wave with NA ∼ 0.035, which creates a vector magnification factor. Limited by the panel size of the PSLM (15.4 mm × 8.6 mm) and the pixel size of the CMOS-IS1 (4.3 μm), the achieved shift tolerances were 4.3 μm (x), 4.3 μm (y), and 125 μm (z), and the rotation tolerances were 0.83° (Φx), 0.47° (Φy), and 0.01° (Φz). Fig. 3. (a) Alignment of the CA-DOPC system using the Kitty-SPPCM; (b) Kitty-SPPCM; (c) SLM input signal of the alignment marks; (d) conjugate images of the alignment marks read out by the CMOS-IS1. Obj: objective lens; L: lenses; M: mirrors; CL: cylindrical lens; BS: beam splitters; PBS: polarization beam splitters; HWP: half-wave plates; PSLM: phase-only spatial light modulator; CMOS-IS: CMOS image sensor; BK: block plate.
3. System performance and analysis
To test the system's performance in turbidity suppression and continuous amplification, chicken breast tissues with thicknesses of 0.5 mm, 1 mm, and 2 mm were used as specimens. The mean free path of chicken breast tissue has been widely studied, and the published value is around 30 µm [18,55,56]. As shown in Fig. 1, the signal beam was focused to a point before the specimens. The divergent signal beam passed through the specimens and was then guided into the CMOS-IS. The measured irradiance of the reference beam was roughly the same as that of the signal beam in the CMOS-IS plane. Using the interference fringes captured by the CMOS-IS, the phase modulation image SPSLM for the PSLM was calculated based on Eq. (3). The band-pass filter used for (Rref · O*)bp and (Rref · P)bp was a circle with a 700-pixel radius. Since the pixel sizes of the CMOS-IS and the PSLM are 4.3 μm and 8 μm, respectively, nearest-neighbor interpolation has to be applied to obtain the input signals for the PSLM. By applying SPSLM to the PSLM and illuminating it with the reading beam, the conjugate signal beam was generated. The conjugate signal beam propagated back through the specimen and formed a focus after the specimen. We used BS3 to direct the conjugate signal to the imaging system consisting of the objective lens Obj3, a lens L5, and the CMOS-IS2 image sensor (iDS UI-3590CP; 4912×3684 pixels; 1.25-μm pixel size; gamma correction = 1.0).
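The nearest-neighbor resampling mentioned above, from the 4.3-µm CMOS grid onto the 8-µm SLM grid, could be implemented as in the sketch below. The pixel counts and pitches are taken from the text; the assumption that the two panels share a common center after alignment is ours.

```python
import numpy as np

CMOS_PITCH_UM, SLM_PITCH_UM = 4.3, 8.0
SLM_SHAPE = (1080, 1920)                     # HOLOEYE PLUTO pixel count

def resample_to_slm(phase_cmos, cmos_pitch=CMOS_PITCH_UM,
                    slm_pitch=SLM_PITCH_UM, slm_shape=SLM_SHAPE):
    """Nearest-neighbor resampling of a CMOS-plane phase map onto the SLM grid,
    assuming both panels share a common center after the Kitty-SPPCM alignment."""
    ny_c, nx_c = phase_cmos.shape
    rows = np.arange(slm_shape[0]) - slm_shape[0] / 2
    cols = np.arange(slm_shape[1]) - slm_shape[1] / 2
    # physical position of each SLM pixel, expressed in CMOS pixel indices
    r_idx = np.clip(np.round(rows * slm_pitch / cmos_pitch + ny_c / 2).astype(int), 0, ny_c - 1)
    c_idx = np.clip(np.round(cols * slm_pitch / cmos_pitch + nx_c / 2).astype(int), 0, nx_c - 1)
    return phase_cmos[np.ix_(r_idx, c_idx)]

# map a full-resolution 3456 x 5184 phase image onto the 1080 x 1920 SLM
phase_cmos = np.random.default_rng(2).uniform(-np.pi, np.pi, (3456, 5184))
phase_slm = resample_to_slm(phase_cmos)
print(phase_slm.shape)                       # -> (1080, 1920)
```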
In order to calculate the PBR of the conjugate signal, we took high-dynamic-range (HDR) images of the conjugate focus, using an ND filter (THORLABS NDC-100S-4) to control the exposure power on the CMOS-IS2. Each HDR image was combined from five images with different exposure powers. The calculated PBRs for tissue thicknesses of 0.5 mm, 1 mm and 2 mm are 3×10⁵, 1.6×10⁵ and 0.6×10⁵, respectively. The first row of Figs. 4(a)-(c) shows the HDR images of the phase-conjugate point for the chicken breast tissues with thicknesses of 0.5 mm, 1 mm, and 2 mm, respectively. The second row and the third row show the light distribution along the red dashed line and the green dashed line, respectively. The measured full widths at half maximum (FWHM) of the light distributions along the two axes are all below 5 µm. For the 0.5-mm and 1-mm cases, a tail of the conjugate point is observed. This is because the reading beam remained on during the acquisition step to continuously produce the DOPC focusing beam, which induces some system noise. Fig. 4. HDR images of the phase-conjugate point for the chicken breast tissues with thicknesses of (a) 0.5 mm, (b) 1 mm and (c) 2 mm, respectively. The second row and the third row show the light distribution along the red dashed line and the green dashed line, respectively. The theoretical PBR of the DOPC system follows the framework of adaptive optics proposed by Vellekoop et al. and is expressed as [57,58] (5)$$PBR = \frac{\pi}{4}(N - 1) + 1,$$ where N is the mode number. Because the CMOS-IS1 is longer in the vertical direction, its vertical unit length in the Fourier domain is shorter. When the radius of the circular band-pass filter in the Fourier domain is set to 700 pixels, this leads to oval-shaped speckles, and the area of each speckle is around 131.46 µm², which is around the size of two SLM pixels. Since only two SLM pixels are used to sample one speckle grain, this contributes to a high PBR [9]. The mode number N is estimated as the area of the SLM divided by the area of a speckle, and is 1.01×10⁶. Applying Eq. (5) to the proposed system gives a theoretical PBR of 7.93×10⁵, so the experimental PBRs are 38%, 20%, and 7.5% of the theoretical limit for tissue thicknesses of 0.5 mm, 1 mm, and 2 mm, respectively. However, Eq. (5) cannot explain the PBR degradation with increasing specimen thickness. According to the experimental observations, the information detected by the DOPC system tends to be lost as the specimen gets thicker, not only because of specimen absorption but also because of the larger channel number. The channel number is roughly equal to (2L)²/(λ/2)², where L is the thickness of the sample and λ is the wavelength [54]. From the perspective of adaptive optics, a larger channel number of the specimen actually benefits wavefront optimization, because it increases the degrees of freedom available to control the wavefront passing through the specimen and to accumulate constructive interference. However, DOPC is intrinsically different: it is designed to rapidly duplicate a phase-conjugate copy of the detected wavefront. If the channel number of the specimen is larger than that of the DOPC system, the DOPC system cannot resolve the extra information. The information loss leads to wavefront detection errors, and duplicating an erroneous phase-conjugate wavefront leads to PBR degradation.
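A few lines of arithmetic reproduce the quoted fractions of the theoretical limit and put the channel-number argument in numbers; only quantities stated above (mode number, measured PBRs, wavelength, the channel-number formula) are used, so the comparison between specimen channels and system modes is a rough estimate rather than a measurement.

```python
import numpy as np

N = 1.01e6                                   # mode number estimated in the text
pbr_theory = np.pi / 4 * (N - 1) + 1         # Eq. (5), ~7.93e5

measured_pbr = {0.5e-3: 3.0e5, 1.0e-3: 1.6e5, 2.0e-3: 0.6e5}   # thickness (m) -> PBR
wavelength = 532e-9                                            # m

for L, pbr in measured_pbr.items():
    channels = (2 * L) ** 2 / (wavelength / 2) ** 2   # rough channel number of the specimen
    print(f"{L * 1e3:.1f} mm: PBR/PBR_theory = {pbr / pbr_theory:.1%}, "
          f"specimen channels ~ {channels:.1e} vs. system modes N = {N:.2e}")
# -> 37.8%, 20.2%, 7.6% of the theoretical limit; channel numbers of order 1e7-1e8,
#    one to two orders of magnitude above the mode number the DOPC system can resolve
```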
We can estimate the information loss by calculating the system fidelity (ϕ) [59,60]: (6)$$\phi = \alpha_{specimen}\,\alpha_{Opt}\,\alpha_{Spectrum},$$ (7)$$\alpha_{specimen} = \frac{\int |E_{s_2}(r_2)|^2\,dr_2}{\int |E_{s_1}(r_1)|^2\,dr_1},$$ (8)$$\alpha_{Opt} = \frac{\int A_p\,|E_{DOPC}(r_3)|^2\,dr_3}{\int |E_{DOPC}(r_3)|^2\,dr_3},$$ (9)$$\alpha_{Spectrum} = \frac{\int A_{p_f}\,|e_{int}(f)|^2\,df}{\int |e_{int}(f)|^2\,df}.$$ Here, αspecimen is the fidelity related to the specimen absorption, and αOpt and αSpectrum are the fidelities related to the optical entrance pupil of the DOPC system and to its spectrum-resolving capability, respectively. In addition, Es1 is the electric field of the signal beam before it passes through the specimen; Es2 is the electric field of the signal beam after it passes through the specimen; EDOPC is the electric field at the entrance pupil of the DOPC system; Ap is the optical entrance pupil of the DOPC system; eint is the Fourier transform of the electric field on the CMOS-IS1; and Apf is the spectral entrance pupil of the DOPC system. The product αspecimen·αOpt equals the ratio of the power collected by the DOPC system to the input signal power, and αSpectrum equals the energy integrated inside the band-pass filter divided by the total energy in the Fourier spectrum. Accordingly, the measured fidelities are 2.9×10⁻⁴, 1.7×10⁻⁴, and 1.2×10⁻⁴ for the 0.5-mm, 1-mm, and 2-mm tissues, respectively. Figure 5 compares the PBR with the fidelity and shows that both curves follow the same trend, with a constant PBR degradation. We believe this degradation is caused by system noise, e.g., the noise induced by the reading beam in the acquisition step. Fig. 5. PBR compared with fidelity; both curves follow the same trend with a constant PBR degradation. Table 1 lists the measured powers at different positions. The power of the signal before the chicken tissue was 13 µW, 35.5 µW, and 64 µW for the 0.5-mm, 1-mm and 2-mm chicken breast tissues, respectively. Since only 20% of the signal can be collected by the optical system, the power of the signal before the chicken tissue and collected by the system (PS1) was 2.6 µW, 7.1 µW, and 12.8 µW. After the light passed through the chicken tissue, the signal power (PS2) became 101 nW, 154 nW, and 200 nW. In the acquisition step, the power of the reading beam illuminating the SLM was 80 mW. The power of the phase-conjugate signal propagating to the front surface of the chicken breast tissue (PPC1) was 970 µW, 893 µW, and 861 µW. After passing through the chicken breast tissue, the power of the phase-conjugate signal (PPC2) was 438.8 μW, 312.6 μW and 39.5 μW. Thus, the system reflection ratios were 19208, 11597 and 8601. As a result, the continuous amplification ratios were 166.2, 44 and 3.1 for the 0.5-mm, 1-mm and 2-mm chicken breast tissues, respectively.

Table 1. Measured power in different positions

  Thickness of the chicken tissue                              0.5 mm     1 mm       2 mm
  Acquisition step
    Signal before the tissue, collected by the system (PS1)    2.64 µW    7.1 µW     12.84 µW
    Signal after passing through the tissue (PS2)              101 nW     154 nW     200.2 nW
  Reconstruction step
    Phase-conjugate signal before the tissue (PPC1)            970 µW     893 µW     861 µW
    Phase-conjugate signal after the tissue (PPC2)             438.8 µW   312.6 µW   39.5 µW
  System gain
    Reflection ratio (RR = PPC1/PS2)                           19208      11597      8601
    Amplification ratio (AR = PPC2/PS1)                        166.2      44         3.1
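The amplification ratios in the last row of Table 1 follow directly from the tabulated powers; a short check (values copied from Table 1):

```python
# Measured powers from Table 1 (W)
P_S1  = {"0.5 mm": 2.64e-6,  "1 mm": 7.1e-6,   "2 mm": 12.84e-6}  # collected signal before tissue
P_PC2 = {"0.5 mm": 438.8e-6, "1 mm": 312.6e-6, "2 mm": 39.5e-6}   # phase-conjugate signal after tissue

for thickness in P_S1:
    ar = P_PC2[thickness] / P_S1[thickness]        # amplification ratio AR = P_PC2 / P_S1
    print(f"{thickness}: AR = {ar:.1f}")           # -> 166.2, 44.0, 3.1
```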
4. Conclusion
A temporally-continuous amplified DOPC system has a promising future in the field of optical tweezers, and it has been successfully demonstrated in this paper. Because the reference beam must be separated from the reading beam, a new method for the alignment between the PSLM and the CMOS-IS was proposed. The six-axis alignment of the DOPC system and the elimination of the phase difference between the reference beam and the reading beam were realized using the Kitty-SPPCM. Through precise alignment of the DOPC system with the Kitty-SPPCM, a focus passing through a 2-mm-thick chicken tissue slice was obtained using the DOPC system with continuous optical gain at a wavelength of 532 nm. Subsequently, we modified the light path to enlarge the power difference between the reference beam and the reading beam. Continuous gains of 166, 44, and 3.1 and PBRs of 3×10⁵, 1.6×10⁵ and 0.6×10⁵ were observed for the 0.5-mm, 1-mm and 2-mm chicken tissue slices, respectively. The PBR degradation with chicken tissue thickness cannot be explained by the theoretical formula based on adaptive optics. Therefore, we explain the PBR degradation using the fidelity (ϕ) defined in Gu's work. The experimental results show that the PBR and fidelity curves follow the same trend, which demonstrates that the PBR degradation is caused by information loss.
Funding. Ministry of Science and Technology, Taiwan (MOST) (104-2221-E-008-073-MY3, 105-2218-E-035-009-MY3); Ministry of Education (MOE) (105G-903).
Acknowledgments. The authors acknowledge support by the Ministry of Science and Technology of ROC under grants MOST 104-2221-E-008-073-MY3 and MOST 105-2218-E-035-009-MY3, and by the National Central University's "Plan to Develop First-class Universities and Top-level Research Centers" under grant 105G-903.
References
1. M. Minsky, "Memoir on Inventing the Confocal Scanning Microscope," Scanning 10(4), 128–138 (1988). [CrossRef] 2. A. Ashkin, "Acceleration and Trapping of Particles by Radiation Pressure," Phys. Rev. Lett. 24(4), 156–159 (1970). [CrossRef] 3. M. C. Zhong, L. Gong, J. H. Zhou, Z. Q. Wang, and Y. M. Li, "Optical trapping of red blood cells in living animals with a water immersion objective," Opt. Lett. 38(23), 5134–5137 (2013). [CrossRef] 4. B. R. J. Narayanareddy, Y. Jun, S. K. Tripathy, and S. P. Gross, "Calibration of optical tweezers for in vivo force measurements: how do different approaches compare?" Biophys. J. 107(6), 1474–1484 (2014). [CrossRef] 5. X. Li, C. Liu, S. Chen, Y. Wang, S. H. Cheng, and D. Sun, "Automated in-vivo transportation of biological cells with a robot-tweezers manipulation system," IEEE Int. Conf. Nanotech. 73–75 (2015). 6. M. C. Zhong, X. B. Wei, J. H. Zhou, Z. Q. Wang, and Y. M. Li, "Trapping red blood cells in living animals using optical tweezers," Nat. Commun. 4(1), 1768 (2013). [CrossRef] 7. I. M. Vellekoop, A. Lagenkijk, and A. P. Mosk, "Exploiting disorder for perfect focusing," Nat. Photonics 4(5), 320–322 (2010). [CrossRef] 8. J. W. Czarske, D. Haufe, N. Koukourakis, and L. Büttner, "Transmission of independent signals through a multimode fiber using digital optical phase conjugation," Opt. Express 24(13), 15128–15136 (2016). [CrossRef] 9. Y. Shen, Y. Liu, C. Ma, and L. V. Wang, "Sub-Nyquist sampling boosts targeted light transport through opaque scattering media," Optica 4(1), 97–102 (2017). [CrossRef] 10. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, "Optical phase conjugation for turbidity suppression in biological specimens," Nat. Photonics 2(2), 110–115 (2008). [CrossRef] 11. M. Cui, E. J. McDowell, and C. H. Yang, "Observation of polarization-gate based reconstruction quality improvement during the process of turbidity suppression by optical phase conjugation," Appl. Phys. Lett. 95(12), 123702 (2009). [CrossRef] 12. Y. W. Yu, S. Y. Chen, C. C. Lin, and C. C. Sun, "Inverse focusing inside turbid media by creating an opposite virtual objective," Sci. Rep. 6(1), 29452 (2016). [CrossRef] 13. C. L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D.
Psaltis, "Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle," Opt. Express 18(20), 20723–20731 (2010). [CrossRef] 14. Y. Pu, M. Centurion, and D. Psaltis, "Harmonic holography: a new holographic principle," Appl. Opt. 47(4), A103–A110 (2008). [CrossRef] 15. C. L. Hsieh, R. Grange, Y. Pu, and D. Psaltis, "Three dimensional harmonic holographic microcopy using nanoparticles as probes for cell imaging," Opt. Express 17(4), 2880–2891 (2009). [CrossRef] 16. C. L. Hsieh, Y. Pu, and D. Psaltis, "Three-dimensional scanning microscopy through thin turbid media," Opt. Express 20(3), 2500–2506 (2012). [CrossRef] 17. I. M. Vellekoop, M. Cui, and C. Yang, "Digital optical phase conjugation of fluorescence in turbid tissue," Appl. Phys. Lett. 101(8), 081108 (2012). [CrossRef] 18. Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, "Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light," Nat. Commun. 3(1), 928 (2012). [CrossRef] 19. K. Si, R. Fiolka, and M. Cui, "Fluorescence imaging beyond the ballistic regime by ultrasound-pulse-guided digital phase conjugation," Nat. Photonics 6(10), 657–661 (2012). [CrossRef] 20. A. Jang, M. Sentenac, and C. Yang, "Optical phase conjugation (OPC)-assisted isotropic focusing," Opt. Express 21(7), 8781–8792 (2013). [CrossRef] 21. S. N. Khonina and I. Golub, "Engineering the smallest 3D symmetrical bright and dark focal spots," J. Opt. Soc. Am. A 30(10), 2029–2033 (2013). [CrossRef] 22. I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, "High-resolution, lensless endoscope based on digital scanning through a multimode optical fiber," Biomed. Opt. Express 4(2), 260–270 (2013). [CrossRef] 23. I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, "Focusing and scanning light through a multimode optical fiber using digital phase conjugation," Opt. Express 20(10), 10583–10590 (2012). [CrossRef] 24. M. Jang, et al., "Relation between speckle decorrelation and optical phase conjugation (OPC)-based turbidity suppression through dynamic scattering media: a study on in vivo mouse skin," Biomed. Opt. Express 6(1), 72–85 (2015). [CrossRef] 25. M. Cui, E. J. McDowell, and C. Yang, "An in vivo study of turbidity suppression by optical phase conjugation (tsopc) on rabbit ear," Opt. Express 18(1), 25–30 (2010). [CrossRef] 26. Y. Liu, P. Lai, C. Ma, X. Xu, A. A. Grabar, and L. V. Wang, "Optical focusing deep inside dynamic scattering media with near-infrared time-reversed ultrasonically encoded (TRUE) light," Nat. Commun. 6(1), 5904 (2015). [CrossRef] 27. M. Cui and C. H. Yang, "Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation," Opt. Express 18(4), 3444–3455 (2010). [CrossRef] 28. C. L. Hsieh, Y. Pu, R. Grange, and D. Psaltis, "Digital phase conjugation of second harmonic radiation emitted by nanoparticles in turbid media," Opt. Express 18(12), 12283–12290 (2010). [CrossRef] 29. X. Yang, C. L. Hsieh, Y. Pu, and D. Psaltis, "Three dimensional scanning microscopy through thin turbid media," Opt. Express 20(3), 2500–2506 (2012). [CrossRef] 30. X. Xu, H. Liu, and L. V. Wang, "Time-reversed ultrasonically encoded optical focusing into scattering media," Nat. Photonics 5(3), 154–157 (2011). [CrossRef] 31. P. Lai, X. Xu, H. Liu, and L. V. Wang, "Time-reversed ultrasonically encoded optical focusing in biological tissue," J. Biomed. Opt. 17(3), 030506 (2012). [CrossRef] 32. B. 
Judkewitz, Y. M. Wang, R. Horstmeyer, A. Mathy, and C. Yang, "Speckle-scale focusing in the diffusive regime with time reversal of variance-encoded light (TROVE)," Nat. Photonics 7(4), 300–305 (2013). [CrossRef] 33. G. Lerosey and M. Fink, "Acousto-optic imaging: Merging the best of two worlds," Nat. Photonics 7(4), 265–267 (2013). [CrossRef] 34. K. Si, R. Fiolka, and M. Cui, "Breaking the spatial resolution barrier via iterative sound–light interaction in deep tissue microscopy," Sci. Rep. 2(1), 748 (2012). [CrossRef] 35. R. Horstmeyer, H. Ruan, and C. Yang, "Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue," Nat. Photonics 9(9), 563–571 (2015). [CrossRef] 36. E. H. Zhou, H. Ruan, C. Yang, and B. Judkewitz, "Focusing on moving targets through scattering specimens," Optica 1(4), 227–232 (2014). [CrossRef] 37. H. Ruan, M. Jang, and C. Yang, "Optical focusing inside scattering media with time-reversed ultrasound microbubble encoded light," Nat. Commun. 6(1), 8968 (2015). [CrossRef] 38. H. Ruan, et al., "Focusing light inside scattering media with magnetic-particle-guided wavefront shaping," Optica 4(11), 1337–1343 (2017). [CrossRef] 39. M. Woerdemann, K. Berghoff, and C. Denz, "Dynamic multiple-beam counter-propagating optical traps using optical phase-conjugation," Opt. Express 18(21), 22348–22357 (2010). [CrossRef] 40. M. Jang, H. Ruan, H. Zhou, B. Judkewitz, and C. Yang, "Method for auto-alignment of digital optical phase conjugation systems based on digital propagation," Opt. Express 22(12), 14054–14071 (2014). [CrossRef] 41. C. Ma, J. Di, Y. Li, F. Xiao, J. Zhang, K. Liu, X. Bai, and J. Zhao, "Rotational scanning and multiple-spot focusing through a multimode fiber based on digital optical phase conjugation," Appl. Phys. Express 11(6), 062501 (2018). [CrossRef] 42. M. Azimipour, F. Atry, and R. Pashaie, "Calibration of digital optical phase conjugation setups based on orthonormal rectangular polynomials," Appl. Opt. 55(11), 2873–2880 (2016). [CrossRef] 43. A. S. Hemphill, Y. Shen, J. Hwang, and L. V. Wang, "High-speed alignment optimization of digital optical phase conjugation systems based on autocovariance analysis in conjunction with orthonormal rectangular polynomials," J. Biomed. Opt. 24(03), 1 (2019). [CrossRef] 44. T. R. Hillman, et al., "Digital optical phase conjugation for delivering two-dimensional images through turbid media," Sci. Rep. 3(1), 1909 (2013). [CrossRef] 45. Y. Liu, C. Ma, Y. Shen, J. Shi, and L. V. Wang, "Focusing light inside dynamic scattering media with millisecond digital optical phase conjugation," Optica 4(2), 280–288 (2017). [CrossRef] 46. Y. Shen, Y. Liu, C. Ma, and L. V. Wang, "Focusing light through scattering media by full polarization digital optical phase conjugation," Opt. Lett. 41(6), 1130–1133 (2016). [CrossRef] 47. Y. Suzuki, J. W. Tay, Q. Yang, and L. V. Wang, "Continuous scanning of a time-reversed ultrasonically encoded optical focus by reflection-mode digital phase conjugation," Opt. Lett. 39(12), 3441–3444 (2014). [CrossRef] 48. C. Ma, X. Xu, Y. Liu, and L. V. Wang, "Time-reversed adapted-perturbation (TRAP) optical focusing onto dynamic objects inside scattering media," Nat. Photonics 8(12), 931–936 (2014). [CrossRef] 49. C. Ma, F. Zhou, Y. Liu, and L. V. Wang, "Single-exposure optical focusing inside scattering media using binarized time-reversed adapted perturbation," Optica 2(10), 869–876 (2015). [CrossRef] 50. D. 
Wang, et al., "Focusing through dynamic tissue with millisecond digital optical phase conjugation," Optica 2(8), 728–735 (2015). [CrossRef] 51. O. Katz, E. Small, Y. Guan, and Y. Silberberg, "Noninvasive nonlinear focusing and imaging through strongly scattering turbid layers," Optica 1(3), 170–174 (2014). [CrossRef] 52. C. C. Sun, et al., "Shearing interferometer with a Kitty self-pumped phase-conjugate mirror," Appl. Opt. 35(11), 1815–1819 (1996). [CrossRef] 53. C. C. Lin, Y. W. Yu, C. Y. Cheng, and C. C. Sun, "Discovery of a self-pumped, phase-conjugate mirror with high speed, high image quality, and large accepted incidence area," Opt. Eng. 54(2), 023101 (2015). [CrossRef] 54. J. Feinberg, "Self-pumped continuous-wave phase-cnjugator using internal reflection," Opt. Lett. 7(10), 486–488 (1982). [CrossRef] 55. E. J. McDowell, et al., "Turbidity suppression from the ballistic to the diff usive regime in biological tissues using optical phase conjugation," J. Biomed. Opt. 15(2), 025004 (2010). [CrossRef] 56. W. F. Cheong, S. A. Prahl, and A. J. Welch, "A review of the optical properties of biological tissues," IEEE J. Quantum Electron. 26(12), 2166–2185 (1990). [CrossRef] 57. I. M. Vellekoop and A. P. Mosk, "Focusing coherent light through opaque strongly scattering media," Opt. Lett. 32(16), 2309–2311 (2007). [CrossRef] 58. I. M. Vellekoop, "Controlling the Propagation of Light in Disordered Scattering Media," PhD thesis. Univ. of Twente (2008). 59. C. Gu and P. Yei, "Partical phase conjugation, fidelity, and reciprocity," Opt. Commun. 107(5–6), 353–357 (1994). [CrossRef] 60. E. Jakeman and K. D. Ridley, "Incomplete phase conjugation through a random-phase screen. I. Theory," J. Opt. Soc. Am. A 13(11), 2279–2287 (1996). [CrossRef] Article Order M. Minsky, "Memoir on Inventing the Confocal Scanning Microscope," Scanning 10(4), 128–138 (1988). [Crossref] A. Ashkin, "Acceleration and Trapping of Particles by Radiation Pressure," Phys. Rev. Lett. 24(4), 156–159 (1970). M. C. Zhong, L. Gong, J. H. Zhou, Z. Q. Wang, and Y. M. Li, "Optical trapping of red blood cells in living animals with a water immersion objective," Opt. Lett. 38(23), 5134–5137 (2013). B. R. J. Narayanareddy, Y. Jun, S. K. Tripathy, and S. P. Gross, "Calibration of optical tweezers for in vivo force measurements: how do different approaches compare?" Biophys. J. 107(6), 1474–1484 (2014). X. Li, C. Liu, S. Chen, Y. Wang, S. H. Cheng, and D. Sun, "Automated in-vivo transportation of biological cells with a robot-tweezers manipulation system," IEEE Int. Conf. Nanotech.73–75 (2015). M. C. Zhong, X. B. Wei, J. H. Zhou, Z. Q. Wang, and Y. M. Li, "Trapping red blood cells in living animals using optical tweezers," Nat. Commun. 4(1), 1768 (2013). I. M. Vellekoop, A. Lagenkijk, and A. P. Mosk, "Exploiting disorder for perfect focusing," Nat. Photonics 4(5), 320–322 (2010). J. W. Czarske, D. Haufe, N. Koukourakis, and L. Büttner, "Transmission of independent signals through a multimode fiber using digital optical phase conjugation," Opt. Express 24(13), 15128–15136 (2016). Y. Shen, Y. Liu, C. Ma, and L. V. Wang, "Sub-Nyquist sampling boosts targeted light transport through opaque scattering media," Optica 4(1), 97–102 (2017). Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, "Optical phase conjugation for turbidity suppression in biological specimens," Nat. Photonics 2(2), 110–115 (2008). M. Cui, E. J. McDowell, and C. H. 
Yang, "Observation of polarization-gate based reconstruction quality improvement during the process of turbidity suppression by optical phase conjugation," Appl. Phys. Lett. 95(12), 123702 (2009). Y. W. Yu, S. Y. Chen, C. C. Lin, and C. C. Sun, "Inverse focusing inside turbid media by creating an opposite virtual objective," Sci. Rep. 6(1), 29452 (2016). C. L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, "Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle," Opt. Express 18(20), 20723–20731 (2010). Y. Pu, M. Centurion, and D. Psaltis, "Harmonic holography: a new holographic principle," Appl. Opt. 47(4), A103–A110 (2008). C. L. Hsieh, R. Grange, Y. Pu, and D. Psaltis, "Three dimensional harmonic holographic microcopy using nanoparticles as probes for cell imaging," Opt. Express 17(4), 2880–2891 (2009). C. L. Hsieh, Y. Pu, and D. Psaltis, "Three-dimensional scanning microscopy through thin turbid media," Opt. Express 20(3), 2500–2506 (2012). I. M. Vellekoop, M. Cui, and C. Yang, "Digital optical phase conjugation of fluorescence in turbid tissue," Appl. Phys. Lett. 101(8), 081108 (2012). Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, "Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light," Nat. Commun. 3(1), 928 (2012). K. Si, R. Fiolka, and M. Cui, "Fluorescence imaging beyond the ballistic regime by ultrasound-pulse-guided digital phase conjugation," Nat. Photonics 6(10), 657–661 (2012). A. Jang, M. Sentenac, and C. Yang, "Optical phase conjugation (OPC)-assisted isotropic focusing," Opt. Express 21(7), 8781–8792 (2013). S. N. Khonina and I. Golub, "Engineering the smallest 3D symmetrical bright and dark focal spots," J. Opt. Soc. Am. A 30(10), 2029–2033 (2013). I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, "High-resolution, lensless endoscope based on digital scanning through a multimode optical fiber," Biomed. Opt. Express 4(2), 260–270 (2013). I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, "Focusing and scanning light through a multimode optical fiber using digital phase conjugation," Opt. Express 20(10), 10583–10590 (2012). M. Jang and et al., "Relation between speckle decorrelation and optical phase conjugation (OPC)-based turbidity suppression through dynamic scattering media: a study on in vivo mouse skin," Biomed. Opt. Express 6(1), 72–85 (2015). M. Cui, E. J. McDowell, and C. Yang, "An in vivo study of turbidity suppression by optical phase conjugation (tsopc) on rabbit ear," Opt. Express 18(1), 25–30 (2010). Y. Liu, P. Lai, C. Ma, X. Xu, A. A. Grabar, and L. V. Wang, "Optical focusing deep inside dynamic scattering media with near-infrared time-reversed ultrasonically encoded (TRUE) light," Nat. Commun. 6(1), 5904 (2015). M. Cui and C. H. Yang, "Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation," Opt. Express 18(4), 3444–3455 (2010). C. L. Hsieh, Y. Pu, R. Grange, and D. Psaltis, "Digital phase conjugation of second harmonic radiation emitted by nanoparticles in turbid media," Opt. Express 18(12), 12283–12290 (2010). X. Yang, C. L. Hsieh, Y. Pu, and D. Psaltis, "Three dimensional scanning microscopy through thin turbid media," Opt. Express 20(3), 2500–2506 (2012). X. Xu, H. Liu, and L. V. Wang, "Time-reversed ultrasonically encoded optical focusing into scattering media," Nat. Photonics 5(3), 154–157 (2011). P. Lai, X. Xu, H. Liu, and L. V. 
Wang, "Time-reversed ultrasonically encoded optical focusing in biological tissue," J. Biomed. Opt. 17(3), 030506 (2012). B. Judkewitz, Y. M. Wang, R. Horstmeyer, A. Mathy, and C. Yang, "Speckle-scale focusing in the diffusive regime with time reversal of variance-encoded light (TROVE)," Nat. Photonics 7(4), 300–305 (2013). G. Lerosey and M. Fink, "Acousto-optic imaging: Merging the best of two worlds," Nat. Photonics 7(4), 265–267 (2013). K. Si, R. Fiolka, and M. Cui, "Breaking the spatial resolution barrier via iterative sound–light interaction in deep tissue microscopy," Sci. Rep. 2(1), 748 (2012). R. Horstmeyer, H. Ruan, and C. Yang, "Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue," Nat. Photonics 9(9), 563–571 (2015). E. H. Zhou, H. Ruan, C. Yang, and B. Judkewitz, "Focusing on moving targets through scattering specimens," Optica 1(4), 227–232 (2014). H. Ruan, M. Jang, and C. Yang, "Optical focusing inside scattering media with time-reversed ultrasound microbubble encoded light," Nat. Commun. 6(1), 8968 (2015). H. Ruan and et al., "Focusing light inside scattering media with magnetic-particle-guided wavefront shaping," Optica 4(11), 1337–1343 (2017). M. Woerdemann, K. Berghoff, and C. Denz, "Dynamic multiple-beam counter-propagating optical traps using optical phase-conjugation," Opt. Express 18(21), 22348–22357 (2010). M. Jang, H. Ruan, H. Zhou, B. Judkewitz, and C. Yang, "Method for auto-alignment of digital optical phase conjugation systems based on digital propagation," Opt. Express 22(12), 14054–14071 (2014). C. Ma, J. Di, Y. Li, F. Xiao, J. Zhang, K. Liu, X. Bai, and J. Zhao, "Rotational scanning and multiple-spot focusing through a multimode fiber based on digital optical phase conjugation," Appl. Phys. Express 11(6), 062501 (2018). M. Azimipour, F. Atry, and R. Pashaie, "Calibration of digital optical phase conjugation setups based on orthonormal rectangular polynomials," Appl. Opt. 55(11), 2873–2880 (2016). A. S. Hemphill, Y. Shen, J. Hwang, and L. V. Wang, "High-speed alignment optimization of digital optical phase conjugation systems based on autocovariance analysis in conjunction with orthonormal rectangular polynomials," J. Biomed. Opt. 24(03), 1 (2019). T. R. Hillman and et al., "Digital optical phase conjugation for delivering two-dimensional images through turbid media," Sci. Rep. 3(1), 1909 (2013). Y. Liu, C. Ma, Y. Shen, J. Shi, and L. V. Wang, "Focusing light inside dynamic scattering media with millisecond digital optical phase conjugation," Optica 4(2), 280–288 (2017). Y. Shen, Y. Liu, C. Ma, and L. V. Wang, "Focusing light through scattering media by full polarization digital optical phase conjugation," Opt. Lett. 41(6), 1130–1133 (2016). Y. Suzuki, J. W. Tay, Q. Yang, and L. V. Wang, "Continuous scanning of a time-reversed ultrasonically encoded optical focus by reflection-mode digital phase conjugation," Opt. Lett. 39(12), 3441–3444 (2014). C. Ma, X. Xu, Y. Liu, and L. V. Wang, "Time-reversed adapted-perturbation (TRAP) optical focusing onto dynamic objects inside scattering media," Nat. Photonics 8(12), 931–936 (2014). C. Ma, F. Zhou, Y. Liu, and L. V. Wang, "Single-exposure optical focusing inside scattering media using binarized time-reversed adapted perturbation," Optica 2(10), 869–876 (2015). D. Wang and et al., "Focusing through dynamic tissue with millisecond digital optical phase conjugation," Optica 2(8), 728–735 (2015). O. Katz, E. Small, Y. Guan, and Y. 
Silberberg, "Noninvasive nonlinear focusing and imaging through strongly scattering turbid layers," Optica 1(3), 170–174 (2014). C. C. Sun and et al., "Shearing interferometer with a Kitty self-pumped phase-conjugate mirror," Appl. Opt. 35(11), 1815–1819 (1996). C. C. Lin, Y. W. Yu, C. Y. Cheng, and C. C. Sun, "Discovery of a self-pumped, phase-conjugate mirror with high speed, high image quality, and large accepted incidence area," Opt. Eng. 54(2), 023101 (2015). J. Feinberg, "Self-pumped continuous-wave phase-cnjugator using internal reflection," Opt. Lett. 7(10), 486–488 (1982). E. J. McDowell and et al., "Turbidity suppression from the ballistic to the diff usive regime in biological tissues using optical phase conjugation," J. Biomed. Opt. 15(2), 025004 (2010). W. F. Cheong, S. A. Prahl, and A. J. Welch, "A review of the optical properties of biological tissues," IEEE J. Quantum Electron. 26(12), 2166–2185 (1990). I. M. Vellekoop and A. P. Mosk, "Focusing coherent light through opaque strongly scattering media," Opt. Lett. 32(16), 2309–2311 (2007). I. M. Vellekoop, "Controlling the Propagation of Light in Disordered Scattering Media," PhD thesis. Univ. of Twente (2008). C. Gu and P. Yei, "Partical phase conjugation, fidelity, and reciprocity," Opt. Commun. 107(5–6), 353–357 (1994). E. Jakeman and K. D. Ridley, "Incomplete phase conjugation through a random-phase screen. I. Theory," J. Opt. Soc. Am. A 13(11), 2279–2287 (1996). Ashkin, A. Atry, F. Azimipour, M. Bai, X. Berghoff, K. Büttner, L. Centurion, M. Chen, S. Y. Cheng, C. Y. Cheng, S. H. Cheong, W. F. Cui, M. Czarske, J. W. Denz, C. Di, J. DiMarzio, C. A. Farahi, S. Feinberg, J. Feld, M. S. Fink, M. Fiolka, R. Golub, I. Gong, L. Grabar, A. A. Grange, R. Gross, S. P. Gu, C. Guan, Y. Haufe, D. Hemphill, A. S. Hillman, T. R. Horstmeyer, R. Hsieh, C. L. Hwang, J. Jakeman, E. Jang, A. Jang, M. Judkewitz, B. Jun, Y. Katz, O. Khonina, S. N. Koukourakis, N. Lagenkijk, A. Lai, P. Laporte, G. Lerosey, G. Li, X. Li, Y. Li, Y. M. Lin, C. C. Liu, C. Liu, H. Liu, K. Liu, Y. Ma, C. Mathy, A. McDowell, E. J. Minsky, M. Moser, C. Mosk, A. P. Narayanareddy, B. R. J. Papadopoulos, I. N. Pashaie, R. Prahl, S. A. Psaltis, D. Pu, Y. Ridley, K. D. Ruan, H. Sentenac, M. Shen, Y. Shi, J. Si, K. Silberberg, Y. Small, E. Sun, C. C. Sun, D. Suzuki, Y. Tay, J. W. Tripathy, S. K. Vellekoop, I. M. Wang, D. Wang, L. V. Wang, Y. Wang, Y. M. Wang, Z. Q. Wei, X. B. Welch, A. J. Woerdemann, M. Xiao, F. Xu, X. Yang, C. Yang, C. H. Yang, Q. Yang, X. Yaqoob, Z. Yei, P. Yu, Y. W. Zhang, J. Zhao, J. Zhong, M. C. Zhou, E. H. Zhou, F. Zhou, H. Zhou, J. H. Appl. Opt. (3) Appl. Phys. Express (1) Appl. Phys. Lett. (2) Biomed. Opt. Express (2) Biophys. J. (1) IEEE J. Quantum Electron. (1) J. Biomed. Opt. (3) J. Opt. Soc. Am. A (2) Nat. Commun. (4) Nat. Photonics (8) Opt. Commun. (1) Opt. Eng. (1) Opt. Express (12) Opt. Lett. (5) Phys. Rev. Lett. (1) Sci. Rep. (3) Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here. Alert me when this article is cited. Click here to see a list of articles that cite this paper View in Article | Download Full Size | PPT Slide | PDF Equations on this page are rendered with MathJax. Learn more. 
Degrees of freedom of $\chi^2$ in Hosmer-Lemeshow test

The test statistic for the Hosmer-Lemeshow test (HLT) for goodness of fit (GOF) of a logistic regression model is defined as follows. The sample is split into $d=10$ deciles, $D_1, D_2, \dots , D_{d}$, and per decile one computes the following quantities:

$O_{1d}=\displaystyle \sum_{i \in D_d} y_i$, i.e. the observed number of positive cases in decile $D_d$;
$O_{0d}=\displaystyle \sum_{i \in D_d} (1-y_i)$, i.e. the observed number of negative cases in decile $D_d$;
$E_{1d}=\displaystyle \sum_{i \in D_d} \hat{\pi}_i$, i.e. the estimated number of positive cases in decile $D_d$;
$E_{0d}= \displaystyle \sum_{i \in D_d} (1-\hat{\pi}_i)$, i.e. the estimated number of negative cases in decile $D_d$;

where $y_i$ is the observed binary outcome for the $i$-th observation and $\hat{\pi}_i$ the estimated probability for that observation. The test statistic is then defined as:

$X^2 = \displaystyle \sum_{h=0}^{1} \sum_{g=1}^d \left( \frac{(O_{hg}-E_{hg})^2}{E_{hg}} \right)= \sum_{g=1}^d \left( \frac{ O_{1g} - n_g \hat{\pi}_g}{\sqrt{n_g (1-\hat{\pi}_g) \hat{\pi}_g}} \right)^2,$

where $\hat{\pi}_g$ is the average estimated probability in decile $g$ and $n_g$ is the number of observations in the decile.

According to Hosmer-Lemeshow (see this link) this statistic has (under certain assumptions) a $\chi^2$ distribution with $(d-2)$ degrees of freedom. On the other hand, if I were to define a contingency table with $d$ rows (corresponding to the deciles) and 2 columns (corresponding to the true/false binary outcome), then the test statistic for the $\chi^2$ test for this contingency table would be the same as the $X^2$ defined above; however, in the case of the contingency table, this test statistic is $\chi^2$ with $(d-1)(2-1)=d-1$ degrees of freedom. So one degree of freedom more! How can one explain this difference in the number of degrees of freedom?

EDIT: additions after reading comments:

@whuber They say (see Hosmer D.W., Lemeshow S. (1980), A goodness-of-fit test for the multiple logistic regression model. Communications in Statistics, A10, 1043-1069) that there is a theorem demonstrated by Moore and Spruill from which it follows that if (1) the parameters are estimated using likelihood functions for ungrouped data and (2) the frequencies in the 2×g table depend on the estimated parameters, namely the cells are random, not fixed, then, under appropriate regularity conditions, the goodness-of-fit statistic under (1) and (2) is that of a central chi-square with the usual reduction of degrees of freedom due to estimated parameters, plus a sum of weighted chi-square variables. Then, if I understand their paper well, they try to find an approximation for this 'correction term', which, if I understand it well, is this weighted sum of chi-square random variables, and they do this by making simulations; but I must admit that I do not fully understand what they say there, hence my question: why are these cells random, and how does that influence the degrees of freedom? Would it be different if I fixed the borders of the cells and then classified the observations in the fixed cells based on the estimated score? In that case the cells are not random, though the 'content' of each cell is.
@Frank Harrell: couldn't it be that the 'shortcomings' of the Hosmer-Lemeshow test that you mention in your comments below are just a consequence of the approximation by the weighted sum of chi-squares?

Tags: goodness-of-fit, degrees-of-freedom, hosmer-lemeshow-test (asked by user83346)

The book contains a detailed description of this test and the basis for it. Your question is fully answered on pp 145-149. Determining degrees of freedom in $\chi^2$ tests is a subtle thing, because most of these tests are approximations (in the first place) and those approximations are good only when seemingly minor technical conditions apply. For some discussion of all this, see stats.stackexchange.com/a/17148. H&L took a purely practical route: they base their recommendation of $d-2$ DF on "an extensive set of simulations." – whuber ♦

This test is now considered obsolete due to (1) lack of power, (2) binning of continuous probabilities, and (3) arbitrariness in choice of binning and choice of definition of deciles. The Hosmer-le Cessie 1 d.f. test or the Spiegelhalter test are recommended. See for example the R rms package residuals.lrm and val.prob functions. – Frank Harrell

@Frank Harrell: (a) even if the Hosmer-Lemeshow test is obsolete, I think it is still interesting to understand the difference with $\chi^2$, and (b) do you have a reference that shows that the Spiegelhalter test has more power than the Hosmer-Lemeshow test?

These issues are IMHO very small in comparison with the original question.

I think details appear elsewhere on this site. Briefly, (1) Hosmer showed the test is arbitrary: it is very sensitive to exactly how deciles are computed; (2) it lacks power. You can see that it is based on imprecise quantities by plotting the binned calibration curve (as opposed to a smooth calibration curve) and noting the jumps. Also, it does not properly penalize for extreme overfitting.

Hosmer D.W., Lemeshow S. (1980), A goodness-of-fit test for the multiple logistic regression model. Communications in Statistics, A10, 1043-1069 show that:

If the model is a logistic regression model, the $p$ parameters are estimated by maximum likelihood, and the $G$ groups are defined on the estimated probabilities, then $X^2$ is asymptotically $\chi^2(G-p-1)+\sum_{i=1}^{p+1} \lambda_i \chi_i^2(1)$ (Hosmer, Lemeshow, 1980, p.1052, Theorem 2). (Note: the necessary conditions are not explicitly stated in Theorem 2 on page 1052, but if one reads the paper and the proof attentively, they pop up.)

The second term $\sum_{i=1}^{p+1} \lambda_i \chi_i^2(1)$ results from the fact that the grouping is based on estimated, i.e. random, quantities (Hosmer, Lemeshow, 1980, p.1051).

Using simulations, they showed that the second term can (in the cases used in the simulation) be approximated by a $\chi^2(p-1)$ (Hosmer, Lemeshow, 1980, p.1060).

Combining these two facts results in a sum of two $\chi^2$ variables, one with $G-p-1$ degrees of freedom and a second one with $p-1$ degrees of freedom, or $X^2 \sim \chi^2(G-p-1+p-1) = \chi^2(G-2)$.

So the answer to the question lies in the occurrence of the 'weighted chi-square term', or, equivalently, in the fact that the groups are defined using estimated probabilities that are themselves random variables.
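For intuition, here is a minimal simulation sketch in Python. It assumes a correctly specified single-covariate logistic model, fits it by Newton-Raphson, groups the observations into $G=10$ deciles of the fitted probabilities, and compares the empirical distribution of $X^2$ with $\chi^2(G-1)$ and $\chi^2(G-2)$; it is only an illustration of the result above, not a reproduction of the Hosmer-Lemeshow simulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fit_logistic(x, y, n_iter=25):
    """Maximum-likelihood fit of P(y=1) = 1/(1+exp(-(b0+b1*x))) by Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p)                        # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])   # information matrix
        beta += np.linalg.solve(hess, grad)
    return 1.0 / (1.0 + np.exp(-(X @ beta)))        # fitted probabilities

def hl_statistic(y, p_hat, G=10):
    """Hosmer-Lemeshow X^2 over G groups defined by deciles of the fitted risk."""
    x2 = 0.0
    for idx in np.array_split(np.argsort(p_hat), G):
        o1, e1 = y[idx].sum(), p_hat[idx].sum()
        o0, e0 = len(idx) - o1, len(idx) - e1
        x2 += (o1 - e1) ** 2 / e1 + (o0 - e0) ** 2 / e0
    return x2

G, n, n_sim = 10, 1000, 2000
sims = np.empty(n_sim)
for s in range(n_sim):
    x = rng.normal(size=n)
    p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))   # data generated from the fitted model class
    y = rng.binomial(1, p_true)
    sims[s] = hl_statistic(y, fit_logistic(x, y), G)

print("mean of simulated X^2:", sims.mean())           # tends to be near G-2 = 8, not G-1 = 9
for df in (G - 1, G - 2):
    print(f"KS distance to chi2({df}):",
          round(stats.kstest(sims, stats.chi2(df).cdf).statistic, 3))
```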
See also: Hosmer Lemeshow (1980) Paper - Theorem 2

edited Sep 8, 2017 at 6:43; answered Sep 8, 2017 at 6:35

'So the answer to the question lies in the occurrence of the 'weighted chi-square term' and in the fact that the groups are defined using estimated probabilities that are themselves random variables.' A) The estimated probabilities mean that you get an extra reduction of p+1, which makes the main difference from the case of the contingency table (in which only g terms are estimated). B) The weighted chi-square term occurs as a correction because the estimate is not a likelihood estimate (or not an equally efficient one), and this makes the effect of the reduction less than the full (p+1). – Sextus Empiricus Sep 8, 2017 at 8:01

@Martijn Weterings: Am I right to conclude that what you say in this comment is not exactly the same explanation (not to say completely different) as what you say in your answer? Does your comment lead to the conclusion that the df are $G-2$?

My answer explains the intuition behind the difference in degrees of freedom compared to the reasoning based on "the test statistic for the $\chi^2$ test for this contingency table"; it explains why they are different (the case of estimating fixed cells). It focuses on the 'usual reduction', from which you would conclude that the df would be G-3. However, certain conditions for the 'usual reduction' are not met. For this reason (random cells) you get the more complicated terms with the weighted chi-square term as a correction, and you effectively end up with G-2. It is far from completely different.

@Martijn Weterings, sorry but I can't upvote because I don't see any notion like 'random cells' in your answer at all. Do you mean that all your nice pictures (and I mean this, they are very nice) explain something about 'random cells', or did you come up with that notion after reading my answer?

Don't be sorry. I agree that my answer is not an exact answer to show exactly the degrees of freedom in the HL test. I am sorry for that. What you have is the Chernoff-Lehmann statistic (also with random cells), which follows a $\sum_{i=1}^{k-s-1} \chi^2(1) + \sum_{i=k-s}^{k-1} \lambda_i \chi_i^2(1) $ distribution. It is currently unclear to me what part is troubling you; I hope you can be more constructive in this. If you want everything explained, you already have the articles for that. My answer just tackled the $\sum_{i=1}^{k-s-1} \chi^2(1)$ term, explaining the main difference from the contingency-table test.

The theorem that you refer to (the usual reduction part: "usual reduction of degrees of freedom due to estimated parameters") has mostly been advocated by R.A. Fisher. In 'On the interpretation of Chi Square from Contingency Tables, and the Calculation of P' (1922) he argued to use the $(R-1)(C-1)$ rule, and in 'The goodness of fit of regression formulae' (1922) he argues to reduce the degrees of freedom by the number of parameters used in the regression to obtain expected values from the data. (It is interesting to note that people misused the chi-square test, with the wrong degrees of freedom, for more than twenty years after its introduction in 1900.)

Your case is of the second kind (regression) and not of the former kind (contingency table), although the two are related in that they are linear restrictions on the parameters.
Because you model the expected values based on your observed values, and you do this with a model that has two parameters, the 'usual' reduction in degrees of freedom is two plus one (an extra one because the $O_i$ need to sum up to a total, which is another linear restriction), and you end up effectively with a reduction of two, instead of three, because of the 'inefficiency' of the modeled expected values.

The chi-square test uses a $\chi^2$ as a distance measure to express how close a result is to the expected data. In the many versions of the chi-square tests the distribution of this 'distance' is related to the sum of deviations in normally distributed variables (which is true in the limit only and is an approximation if you deal with non-normally distributed data).

For the multivariate normal distribution the density function is related to the $\chi^2$ by

$f(x_1,...,x_k) = \frac{e^{- \frac{1}{2}\chi^2} }{\sqrt{(2\pi)^k \vert \mathbf{\Sigma}\vert}}$

with $\vert \mathbf{\Sigma}\vert$ the determinant of the covariance matrix of $\mathbf{x}$, and

$\chi^2 = (\mathbf{x}-\mathbf{\mu})^T \mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})$

is the Mahalanobis distance, which reduces to the Euclidean distance if $\mathbf{\Sigma}=\mathbf{I}$.

In his 1900 article Pearson argued that the $\chi^2$-levels are spheroids and that he can transform to spherical coordinates in order to integrate a value such as $P(\chi^2 > a)$, which becomes a single integral.

It is this geometrical representation, $\chi^2$ as a distance and also as a term in the density function, that can help to understand the reduction of degrees of freedom when linear restrictions are present.

First, the case of a 2x2 contingency table. You should notice that the four values $\frac{O_i-E_i}{E_i}$ are not four independent normally distributed variables. They are instead related to each other and boil down to a single variable.

Let's use the table

$O_{ij} = \begin{array}{cc} o_{11} & o_{12} \\ o_{21} & o_{22} \end{array}$

then, if the expected values

$E_{ij} = \begin{array}{cc} e_{11} & e_{12} \\ e_{21} & e_{22} \end{array}$

were fixed, then $\sum \frac{(o_{ij}-e_{ij})^2}{e_{ij}}$ would be distributed as a chi-square distribution with four degrees of freedom, but often we estimate the $e_{ij}$ based on the $o_{ij}$, and the variation is not like that of four independent variables. Instead we get that all the differences between $o$ and $e$ are the same:

$$(o_{11}-e_{11}) = (o_{22}-e_{22}) = -(o_{21}-e_{21}) = -(o_{12}-e_{12}) = o_{11} - \frac{(o_{11}+o_{12})(o_{11}+o_{21})}{o_{11}+o_{12}+o_{21}+o_{22}},$$

and they are effectively a single variable rather than four. Geometrically you can see this as the $\chi^2$ value being integrated not on a four-dimensional sphere but on a single line.

Note that this contingency-table test is not the case for the contingency table in the Hosmer-Lemeshow test (it uses a different null hypothesis!). See also section 2.1, 'the case when $\beta_0$ and $\underline\beta$ are known', in the article of Hosmer and Lemeshow. In their case you get 2g-1 degrees of freedom, and not g-1 degrees of freedom as in the (R-1)(C-1) rule. This (R-1)(C-1) rule is specifically the case for the null hypothesis that the row and column variables are independent (which creates R+C-1 constraints on the $o_i-e_i$ values).
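A quick numerical check of this point, with arbitrary hypothetical counts and expected values estimated from the margins:

```python
import numpy as np

o = np.array([[25., 15.],
              [20., 40.]])                 # arbitrary observed 2x2 table (hypothetical counts)

row = o.sum(axis=1, keepdims=True)
col = o.sum(axis=0, keepdims=True)
e = row * col / o.sum()                    # expected counts estimated under independence

print(o - e)
# [[ 7. -7.]
#  [-7.  7.]]  -> the four deviations are a single number up to sign, so the
#                 chi-square statistic carries (2-1)(2-1) = 1 degree of freedom
```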
The Hosmer-Lemeshow test relates to the hypothesis that the cells are filled according to the probabilities of a logistic regression model, based on four parameters in the case of distributional assumption A and $p+1$ parameters in the case of distributional assumption B.

Second, the case of a regression. A regression does something similar with the difference $o-e$ as the contingency table does, and reduces the dimensionality of the variation. There is a nice geometrical representation for this, as the value $y_i$ can be represented as the sum of a model term $\beta x_i$ and a residual (not error) term $\epsilon_i$. The model term and the residual term each represent a space, and these spaces are perpendicular to each other. That means the residual terms $\epsilon_i$ cannot take any possible value! Namely, they are reduced by the part which projects on the model, more particularly by one dimension for each parameter in the model.

Maybe the following images can help a bit.

Below are 400 draws of three (uncorrelated) variables from the binomial distributions $B(n=60, p=\{1/6,2/6,3/6\})$. They relate to normally distributed variables $N(\mu=np, \sigma^2=np(1-p))$. In the same image we draw the iso-surfaces for $\chi^2=\{1,2,6\}$. Integrating over this space using spherical coordinates, such that we only need a single integration over $\chi$ (because changing the angle does not change the density), results in $\int_0^a e^{-\frac{1}{2} \chi^2 }\chi^{d-1} d\chi$, in which the $\chi^{d-1}$ part represents the area of the $d$-dimensional sphere. If we were to limit the variables $\chi$ in some way, then the integration would not be over a $d$-dimensional sphere but over something of lower dimension.

The image below can be used to get an idea of the dimensional reduction in the residual terms. It explains the least-squares fitting method in geometric terms.

In blue you have measurements. In red you have what the model allows. The measurement is often not exactly equal to the model and has some deviation. You can regard this, geometrically, as the distance from the measured point to the red surface.

The red arrows $\mu_1$ and $\mu_2$ have values $(1,1,1)$ and $(0,1,2)$ and could be related to some linear model as x = a + b * z + error, or

$\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix} = a \begin{bmatrix}1\\1\\1\end{bmatrix} + b \begin{bmatrix}0\\1\\2\end{bmatrix} + \begin{bmatrix}\epsilon_1\\\epsilon_2\\\epsilon_3\end{bmatrix} $

so the span of those two vectors $(1,1,1)$ and $(0,1,2)$ (the red plane) is the set of values for $x$ that are possible in the regression model, and $\epsilon$ is a vector that is the difference between the observed value and the regression/modeled value. In the least-squares method this vector is perpendicular (least distance is least sum of squares) to the red surface (and the modeled value is the projection of the observed value onto the red surface).

So this difference between observed and (modeled) expected is a sum of vectors that are perpendicular to the model vectors (and this space has the dimension of the total space minus the number of model vectors).

In our simple example case, the total dimension is 3, the model has 2 dimensions, and the error has dimension 1 (so no matter which of those blue points you take, the green arrows show a single example; the error terms always have the same ratio and follow a single vector).

I hope this explanation helps. It is in no way a rigorous proof, and there are some special algebraic tricks that need to be solved in these geometric representations.
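The projection picture can be reproduced numerically with the two model vectors $(1,1,1)$ and $(0,1,2)$ from the example; the residual of an arbitrary observation is orthogonal to both model vectors and is confined to a single left-over dimension:

```python
import numpy as np

X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])                 # columns (1,1,1) and (0,1,2) span the red plane
y = np.array([1.5, 0.4, 3.1])            # an arbitrary observation (a blue point)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ beta                  # the green arrow: observation minus its projection

print(np.round(X.T @ residual, 12))      # ~[0, 0]: residual is perpendicular to the model plane
print(3 - np.linalg.matrix_rank(X))      # 1: the residual space has a single dimension left
```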
But anyway, I like these two geometrical representations: the one for Pearson's trick of integrating the $\chi^2$ by using spherical coordinates, and the other for viewing the least-squares method as a projection onto a plane (or larger span). I am always amazed that we end up with $\frac{(o-e)^2}{e}$; this is, in my point of view, not trivial, since the normal approximation of a binomial involves division not by $e$ but by $np(1-p)$. In the case of contingency tables you can work it out easily, but in the case of the regression or other linear restrictions it does not work out so easily, while the literature is often very quick to argue that 'it works out the same for other linear restrictions'. (An interesting example of the problem: if you perform the following test multiple times, 'throw a coin ten times, twice, and only register the cases in which the two counts sum to 10', then you do not get the typical chi-square distribution for this "simple" linear restriction.)

Sextus Empiricus

In my honest opinion this answer has very nice figures and arguments that are related to the $\chi^2$ test, but it has not so much to do with the question, which is about the Hosmer-Lemeshow test for a logistic regression. You are arguing something with a regression where 1 parameter is estimated, but the Hosmer-Lemeshow test is about a logistic regression where $p>1$ parameters are estimated. See also stats.stackexchange.com/questions/296312/…

... and, as you say, you end up with an $e$ in the denominator and not with a $np(1-p)$, so this does not answer this question. Hence I have to downvote, sorry (but the graphs are very nice :-) ).

You were asking in a comment for "to understand the formula or at least the 'intuitive' explanation". So that is what you get with these geometrical interpretations. To calculate exactly how these $np(1-p)$ cancel out if you add both the positive and negative cases is far from intuitive and does not help you understand the dimensions.

In my answer I used the typical $(d - 1 - p)$ degrees of freedom and assumed that the regression was performed with one parameter ($p=1$), which was a mistake. The parameters in your references are two, a $\beta_0$ and $\beta$. These two parameters would have reduced the dimensionality to $d-3$ if only the proper conditions (an efficient estimate) had been met (see for instance again a nice article from Fisher, 'The conditions under which chi square measures the discrepancy between observation and hypothesis')....

....anyway, I explained why we don't get dimension $d-1$ (and should instead expect something like $d-3$, if you put two parameters in the regression) and how the dimensional reduction by an efficient estimate can be imagined. It is the Moore-Spruill article that works out the extra terms (potentially increasing the effective degrees of freedom) due to that inefficiency, and it is the Hosmer-Lemeshow simulation that shows that $d-2$ works best. That theoretical work is far from intuitive, and the simulation is far from exact. My answer is just the requested explanation for the difference with $d-1$.
The impact of proliferation-migration tradeoffs on phenotypic evolution in cancer

Jill A. Gallaher, Joel S. Brown & Alexander R. A. Anderson (ORCID: orcid.org/0000-0002-2536-4383)

Tumors are not static masses of cells but dynamic ecosystems where cancer cells experience constant turnover and evolve fitness-enhancing phenotypes. Selection for different phenotypes may vary with (1) the tumor niche (edge or core), (2) cell turnover rates, (3) the nature of the tradeoff between traits, and (4) whether deaths occur in response to demographic or environmental stochasticity. Using a spatially-explicit agent-based model, we observe how two traits (proliferation rate and migration speed) evolve under different tradeoff conditions with different turnover rates. Migration rate is favored over proliferation at the tumor's edge and vice-versa for the interior. Increasing cell turnover rates slightly slows tumor growth but accelerates the rate of evolution for both proliferation and migration. The absence of a tradeoff favors ever higher values for proliferation and migration, while a convex tradeoff tends to favor proliferation, often promoting the coexistence of a generalist and specialist phenotype. A concave tradeoff favors migration at low death rates, but switches to proliferation at higher death rates. Mortality via demographic stochasticity favors proliferation, and environmental stochasticity favors migration. While all of these diverse factors contribute to the ecology, heterogeneity, and evolution of a tumor, their effects may be predictable and empirically accessible.

Tumors are thought to consist of 3 major populations of cells: actively dividing, quiescent and necrotic. Under idealized environments, such as the experimental system of spheroids1, a fast growing tumor becomes dense and quickly outgrows the supply of oxygen and nutrients. This gives rise to a layered tumor anatomy that consists of concentric regions encompassing the 3 populations (e.g. Fig. 1A). In real tumors, the geometry of these regions appears far more irregular and disordered (e.g. Fig. 1B), reflecting a more complex and dynamic environment. Regardless, it is a tempting simplification to view the tumor edge as the place where tumor cells primarily divide rather than die, the interior as generally quiescent with few births and deaths, and the necrotic zone where tumor cells mostly die.

Figure 1. Tumor anatomy in spheroid models and human tumors. (A) Tumor spheroid model. An edge detection algorithm finds the inner necrotic (green) and outer proliferating (blue) edges. Image provided by Mehdi Damaghi. (B) Digital pathology uses pattern recognition on a histological sample from an actual tumor. The proliferating, hypoxic and necrotic regions have the same broad structure but are more intermixed. Image provided by Mark Lloyd.

Such a perspective has led to models of tumor growth and evolution where tumor cells expand to occupy space, either explicitly2,3,4,5,6,7,8,9,10,11 or implicitly12,13,14,15, as different clonal lineages proliferate and expand at different rates. When these models include evolution, one can determine the properties of tumor cells that are favored by natural selection. Such is the case for models that examine the joint evolution of proliferation and migration2,3,7. However, in the absence of cell turnover, such models can only show changes in the frequency of different clonal lineages, while the replacement of less successful lineages by more successful ones is ignored.
In reality, the turnover of tumor cells via proliferation and cell death occurs constantly throughout the entirety of the tumor. Turnover rates may be high, as high as every 10 days for the interior of breast cancer tumors. A tumor that looks static with an unchanging volume might actually be very dynamic as proliferation and apoptosis occur in parallel throughout a tumor. High but balanced proliferation and death rates have been measured in some cancers16,17,18,19. Furthermore, stimulatory factors from dying cells can cause compensatory proliferation of surviving cells20, and an increased proliferation along with an increased death rate may suggest a more aggressive disease18,19. High turnover rates facilitate evolution by natural selection21. This "struggle for existence" is seen in all organisms, and in cancer the cells have the capacity to produce more offspring than can possibly survive. Competition for space and resources limits cancer cell densities and population sizes. Limits to growth and cell turnover should select for genes and traits associated with proliferation rates and movement. All else equal, the cancer cell lineage with a higher proliferation rate will outcompete and replace one with a slower proliferation rate. However, higher proliferation rates will cause local crowding, limitations on resources, and other unfavorable conditions. Movement and migration away from such crowding should be favored. Even random migration can be favored by natural selection as a means of avoiding over-crowding22. Such migration can be particularly favorable at the edge of the tumor, but even in the interior of a tumor, migration may move cells from more to less dense locales. Many mutation models of cancer progression allow for unconstrained phenotypic improvement2,3,5 or infer increased fitness through the number of passenger/driver mutations23,24. Indeed, if both proliferation and migration enhance the fitness of cancer cells, then natural selection should favor higher rates for both3. Such selection will continue to improve proliferation and migration rates simultaneously until a point is reached where there are tradeoffs25,26,27. To improve proliferation rates further necessarily means sacrificing migration and vice-versa28,29,30. In his seminal book on evolution of changing environments, Levins (1967) proposed that the shape of the tradeoff curve should influence the evolutionary outcome31. A convex curve may favor a single population with a generalist phenotype whereas a concave curve may favor the coexistence of two specialist populations. Additionally, the specific shape of the tradeoff curve can significantly affect the evolutionary trajectory towards this curve32. The pattern of cancer cell mortality across a tumor may represent just demographic stochasticity or it may include environmental stochasticity33. The former happens when cell death is random and exhibits little temporal or spatial autocorrelations. Such patterns of mortality open up numerous but small opportunities for cell replacement. Environmental stochasticity happens when the sudden absence of nutrients or the accumulation of toxins causes wholesale death of the cells in some region of the tumor. This pattern of cell mortality creates fewer but much larger spaces for cell replacement. 
When regions are subject to catastrophic death (e.g., large or small temporary regions of necrosis) the distinctiveness of edge versus interior regions of a tumor are obscured, and the evolution of different combinations of proliferation and migration rates may be favored. Strictly demographic stochasticity should favor proliferation over migration and vice-versa for environmental stochasticity within the tumor. In what follows, we develop a spatially explicit agent-based model of tumor growth that includes cell turnover at both the edge and the interior of the tumor. We use this model to explore the joint evolution of proliferation and migration rates by cancer cells in response to: (1) rates of cell turnover, (2) different shapes of the tradeoff curve, (3) and different mortality regimes. Using an off-lattice agent-based model, we investigated how 2 traits (proliferation and migration) will evolve in response to space limitation and the continual turnover of cells. Initially, we start with a single cell in the center of a 2D circular domain with the least aggressive phenotype: a long cell cycle time and a slow migration speed. Figure 2A shows the 4 mm diameter circular space available to the tumor and its starting location. Density-dependence and limits to population growth comes from local crowding, similar to the methods presented in Gallaher et al.3. When a cell is completely surrounded by neighbors, we assume that it can neither move nor divide. Upon division, each daughter cell's trait may change in one of three ways: it can inherit the same trait as the original cell, or via mutation, its trait values for migration or proliferation rate can increase or decrease by a small value, so long as its trait values stay within the boundaries of what is evolutionarily feasible. Figure 2B shows the trait space with respect to proliferation and migration, and how an open, convex, or concave boundary in the trait space eliminates possible trait combinations. More details on the model specifics can be found in the Methods section. Mathematical model details. (A) A single cell with the smallest proliferation and migration rates centered in a 4-mm radial boundary initializes the simulation. The cell diameter is 20 μm, and its area of interaction is defined by a 200 × 200 micron neighborhood (more detail can be found in the Methods section). (B) Imposing tradeoffs by bounding the phenotype space. When the whole space is open (thin solid line), all phenotypes are allowed. The convex (thick solid line) and concave (dashed line) bound the space as shown. The set of evolutionarily feasible traits lies within (fitness set) and on the tradeoff line (active edge). The trait value of a daughter cell can mutate up to one unit in any direction as long as it stays within the bounded region. In the following, we used the model described in the previous section to investigate how proliferation-migration tradeoffs and cell turnover affect the evolution of phenotypes over time. Imposing a go-or-grow tradeoff selects for migration during growth Evolutionarily, we placed limits on the set of feasible combinations of migration and proliferation. The boundary of this set represents the tradeoff between the two traits. In our simulations, we considered three forms of the tradeoff: open, convex, and concave boundary conditions (Fig. 2B). Under an open tradeoff, each trait can achieve a maximum value independent of the value of the other trait (no tradeoffs). 
Under a convex (or concave) tradeoff, the maximum feasible values for migration and proliferation occur along a curve that bows outwards (or inwards). Regardless of the shape of the boundary, natural selection should favor cancer cells with ever greater migration and proliferation rates until reaching the boundary edge. However, the shape of the tradeoff may influence both the evolutionary trajectory of the cancer cells, their evolutionary endpoint, and the diversity or variance of trait values among the cancer cells. Ecologically, we first considered the case where there is no cell mortality. In this case, the population of cells will divide and migrate until the space is filled completely (see Fig. 3A for the spatial layout). In the absence of death, we see rings of cells with different phenotypes within the tumor. While natural selection favors cells with greater trait values, these trait values can only arise through successive cell divisions. The least aggressive cells, those with the lowest trait values, form the core (cyan color). Towards the outer edges, cells with more aggressive traits predominate at the periphery. Cells that mutate with higher proliferation rates can increase in frequency where space permits, and cells that mutate with higher migration rates can move into empty spaces where longer runs of proliferation are possible. Even as the whole population evolves, each step in this evolution leaves tree ring like layers in the tumor. With no cell death, the entire historical record in space and time is preserved. Joint evolution of migration and proliferation as influenced by three different tradeoff boundaries: open, convex, and concave. (A) The spatial layout and (B) the frequency of trait combinations is shown for a single representative simulation for each case, where the red points and line mark the average trait values every 5 days. The background colors correspond to the density of cell traits after reaching capacity; Brightly colored areas correspond to high densities, and the completely white area contains no cells. Replicate simulation runs are shown in Fig. S1 (top). Once the space has filled, the distribution of cancer cell phenotypes can be seen in Fig. 3B in the form of a density map. Color intensities correspond to the relative frequency of phenotypes where white indicates an absence of cells with that phenotype. As expected, when the tradeoff boundary between migration and proliferation is open, the most frequent phenotypes exhibit both fast proliferation and fast migration. However, we did observe some variation between replicate simulation runs in the mean proliferation rates due to stochasticity in mutations (see Fig. S1 - top), so Fig. 3 shows simulation runs with near average behavior. As the tradeoff boundary changes from open to convex to concave, we see natural selection favoring migration over proliferation and a reduction in variation between replicate runs. Contrary to expectation, the convex tradeoff boundary did not produce a generalist phenotype. Instead there is an apparent coexistence of two cell types: one with high migration but moderate proliferation, the other just the opposite. Also, contrary to expectations, a concave tradeoff boundary did not promote the coexistence of extreme phenotypes but instead, natural selection favored higher migration with little to no improvement in proliferation rates. The sequence of red dots in Fig. 
3B shows the evolutionary trajectory over time of the average values of proliferation and migration rates within the simulated tumor. Each point gives the average phenotype in increments of 5 days until the space is filled. From the spacing of the dots, we see that an open tradeoff boundary produces rapid evolution, rapid space filling, and the highest level of average proliferation rates. A concave tradeoff boundary results in the slowest evolution, slowest space filling, and the lowest average proliferation rate. In going from open to convex to concave tradeoff boundaries, the phenotypes become less proliferative. Thus, they divide, evolve, and fill space more slowly.

An increased death rate selects for increased proliferation

We examined the eco-evolutionary consequences of cell turnover by incorporating random cell death. Figure 4 shows the results when there is no death (top), and a low (middle) and high (bottom) random death rate. The spatial layout is shown to the left, and a density map representing the frequency of trait combinations is shown to the right for the 3-month time point. The evolutionary trajectory of the average trait values that the population took for the first 3 months is overlaid on the density map, shown in red, while the black asterisk shows the average phenotype at 12 months.

Figure 4. The effects of the death rate (no death, low, and high) and tradeoff boundaries (open, convex and concave) on the evolution of migration and proliferation rates. The probability of death for a single cell is once per week (high death rate) and once every two weeks (low death rate). (A) The spatial layout and (B) the frequency of trait combinations is shown for a single representative simulation for each case, where the red points and line mark the average trait values every 5 days for the first month. The black points show the continued evolutionary trajectory up until 3 months. The background colors correspond to the density of cell traits at 3 months; brightly colored areas correspond to high densities, and the completely white area contains no cells. The asterisk shows the average trait values at 12 months. Replicate simulation runs are shown in Fig. S1.

With non-zero death rates, the phenotypic evolution has two apparent phases: the first occurs while space is relatively sparsely occupied, and the second occurs through cell turnover after the space has filled. During the first phase, phenotypic evolution follows a similar trajectory as the case when there is no death (Fig. 3). However, as space fills, selection favors faster proliferation rates. When the cells have completely filled the space, the shape of the phenotypic tradeoff boundary (open, convex or concave) strongly influences the endpoint of evolution. Regardless of the death rate, the open boundary favors fast proliferation and high migration speeds. However, as space fills, migration speeds matter much less than proliferation rates. Mutational drift in the migration trait leads to a lower mean and higher phenotypic variance than seen in the proliferation trait. Phenotypes with moderate to low proliferation rates become absent with time. With a low death rate, the convex boundary favors the coexistence of a more generalist phenotype with a fast proliferation phenotype. With a high death rate, the convex boundary sees the generalist phenotype outcompeted by fast proliferating cells with lower migration rates.
With a low death rate, the concave boundary, as predicted, favors the coexistence between cancer cells with fast proliferation (but low migration) and cells with fast migration (but low proliferation). With a high death rate, the concave boundary favors cancer cells with high proliferation rates at the expense of migration. With no death, replicate runs of the simulation show wide variability in outcomes. Variability between replicate runs becomes greatly reduced when death rates increase (Fig. S1). Comparing the fitness landscapes for each tradeoff for high death rate and no death shows peaks where each of these phenotypes are favored (Fig. 4C). Spatially clustered death catastrophes select for migration We introduced significant environmental stochasticity by having all individuals die within a randomly selected 500 μm diameter circular area. This regional catastrophe might represent a sudden (and temporary) loss of blood vasculature, immune cell intrusion, or pooling of toxic metabolites. While keeping the probability of death constant at one death per week per cell (high death rate) we compared three mortality regimes where we varied the fraction of deaths occurring by demographic stochasticity (random cell death) relative to environmental stochasticity (catastrophes). The three regimens had 0%, 50% and 100% catastrophic death. The results are shown in Fig. 5. The percent of death that is random vs catastrophic is varied. The top row has 0% catastrophic and 100% random death, the middle row, 50% catastrophic and 50% random death, and the bottom row, 100% catastrophic and 0% random death. The death rate is once per week per cell (same as the high death rate from Fig. 4). (A) The spatial layout and (B) the frequency of trait combinations is shown for a single representative simulation, where the red points and line mark the average trait values every 5 days for the first month. The black points show the continued evolutionary trajectory up until 3 months. The background colors correspond to the density of cell traits at 3 months; Brightly colored areas correspond to high densities, and the completely white area contains no cells. The asterisk shows the average trait values at 12 months. Replicate simulation runs are shown in Fig. S2. In all cases raising the percent of deaths by catastrophes increases selection for migration, and this is consistent across replicate runs (Fig. S2). For the open tradeoff boundary, this results in a similarly high proliferation rate even as the migration rate increases with environmental stochasticity. For the convex tradeoff, there is more variance in phenotypic properties. But, as environmental stochasticity increases, migration is favored over proliferation with a very generalist phenotype emerging when all deaths are catastrophic. For the concave tradeoff boundary, the average phenotype switches from high proliferation and low migration to low proliferation and high migration as environmental stochasticity goes from 0% to 100% of the cause of death. In this case, while there was little variation between simulation runs at 0% and 100%, a greater spread in mean trait values was seen when there was an equal probability of random and catastrophic deaths (Fig. S2 – middle right). The long-term steady state values, however, were consistently toward high proliferation rates and low migration speeds. Rates of cell turnover matter. 
As expected, in our model, increasing the death rate speeds the rate of evolution while having little impact on the endpoint of evolution or the equilibrium population size of cancer cells at the end of the simulation (12 months). Our off-lattice model places an upper bound on the space available for cells. While increasing the death rate opens up space, cells fill it quickly as neighboring cells now have the opportunity to successfully proliferate (at even the lowest proliferation rate cells divide once every 50 hours). Longer runs of cell proliferation permit the accumulation of mutations that can increase migration and/or proliferation rates. However, with no deaths, evolution eventually stops on the interior of the tumor and can only occur along the expanding boundary. One sees concentric rings of more highly adapted cells as we move from the center to the edge of the tumor. This is not the case when there is continual cell turnover. While slower in the interior than edge of the tumor, evolution proceeds with the replacement of less fit individuals by those with either higher combined rates of proliferation and migration, or individuals with more successful combinations of traits when the trait-tradeoff boundary has been reached. The results illustrate the direct impact of cell turnover, throughout the habitable regions, on tumor evolutionary dynamics. However, not all ecological and evolutionary models in the literature incorporate cell death and cell replacement. The distribution of phenotypes among cancer cells in a tumor represent a balance between mutation, drift and selection. With each cell division, mutations can occur that randomly alter proliferation and migration. Those generating higher fitness should increase in frequency, but a large amount of heritable variation is maintained within the tumor due to the stochastic nature of births, deaths and mutations; the lower the rate of cell turnover, the higher the phenotypic variability among cancer cells. In reality, tumors exhibit large amounts of genetic variation – the extent to which this is maintained by mutation and drift and purged through selection remains an open and important question23,34,35,36,37,38. The edge of the tumor likely offers very different conditions in terms of substrates, normal tissue architecture and exposure to the immune system39,40. Hence, a number of agent-based models focus on tumor spatial heterogeneity in an environmental context, such as normal cells, stroma, and vasculature4,5,8,9,10,41,42. Here, we considered a much simpler model where all space is equal without regard for the position of blood vessels, and the only factor creating heterogeneity is phenotypic drift during division, which depends on the number and dispersion of cancer cells. The method of inheritance and how drift is imposed could impact the timescales associated with a specific evolutionary trajectory, but this does not affect the steady state values. Furthermore, the model could be extended to include other important tissue interactions such as the influence of vasculature, cell adhesion, and an immune response. These might impact evolution in interesting ways. Other tradeoffs could also be considered, such as proliferation versus survival. The model aims to specifically address the effects of tradeoffs and cell turnover rates on the speed, trajectory, and endpoint of ecological and evolutionary dynamics within an expanding tumor. 
As it is, our model has two rather distinct phases starting from a cell with slow proliferation and slow migration. During the first, natural selection favors migration over proliferation as the tumor expands into pristine space; during the second, it favors proliferation over migration once the space has been filled by the cancer cells. This accords with the observation that the edge of tumors may select for more "aggressive" cancer cells, defined as those more likely to migrate, invade surrounding tissue, and perhaps initiate metastases43,44. If, instead, the simulations were initiated with either a fast proliferation rate and slow migration speed or vice versa, the evolutionary endpoint generally remains the same even though the trajectory is different (Fig. S3). There are only a few cases that result in different endpoints from different initial conditions. This happens when i) there is no death, so the space gets filled without achieving the evolutionary endpoint, or ii) there is 100% catastrophic death with a concave tradeoff. For the latter case, when starting with a fast proliferation rate and a slow migration speed the cells remain near the initial phenotype, because even with a more optimal global phenotype (fast migration and slow proliferation) fast proliferation is still selected over the intermediate phenotypes along the trajectory (slow proliferation and slow migration).

There are direct parallels of our results to ecological systems in which mortality can take the form of the stochastic death of an individual (demographic stochasticity) or the catastrophic death of a group of individuals (environmental stochasticity). In forests, for instance, individual deaths of trees create small gaps in the canopy whereas the blowdown of a group of neighboring trees creates large gaps. The size and nature of gaps can result in the slower or faster regeneration of different tree species45. Our model considers the eco-evolutionary consequences of different-sized gaps in the tumor created by either demographic or environmental stochasticity (while holding overall mortality rates constant). As seen in many natural systems, small gaps select for proliferation over dispersal and vice-versa for large gaps46. While understudied, temporal variability in local blood flow, immune intrusion, hypoxia, and pH likely results in varying degrees of local and catastrophic mortality followed by opportunities for recolonization. Histology from biopsies or radiographic imaging of tumors produces a static snapshot that cannot track the fates of individual cells within small regions of a tumor. Our model provides a platform to study how death affects the competition of cells for space and their subsequent evolution.

Tradeoffs between dispersal and survival or fecundity and dispersal are common in natural plant and animal species47,48, and several lines of evidence empirically suggest tradeoffs between the traits proliferation versus migration in cancer cells49,50,51. Previous theoretical models have shown how proliferation-migration tradeoffs via phenotypic switching between 2 states can impact tumor growth and evolution13,28,29. Tradeoffs in our model are represented by boundaries rather than a switch, so that heterogeneity is a spectrum of phenotypes gained on division existing within an allowed trait space. Cancer cells may also experience Allee effects when a critical number of neighboring cells are required for a cell to survive and proliferate52.
In this case, cell migration from a neighborhood may cause the remaining cells to see a decline in fitness. Our model does not include such an Allee effect. But there is competition for space, and any movement of an individual away from a neighborhood results in a small public good of an increased probability of proliferation to the remaining cells. In the absence of tradeoffs, natural selection should favor improvements in all traits that enhance fitness. As expected in our model, the lack of a tradeoff saw rapid increases in both proliferation and migration. With demographic stochasticity and filled space, migration is no longer under strong selection, so at this point, mutation and drift created a lower mean migration rate with a large phenotypic variance. Generally, a convex tradeoff selected for a generalist phenotype. Under demographic stochasticity this resulted in high phenotypic variance and sometimes the dominance of high proliferation, low migration phenotypes. Environmental stochasticity selected for the more generalist phenotype. A concave phenotypic tradeoff boundary always selected for low proliferation, high migration phenotypes during the tumor's expansion phase. These were then replaced by high proliferation, low migration phenotypes once the tumor achieved maximum size. The only exception occurred with environmental stochasticity where the high migration phenotype continues to be favored. Experiments that establish the cost of resistance on proliferation of cancer cells often require conditions of strong nutrient limitation53,54,55,56. In addition, establishing the nature of tradeoffs between migration and proliferation should require the use of invasion/migration assays57,58,59,60,61,62. Such experiments could select for extreme or slow migrators and subsequently observe proliferation rates or select for extreme slow or fast proliferators and subsequently observe migration to determine whether tradeoffs exist. A number of fitness metrics can be used for mathematically modelling cancer cell population dynamics63. These include maximizing some balance of proliferation rates and death rates. In our simulations, the death rate was a constant regardless of phenotype. Hence, natural selection favors phenotypes that maximize the probability of cell division. However, this probability depends not only on the proliferation rate but the probability of having space around the cell to proliferate. Increasing the proliferation rate of a cell directly impacts its fitness. Increasing the migration rate of a cell can indirectly impact tumor fitness by opening up space and keep more neighboring cells cycling, even if they are proliferating more slowly. During the expansion phase of the tumor, available space is relatively large, and the space gained by increasing migration is also very large. Selection will be strong for both proliferation and migration but relatively stronger for migration. When the space in the tumor core is completely filled, both the available space and the fitness advantage gained by increasing migration goes to zero. Hence there is no longer positive selection for migration. By creating many small gaps, demographic stochasticity creates space and thus maintains selection for proliferation, and because the spaces are small, there are no benefits to migration. Environmental stochasticity creates the same amount of total space over time as demographic stochasticity, but this space is more contiguous, so migration is favored as a means of exploiting empty regions. 
Thus, the fitness advantage gained by migration will be larger and positive when large gaps are present, and there will be positive selection for both proliferative and migratory phenotypes, but a larger selection for migratory phenotypes. Increasing cell turnover directly affects cell fitness, so the indirect tumor fitness gains of increasing migration do not matter as much as individual cell survival so there is positive selection for faster proliferation. Some of these properties will be general to all organisms (e.g., cane toads64, and house sparrows spreading in Kenya)65, and others just to cancer because it is a densely packed, asexual, and single-celled organism. Our model has similarities to other models and systems where selection balances two traits. In natural systems this can take the form of seed dispersal versus dormancy in annual plants where the former transports the individual spatially and the latter temporally to less crowded and more favorable places66,67. In dispersal-dormancy models, the traits may exhibit tradeoffs via seed size, seed coat thickness, and features that enhance dispersal such as burrs and samara (wings). In cancer, a number of agent-based models consider vector-valued traits. These include degree of glycolytic respiration (Warburg effect) and tolerance to acidic conditions. While not necessarily linked through tradeoffs, the two traits become co-adapted as increased glycolysis promotes acidic conditions necessitating the evolution of acid tolerance5. Spatial models often see rings of different trait values extending from the interior to the tumor's boundary3,4,6. In these models, selection happens solely at the tumor edge where there is space to proliferate. In relation to these works our model invites spatially-explicit investigations into how traits evolve in response to population spread, death rates, and demographic versus environmental stochasticity. It emphasizes the critical need to estimate cell turnover via measurements of both death and proliferation rates. A variety of markers and metrics exist for measuring proliferation (e.g. Ki67, mitotic index) and death (e.g. caspases, TUNEL assay). However, these are often just surrogates, rarely measured simultaneously, and generally cannot be measured in vivo. Because of these challenges, most data simply describe net tumor growth (i.e. doubling times). We advocate deconstructing this net metric into distinct fractions of proliferating, quiescent, and dying. To study evolving traits such as proliferation-migration tradeoffs, we see a need for non-destructive sensors/markers of cell processes that can be measured through both space and time. We created a simple agent-based model of tumor growth to investigate evolving phenotypes under different constraints. The phenotype is defined as a combination of two traits: the intermitotic time τ, where τ = [10,40] hours, and the migration speed ν, where ν = [0,20] μm/h. The simulation is initialized with one cell in the least aggressive state (τmax = 40 h and νmin = 0 μm/h) centered in a 4 mm circular tissue (Fig. 2A). This cell, and every subsequent cell starts with the specified intermitotic time and counts down at each time step (every minute) until reaching zero. At this point, the cell will split into two daughter cells, keeping one cell at the original position and placing the other 1 diameter away at a random angle. 
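A minimal sketch of this countdown-and-divide bookkeeping is given below. It is an illustration only, not the authors' implementation (their code is linked under Code availability); the has_room callback stands in for the crowding check described later, and trait inheritance, which is described next, is omitted here.

```python
# Per-minute cell-cycle countdown and division placement, as described in the text:
# a dividing cell keeps its position and places one daughter a cell diameter away
# at a random angle.
import numpy as np

CELL_DIAMETER = 20.0   # um
DT = 1.0               # time step, minutes

class Cell:
    def __init__(self, x, y, imt_hours):
        self.x, self.y = x, y
        self.imt_hours = imt_hours            # intermitotic time, drawn from [10, 40] h
        self.clock_min = imt_hours * 60.0     # minutes remaining until division

def step_division(cells, rng, has_room):
    """Advance every division clock by one minute; cells that reach zero and have room divide."""
    daughters = []
    for cell in cells:
        cell.clock_min -= DT
        if cell.clock_min <= 0 and has_room(cell):
            angle = rng.uniform(0.0, 2.0 * np.pi)
            daughters.append(Cell(cell.x + CELL_DIAMETER * np.cos(angle),
                                  cell.y + CELL_DIAMETER * np.sin(angle),
                                  cell.imt_hours))
            cell.clock_min = cell.imt_hours * 60.0   # the mother restarts her cycle
    cells.extend(daughters)

# Example: one initial cell with the least aggressive cycle time and no crowding.
rng = np.random.default_rng(0)
population = [Cell(0.0, 0.0, 40.0)]
for _ in range(3 * 24 * 60):                  # three simulated days, minute by minute
    step_division(population, rng, has_room=lambda c: True)
print(len(population))                        # 2: the first division happens after 40 h
```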
Each daughter cell randomly inherits traits within a range of the parental cell's traits (τdaughter = τmother ± (0, 4.5) h and νdaughter = νmother ± (0, 3) μm/h) as long as it lies within the boundary of traits defined by the tradeoff (Fig. 2B). As long as a cell is not undergoing division during the time step, it will move at its specified speed throughout the 2-dimensional space. The movement of a cell follows a persistent random walk: it will move for a persistence time randomly drawn from a normal distribution p = \({\mathscr{N}}\) (80 min,40 min) and then turn at a random angle before starting over with a new persistence time. The cells can move off-lattice in the 2D space, meaning that they are not confined to reside in a regular gridded structure. A major problem with off-lattice models is that checking for one cell's interactions amongst all other cells becomes extremely computationally inefficient as the number of cells increase. In order to alleviate this cost, we create a grid (67 μm × 67 μm) that defines neighborhoods within the space. At the top of each frame, all cells are assigned a neighborhood according to their position. Each cell will only check for interactions within a Moore neighborhood of this grid (its current neighborhood as well as its 8 surrounding neighborhoods – see Fig. 2A). If a cell intersects another cell in space it will be assigned a new random direction and a new persistence time. If a cell intersects with the boundary of the space (circle of diameter ~2.7 mm) or is completely surrounded (there is no room for cell division without overlap) it will stop progressing through the cell cycle and stop migrating. Bounding the trait space We limit the possible trait combinations according to i) no tradeoff (open), ii) a convex bound, and iii) a concave bound (see Fig. 2B). Each time a cell divides, new traits are determined giving each option (improve, stay the same, or diminish) the same weight. If the current trait is already on the boundary of trait space, then only options that respect this bound are considered and weighted equally. For the convex case, the forbidden region is created by making a circular arc from the two extreme values where fitness is greatest for each trait but worst for the other (i.e. τmin = 10 h and νmin = 0 μm/h and τmax = 40 h IMT and νmax = 20 μm/h). The trait combinations with the shortest intermitotic times and fastest migration speeds are not allowed. For the concave case, the forbidden region cuts off this space as well, but the circular arc is created using the same points with opposite concavity. Cell death either occurs randomly distributed or regionally clustered (catastrophic). The probability of death is split between these two types of death with either all random, all catastrophic, or 1:1 mix of random and catastrophic. Random death occurs with a given probability for every cell at every frame. When there are catastrophic death events, all cells within a confined circular region 500 μm in diameter, which is randomly placed, will die. The cells don't automatically die but wait a randomly chosen period between 6–15 hours before being removed from the system. This is an estimate for how long it takes to go through apoptosis68,69. The probability of death for a single cell is once per week for the high death rate and once every two weeks for the low death rate. 
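The two mortality regimes just described can be sketched in the same spirit. Again this is an illustration rather than the authors' code: the per-frame probability corresponding to 'once per week', the uniform placement of the catastrophe centre inside the circular domain, and the (N, 2) position-array layout are all choices made here for the example.

```python
# Demographic stochasticity (random per-frame death) versus environmental
# stochasticity (catastrophic death of every cell in a random 500-um circle),
# with removal delayed by 6-15 hours to mimic the duration of apoptosis.
import numpy as np

FRAME_MIN = 1.0                                  # one frame = one minute
P_DEATH_HIGH = FRAME_MIN / (7 * 24 * 60.0)       # "once per week" per cell, per frame
CATASTROPHE_RADIUS = 250.0                       # um, i.e. a 500-um-diameter region

def random_deaths(positions, rng, p_death=P_DEATH_HIGH):
    """Each cell (row of an (N, 2) position array) dies independently per frame."""
    return rng.random(len(positions)) < p_death

def catastrophe_deaths(positions, rng, domain_radius=2000.0):
    """Every cell inside a randomly placed 500-um circle dies."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = domain_radius * np.sqrt(rng.random())    # uniform point within the circular domain
    center = np.array([r * np.cos(theta), r * np.sin(theta)])
    return np.linalg.norm(positions - center, axis=1) < CATASTROPHE_RADIUS

def apoptosis_delays(n_dying, rng):
    """Dying cells linger for a random 6-15 hours (returned in minutes) before removal."""
    return rng.uniform(6 * 60.0, 15 * 60.0, size=n_dying)
```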
The actual death rate is variable because it depends on the number of cells at any time, but when the space is completely full (approximately 13,000 cells), around 2,000 cells are dying per day for the high death rate and 1,000 cells per day for the low death rate. For the catastrophes, we need to ensure that the number of cells deaths on average is similar to the random death rate, because they happen at a population level at certain time points rather than to individuals. We define the probability of a catastrophic event pcat based on the probability of death pdeath and the time intervals Tcat between catastrophic events: $${p}_{{\rm{cat}}}=\frac{f{p}_{{\rm{death}}}}{{N}_{{\rm{deaths}}}/N}{T}_{{\rm{cat}}}.$$ Here f is the fraction of deaths that are catastrophic, Ndeaths is the number of cells that die from each catastrophic event, and N is the total number of cells. Setting the probability of a catastrophic event pcat to 1, we can solve for Tcat to get the appropriate time between catastrophes: $${T}_{{\rm{cat}}}=\frac{{N}_{{\rm{deaths}}}/N}{f{p}_{{\rm{death}}}}$$ However, because the catastrophic death region will be spaced randomly there is a possibility that a new catastrophe will overlap with an old one before filling back in or will lie on an edge, so in general, there won't be the same number of cells that die each time a catastrophe occurs. This can be accounted for if the time between events is changed each time based on the number of deaths from the previous event. If the number died previously is less than what would be given by fpdeath, then the numerator gets smaller, making a smaller time interval between events, and if the number that died is larger, then the next time interval will be larger. By adjusting after each event, we can compensate for this variation. Trait distribution heat maps For each simulation we show how the 2D combination of traits of all cells are distributed at a specific time point. To create these graphs, we binned the values of intermitotic times and migration speeds for all cells into an 11 × 11 array and used the MATLAB function contour() to define the isolines. Using Pixelmator, each region in the resulting image was converted into white with the value of transparency as a linear gradient of the isoline color so that zero density corresponded to 0% transparent white and maximum density corresponded to 100% transparent white. We overlaid this on our color map to show the densest regions with more of the background color showing through. The average trait values over time made up the top layer of this graph. Code availability Code and interactive website available at https://github.com/jillagal/deathToy/wiki/The-impact-of-proliferation-migration-tradeoffs-on-phenotypic-evolution-in-cancer. Wallace, D. I. & Guo, X. Properties of Tumor Spheroid Growth Exhibited by Simple Mathematical Models. Front. Oncol. 3, 1–9 (2013). Anderson, A. R. A., Weaver, A. M., Cummings, P. T. & Quaranta, V. Tumor Morphology and Phenotypic Evolution Driven by Selective Pressure from the Microenvironment. Cell 127, 905–915 (2006). Gallaher, J. A. & Anderson, A. R. A. Evolution of intratumoral phenotypic heterogeneity: the role of trait inheritance. Interface Focus 3, 20130016–20130016 (2013). Frankenstein, Z., Basanta, D., Franco, O. E., Gao, Y. & Javier, R. A. Stromal Reactivity Differentially Drives Tumor Cell Evolution and Prostate Cancer Progression. bioRxiv Prepr (2017). Robertson-Tessi, M., Gillies, R. J., Gatenby, R. A. & Anderson, A. R. A. 
Impact of metabolic heterogeneity on tumor growth, invasion, and treatment outcomes. Cancer Res. 75, 1567–1579 (2015). Poleszczuk, J., Hahnfeldt, P. & Enderling, H. Evolution and Phenotypic Selection of Cancer Stem Cells. PLoS Comput. Biol. 11, 1–14 (2015). Waclaw, B. et al. A spatial model predicts that dispersal and cell turnover limit intratumour heterogeneity. Nature 525, 261–264 (2015). Mirams, G. R. et al. Chaste: An Open Source C++ Library for Computational Physiology and Biology. PLoS Comput. Biol. 9, (2013). Ghaffarizadeh, A., Heiland, R., Friedman, S. H., Mumenthaler, S. M. & Macklin, P. PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems. PLoS Computational Biology 14, (2018). Bravo, R., Robertson-Tessi, M. & Anderson, A. R. A. Hybrid Automata Library. bioRxiv Prepr. 1–24. https://doi.org/10.1101/411538 (2018) Rejniak, K. A. & Anderson, A. R. A. Hybrid models of tumor growth. Wiley Interdiscip. Rev. Syst. Biol. Med. 3, 115–125 (2011). Bozic, I., Allen, B. & Nowak, M. A. Dynamics of Targeted Cancer Therapy. Trends Mol Med 18, 311–316 (2012). Kaznatcheev, A., Scott, J. G. & Basanta, D. Edge effects in game theoretic dynamics of spatially structured tumours. https://doi.org/10.1098/rsif.2015.0154 (2013). Kuang, Y., Nagy, J. D. & Eikenberry, S. E. Introduction to mathematical oncology. CRC Press https://doi.org/10.1080/17513758.2016.1224937 (2015). Benzekry, S. et al. Classical Mathematical Models for Description and Prediction of Experimental Tumor Growth. PLoS Comput. Biol. 10, (2014). Kerr, K. M. & Lamb, D. Actual growth rate and tumour cell proliferation in human pulmonary neoplasms. Br. J. Cancer 50, 343–349 (1984). Alenzi, F. Q. B. Links between apoptosis, proliferation and the cell cycle. Br. J. Biomed. Sci. 61, 99–102 (2004). Liu, S., Edgerton, S. M., Moore, D. H. & Thor, A. D. Measures of cell turnover (proliferation and apoptosis) and their association with survival in breast cancer. Clin. Cancer Res. 7, 1716–1723 (2001). Soini, Y., Pääkkö, P. & Lehto, V. P. Histopathological evaluation of apoptosis in cancer. Am. J. Pathol. 153, 1041–53 (1998). Zimmerman, M. A., Huang, Q., Li, F., Liu, X. & Li, C.-Y. Cell death-stimulated cell proliferation: A tissue regeneration mechanism usurped by tumors during radiotherapy. Semin Radiat Oncol 23, 288–295 (2013). Labi, V. & Erlacher, M. How cell death shapes cancer. Cell Death Dis. 6, e1675–11 (2015). Hamilton, W. D. & May, R. M. Dispersal in stable habitats. Nature 269, 578–581 (1977). Sottoriva, A. et al. A big bang model of human colorectal tumor growth. Nat. Genet. 47, 209–216 (2015). Bozic, I., Gerold, J. M. & Nowak, M. A. Quantifying Clonal and Subclonal Passenger Mutations in Cancer Evolution. PLoS Comput. Biol. 12, 1–19 (2016). Aktipis, C. A., Boddy, A. M., Gatenby, R. A., Brown, J. S. & Maley, C. C. Life history trade-offs in cancer evolution. Nat. Rev. Cancer 13, 883–892 (2013). Shoval, O. et al. Evolutionary Trade-Offs, Pareto Optimality, and the Geometry of Phenotype Space. Science (80-.). 336, 1157–60 (2012). Orlando, P. A., Gatenby, R. A. & Brown, J. S. Tumor Evolution in Space: The Effects of Competition Colonization Tradeoffs on Tumor InvasionDynamics. . Front. Oncol. 3, 1–12 (2013). Gerlee, P. & Nelander, S. The impact of phenotypic switching on glioblastoma growth and invasion. PLoS Comput. Biol. 8 (2012). Hatzikirou, H., Basanta, D., Simon, M., Schaller, K. & Deutsch, A. 'Go or grow': The key to the emergence of invasion in tumour progression? Math. Med. Biol. 29, 49–65 (2012). 
MathSciNet CAS Article Google Scholar Gerlee, P. & Anderson, A. R. A. Evolution of cell motility in an individual-based model of tumour growth. J. Theor. Biol. 259, 67–83 (2009). Levins, R. Evolution in Changing Environments. Evolution in Changing Environments (1968). Gatenby, R. A., Cunningham, J. J. & Brown, J. S. Evolutionary triage governs fitness in driver and passenger mutations and suggests targeting never mutations. Nat. Commun. 5, 1–9 (2014). Engen, S., Bakke, O. & Islam, A. Demographic and Environmental Stochasticity-Concepts and Definitions. Biometrics 54, 840–846 (1998). Iwasaki, W. M. & Innan, H. Simulation framework for generating intratumor heterogeneity patterns in a cancer cell population. PLoS One 12, (2017). Durrett, R., Foo, J., Leder, K., Mayberry, J. & Michor, F. Intratumor heterogeneity in evolutionary models of tumor progression. Genetics 188, 461–77 (2011). Horswell, S., Matthews, N. & Swanton, C. Cancer heterogeneity and 'The struggle for existence': Diagnostic and analytical challenges. Cancer Lett. 340, 220–226 (2013). Gerlinger, M. et al. Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing. N. Engl. J. Med. 366, 883–892 (2012). Lipinski, K. A. et al. Cancer Evolution and the Limits of Predictability in Precision Cancer Medicine. Trends in Cancer 2, 49–63 (2016). Ramamonjisoa, N. & Ackerstaff, E. Characterization of the Tumor Microenvironment and Tumor–Stroma Interaction by Non-invasive Preclinical Imaging. Front. Oncol. 7, 28–37 (2017). Lloyd, M. C. et al. Darwinian dynamics of intratumoral heterogeneity: Not solely random mutations but also variable environmental selection forces. Cancer Res. 76, 3136–3144 (2016). Ibrahim-Hashim, A. et al. Defining cancer subpopulations by adaptive strategies rather than molecular properties provides novel insights into intratumoral evolution. Cancer Res. 77, 2242–2254 (2017). Basanta, D. & Anderson, A. R. A. Homeostasis back and forth: An ecoevolutionary perspective of cancer. Cold Spring Harb. Perspect. Med. 7, (2017). Clark, A. G. & Vignjevic, D. M. Modes of cancer cell invasion and the role of the microenvironment. Curr. Opin. Cell Biol. 36, 13–22 (2015). Petrie, R. J. & Yamada, K. M. At the leading edge of three-dimensional cell migration. J. Cell Sci. 125, 5917–5926 (2012). Canham, C. D. Different Responses to gaps among shade-tolerant tree species. Ecology 70, 548–550 (1989). Nagel, T. A., Svoboda, M. & Kobal, M. Disturbance, life history traits, and dynamics in an old-growth forest landscape of southeastern Europe. Ecol. Appl. 24, 663–679 (2014). Duthie, A. B., Abbott, K. C. & Nason, J. D. Trade-Offs and Coexistence in Fluctuating Environments: Evidence for a Key Dispersal-Fecundity Trade-Off in Five Nonpollinating Fig. Wasps. Am. Nat. 186, 151–158 (2015). Weigang, H. C. & Kisdi, É. Evolution of dispersal under a fecundity-dispersal trade-off. J. Theor. Biol. 371, 145–153 (2015). MathSciNet Article Google Scholar Giese, A. et al. Dichotomy of Astrocytoma Migration and Proliferation. Int J Cancer 67, 275–282 (1996). Biddle, A. et al. Cancer stem cells in squamous cell carcinoma switch between two distinct phenotypes that are preferentially migratory or proliferative. Cancer Res. 71, 5317–5326 (2011). Garay, T. et al. Cell migration or cytokinesis and proliferation? - Revisiting the 'go or grow' hypothesis in cancer cells in vitro. Exp. Cell Res. 319, 3094–3103 (2013). Böttger, K. et al. An Emerging Allee Effect Is Critical for Tumor Initiation and Persistence. PLoS Comput. Biol. 
11, 1–14 (2015). Chmielecki, J. et al. Optimization of Dosing for EGFR-Mutant Non–Small Cell Lung Cancer with Evolutionary Cancer Modeling. Sci Transl Med 3 (2011). Moore, N., Houghton, J. & Lyle, S. Slow-Cycling Therapy-Resistant Cancer Cells. Stem Cells Dev. 21, 1822–1830 (2012). Silva, A. S. et al. Evolutionary approaches to prolong progression-free survival in breast cancer. Cancer Res. 72, 6362–6370 (2012). Kreso, A. et al. Variable clonal repopulation dynamics influence chemotherapy response in colorectal cancer. Science (80-.). 339, 543–548 (2013). Giometto, A., Rinaldo, A., Carrara, F. & Altermatt, F. Emerging predictable features of replicated biological invasion fronts. Proc. Natl. Acad. Sci. 111, 297–301 (2014). Baym, M. et al. Spatiotemporal microbial evolution on antibiotic landscapes. Science (80-.). 353, 1147–1152 (2016). Kam, Y. et al. Nest expansion assay: A cancer systems biology approach to in vitro invasion measurements. BMC Res. Notes 2, 1–9 (2009). Decaestecker, C., Debeir, O., Van Ham, P. & Kiss, R. Can anti-migratory drugs be screened in vitro? A review of 2D and 3D assays for the quantitative analysis of cell migration. Med. Res. Rev. 27, 149–176 (2007). Taylor, T. B., Wass, A. V., Johnson, L. J. & Dash, P. Resource competition promotes tumour expansion in experimentally evolved cancer. BMC Evol. Biol. 17, 1–9 (2017). Eccles, S. A., Box, C. & Court, W. Cell migration/invasion assays and their application in cancer drug discovery. Biotechnol. Annu. Rev. 11, 391–421 (2005). Gregory, T. R. Understanding Natural Selection: Essential Concepts and Common Misconceptions. Evol. Educ. Outreach 2, 156–175 (2009). Phillips, B. L., Brown, G. P., Grennlees, M., Webb, J. P. & Shine, R. Rapid expansion of the cane toad (Bufo marinus) invasion front in tropical Australia. Austral Ecol. 32, 169–176 (2007). Schrey, A. W., Liebl, A. L., Richards, C. L. & Martin, L. B. Range expansion of house sparrows (Passer domesticus) in kenya: Evidence of genetic admixture and human-mediated dispersal. J. Hered. 105, 60–69 (2014). Gremer, J. R. & Venable, D. L. Bet hedging in desert winter annual plants: Optimal germination strategies in a variable environment. Ecol. Lett. 17, 380–387 (2014). Venable, D. L. & Brown, J. S. The Selective Interactions of Dispersal, Dormancy, and Seed Size as Adaptations for Reducing Risk in Variable Environments. Am. Nat. 131, 360–384 (1988). Gelles, J. D. & Edward Chipuk, J. Robust high-throughput kinetic analysis of apoptosis with real-time high-content live-cell imaging. Cell Death Dis. 7, e2493–9 (2016). Van Nieuwenhuijze, A. E. M., Van Lopik, T., Smeenk, R. J. T. & Aarden, L. A. Time between onset of apoptosis and release of nucleosomes from apoptotic cells: Putative implications for systemic lupus erythematosus. Ann. Rheum. Dis. 62, 10–14 (2003). The authors gratefully acknowledge Mehdi Damaghi for the tumor spheroid model graphic (Figure 1A) and Mark Lloyd for the digital pathology graphic (Figure 1B). Funding came from both the Cancer Systems Biology Consortium (CSBC) and the Physical Sciences Oncology Network (PSON) at the National Cancer Institute, through grants U01CA151924 (supporting A. Anderson and J. Gallaher) and U54CA193489 (supporting A. Anderson and J. Brown). Joel S. Brown and Alexander R. A. Anderson contributed equally. Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center, Tampa, FL, USA Jill A. Gallaher, Joel S. Brown & Alexander R. A. Anderson Jill A. Gallaher Joel S. Brown Alexander R. A. 
Anderson
Conceptualization, methodology, writing, and editing: J.G., J.B., A.A. Investigation, software, analysis, and visualization: J.G. Funding: A.A. Correspondence to Jill A. Gallaher or Alexander R. A. Anderson.
Gallaher, J.A., Brown, J.S. & Anderson, A.R.A. The impact of proliferation-migration tradeoffs on phenotypic evolution in cancer. Sci Rep 9, 2425 (2019). https://doi.org/10.1038/s41598-019-39636-x
CommonCrawl
Solving Systems of Differential Equations with Initial Conditions: Calculators and Solvers

A system of several ordinary differential equations in several variables can be solved with the dsolve function, with or without initial conditions. Long Taylor series methods and explicit solution methods are classical alternatives, and existence and uniqueness theory underpins every initial value problem: Picard's theorem (via the Lipschitz condition of the Picard–Lindelöf theorem) gives conditions under which a first-order differential equation has exactly one solution, although most numerical solvers do not check these conditions. The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions, the method of integrating factors handles linear first-order ODEs, and separable equations are discussed later in this post. In MuPAD, ode::solve(o) returns the set of solutions of the ordinary differential equation o; as an exercise, solve y″ + y = 0 with given initial conditions (a symbolic sketch follows below). Numerical routines such as MATLAB's ode45 require an initial value to be specified and solve the initial value problem for stiff or non-stiff systems of first-order ODEs; they can also accommodate unknown parameters, and the solution of such a problem typically involves three solution phases. While this gives a start to finding solutions of initial value problems, consideration must also be given to the domain of the final result. A partial differential equation (PDE), by contrast, is a differential equation that contains unknown multivariable functions and their partial derivatives. In R, the "odesolve" package was the first to solve ordinary differential equations, but it only covers single equations. Many problems in engineering and physics involve solving differential equations with initial conditions, boundary conditions, or both, and although some purely theoretical work has been done, the key element in this field of research is being able to link mathematical models and data. Block-diagram tools such as Scicos (Najafi and Nikoukhah, INRIA-Rocquencourt) take an older approach to the modeling and simulation of differential equations. Ordinary differential equations (ODEs) play a vital role in engineering problems; courses and texts such as Differential Equations for Engineers, Matrix Algebra for Engineers, and Fibonacci Numbers and the Golden Ratio cover the material in more depth.
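As a concrete illustration of symbolic solving with initial conditions, here is a minimal sketch in Python with SymPy (the use of SymPy is an assumption for illustration; the sources quoted above use Maple, MuPAD, and MATLAB syntax):

# Solve y'' + y = 0 symbolically with initial conditions y(0) = 0, y'(0) = 1.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) + y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 1})
print(sol)          # Eq(y(t), sin(t))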
However, the odesolve interface only covers single equations, so let's take a look at another example: solving systems of second-order differential equations, a recurring topic in forum threads on ODEs, initial conditions, systems of ODEs, plotting, and trajectories. In an introductory course the common definitions and concepts are order, linear vs. nonlinear, initial conditions, the initial value problem, and the interval of validity. An ordinary differential equation contains only one independent variable and one or more derivatives with respect to that variable; non-homogeneous differential equations are the same as homogeneous ones except that they can have terms involving only x (and constants) on the right side. The following examples show different ways of setting up and solving initial value problems in Python, and the R package deSolve contains several IVP solvers that belong to the most important classes of solvers. In Octave's lsode, the first argument, fcn, is a string, inline function, or function handle that names the function f used to compute the vector of right-hand sides for the set of equations. Analytically, the concept is always the same: find the general solution first, then substitute the given numbers to find the particular solution; if the number of conditions is less than the number of dependent variables, the solutions contain arbitrary constants C1, C2, …, and you may also find the Maple manual useful. In Sage, desolve_system() solves a system of first-order ODEs of any size using Maxima, ode45 is the usual MATLAB choice for systems, and many tools will plot the direction field for a single differential equation. Numerical work requires a little advance planning: state the initial value problem by defining an ODE with initial conditions, collect the initial conditions (for example in a structure named initial), and define the initial values in the same order as you wrote down the equations, substituting those values when a particular solution is needed. With MATLAB, the basic syntax for solving systems is the same as for solving single equations, with each scalar simply replaced by an analogous vector; each row in the solution array y corresponds to a value returned in the column vector t. A Python analogue is sketched below.
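For instance, a second-order equation can be rewritten as a first-order system and passed to SciPy's solve_ivp (a self-contained, hypothetical example; the equation and values are placeholders, and note that SciPy returns the states in rows, the transpose of MATLAB's convention):

# Rewrite x'' + 0.5 x' + 4 x = 0 as a first-order system y = [x, x'] and integrate it.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    x, v = y
    return [v, -0.5 * v - 4.0 * x]       # [x', v'] = [v, -0.5 v - 4 x]

sol = solve_ivp(rhs, t_span=(0, 10), y0=[1.0, 0.0], t_eval=np.linspace(0, 10, 200))
print(sol.t[:3])       # times
print(sol.y[:, :3])    # rows are x(t) and x'(t), one column per time point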
At the top of a typical solver applet you will see a graph showing a differential equation (for example, the equation governing a harmonic oscillator) and its solution; tracing a path of vectors through the direction field yields a solution of the ordinary differential equation for a given set of initial conditions, as sketched below. An equation whose solution is a function of a single variable, say h(t), is called an ordinary differential equation (ODE); an equation is classed as an ODE or a partial differential equation (PDE) depending on whether or not partial derivatives are involved, and the initial conditions are simply the values of all the model variables at the start of the simulation (that is, at time zero). Hand methods for ODEs and systems of equations include (a) direct integration, (b) separation of variables, (c) reduction of order, (d) undetermined coefficients and variation of parameters, and (e) series solutions; the theory of differential equations arose at the end of the 17th century in response to the needs of mechanics and other natural sciences, essentially at the same time as the integral and differential calculus. On the software side, a solver typically detects the type of the differential equation and chooses an algorithm according to the detected type: Mathcad Standard comes with rkfixed, a general-purpose Runge–Kutta solver that can be used on nth-order differential equations with initial conditions or on systems of differential equations (a Nyström modification of Runge–Kutta is sometimes applied to second-order equations); MATLAB provides solvers for initial value problems of ODEs and differential-algebraic equations (DAEs); Dedalus solves differential equations using spectral methods; and hand-held calculators vary, the TI-89 for instance does not solve third- and higher-order differential equations directly, so such equations are first converted into systems of first-order equations in order to graph their solutions. A common "generic ODE" warm-up is a first-order linear equation with constant coefficients, an initial condition x(0), and a unit-step input u(t). As an exercise with the classic Van der Pol file (function xdot = vdpol(t,x)), plot on the same graph the solutions of the nonlinear equation (first) and the linear equation (second) on the interval from t = 0 to t = 40 and compare the two; the solutions display a wide variety of behavior as you vary the coefficients.
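The "tracing a path of vectors" idea is exactly what the simplest numerical scheme, forward Euler, does; a minimal sketch (illustrative only, not the applet's actual code):

# Trace the direction field of y' = f(t, y) from an initial condition using forward Euler.
import numpy as np

def f(t, y):
    return -2.0 * y + np.sin(t)      # example right-hand side

t, y, h = 0.0, 1.0, 0.01             # initial condition y(0) = 1 and step size
ts, ys = [t], [y]
while t < 5.0:
    y += h * f(t, y)                 # step along the local slope vector
    t += h
    ts.append(t)
    ys.append(y)

print(ys[-1])                        # approximate y(5)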
In a more recent direction, treatments of physics-informed neural networks first provide a brief overview of deep neural networks and then present the algorithm and theory of PINNs for solving PDEs. For classical problems, symbolic and numerical tools remain the workhorses. To solve differential equations symbolically, use the dsolve function: Maple is among the strongest systems for finding closed-form solutions to problems no other system can handle, and dsolve solves a differential equation analytically with or without initial conditions. In the field of differential equations, an initial value problem (also called a Cauchy problem by some authors) is an ordinary differential equation together with a specified value, called the initial condition, of the unknown function at a given point in the domain of the solution. For a separable equation, separating the variables and integrating both sides gives y as a function of x, solving the differential equation, as worked through below. Differential-algebraic equations (DAEs) are systems in which some members are differential equations and the others are purely algebraic, having no derivatives in them; many library solvers for these problems are based on original FORTRAN implementations. For second-order equations, the first step in a numerical treatment is to convert the equation into two first-order equations, while the analytical treatment of second-order equations with distinct real roots follows a standard sequence of worked examples: verify the principle of superposition, find the general form of the solution, and then solve the equation given initial conditions. Note that the differential equations may also depend on an unknown parameter, which some solvers can estimate alongside the solution.
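For example (a hypothetical worked case), take the separable equation dy/dx = x·y with y(0) = 2: separating gives dy/y = x dx, integrating gives ln|y| = x²/2 + C, and the initial condition fixes C = ln 2, so y = 2·exp(x²/2). The same steps can be checked symbolically:

# Check the separable solution of dy/dx = x*y, y(0) = 2 with SymPy.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), x * y(x)), y(x), ics={y(0): 2})
print(sol)    # Eq(y(x), 2*exp(x**2/2))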
This course is a study in ordinary differential equations, including linear equations, systems of equations, equations with variable coefficients, existence and uniqueness of solutions, series solutions, singular points, transform methods, and boundary value problems; applications of differential equations to real-world problems are also included. Computationally, the basic interface is a function that numerically integrates a system of ordinary differential equations given an initial value, and solving a first-order decay equation with the APM solver in Python is a typical first example; since more advanced machinery is not a prerequisite for this course, we limit ourselves to the simplest cases. (Many online calculators also handle the supporting linear algebra, computing the inverse, transpose, eigenvalues, and LU decomposition of square matrices.) If a solver integrates only one trajectory at a time, possible workarounds are to make a larger system of equations, stacking the x–y pairs into one big vector, or to run the solver multiple times and specify the time points where you want the solution. Solution methods are then organized by type: first-order equations, and second- and higher-order linear equations. One practical warning is stiffness: a system is stiff when, for example, a limit cycle has portions where the solution components change slowly alternating with regions of very sharp change, in which case MATLAB's ode15s is needed rather than ode45; the equivalent choice in other libraries is an implicit method, as sketched below.
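A minimal sketch of that choice in Python (an assumed analogue of MATLAB's ode15s; the Van der Pol parameters are illustrative):

# Stiff Van der Pol oscillator: an implicit method (BDF) handles it far better than RK45.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1000.0                                  # a large mu makes the problem stiff

def vdp(t, y):
    x, v = y
    return [v, mu * (1 - x**2) * v - x]

sol = solve_ivp(vdp, (0, 3000), [2.0, 0.0], method="BDF", rtol=1e-6, atol=1e-8)
print(sol.success, sol.t.size)               # BDF finishes with relatively few steps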
The solvers can work on stiff or non-stiff problems, problems with a mass matrix, differential-algebraic equations (DAEs), or fully implicit problems; VODE, for example, is a package of subroutines for the numerical solution of the initial value problem for systems of first-order ordinary differential equations, and a typical MATLAB handout demonstrates solving both a second-order linear ODE and a second-order nonlinear ODE numerically. Boundary value problems for ordinary differential equations are treated separately by discrete-variable methods. Analytically, use the method of integrating factors to integrate linear first-order ODEs and first-order non-homogeneous equations; note, however, that if a coefficient function is only piecewise constant, the equation generally cannot be solved analytically in one closed form. What about equations that can be solved by Laplace transforms? That is not a problem for step-by-step programs such as Wolfram|Alpha, and the method itself is a four-step process: take the Laplace transform of both sides, put the initial conditions into the resulting algebraic equation, solve that equation for the transform of the unknown, and get the result back from Laplace transform tables (or an inverse-transform routine), as sketched below. For concreteness, suppose the initial conditions are x(0) = 0 and x′(0) = 1; in terms of the state vector y, the first component y1(0) is 0. Systems of equations can also be solved by the matrix method, and eigenvalue-based programs (first-order systems; eigenvalues, eigenvectors, and initial conditions for systems; the eigenvalue method; Richardson's arms race) follow the same pattern; for a better understanding of the syntax, it helps to first solve an ODE analytically.
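A minimal symbolic sketch of those four steps (SymPy assumed; the equation x′ + 7x = 5·cos 2t with x(0) = 0 is just an example):

# Solve x' + 7x = 5*cos(2t), x(0) = 0 by Laplace transform.
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)
X = sp.symbols('X')                 # X(s), the transform of x(t)

x0 = 0
lhs = (s * X - x0) + 7 * X          # step 1-2: L{x'} = s X - x(0), plus 7 L{x}
rhs = sp.laplace_transform(5 * sp.cos(2 * t), t, s, noconds=True)
Xs = sp.solve(sp.Eq(lhs, rhs), X)[0]            # step 3: solve the algebraic equation for X(s)
x_t = sp.inverse_laplace_transform(Xs, s, t)    # step 4: invert (the "tables" step)
print(sp.simplify(x_t))             # (35*cos(2*t) + 10*sin(2*t) - 35*exp(-7*t))/53, possibly times Heaviside(t)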
In order to solve a linear inhomogeneous problem, we first solve the homogeneous problem and then the inhomogeneous one, adding a particular solution to the general homogeneous solution. Numerical packages make the same workflow routine. In R, the deSolve package solves initial value differential equations, and its vignette uses a three-equation toy model with the initial conditions X(0) = Y(0) = Z(0) = 1, where a, b, and c are three parameters with values of −8/3, −10, and 28 respectively; the same model is easy to reproduce in Python, as sketched below. The toy model was built to simulate a simple experiment, and typical solver interfaces need only the right-hand-side function (the system), a time span, and an initial condition x0; for symbolic solvers the argument list is the equation, the independent variable, and the dependent variable. Newer ecosystems follow suit: one Julia package aims to supply efficient implementations of solvers for various kinds of differential equations. The mathematics of diseases, to take one application, is a data-driven subject: although some purely theoretical work has been done, the key element in this field of research is being able to link mathematical models and data. Finally, numerical methods for solving different types of PDEs reflect the different character of those problems, and empirical measures of a method's order of accuracy are a standard way to validate an implementation.
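A sketch of that three-variable model in Python (the right-hand sides are assumed to be the Lorenz-type system used in the deSolve vignette; treat them as illustrative):

# Lorenz-type system with a = -8/3, b = -10, c = 28 and X(0) = Y(0) = Z(0) = 1.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = -8.0 / 3.0, -10.0, 28.0

def rhs(t, state):
    X, Y, Z = state
    dX = a * X + Y * Z
    dY = b * (Y - Z)
    dZ = -X * Y + c * Y - Z
    return [dX, dY, dZ]

sol = solve_ivp(rhs, (0, 100), [1.0, 1.0, 1.0], t_eval=np.arange(0, 100, 0.01))
print(sol.y[:, -1])    # state at t = 100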
Differential Equations Calculator Applet: this is a general-purpose tool to help you solve differential equations numerically by any one of several methods. Enter initial conditions (for up to six solution curves) and press "Graph"; the numerical results are shown below the graph, and in the input syntax * is used for multiplication and a^2 means a squared, while functions such as exp(t) and sinh(t) are supported and whitespace is allowed. Wolfram|Alpha can likewise solve many problems in this branch of mathematics, including solving ODEs and finding an ODE that a given function satisfies, and Maple is capable of finding both exact solutions and numerical approximations for ODEs, boundary value problems (BVPs), and even differential-algebraic equations. When you start learning how to integrate functions, you will probably be introduced to differential equations and slope fields; an ordinary differential equation involves a function and its derivatives, and differential equations appear in engineering, physics, economics, and even biology, which is part of why the subject is challenging. For second-order homogeneous linear equations a·y″ + b·y′ + c·y = 0, the solution is organized around the roots of the characteristic polynomial a·r² + b·r + c = 0, including the case of complex roots. As exercises, find a numerical solution to the following differential equations with the associated initial conditions, and report the final value of each state as t → ∞: dx/dt + 7x = 5·cos 2t, d²x/dt² + 6·dx/dt + 8x = 5·sin 3t, and d³x/dt³ + 8·dx/dt + 25x = 10·u(t). Not every problem is an initial value problem, however: the finite difference method is used to solve ordinary differential equations that have conditions imposed on the boundary rather than at the initial point, a technique based on numerical approximations that were developed before programmable computers existed, and it is sketched below.
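A minimal finite-difference sketch for a boundary value problem (the equation y″ = −y with y(0) = 0, y(1) = 1 is a placeholder example):

# Finite differences for the BVP y'' = -y, y(0) = 0, y(1) = 1.
# Interior equations: (y[i-1] - 2 y[i] + y[i+1]) / h^2 + y[i] = 0  ->  a tridiagonal linear system.
import numpy as np

n = 50                                  # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

A = np.zeros((n, n))
np.fill_diagonal(A, -2.0 / h**2 + 1.0)  # -2/h^2 from y'' plus +1 from the y term
np.fill_diagonal(A[1:], 1.0 / h**2)     # sub-diagonal
np.fill_diagonal(A[:, 1:], 1.0 / h**2)  # super-diagonal

rhs = np.zeros(n)
rhs[-1] -= 1.0 / h**2                   # boundary value y(1) = 1 enters the last equation
y = np.linalg.solve(A, rhs)
print(y[n // 2], np.sin(x[n // 2]) / np.sin(1.0))   # numerical vs exact y = sin(x)/sin(1)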
" * If you mean "graph approximate solutions to first-order ODEs for a given set of initial conditions," then the answer is "yes"—if you install a program to do so, like the ones seen here: TI 83 Plus SLOPE FI. The MATLAB PDE solver, pdepe, solves initial-boundary value problems for systems of parabolic and elliptic PDEs in the one space variable and time. Ordinary Differential Equations (ODEs) In an ODE, the unknown quantity is a function of a single independent variable. Each row in the solution array y corresponds to a value returned in column vector t. Usually when faced with an IVP, you first find. Systems of PDEs, ODEs, algebraic equations Dene Initial and or boundary conditions to get a well-posed problem Create a Discrete (Numerical) Model Discretize the domain ! generate the grid ! obtain discrete model Solve the discrete system Analyse Errors in the discrete system Consistency, stability and convergence analysis Multiscale Summer. Solve Differential Equations in Matrix Form. Explicit and Implicit Methods in Solving Differential Equations A differential equation is also considered an ordinary differential equation (ODE) if the unknown function depends only on one independent variable. m: function xdot = vdpol(t,x). Video Lectures for Ordinary Differential Equations, MATH 3301 View these 3 videos below for Tuesday 6/11/2013. MATLAB Tutorial on ordinary differential equation solver (Example 12-1) Solve the following differential equation for co-current heat exchange case and plot X, Xe, T, Ta, and -rA down the length of the reactor (Refer LEP 12-1, Elements of chemical reaction engineering, 5th edition) Differential equations. Some of the higher end models have other other functions which can be used: Graphing Initial Value Problems - TI-86 & TI-89 have functions which will numerically solve (with Euler or Runge-Kutta) and graph a solution. In particular, to determine how solutions depend on the signs and magnitudes of the coefficients a and b and on the initial conditions. The set of such linearly independent vector functions is a fundamental system of solutions. This built-in application is accessed in several ways. Solve a system of several ordinary differential equations in several variables by using the dsolve function, with or without initial conditions. People often think that to find solutions of differential equations, you simply find an antiderivative and then use an initial condition to evaluate the constant. This gives us a way to formally classify any (linear) relationships between Romeo and Juliet. For example, state the following initial value problem by defining an ODE with initial conditions:. I divided these into initial conditions that will serve as initial conditions for the marching algorithm, and a boundary condition at the end of the problem domain (t = 1). Solves the initial value problem for stiff or non-stiff systems of first order ode-s:. If the number of conditions is less than the number of dependent variables, the solutions contain the arbitrary constants C1, C2,. Make sure you notice that the initial condition y1(0) is unknown. original problem usually to a system of algebraic equations. 6 Package deSolve: Solving Initial Value Di erential Equations in R 2. Initial Conditions and Initial-Value Problems. ode::solve(o) returns the set of solutions of the ordinary differential equation o. Or in vector terms, the initial vector is 0, 1. 
As with solving a single ODE in MATLAB, the basic syntax for solving systems is the same, with each scalar simply replaced by an analogous vector; the final argument is an array containing the time points for which to solve the system. Systems of first-order linear differential equations deserve special attention: we now turn to solving systems of simultaneous homogeneous first-order linear differential equations, and frequently, when exact solutions are unavailable, numerical methods become necessary. The Wolfram Language's differential-equation-solving functions can be applied to many different classes of differential equations, automatically selecting the appropriate algorithms without needing preprocessing by the user; how difficult a problem is depends on the differential equation, the initial condition, and the interval. Related classes include Cauchy–Euler equations, which can be identified and solved by a substitution, and delay differential equations, where the derivative depends on past values of the state variables or their derivatives (initial value DDEs can be solved in R with deSolve or PBSddesolve). There is no universally accepted definition of stiffness, and the choice of boundary and initial conditions for a given PDE is very important. Example 2: write a 4th-order differential equation as a system of first-order linear differential equations by introducing one state variable per derivative; the same reduction lets any higher-order equation be handled by a first-order solver, as sketched below. If you simplify things by setting the forcing to zero (i.e., no external forces), the homogeneous system that remains can be analyzed through its eigenvalues and eigenvectors.
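A minimal sketch of that reduction (the 4th-order equation and its coefficients are placeholders): for y⁗ + 2y‴ + 3y″ + 4y′ + 5y = 0, define the state vector s = [y, y′, y″, y‴].

# Reduce y'''' + 2 y''' + 3 y'' + 4 y' + 5 y = 0 to a first-order system s' = A s.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [-5, -4, -3, -2]])        # companion matrix of the characteristic polynomial

def rhs(t, s):
    return A @ s

s0 = [1.0, 0.0, 0.0, 0.0]               # y(0) = 1, y'(0) = y''(0) = y'''(0) = 0
sol = solve_ivp(rhs, (0, 10), s0, t_eval=np.linspace(0, 10, 101))
print(sol.y[0, -1])                      # y(10); the eigenvalues of A are the characteristic roots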
We use this machinery above all to solve initial value problems for constant-coefficient differential equations. Online systems built on Wolfram|Alpha will give a detailed, step-by-step solution to a differential equation, and the documentation topic "Solve Differential Equations with Conditions" shows the corresponding symbolic workflow. The same ideas scale up: as one forum answer notes, estimating a solution's initial conditions from observations of the state is itself a standard inference problem, called 4DVAR in numerical weather prediction, where finding the initial conditions is the crucial step in getting reasonably accurate forecasts.
CommonCrawl
Neural mechanisms underlying the hierarchical construction of perceived aesthetic value
Kiyohito Iigaya, Sanghyun Yi, Iman A. Wahle, Sandy Tanwisuth, Logan Cross & John P. O'Doherty
Nature Communications volume 14, Article number: 127 (2023)
Little is known about how the brain computes the perceived aesthetic value of complex stimuli such as visual art. Here, we used computational methods in combination with functional neuroimaging to provide evidence that the aesthetic value of a visual stimulus is computed in a hierarchical manner via a weighted integration over both low and high level stimulus features contained in early and late visual cortex, extending into parietal and lateral prefrontal cortices. Feature representations in parietal and lateral prefrontal cortex may in turn be utilized to produce an overall aesthetic value in the medial prefrontal cortex. Such brain-wide computations are not only consistent with a feature-based mechanism for value construction, but also resemble computations performed by a deep convolutional neural network. Our findings thus shed light on the existence of a general neurocomputational mechanism for rapidly and flexibly producing value judgements across an array of complex novel stimuli and situations.
How it is that humans are capable of making aesthetic judgments has long been a focus of enquiry in psychology, more recently gaining a foothold in neuroscience with the emerging field of neuroaesthetics1,2,3,4,5,6,7,8,9,10. Yet in spite of the long tradition of studying value judgments, we still have a very limited understanding of how people form aesthetic value, let alone of the neural mechanisms underlying this enigmatic process. So far, neuroscience studies into aesthetic judgments have been largely limited to identifying brain regions showing increased activity to stimuli with higher compared to lower aesthetic value (e.g.,11,12), leaving it an open question of how the brain computes aesthetic value from visual stimuli in the first place. To fill this gap, we approach this problem from a computational neuroscience perspective, by leveraging computational methods to gain insight into the neural computations underlying aesthetic value construction.
Considerable progress has been made toward understanding how the brain represents the value of stimuli in the world. Value signals have been found throughout the brain, but most prominently in the medial prefrontal (mPFC) and adjacent orbitofrontal cortex. Activity has been found in this region tracking the experienced value of reward outcomes, as well as during anticipation of future rewards11,13,14,15,16,17,18,19,20,21,22,23,24,25. The mPFC, especially its ventral aspects, has been found to correlate with the experienced pleasantness of gustatory, olfactory, music, and visual stimuli including faces, but also visual art12,26,27,28,29,30,31. While much is known about how the brain represents value, much less is known about how those value signals come to be generated by the brain in the first place. A typical approach to this question in the literature to date is to assume that stimuli acquire value through associative learning, in which the value of a particular stimulus is modulated by being associated with other stimuli with extant (perhaps even innate) value. Seminal work has identified key neural computations responsible for implementing this type of reward-based associative learning in the brain32,33,34. However, the current valuation of an object is not solely dependent on its prior associative history. Even novel stimuli never before seen, can be assigned a value judgment35. Moreover, the current value of a stimulus depends on one's internal motivational state, as well as the context in which a stimulus is presented. Consequently, the value of an object may be computed on-line in a flexible manner that goes beyond simple associative history. Previous work in neuroeconomics has hinted at the notion that value can be actively constructed by taking into account different underlying features or attributes of a stimulus. For instance, a t-shirt might have visual and semantic components36, a food item might vary in its healthfulness and taste or in its nutritive content37,38, an odor is composed of underlying odor molecules39. Furthermore, potential outcomes in many standard economic decision-making problems can be described in terms of the magnitude and probability of those outcomes40,41. For a given outcome, these individual features of such potential outcomes can be each weighted so that they are taken into account when making an overall value determination. Building upon these ideas, we recently proposed that the value of a stimulus, including a piece of art, is actively constructed in a two-step process by first breaking down a stimulus into its constituent features and then by recombining these features in a weighted fashion, to compute an overall subjective value judgment42,43. In a recent study we showed that it is possible to demonstrate that this same feature-based value construction process can be used to gain an understanding about how humans might value works of art as well as varieties of online photograph images42. Using a combination of computer vision tools and machine learning, we showed that it is possible to predict an individual's subjective aesthetic valuation for a work of art and photography, by segmenting a visual scene into its underlying visual features, and then combining those features together in a weighted fashion. 
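To make the weighted-combination step concrete, here is a minimal, hypothetical sketch of a feature-based value model (the synthetic data and the use of scikit-learn are illustrative assumptions, not the code used in the study):

# Linear feature summation: predicted value = weighted sum over per-image feature scores.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, n_features = 1000, 20                # e.g., low-level visual + high-level annotated features
X = rng.normal(size=(n_images, n_features))    # one row of feature values per painting (placeholder)
true_w = rng.normal(size=n_features)
ratings = X @ true_w + rng.normal(scale=0.5, size=n_images)   # synthetic liking ratings (the task used a 0-3 scale)

model = LassoCV(cv=5).fit(X, ratings)          # sparse, cross-validated weights over features
print("cross-validated r^2:", cross_val_score(LassoCV(cv=5), X, ratings, cv=12).mean())
print("per-feature weights:", model.coef_)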
While this prior work42 provides empirical support for the applicability of the value construction process for understanding aesthetic valuation, nothing is yet known about whether this approach is actually a plausible description of what might actually be occurring in the brain, a crucial step for validating this model as a biological mechanism. Establishing how the brain might solve the feature integration process for art is uniquely challenging because of the complexity and diversity of visual art. Even in paintings alone, there are an overwhelmingly broad range of objects, themes, as well as styles that are used across artworks. The brain's value computation mechanism therefore needs to generalize across all of these diverse stimuli, in order to compute the value of them reliably. However, it is not known how the brain can transform heterogeneous, high-dimensional input, into a simple output of an aesthetic judgment. Here we address these challenges by combining computational modelling with neuroimaging data. Following our prior behavioral evidence, we propose that the brain performs aesthetic value computations for visual art through extracting and integrating visual and abstract features of a visual stimulus. In our linear feature summation model (LFS)42, the input is first decomposed into various visual features characterizing the color or shape of the whole and segments of the paintings. These features are then transformed into abstract high-level features that also affect value judgement (e.g., how concrete or abstract the painting is). This feature space enables a robust value judgment to be formed for visual stimuli even never before seen, through a simple linear regression over the features. We also recently reported that the features predicting value judgment in the LFS model naturally emerge in a generic deep convolutional neural network (DCNN) model, suggesting a close relationship between these two models42. Here we test whether these computational models actually approximate what is going on in the brain. By doing so, we will attempt to link an explicit, interpretable feature-based value computation and a generic DCNN model to actual neural computations. Because our model of value construction is agnostic about the type of object that is being valued, our proposed mechanism has the potential to not only account for aesthetic value computation but also to value judgments across stimulus domains beyond the domain of aesthetics for art. Linear feature summation (LFS) model predicts human valuation of visual art We conducted an fMRI study in which we aimed to link two complementary computational models (LFS and DCNN) to neural data as a test of how feature-based value construction might be realized in the brain. Rather than collecting noisy data from a large number of participants with very short scanning times to perform group averaging, here, we engaged in deep fMRI scanning of a smaller group of individuals (n = 6), who each completed 1000 trials of our art rating task (ART) each over four days of scanning. This allowed us to test for the representation of the features in each individual participant with sufficient fidelity to perform reliable single subject inference. This well-justified approach essentially treats the individual participant as the replication unit, rather than relying on group-averaged data from participants in different studies44. 
This has been a dominant and highly successful approach in most non-human animal studies (e.g.,14,18,20,22,24,33,45,46), as well as in two subfields of human psychology and neuroscience: psychophysics and visual neuroscience, respectively (e.g.,47,48). On each trial, participants were presented with an image of a painting on a computer screen and asked to report how much they liked it on a scale of 0 (not at all) to 3 (very much) (Fig. 1a). Each of the participants rated all of the paintings without repetition (1000 different paintings). The stimulus set consisted of paintings from a broad range of art genres (Fig. 1b)42. Fig. 1: Neuroimaging experiments and the model of value construction. a Neuroimaging experiments. We administered our task (ART: art rating task) to human participants in an fMRI experiment. Each participant completed 20 scan sessions spread over four separate days (1000 trials in total with no repetition of the same stimuli). On each trial, a participant was presented with a visual art stimulus (paintings) for 3 s. The art stimuli were the same as in our previous behavioral study42. After the stimulus presentation, a participant was presented with a set of possible ratings (0,1,2,3), where they had to choose one option within 3 s, followed by brief feedback with their selected rating (0.5 s). The positions of the numbers were randomized across trials, and the order of presented stimuli was randomized across participants. b Example stimuli. The images were taken from four categories from Wikiart.org.: Cubism, Impressionism, Abstract art and Color Fields, and supplemented with art stimuli previously used52. c The idea of value construction. An input is projected into a feature space, in which the subjective value judgment is performed. Importantly, the feature space is shared across stimuli, enabling this mechanism to generalize across a range of stimuli, including novel ones. d Schematic of the LFS model42. A visual stimulus (e.g., artwork) is decomposed into various low-level visual features (e.g., mean hue, mean contrast), as well as high-level features (e.g., concreteness, dynamics). We hypothesized that in the brain high-level features are constructed from low-level features, and that subjective value is constructed from a linear combination of all low and high-level features. e How features can help construct subjective value. In this example, preference was separated by the concreteness feature. Reproduced from42. f In this example, the value over the concreteness axis was the same for four images; but another feature, in this case, the brightness contrast, could separate preferences over art. Reproduced from42. g The LFS model successfully predicts participants' liking ratings for the art stimuli. The model was fit to each participant (cross-validated). Statistical significance was determined by a permutation test (one-sided). Three stars indicate p < 0.001. Due to copyright considerations, some paintings presented here are not identical to that used in our studies. Credit. Jean Metzinger, Portrait of Albert Gleizes (public domain; RISD Museum). We recently showed that a simple linear feature summation (LFS) model can predict subjective valuations for visual art both for paintings and photographs drawn from a broad range of scenery, objects, and advertisements42. The idea is that the subjective value of an individual painting can be constructed by integrating across features commonly shared across all paintings. 
For this, each image was decomposed into its fundamental visual and emotional features. These feature values are then integrated linearly, with each participant being assigned a unique set of features weights from which the model constructs a subjective preference (Fig. 1c, d). This model embodies the notion that subjective values are computed in a common feature space, whereby overall subjective value is computed as a weighted linear sum over feature content (Fig. 1e, f). The LFS model extracts various low-level visual features from an input image using a combination of computer vision methods (e.g.,49). This approach computes numerical scores for different aspects of visual content in the image, such as the average hue and brightness of image segments, as well as the entirety of the image itself, as identified by machine learning techniques, e.g., Graph-Cuts50 (Details of this approach are described in the Methods section; see also42). The LFS model also includes more abstract or "high-level" features that can contribute to valuation. Based on previous literature51,52, we introduced three features: the image is 'abstract or concrete'53, 'dynamic or still', 'hot or cold'. These three features are introduced in ref. 52, by taking principal components of features originally introduced in ref. 51. We also included a fourth feature: whether the image evinces a positive or negative emotional valence42. Note that image valence is not the same as valuation because even negative valence images can elicit a positive valuation (e.g., Edvard Munch's "The Scream"); moreover we have previously shown that valence is not among the features that account for valuation the best42. These high-level features across images were annotated by participants with familiarity and experience in art (n = 13) as described in our previous study42. We then took the average score over these experts' ratings as the input into the model representing the content of each high-level attribute feature for each image. We have previously shown that it is possible to re-construct significant, if not all of the variance explained by high-level features using combinations of low-level features42, supporting the possibility that low-level features can be used to construct high level features. The final output of the LFS model is a linear combination of low- and high-level features. We assumed that weights over the features are fixed for each individual across stimuli, so that we can derive generalizable conclusions about the features used to generate valuation across images. In our behavioral fitting, we treat low-level and high-level features equally as features of a linear regression model, in order to determine the overall predictive power of our LFS model. In our recent behavioral study, we identified a minimal feature set that predicts subjective ratings across participants using a group-level lasso regression42. Here, we applied the same analysis, except that we distinguished low- and high-level features for the purpose of the fMRI analysis. Since our interests hinge on the representational relationship between low- and high-level features in the brain, we first identified a set of low-level features that can predict behavioral liking ratings across participants, further augmenting this to a richer feature set that includes the four human-annotated features. 
By doing so, we aimed to identify brain signals that are uniquely correlated with low-level and high-level features (i.e., partial correlations between features and fMRI signals). Before turning to the fMRI data, we first aimed to replicate the behavioral results reported in our previous study42 in these fMRI participants. Indeed, using the LFS model with the shared feature set, we confirmed that the model could predict subjective art ratings across participants, replicating our previous behavioral findings (Fig. 1g; see Supplementary Figs. 1 and 2 for the estimated weights for each participant and their correlations). A deep convolutional neural network (DCNN) model also predicts human liking ratings for visual art An alternative approach to predict human aesthetic valuation for visual images is to use a generic deep neural network model that takes as its input the visual images, and ordinarily generates outputs related to object recognition. Here, we utilized a standard DCNN (VGG 1654) that had been pre-trained for object recognition with ImageNet55, and adapted it to instead output aesthetic ratings by training it on human aesthetic ratings. This approach means that we do not need to identify or label specific features that lead to aesthetic ratings, instead we can use the network to automatically detect the relevance of an image and use those to predict aesthetic ratings. Though the nature of computation that DCNN performs is usually very difficult to interpret, we have recently found that this type of DCNN model can produce results that are strongly related to the LFS model42. In particular, we have found that the LFS model features are represented in the DCNN model, even though we did not train the DCNN on any features explicitly (Fig. 2a). By performing a decoding analysis on each layer of the DCNN, we found that the low-level features show decreased decoding accuracy with increased depth of the layer, while the high-level features are more decodable in deeper layers. This suggests that the DCNN may also utilize similar features to those that we introduced in the LFS model, and the fact that features are represented hierarchically in a DCNN model that is blind to specific features suggests that these features might emerge spontaneously via a natural process of visual and cognitive development through interacting with natural stimuli56. Fig. 2: The deep convolutional neural network (DCNN) model naturally encodes low-level and high-level features and predict participants' choice behavior. a Schematic of the deep convolutional neural network (DCNN) model and the results of decoding analysis42. The DCNN model was first trained on ImageNet object classifications, and then the average ratings of art stimuli. We computed correlations between each of the LFS model features and activity patterns in each of the hidden layers of the DCNN model. We found that some low-level visual features exhibit significantly decreasing predictive accuracy over hidden layers (e.g., the mean hue and the mean saturation). We also found that a few computationally demanding low-level features showed the opposite trend (see the main text). We further found that some high-level visual features exhibit significantly increasing predictive accuracy over hidden layers (e.g., concreteness and dynamics). Results reproduced from42. b The DCNN model could successfully predict human participants' liking ratings significantly greater than chance across all participants. 
Statistical significance (p < 0.001, indicated by three stars) was determined by a permutation test (one-sided). Credit. Jean Metzinger, Portrait of Albert Gleizes (public domain; RISD Museum). Here we found that the DCNN model can predict subjective liking ratings of art across all fMRI participants (Fig. 2b), once again replicating our previous finding in a different dataset42. Predictive accuracy across participants was found to be similar to that of the LFS model, though the DCNN model could potentially perform better with even more training. So far, we have confirmed the validity of our LFS model and the use of a DCNN model to predict human behavioral ratings reported in our new fMRI experiments, replicating our previous behavioral study42. Now that we have validated our behavioral predictions, next we turn to the brain data to address how the brain implements the value construction process. The subjective value of art is represented in the medial prefrontal cortex (mPFC) We first tested for brain regions correlating with the subjective liking ratings of each individual stimulus at the time of stimulus onset. We expected to find evidence for subjective value signals in the medial prefrontal cortex (mPFC), given this is the main area found to correlate with value judgments for many different stimuli from an extensive prior literature, including for visual art (e.g.,11,12,14,17,23,31,57). Consistent with our hypothesis, we found that voxels in the mPFC are positively correlated with subjective value across participants (Fig. 3; See Supplementary Fig. 3 for the timecourse of the BOLD signals in the mPFC cluster). Consistent with previous studies, e.g.,38,58,59,60,61,62,63, other regions are also correlated with liking value (Supplementary Figs. 5 and 6). Fig. 3: Subjective value (i.e., liking rating). Subjective value for art stimuli at the time of stimulus onset was found in the medial prefrontal cortex in all six fMRI participants (One-sided t-test. An adjustment was made for multiple comparisons: whole-brain cFWE p < 0.05 with height threshold at p < 0.001). These subjective value signals could reflect other psychological processes such as attention. Therefore we performed a control analysis with the same GLM with additional regressors that can act as proxies for the effects of attention and memorability of stimuli, operationalized by reaction times, squared reaction times and the deviation from the mean rating64. We found that subjective value signals in all participants that we report in Fig. 3c survived this control analysis (Supplementary Fig. 7). Visual stream shows hierarchical, graded, representations of low-level and high-level features As illustrated in Fig. 1d, and reflecting our hypothesis regarding the encoding of low vs. high-level features across layers of the DCNN, we hypothesized that the brain would decompose visual input similarly, with early visual regions first representing low-level features, and with downstream regions representing high-level features. Specifically, we analyzed visual cortical regions in the ventral and dorsal visual stream65 to test the degree to which low-level and high-level features are encoded in a graded, hierarchical manner. In pursuit of this, we constructed a GLM that included the shared feature time-locked to stimulus onset. We identified voxels that are significantly modulated by at least one low-level feature by performing an F-test over the low-level feature beta estimates, repeating the same analysis with high-level features. 
We then compared the proportion of voxels that were significantly correlated with low-level features vs. high-level features in each region of interest in both the ventral and dorsal visual streams. This method allowed us to compare results across regions while controlling for different signal-to-noise ratios in the BOLD signal across different brain regions66. Regions of interest were independently identified by means of a detailed probabilistic visual topographical map65. Consistent with our hypothesis, our findings suggest that low- and high-level features relevant for aesthetic valuation are indeed represented in the visual stream in a graded, hierarchical manner. Namely, the relative encoding of high-level features with respect to low-level features dramatically increases across the visual ventral stream (Fig. 4a). We found a similar hierarchical organization in the dorsolateral visual stream (Fig. 4b), albeit less clearly demarcated than in the ventral case. We also confirmed in a supplementary analysis that assigning feature levels (high or low) according to our DCNN analysis, i.e., by using the slopes of our decoding results42, did not change the results of our fMRI analyses qualitatively and did not affect our conclusions (see Supplementary Fig. 8).

Fig. 4: fMRI signals in visual cortical regions show similarity to our LFS model and DCNN model. a Encoding of low- and high-level features in the visual ventral-temporal stream in a graded, hierarchical manner. In general, the relative encoding of high-level features with respect to low-level features increases dramatically across the ventral-temporal stream. The maximum probabilistic map65 is shown color-coded on the structural MR image at the top to illustrate the anatomical location of each ROI. The proportion of voxels that significantly correlated with low-level features (blue; one-sided F-test p < 0.001) versus high-level features (red; one-sided F-test p < 0.001) is shown for each ROI. See the "Methods" section for details. b Encoding of low- and high-level features in the dorsolateral visual stream. The anatomical location of each ROI65 is color-coded on the structural MR image. c Encoding of DCNN features (hidden layers' activation patterns) in the ventral-temporal stream. The top three principal components (PCs) from each layer of the DCNN were used as features in this analysis. In general, early regions more heavily encode representations found in early layers of the DCNN, while higher-order regions encode representations found in deeper CNN layers. The proportion of voxels that significantly correlated with PCs of convolutional layers 1–4 (light blue), convolutional layers 5–9 (blue), convolutional layers 10–13 (purple), and fully connected layers 14–15 (pink) is shown for each ROI. Significance was set at p < 0.001 by a one-sided F-test. d Encoding of DCNN features in the dorsolateral visual stream. Credit. Jean Metzinger, Portrait of Albert Gleizes (public domain; RISD Museum). We also performed an additional encoding analysis using cross-validation at each voxel of each participant67. Specifically, we performed a lasso regression at each voxel with the low- and high-level features that we considered in our original analyses. Hyperparameters were optimized by 12-fold cross-validation at each voxel across stimuli. As a robustness check, we determined whether our GLM results could be reproduced using the lasso regression analysis. We analyzed how low-level feature weights and high-level feature weights changed across ROIs.
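To make the voxelwise procedure concrete, the following is a minimal sketch of such a cross-validated lasso encoding analysis, assuming the single-trial response amplitudes for each voxel and the stimulus-by-feature design matrix have already been extracted; the array names, shapes, and random data are illustrative rather than taken from our dataset.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Illustrative shapes (placeholders, not the paper's data):
# features: (n_stimuli, n_features) low- and high-level feature matrix
# voxel_responses: (n_stimuli, n_voxels) stimulus-locked response amplitudes
rng = np.random.default_rng(0)
n_stimuli, n_low, n_high, n_voxels = 1000, 13, 5, 200
features = rng.standard_normal((n_stimuli, n_low + n_high))
voxel_responses = rng.standard_normal((n_stimuli, n_voxels))

low_idx = np.arange(n_low)                    # columns holding low-level features
high_idx = np.arange(n_low, n_low + n_high)   # columns holding high-level features

low_mag = np.zeros(n_voxels)
high_mag = np.zeros(n_voxels)
for v in range(n_voxels):
    # Tune the lasso penalty with 12-fold cross-validation at this voxel
    model = LassoCV(cv=12).fit(features, voxel_responses[:, v])
    w = model.coef_
    # Summarize feature-group encoding by the sum of squared weights
    low_mag[v] = np.sum(w[low_idx] ** 2)
    high_mag[v] = np.sum(w[high_idx] ** 2)
```

In the actual analysis, these weight magnitudes were then compared against a permutation null obtained by shuffling stimulus labels, as described next.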
To quantify this, we computed the sum of squares of the low-level feature weights and the sum of squares of the high-level feature weights at each voxel. Because these weight estimates include some that can be obtained by chance, we also computed the same quantities after performing the lasso regression with shuffled stimulus labels (labels were shuffled for every regression). The null distribution of feature magnitudes (the sum of squares) was estimated for low-level features and high-level features at each ROI. For each voxel, we asked whether the estimated low-level and high-level feature weights were significantly larger than expected from noise by comparing the magnitude of the weights against the null distribution (p < 0.001). We then examined how the encoding of low-level vs. high-level features varied across ROIs, as we did in our original GLM analysis. As seen in Supplementary Fig. 9, the original GLM analysis results were largely reproduced in the lasso regression. Namely, low-level features are more prominently encoded in early visual regions, while high-level features are more prominently encoded in higher visual regions. In this additional analysis, such effects were clearly seen in five out of six participants, while one participant (P1) showed less clear early vs. late region-specific differentiation with regard to low- vs. high-level feature representation. We also note that the model's predictive accuracy in visual regions was lower for this participant (P1) than for the rest of the participants (Supplementary Fig. 10).

Non-linear feature representations

We found that features of the LFS model are represented across brain regions and contribute to value computation. However, it is possible that nonlinear combinations of these features are also represented in the brain and that these may contribute to value computation. To explore this possibility, we constructed a new set of nonlinear features by multiplying pairs of the LFS model's features (interaction terms). We grouped these new features into three groups: interactions between pairs of low-level features (low-level × low-level), interactions between pairs of low-level and high-level features (low-level × high-level), and interactions between pairs of high-level features (high-level × high-level). To control the dimensionality of the new feature groups, we performed principal component analysis within each of the three groups of non-linear features and took the first five PCs to match the number of high-level features specified in our original LFS model. We performed a LASSO regression analysis with these new features and the original features. We found that in most participants, non-linear features created from pairs of high-level features produced significant correlations with neural activity across multiple regions, while also showing similar evidence for a hierarchical organization from early to higher-order regions, as found for the linear high-level features (Fig. 5, Supplementary Fig. 11). Though comparisons between separately optimized lasso regressions should be cautiously interpreted, the mean correlations of the model with both linear and nonlinear features across ROIs showed a slight improvement in predictive accuracy compared to the original LFS model with only linear features (Supplementary Fig. 10), while the DCNN model features outperformed both the original LFS model and the LFS model with nonlinear features.

Fig. 5: Encoding of nonlinear feature representations.
We performed an encoding analysis of low-level, high-level, and interaction-term features (low × low, high × high, low × high), using lasso regression with within-subject cross-validation. The results for ROIs in the ventral-temporal and dorsolateral visual streams are shown. Indeed, nonlinear features created from pairs of high-level features contributed significantly more to behavioral choice predictions than did other nonlinear features not built solely from high-level features (Supplementary Fig. 12). The first principal component of the high-level × high-level features captured the behavior of three participants (P3, P5, P6) well, while the other participants showed somewhat different weight profiles. However, we found that these newly added features only modestly improved the model's behavioral predictions (Supplementary Fig. 13).

DCNN model representations

We then tested whether activity patterns in these visual regions resemble the computations performed by the hidden layers of the DCNN model. We extracted the first three principal components from each layer of the DCNN and included them as regressors in a GLM. Indeed, we found evidence that both the ventral and dorsal visual streams exhibit a hierarchical organization similar to that of the DCNN, such that lower visual areas correlated better with activity in the early hidden layers of the DCNN, while higher-order visual areas (in both visual streams) tended to correlate better with activity in deeper hidden layers of the DCNN (Fig. 4c, d). We also performed additional analyses with LASSO regression using the DCNN features. To test whether we could reproduce the DCNN results originally obtained with the GLM approach (as shown in Fig. 4), we first performed LASSO regression with the same 45 features from all hidden layers. Hyperparameters were optimized by 12-fold cross-validation. The estimated weights were compared against the null distribution of each ROI, constructed from the same analysis with shuffled stimulus labels. We then also performed the same analysis but with a larger set of features (150 features). In Supplementary Figs. 14 and 15, we show how the weights on features from different layers varied across different ROIs in the visual stream. We computed the sum of squared weights for each group of hidden layers (layers 1–4, 5–9, 10–13, 14–15). Again, in order to discard weight estimates that could be obtained by chance, we computed a null distribution by repeating the same analysis with shuffled labels and retained the weight estimates that were significantly larger than the null distribution (at p < 0.001) in each ROI. We again found that LASSO regression with within-subject cross-validation reproduced our original GLM analysis results. As a further control analysis, we asked whether similar results could be obtained from a DCNN model with random, untrained weights68. We repeated the same LASSO regression analysis as in our analysis with the trained DCNN model. We found that such a model does not reproduce the hierarchical representation of layers across the visual stream and other cortical areas that we found in the analysis with trained DCNN weights (Supplementary Figs. 16 and 17).

PPC and PFC show mixed coding of low- and high-level features

We next probed these representations in downstream regions of association cortex69,70. We performed the same analysis with the same GLM as before in regions of interest that included the posterior parietal cortex (PPC), lateral prefrontal cortex (lPFC), and medial prefrontal cortex (mPFC).
We found that both the LFS model features and the DCNN layers were represented in these regions in a mixed manner71,72. We found no clear evidence for a progression of the hierarchical organization that we had observed in the visual cortex; instead, each of these regions appeared to represent both low- and high-level features to a similar degree (Fig. 6a). Activity in these regions also correlated with hidden layers of the DCNN model (Fig. 6b). We obtained similar results using a LASSO regression analysis with cross-validation based on either the LFS model features (Supplementary Fig. 18) or the DCNN features (Supplementary Figs. 19 and 20). These findings suggest, as we show below, that these regions play a primary role in the feature integration required for subjective value computations.

Fig. 6: Parietal and prefrontal cortex encode features in a mixed manner. a Encoding of low- and high-level features from the LFS model in posterior parietal cortex (PPC), lateral prefrontal cortex (lPFC) and medial prefrontal cortex (mPFC). The ROIs used in this analysis are indicated by colors shown in a structural MR image at the top. b Encoding of the DCNN features (activation patterns in the hidden layers) in PPC and PFC. The same analysis method as in Fig. 4 was used. Credit. Jean Metzinger, Portrait of Albert Gleizes (public domain; RISD Museum).

Features encoded in PPC and lPFC are strongly coupled to the subjective value of visual art in mPFC

Having established that both the engineered LFS model features and the emergent DCNN model features are hierarchically represented in the brain, we asked if and how these features are ultimately integrated to compute the subjective value of visual art. First, we analyzed how aesthetic value is represented across cortical regions alongside the model features by adding the participant's subjective ratings to the GLM. We found that subjective values are, in general, more strongly represented in the PPC as well as in the lateral and medial PFC than in early and late visual areas (Fig. 7a and Supplementary Fig. 21). Furthermore, value signals appeared to become more prominent in medial prefrontal cortex compared to the lateral parietal and prefrontal regions (consistent with a large prior literature, e.g.,11,14,17,23,31,57,73). This pattern was not altered when we controlled for reaction times and the distance of individual ratings from the mean rating, proxy measures for the degree of attention paid to each image (Supplementary Fig. 22). In a further validation of our earlier feature encoding analyses, we found that the pattern of hierarchical feature representation in visual regions was unaltered by the inclusion of ratings in the GLM (Supplementary Fig. 23). We note that using the DCNN model to classify features as either high- or low-level, as opposed to relying on the a priori assignment from the LFS model, did not change the results of our fMRI analyses qualitatively and does not affect our conclusions (Supplementary Fig. 8).

Fig. 7: Features are integrated from PPC and lateral PFC to medial PFC when constructing the subjective value of visual art. a Encoding of low- and high-level features (green) and liking ratings (red) across brain regions. Note that the ROIs for the visual areas are now grouped as V1-2-3 (V1, V2 and V3) and V-high (visual areas higher than V3). See the Methods section for details. b Schematic of the functional coupling analysis used to test how feature representations are coupled with subjective value.
We identified regions that encode features (green) by performing a one-sided F-test (p < 0.05 whole-brain cFWE with a height threshold of p < 0.001). We also performed a psychophysiological interaction (PPI) analysis (orange: p < 0.001 uncorrected) to determine the regions that are coupled to the seed regions in mPFC that encode subjective value (i.e., liking rating) during stimulus presentation (red: seed, see Supplementary Fig. 24). We then tested the proportion of overlap between the voxels identified in these analyses in a given ROI. c The results of the functional coupling analysis show that features represented in the PPC and lPFC are coupled with the region in mPFC encoding subjective value. This result contrasts dramatically with a control analysis focusing on ITIs instead of stimulus presentations (Supplementary Fig. 26). Credit. Jean Metzinger, Portrait of Albert Gleizes (public domain; RISD Museum). These results suggest that rich feature representations in the PPC and lateral PFC could potentially be leveraged to construct subjective values in mPFC. However, it is also possible that features represented in visual areas are directly used to construct subjective value in mPFC. To test this, we examined which of the voxels representing the LFS model features across the brain are coupled with voxels that represent subjective value in mPFC at the time when participants make decisions about the stimuli. A strong coupling would support the possibility that such feature representations are integrated at the time of decision-making in order to support a subjective value computation. To test for this, we first performed a psychophysiological interaction (PPI) analysis, examining which voxels are coupled with regions that represent subjective value when participants made decisions (Fig. 7b and Supplementary Fig. 24). We stress that this is not a trivial signal correlation, as in our PPI analysis all the value and feature signals are regressed out. Therefore the coupling is due to noise correlations between voxels. We then asked how many of the feature-encoding voxels overlap with these PPI voxels. Specifically, we tested the fraction of feature-encoding voxels that are also correlated with the PPI regressor in each ROI. Finding overlap between feature-encoding voxels and PPI connectivity effects would be consistent with a role for these feature-encoding representations in value construction. We found that the overlap was most prominent in the PPC and lPFC, while there was virtually no overlap in the visual areas at all (Fig. 7c), consistent with the idea that features in the PPC and lPFC, rather than in visual areas, are involved in constructing subjective value representations in mPFC. A more detailed decomposition of the PFC ROI from the same analysis shows the contribution of individual sub-regions of lateral and medial PFC (Supplementary Fig. 25). We also performed a control analysis to test the specificity of the coupling to an experimental epoch by constructing a similar PPI regressor locked to the epoch of inter-trial intervals (ITIs). This analysis showed a dramatically altered coupling that did not involve the same PPC and PFC regions (Supplementary Fig. 26). These findings indicate that coupling of PPC and lPFC with mPFC value representations occurs specifically at the time that subjective value computations are being performed, suggesting that these regions play an integrative role, combining feature representations at the time of valuation.
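As an illustration of the final step of this analysis, the sketch below computes the overlap fraction for one ROI from binarized statistical maps; the toy boolean arrays stand in for the thresholded F-test map, the PPI map, and the ROI mask, which in practice would be loaded from the corresponding images.

```python
import numpy as np

def overlap_fraction(feature_mask: np.ndarray, ppi_mask: np.ndarray, roi_mask: np.ndarray) -> float:
    """Fraction of feature-encoding voxels within an ROI that are also PPI-coupled."""
    feat_in_roi = feature_mask & roi_mask
    n_feat = feat_in_roi.sum()
    if n_feat == 0:
        return np.nan  # no feature-encoding voxels in this ROI
    return float((feat_in_roi & ppi_mask).sum() / n_feat)

# Toy binary volumes (True = significant voxel); shapes are illustrative
rng = np.random.default_rng(1)
shape = (20, 20, 20)
feature_mask = rng.random(shape) > 0.8
ppi_mask = rng.random(shape) > 0.8
roi_mask = np.zeros(shape, dtype=bool)
roi_mask[5:15, 5:15, 5:15] = True
print(overlap_fraction(feature_mask, ppi_mask, roi_mask))
```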
We note, however, that all of our analyses are based on correlations, which do not provide information about the direction of the coupling. It is an open question how the human brain computes the value of complex stimuli such as visual art1,3,6,74. Here, we addressed this question by applying two different computational models to neuroimaging data, demonstrating how the brain transforms visual stimuli into subjective value, all the way from the primary visual cortex to parietal and prefrontal cortices. The linear feature summation (LFS) model directly formulates our hypothesis, by extracting interpretable stimulus features and integrating over the features to construct subjective value. This linear regression model is related to a standard deep convolutional neural network (DCNN) trained on object recognition, because we found that the LFS model features are represented in hidden layers of the DCNN in a hierarchical manner. Here we found that both of these models predict participants' neural activity across the brain, from the visual cortex to the prefrontal cortex. Though our correlation-based analyses do not address the directionality of information processing across regions, our results shed light on a possible mechanism by which the brain could transform a complex visual stimulus into a simple value that informs decisions, using a rich feature space shared across stimuli. Focusing first on the visual system, we found that low-level features that predict visual art preferences are represented more robustly in early visual cortical areas, while high-level features that predict preferences are increasingly represented in higher-order visual areas. These results support a hierarchical representation of the features required for valuation of visual imagery, and further support a model whereby lower-level features extracted by early visual regions are integrated to produce higher-level features in the higher visual system75. While the notion of hierarchical representations in the visual system is well established in the domain of object recognition76,77,78,79, our results substantially extend these findings by showing that features relevant to a very different behavioral task, forming value judgments, are also represented robustly in a similar hierarchical fashion. We then characterized the process through which feature representations are mapped onto a single subjective value dimension in a network of brain regions, including the posterior parietal cortex (PPC) and the lateral and medial prefrontal cortices (lPFC and mPFC). While previous studies have hinted at the use of such a feature-based framework in the prefrontal cortex (PFC), especially in orbitofrontal cortex (OFC), in those previous studies the features were more explicit properties of a stimulus (e.g., the movement and the color of dots45,80,81, or items that are suited to a functional decomposition such as food odor39 or nutritive components of food38; see also refs. 36, 37). Here we show that features relevant for computing the subjective value of visual stimuli are widely represented in lPFC and PPC, whereas subjective value signals are more robustly represented in parietal and frontal regions, with the strongest representation in mPFC. Further, we showed that the PFC and PPC regions encoding low- and high-level features enhanced their coupling with the mPFC region encoding subjective value at the time of image presentation.
While further experiments are needed to infer the directionality of the connectivity effects, our findings are compatible with a framework in which low- and high-level feature representations in lPFC and PPC are utilized to construct value representations in mPFC, as we hypothesized in the LFS model. Going beyond our original LFS model, we also found that in most participants, non-linear features created from pairs of the high-level features specified in the original model produced significant correlations with neural activity across multiple regions, while largely showing similar evidence for a hierarchical organization from early to higher-order regions, as found for the linear high-level features. These findings indicate that the brain encodes a much richer set of features than the set of low-level and high-level features specified in our original LFS model. It will be interesting to see if the nonlinear features that we introduced here, especially the ones constructed from pairs of high-level features, can also be used to support behavioral judgments beyond the simple value judgments studied here, such as object recognition and other more complex judgments53. We also note that there are other ways to construct nonlinear features. Further studies with a richer set of features, e.g., other forms of interactions, may improve behavioral and neural predictions. While previous studies have suggested similarities between representations of units in DCNN models for object recognition and neural activity in the visual cortex (e.g.,82,83,84,85), here we show that the DCNN model can also be useful for informing how visual features are utilized for value computation across a broader expanse of the brain. Specifically, we found evidence to support the hierarchical construction of subjective value, where the early layers of the DCNN correlate with early areas of the visual system, and the deeper layers of the DCNN correlate with higher areas of the visual system. Information from all of the DCNN layers was represented to a similar degree in the PPC and PFC. These findings are consistent with the suggestion that the hierarchical features which emerge in the visual system are projected into the PPC and PFC to form a rich feature space for constructing subjective value. Further studies using neural network models with recurrent connections46 may illuminate more detail, such as the temporal dynamics, of value construction in such a feature space across brain regions. Accumulating evidence has suggested that value signals can be found widely across the brain, including even in sensory regions (e.g.,38,58,59,60,61,62,63), posing a question about the differential contribution of different brain regions if value representations are so ubiquitous. While we also saw multiple brain regions that appeared to correlate with value signals during aesthetic valuation, our results suggest an alternative account for the widespread prevalence of value signals, which is that some components of the value signals, especially in sensory cortex, might reflect features that are ultimately used to construct value at later stages of information processing, rather than value itself. Because neural correlates of features have not been probed previously, our results suggest that it may be possible to reinterpret at least some apparent value representations as reflecting the encoding of precursor features instead of value per se.
In the present case, even after taking feature representations into account, value signals were still detectable in the medial prefrontal cortex and elsewhere, supporting the notion that some brain regions are more involved in value coding than others. In future work it may be possible to dissociate value even more clearly from its sensory precursors by manipulating the context in which stimuli are presented, such that features remain invariant across contexts while the value changes. In doing so, further studies can illuminate finer dissociations between features and value signals43. One open question is how the brain has come to be equipped with a feature-based value construction architecture. We recently showed that a DCNN model trained solely on object recognition tasks represents the LFS model's low- and high-level features in its hidden layers in a hierarchical manner, suggesting the possibility that such features could naturally emerge over development42. While the similarity between the DCNN and LFS model correlations with fMRI responses in adult participants provides a promising link between these models and the brain, further investigations applying these models to studies with children or other species have the potential to inform understanding of the origin of feature-based value construction across development and across species. Following the typical approach utilized in non-human primate and other animal neurophysiology, as well as in human visual neuroimaging, we performed in-depth scanning (20 sessions) in a relatively small number of participants (six) in order to address our neural hypotheses. Because we were able to obtain a sufficient amount of fMRI data in individual participants, we were able to reliably perform single-subject inference in each participant and evaluate the results across participants side-by-side. This approach contrasts with a classic group-based neuroimaging study, in which results are obtained from the group average of many participants, each of whom completes only short sessions, thus providing data with a low signal-to-noise ratio. One advantage of our approach over the group-averaging approach is that we can treat each participant as a replication unit, meaning that we can obtain multiple replications44 from one study instead of just one group result. If every participant shows similar patterns, then it is unlikely that those results are spurious, and much more likely that they reflect a true property of human brain function. We indeed found that all participants exhibited the hypothesized feature-based value construction across the brain. Another advantage of our methodological approach concerns possible heterogeneity across participants. Not all brains are the same, and there is known to be considerable variation in the location and morphology of different brain areas across individuals86. Thus, it is unlikely that all brains actually represent the same variable at the same MNI coordinates. The individual subject-based approach to fMRI analyses used here takes individual neuroanatomical variation into account, allowing for generalization that goes beyond a spatially smoothed average that does not represent any real brain. We note that one important limitation of this in-depth fMRI method is that it is not ideal for studying and characterizing differences across individuals. To gain a comprehensive account of such variability across individuals, it would be necessary to collect data from a much larger cohort of participants.
As it is not feasible to scale the in-depth approach to such large cohorts due to experimenter time and resource constraints, such individual-difference studies would necessarily require adopting more standard group-level scanning approaches and analyses. While we found that results from the visual cortex were largely consistent across participants, the proportion of features represented in PPC and PFC, as well as the features that were used, were quite different across participants. Understanding such individual differences will be important in future work. For instance, there is evidence that art experts tend to evaluate art differently from people with no artistic training87,88. It would be interesting to study whether feature representations differ between experts and non-experts, while probing whether the computational motif that we found here (hierarchical visual feature representation in visual areas, value construction in PPC and PFC) might be conserved across different levels of expertise. We should also note that the model's predictive accuracy for liking ratings varied across participants. It is likely that some participants used features that our model did not consider, such as personal experience associated with the stimuli. Brain regions such as the hippocampus may potentially be involved in such additional feature computations. Further, behavior and fMRI signals are inherently noisy, in that there will be a portion of the data that cannot be predicted (i.e., a noise ceiling). Characterizing the contribution of these noise components will require further experiments with repeated measurements of decisions about the same stimuli. Taken together, these findings are consistent with the existence of a large-scale processing hierarchy in the brain that extends from early visual cortex to medial prefrontal cortex, whereby visual inputs are transformed into various features through the visual stream. These features are then projected to PPC and lPFC, and subsequently integrated into a subjective value judgment in mPFC. Crucially, the flexibility afforded by such a feature-based mechanism of value construction ensures that value judgments can be formed even for stimuli that have never before been seen, or in circumstances where the goal of valuation varies (e.g., selecting a piece of art as a gift). Therefore, our study proposes a brain-wide computational mechanism that is not limited to aesthetics, but can be generalized to value construction for a wide range of visual and other sensory stimuli.

All participants provided informed consent for their participation in the study, which was approved by the Caltech IRB. Six volunteers (female: 6; age 18–24 yr: 4; 25–34 yr: 1; 35–44 yr: 1; 4 White, 2 Asian) were recruited into our fMRI study. One participant had completed a master's degree or higher, four participants held a college degree as their highest degree, and one participant had a high-school degree as the highest degree. None of the participants possessed an art degree. All of the participants reported that they visit art museums less than once a month. In addition, thirteen art-experienced participants [reported in our previous behavioral paper42] (female: 6; ages 18–24 yr: 3; 25–34 yr: 9; 35–44 yr: 1) were invited to evaluate the high-level feature values (outside the scanner). These annotation participants were primarily recruited from the ArtCenter College of Design community. The same stimuli as in our recent behavioral study42 were used in the current fMRI study.
Painting stimuli were taken from the visual art encyclopedia www.wikiart.org. Using a script that randomly selects images in a given category of art, we downloaded 206 or 207 images from each of four categories of art (825 in total). The categories were 'Abstract Art', 'Impressionism', 'Color Fields', and 'Cubism'. We randomly downloaded images with each tag using our custom code in order to avoid subjective bias. We supplemented this database with an additional 176 paintings that were used in a previous study52. For the fMRI study reported here, one image was excluded from the full set of 1001 images so as to have an equal number of trials per run (50 images/run × 20 runs = 1000 images).

fMRI task

On each trial, participants were presented with an image of an artwork on the computer screen for three seconds. Participants were then presented with a scale of 0, 1, 2, or 3, on which they had to indicate how much they liked the artwork. The location of each numerical score was randomized across trials. Participants had to press a button on a button box that they held with both hands to indicate their rating within three seconds, where each of the four buttons corresponded to a particular location on the screen from left to right. Participants were instructed to press the two left buttons with their left thumb and the two right buttons with their right thumb. After a brief feedback period showing their chosen rating (0.5 s), a center cross was shown during the inter-trial interval (jittered between 2 and 9 s). Each run consisted of 50 trials. Participants were invited to the study over four days to complete twenty runs, completing on average five runs on each day.

fMRI data acquisition

fMRI data were acquired on a Siemens Prisma 3T scanner at the Caltech Brain Imaging Center (Pasadena, CA). With a 32-channel radiofrequency coil, a multi-band echo-planar imaging (EPI) sequence was employed with the following parameters: 72 axial slices (whole-brain), A–P phase encoding, −30 degrees slice tilt with respect to the AC–PC line, echo time (TE) of 30 ms, multi-band acceleration of 4, repetition time (TR) of 1.12 s, 54-degree flip angle, 2 mm isotropic resolution, echo spacing of 0.56 ms, 192 mm × 192 mm field of view, in-plane acceleration factor 2, multi-band slice acceleration factor 4. Positive and negative polarity EPI-based field maps were collected before each run with parameters very similar to those of the functional sequence described above (same acquisition box, number of slices, resolution, echo spacing, bandwidth and EPI factor), but single-band, with a TE of 50 ms, TR of 5.13 s, and 90-degree flip angle. T1-weighted and T2-weighted structural images were also acquired once for each participant with 0.9 mm isotropic resolution. The T1 parameters were: repetition time (TR), 2.4 s; echo time (TE), 0.00232 s; inversion time (TI), 0.8 s; flip angle, 10 degrees; in-plane acceleration factor 2. The T2 parameters were: TR, 3.2 s; TE, 0.564 s; flip angle, 120 degrees; in-plane acceleration factor 2.

fMRI data processing

Results included in this manuscript come from preprocessing performed using fMRIPrep 1.3.2 (ref. 89; RRID:SCR_016216), which is based on Nipype 1.1.9 (ref. 90; RRID:SCR_002502).

Anatomical data preprocessing

The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection91, distributed with ANTs 2.2.0 [92, RRID:SCR_004757], and used as the T1w reference throughout the workflow.
The T1w reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as the target template. Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c was performed through nonlinear registration with antsRegistration (ANTs 2.2.0), using brain-extracted versions of both the T1w volume and the template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM) and gray matter (GM) was performed on the brain-extracted T1w using fast.

Functional data preprocessing

For each of the 20 BOLD runs found per subject (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. A deformation field to correct for susceptibility distortions was estimated based on two echo-planar imaging (EPI) references with opposing phase-encoding directions, using 3dQwarp (AFNI 20160207). Based on the estimated susceptibility distortion, an unwarped BOLD reference was calculated for a more accurate co-registration with the anatomical reference. The BOLD reference was then co-registered to the T1w reference using flirt with the boundary-based registration cost function. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using mcflirt. The BOLD time series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head motion and susceptibility distortions. These resampled BOLD time series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time series were also resampled to MNI152NLin2009cAsym standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. Several confounding time series were calculated based on the preprocessed BOLD: framewise displacement (FD), DVARS and three region-wise global signals. FD and DVARS are calculated for each functional run, both using their implementations in Nipype. The three global signals are extracted within the CSF, the WM, and the whole-brain masks. In addition, a set of physiological regressors were extracted to allow for component-based noise correction. Principal components are estimated after high-pass filtering the preprocessed BOLD time series (using a discrete cosine filter with a 128 s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). Six tCompCor components are calculated from the top 5% variable voxels within a mask covering the subcortical regions. This subcortical mask is obtained by heavily eroding the brain mask, which ensures it does not include cortical GM regions. For aCompCor, six components are calculated within the intersection of the aforementioned mask and the union of the CSF and WM masks calculated in T1w space, after their projection to the native space of each functional run (using the inverse BOLD-to-T1w transformation). The head-motion estimates calculated in the correction step were also placed within the corresponding confounds file.
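For readers who wish to reuse these outputs, the sketch below shows one way the nuisance regressors named above could be assembled from the per-run confounds table written by fMRIPrep; the file name is illustrative, and the column names follow common fMRIPrep output conventions, which may differ slightly across fMRIPrep versions.

```python
import pandas as pd

# Illustrative path; fMRIPrep writes one confounds TSV per BOLD run
confounds = pd.read_csv("sub-01_task-art_run-01_desc-confounds_regressors.tsv", sep="\t")

# Select the nuisance regressors used downstream: framewise displacement,
# anatomical CompCor components, non-steady-state outliers, and motion parameters
keep = [c for c in confounds.columns
        if c == "framewise_displacement"
        or c.startswith("a_comp_cor")
        or c.startswith("non_steady_state")
        or c.startswith(("trans_", "rot_"))]
nuisance = confounds[keep].fillna(0.0)  # FD is undefined for the first volume; replace NaN before fitting
print(nuisance.shape)
```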
All resamplings can be performed with a single interpolation step by composing all the pertinent transformations (i.e., head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and template spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels.

Computational models

The computational methods and behavioral modeling reported in this manuscript overlap with those reported in our recent article focusing exclusively on behavior42. For completeness, we reproduce some of the descriptions of these methods as first described in ref. 42.

Linear feature summation model (LFS model)

We hypothesized that subjective preferences for visual stimuli are constructed from the visual and emotional features of the stimuli. At its simplest, we assumed that the subjective value of the i-th stimulus, vi, is computed as a weighted sum of feature values fi,j:

$$v_{i}=\sum_{j=0}^{n_{f}}w_{j}f_{i,j}$$

where wj is the weight of the j-th feature, fi,j is the value of the j-th feature for stimulus i, and nf is the number of features. The 0-th feature is a constant, fi,0 = 1 for all i. Importantly, wj is not a function of a particular stimulus but is shared across all visual stimuli, reflecting the taste of a participant. The same taste (the wj's) can also be shared across different participants, as we showed in our behavioral analysis. The features fi,j were computed from the visual stimuli; we used the same feature values to predict liking ratings across participants. We used the simple linear model Eq. (1) to predict liking ratings in our behavioral analysis (see below for how we determined features and weights). As we schematically showed in Fig. 1, we hypothesized that the input stimulus is first broken down into low-level features and then transformed into high-level features, and indeed we found that a significant portion of the variance of the high-level features can be predicted by a set of low-level features. This hierarchical structure of the LFS model was further tested in our DCNN and fMRI analyses. Because we did not know a priori what features would best describe human aesthetic values for visual art, we constructed a large feature set using previously published methods from computer vision, augmented with additional features that we ourselves identified using existing machine learning methods.

Visual low-level features introduced in ref. 49

We employed 40 visual features introduced in ref. 49. We do not repeat descriptions of the features here; briefly, the feature set consists of 12 global features that are computed from the entire image, including color distributions, brightness effects, blurring effects, and edge detection, and 28 local features that are computed for separate segments of the image (the first, second, and third largest segments). Most features are computed straightforwardly in either HSL (hue, saturation, lightness) or HSV (hue, saturation, value) space (e.g., the average hue value). One feature that deserves description is the blurring effect. Following refs. 49,93, we assumed that the image I was generated from a hypothetical sharp image with a Gaussian smoothing filter with an unknown variance σ. Assuming that the frequency distribution of the hypothetical image is approximately the same as that of the blurred, actual image, the parameter σ represents the degree to which the image was blurred.
The σ was estimated by taking the Fourier transform of the original image and finding the highest frequency whose power exceeds a certain threshold:

$$f_{\text{blur}}=\max\left(k_{x},\,k_{y}\right)\propto \frac{1}{\sigma}$$

where kx = 2(x − nx/2)/nx and ky = 2(y − ny/2)/ny, with (x, y) and (nx, ny) being the coordinates of the pixel and the total numbers of pixels, respectively. The max above was taken over the components whose power is larger than four49. The segmentation for this feature set was computed by a technique called kernel GraphCut50,94. Following ref. 49, we generated a total of at least six segments for each image using a C++ and Matlab package for kernel graph cut segmentation94. The regularization parameter that weighs the cost of a cut against smoothness was adjusted for each image in order to obtain about six segments. See refs. 49, 94 for the full description of this method and examples. Of these 40 features, we included all of them in our initial feature set except for the local features of the third-largest segment, which were highly correlated with the features of the first- and second-largest segments and were thus deemed unlikely to add unique variance at the feature prediction stage.

Additional low-level features

We assembled the following low-level features to supplement the set by Li & Chen49. These include both global features and local features. Local features were calculated on segments determined by two methods. The first method was statistical region merging (SRM) as implemented by ref. 95, where the segmentation parameter was incremented until at least three segments were identified. The second method converted paintings into LAB color space and used k-means clustering of the A and B components. While the first method reliably identified distinct shapes in the paintings, the second method reliably identified distinct color motifs in the paintings. The segmentation method for each feature is indicated in the following descriptions. Each local feature was calculated on the first- and second-largest segments.

Local features:

Segment size (SRM): Segment size for segment i was calculated as the area of segment i over the area of the entire image:

$$f_{\text{segment size}}=\frac{\text{area of segment } i}{\text{total area}}$$

HSV mean (SRM): To calculate the mean hue, saturation, and color value for each segment, segments were converted from RGB to HSV color space.
$$f_{\text{mean hue}}=\text{mean}(\text{hue values in segment } i)$$
$$f_{\text{mean saturation}}=\text{mean}(\text{saturation values in segment } i)$$
$$f_{\text{mean color value}}=\text{mean}(\text{color values in segment } i)$$

Segment moments (SRM):

$$f_{\text{CoM X coordinate}}=\frac{\sum_{k\in \text{segment } i}x_{k}}{\text{area of segment } i}$$
$$f_{\text{CoM Y coordinate}}=\frac{\sum_{k\in \text{segment } i}y_{k}}{\text{area of segment } i}$$
$$f_{\text{Variance}}=\frac{\sum_{k\in \text{segment } i}(x_{k}-\bar{x})^{2}+(y_{k}-\bar{y})^{2}}{\text{area of segment } i}$$
$$f_{\text{Skew}}=\frac{\sum_{k\in \text{segment } i}(x_{k}-\bar{x})^{3}+(y_{k}-\bar{y})^{3}}{\text{area of segment } i}$$

where \((\bar{x},\,\bar{y})\) are the center-of-mass coordinates of the corresponding segment.

Entropy (SRM):

$$f_{\text{entropy}}=-\sum_{j}p_{j}\log_{2}(p_{j})$$

where p equals the normalized intensity histogram counts of segment i.

Symmetry (SRM): For each segment, the painting was cropped to the maximum dimensions of the segment. The horizontal and vertical mirror images of the rectangle were taken, and the mean squared error of each was calculated from the original.

$$f_{\text{horizontal symmetry}}=\frac{\sum_{x,y\in \text{segment}}\left(\text{segment}_{x,y}-\text{horizontal\_flip}(\text{segment})_{x,y}\right)^{2}}{\#\ \text{pixels in segment}}$$
$$f_{\text{vertical symmetry}}=\frac{\sum_{x,y\in \text{segment}}\left(\text{segment}_{x,y}-\text{vertical\_flip}(\text{segment})_{x,y}\right)^{2}}{\#\ \text{pixels in segment}}$$

R-value mean (K-means): Originally, we took the mean of the R, G, and B values for each segment, but found these values to be highly correlated, so we reduced these three features down to just one feature for the mean R value.

$$f_{\text{R-value}}=\text{mean}(\text{R values in segment})$$

HSV mean (K-means): As with the SRM-generated segments, we took the hue, saturation, and color value means of segments generated by K-means segmentation, as described in equations 2–4.

Global features:

Image intensity: Paintings were converted from RGB to grayscale ranging from 0 to 255 to yield a measure of intensity. The 0–255 scale was divided into five equally sized bins. Each bin count accounted for one feature.
$$f_{\text{intensity count bin } i}=\frac{\#\ \text{pixels with intensity}\in\left[\frac{255(i-1)}{5},\,\frac{255i}{5}\right]}{\text{total area}},\quad i\in\{1,\ldots,5\}$$

HSV modes: Paintings were converted to HSV space, and the modes of the hue, saturation, and color value across the entire painting were calculated. While we took mean HSV values over segments in an effort to calculate per-segment statistics, we took the mode HSV values across the entire image in an effort to extract dominating trends across the painting as a whole.

$$f_{\text{mode hue}}=\text{mode}(\text{hue values in the image})$$
$$f_{\text{mode saturation}}=\text{mode}(\text{saturation values in the image})$$
$$f_{\text{mode color value}}=\text{mode}(\text{color values in the image})$$

Aspect (width–height) ratio:

$$f_{\text{aspect ratio}}=\frac{\text{image width}}{\text{image height}}$$

Entropy: Entropy over the entire painting was calculated according to Eq. (9).

High-level feature set51,52

We also introduced features that are more abstract and not easily computed by a simple algorithm. In ref. 51, Chatterjee et al. pioneered this approach by introducing 12 features (color temperature, depth, abstract, realism, balance, accuracy, stroke, animacy, emotion, color saturation, complexity) that were annotated by human participants for 24 paintings; the authors found that annotations were consistent across participants, regardless of their artistic experience. Vaidya et al.52 further collected annotations of these feature sets from artistically experienced participants for an additional 175 paintings and performed a principal component analysis, finding three major components that summarize the variance of the original 12 features. Inspired by the three principal components, we introduced three high-level features: concreteness, dynamics, and temperature. We also introduced valence as an additional high-level feature. The four high-level features were annotated in a similar manner to the previous studies51,52. We took the mean annotations of all 13 participants for each image as the feature values. In addition, we also annotated our image set according to whether or not each image included a person. This was done by manual annotation, but it could also be done with a human detection algorithm (e.g., see ref. 96). We originally included this presence-of-a-person feature in the low-level feature set97, though we found in our DCNN analysis that the feature shows the signature of a high-level feature97. Therefore, in the current study, we included the presence-of-a-person feature in the high-level feature set. As we showed in the main text, classifying this feature as a low-level feature or as a high-level feature does not change our results.

Identifying the shared feature set that predicts aesthetic preferences

The above methods provided a set of 83 features in total that could potentially be used to predict human aesthetic valuation.
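As a concrete illustration of how a few of the hand-crafted low-level features defined above could be computed, the sketch below derives segment size, mean HSV values, entropy, and horizontal symmetry for a single segment using NumPy and Pillow; the image file and the rectangular segment mask are placeholders, since in our pipeline the masks come from SRM or k-means segmentation.

```python
import numpy as np
from PIL import Image

img = Image.open("painting.jpg")            # illustrative file name
hsv = np.asarray(img.convert("HSV"), dtype=float) / 255.0
gray = np.asarray(img.convert("L"))

# Placeholder boolean mask for one segment (would normally come from SRM or k-means)
seg_mask = np.zeros(gray.shape, dtype=bool)
seg_mask[50:150, 60:180] = True

f_segment_size = seg_mask.sum() / seg_mask.size
f_mean_hue, f_mean_sat, f_mean_val = hsv[seg_mask].mean(axis=0)

# Entropy of the normalized intensity histogram within the segment
counts, _ = np.histogram(gray[seg_mask], bins=256, range=(0, 255))
p = counts / counts.sum()
f_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

# Horizontal symmetry: squared error between the cropped segment and its mirror image,
# normalized by the number of pixels in the segment
rows, cols = np.where(seg_mask)
crop = gray[rows.min():rows.max() + 1, cols.min():cols.max() + 1].astype(float)
f_horizontal_symmetry = np.sum((crop - np.fliplr(crop)) ** 2) / seg_mask.sum()
```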
These features are likely redundant because some of them are highly correlated, and many may not contribute to decisions at all. We thus sought to identify a minimal subset of features that are commonly used by participants. In ref. 97, we performed this analysis using the Matlab Sparse Gradient Descent Library (https://github.com/hiroyuki-kasai/SparseGDLibrary). For this, we first orthogonalized the features by sparse PCA98. Then we performed a regression with a LASSO penalty at the group level on participants' behavioral data, using the library's group lasso problem function. We used the Fast Iterative Soft Thresholding Algorithm (FISTA) with cross-validation. After eliminating PCs that were not shared by more than one participant, we transformed the PCs back to the original space. We then eliminated one feature from each pair of features that were most highly correlated (r2 > 0.5) to obtain the final set of shared features. To identify relevant features for use in the current fMRI analysis, we utilized behavioral data from both our previous in-lab behavioral study (ref. 97) and the fMRI participants included in the current study (13 participants in total). Because the goal of the fMRI analysis is to highlight the hierarchical nature of the neural coding of low- and high-level features, we first repeated the above procedure with the low-level features alone (79 features in total) and then added the high-level features (concreteness, dynamics, temperature, and valence) to the obtained shared low-level features. The identified shared features are the following: concreteness, dynamics, temperature, valence, the global average saturation from ref. 49, the global blurring effect from ref. 49, the horizontal coordinate of the mass center for the largest segment using the Graph-cut from ref. 49, the vertical coordinate of the mass center for the largest segment using the Graph-cut from ref. 49, the mass skewness for the second-largest segment using the Graph-cut from ref. 49, the size of the largest segment using SRM, the mean hue of the largest segment using SRM, the mean color value of the largest segment using SRM, the mass variance of the largest segment using SRM, the global entropy, the entropy of the second-largest segment using SRM, the image intensity in bin 1, the image intensity in bin 2, and the presence of a person.

Nonlinear interaction features

We constructed additional feature sets by multiplying pairs of LFS features. We grouped the resulting features into three groups: (1) features created from interactions between high-level features, (2) features created from interactions between low-level features, and (3) features created from interactions between a high-level and a low-level feature. In order to determine the contribution of these three groups of features, we performed PCA on each group so that we could take the same number of components from each group. In our analysis, we took five PCs from each group to match the number of original high-level features.

Behavioral model fitting

We tested how well our shared-feature model can predict human liking ratings using out-of-sample tests. All models were cross-validated in twenty folds, and we used ridge regression unless otherwise stated. Hyperparameters were tuned by cross-validation. We calculated the Pearson correlation between the model predictions (pooled predictions from all cross-validation sets) and the actual data, and defined it as the predictive accuracy.
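A minimal sketch of this cross-validated prediction pipeline is given below, using scikit-learn's ridge regression with placeholder feature and rating arrays; it is meant only to illustrate the out-of-sample procedure and the definition of predictive accuracy, not to reproduce our exact hyperparameter choices.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

# Placeholder data: shared feature values per stimulus and one participant's liking ratings
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 18))               # 18 shared features
y = rng.integers(0, 4, size=1000).astype(float)   # ratings on a 0-3 scale

pred = np.zeros_like(y)
for train, test in KFold(n_splits=20, shuffle=True, random_state=0).split(X):
    # RidgeCV tunes the penalty on the training folds only
    model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X[train], y[train])
    pred[test] = model.predict(X[test])

# Predictive accuracy: Pearson correlation between pooled out-of-sample predictions and data
r, _ = pearsonr(pred, y)
print(f"predictive accuracy r = {r:.3f}")
```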
We estimated each individual participant's feature weights by fitting a linear regression model with the shared feature set to that participant's data. For illustrative purposes, the weights were normalized for each participant by the maximum feature value (concreteness) in Fig. 1g and Supplementary Figs. 1 and 12. The significance of the above analyses was assessed by generating a null distribution constructed from the same analyses with permuted image labels. The null distribution was constructed from 10,000 permutations. The chance level was determined as the mean of the null distribution.

Deep convolutional neural network (DCNN) analysis

Network architecture

The deep convolutional neural network (DCNN) we used consists of two parts. An input image feeds into convolutional layers taken from the standard VGG-16 network that is pre-trained on ImageNet. The output of the convolutional layers then projects to fully connected layers. This architecture follows the current state-of-the-art model for aesthetic evaluation99,100. The details of the convolutional layers of the VGG network can be found in ref. 54; briefly, they consist of 13 convolutional layers and 5 intervening max pooling layers. Each convolutional layer is followed by a rectified linear unit (ReLU). The output of the final convolutional layer is flattened to a 25,088-dimensional vector so that it can be fed into the fully connected layers. The fully connected part has two hidden layers, where each layer has 4096 dimensions. The fully connected layers are also followed by ReLU layers. During training, a dropout layer with a dropout probability of 0.5 was added after every ReLU layer for regularization. Following the current state-of-the-art model100, the output of the fully connected network is a 10-dimensional vector that is normalized by a softmax. A weighted average of the output vector was taken to produce a scalar value100 that ranges from 0 to 3.

Network training

We trained our model on our behavioral data set by tuning the weights in the fully connected layers. We employed 10-fold cross-validation to benchmark the art rating prediction. The model was optimized using a Huber loss, which is robust to outliers101. We used stochastic gradient descent (SGD) with momentum to train the model. We used a batch size of 100, a learning rate of 10−4, momentum of 0.9, and weight decay of 5 × 10−4. The learning rate decayed by a factor of 0.1 every 30 epochs. To handle various sizes of images, we used zero-padding. Because our model could only take a 224 × 224 input, we first scaled the input images so that the longer edge was 224 pixels long. We then filled the remaining space with 0-valued pixels (black). We used Python 3.7, Pytorch 0.4.1.post2, and CUDA 9.0 throughout the analysis.

Retraining the DCNN to extract hidden layer activations

We also trained our network on single-fold ART data in order to obtain a single set of hidden-layer activations. We prevented over-fitting by stopping training when the model performance (the Pearson correlation between the model's prediction and the data) reached the mean correlation from the 10-fold cross-validation.

Decoding features from the deep neural network

We decoded the LFS model features from the hidden layers by using linear (for continuous features) and logistic (for categorical features) regression models, as described in ref. 97. We considered the activations of the outputs of the ReLU layers (a total of 15 layers). First, we split the data into ten folds for the 10-fold cross-validation.
Decoding features from the deep neural network
We decoded the LFS model features from hidden layers by using linear (for continuous features) and logistic (for categorical features) regression models, as we described in ref. 97. We considered the activations at the outputs of the ReLU layers (15 layers in total). First, we split the data into ten folds for the 10-fold cross-validation. In each iteration of the cross-validation, because the dimensions of the hidden layers are much larger (64 × 224 × 224 = 3,211,264) than the actual data size, we first performed PCA on the activation of each hidden layer from the training set. The number of principal components was chosen to account for 80% of the total variance. By doing so, each layer's dimension was reduced to less than 536. Then the hidden layers' activations from the test set were projected onto the principal component space by using the fitted PCA transformation matrices. The hyperparameter of the ridge regression was tuned by grid search, and the best-performing coefficient for each layer and feature was chosen based on the scores from the 10-fold cross-validation. We tested a total of 19 features, including all 18 features that we used for our fMRI analysis, as well as the simplest feature that was not included in our fMRI analysis (as a result of our group-level feature selection) but that was also of interest here: the average hue value. For the continuous features (e.g., rating, mean hue), the Pearson correlation between the model's prediction and the data was used as the metric for the goodness of fit, while for the categorical features (e.g., presence of a person), we calculated accuracy, the area under the curve (AUC), and F1 scores. The signs of the slopes of the decoding plots were identical across these metrics. In a supplementary analysis, we also explored whether adding 'style matrices' of hidden layers102 to the PCA-transformed hidden layer activations can improve the decoding accuracy; however, we found that the style matrices did not improve the decoding accuracy. Sklearn 0.19.2 on Python 3.7 was used for the decoding analyses.
Reclassifying features according to the slopes of the decoding accuracy across hidden layers
In our LFS model, we classified putative low-level and high-level features simply by whether a feature was computed by a computer algorithm (low-level) or annotated by humans (high-level). In reality, however, some putative low-level features are more complex in terms of how they could be constructed than other low-level features, while some putative high-level features could in fact be computed straightforwardly from raw pixel inputs. Using the decoding results of the features from hidden layers in the DCNN, we identified DCNN-defined low-level and high-level features. For this, we fit a linear slope to the estimated decoding accuracy as a function of hidden layer. We permuted the layer labels 10,000 times and performed the same analysis to construct a null distribution, as described earlier. We classified a feature as high-level if the slope was significantly positive at p < 0.001, and we classified a feature as low-level if the slope was significantly negative at p < 0.001. The features showing negative slopes were: the average hue, the average saturation, the average hue of the largest segment using GraphCut, the average color value of the largest segment using GraphCut, the image intensity in bin 1, the image intensity in bin 3, and the temperature. The features showing positive slopes were: the concreteness, the dynamics, the presence of a person, the vertical coordinate of the mass center for the largest segment using the Graph Cut, the mass variance of the largest segment using the SRM, and the entropy of the second-largest segment using SRM. All of these require relatively complex computations, such as localization of segments or image identification. This is consistent with a previous study showing that object-related local features showed a similarly increased decodability at deeper layers85.
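The per-layer decoding step described above can be sketched as follows with scikit-learn. This is a simplified re-implementation rather than the authors' code: the ridge hyperparameter search is nested inside each outer fold here for simplicity, the categorical branch assumes a binary feature (e.g., presence of a person), and the function name is our own.

```python
# Sketch of per-layer feature decoding: PCA (80% variance) fit on the training folds,
# then ridge regression (continuous features) or logistic regression (categorical).
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge, LogisticRegression
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.metrics import roc_auc_score

def decode_feature(layer_acts, feature, categorical=False, n_folds=10, seed=0):
    """layer_acts: (n_images, n_units) flattened activations of one ReLU layer;
    feature: (n_images,) values of one LFS feature."""
    cv = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    preds = np.zeros(len(feature), dtype=float)
    for train, test in cv.split(layer_acts):
        pca = PCA(n_components=0.80).fit(layer_acts[train])   # keep 80% of the variance
        Ztr, Zte = pca.transform(layer_acts[train]), pca.transform(layer_acts[test])
        if categorical:
            clf = LogisticRegression(max_iter=1000).fit(Ztr, feature[train])
            preds[test] = clf.predict_proba(Zte)[:, 1]
        else:
            grid = GridSearchCV(Ridge(), {"alpha": np.logspace(-3, 3, 13)}, cv=5)
            preds[test] = grid.fit(Ztr, feature[train]).predict(Zte)
    if categorical:
        return roc_auc_score(feature, preds)       # AUC for a binary feature
    return pearsonr(preds, feature)[0]             # Pearson r for continuous features
```

Running this function once per layer, and then fitting a line to accuracy versus layer index (with a permutation test over layer labels), reproduces the slope-based reclassification described above.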
fMRI analysis
Standard GLM analysis
We conducted a standard GLM analysis on the fMRI data with SPM 12. The SPM feature for asymmetrically orthogonalizing parametric regressors was disabled throughout. We collected enough data from each individual participant (four days of scanning) so that we could analyze and interpret each participant's results separately. The following regressors were obtained from the fmriprep preprocessing pipeline and added to all analyses as nuisance regressors: framewise displacement, comp-cor, non-steady, trans, rot. The onsets of stimulus, decision, and action were also controlled by stick regressors in all GLMs described below. In addition, we added the onset of the Decision period and the onset of feedback to all GLMs as nuisance regressors, because we focused on the stimulus presentation period.
Identifying subjective value coding (GLM 1)
In order to gain insight into how the subjective value of art was represented in the brain, we performed a simple GLM analysis with a parametric regressor at the onset of the Stimulus (GLM 1). The parameter was linearly modulated by the participant's liking rating on each trial. The results were cluster-FWE corrected with a height threshold of p < 0.001.
Identifying feature coding (GLM 2, 3, 2', 3')
In order to gain insight into how features were represented in the brain, we performed another GLM analysis with parametric regressors at the onset of the Stimulus (GLM 2, 3). In GLM 2, there are 18 feature-modulated regressors in total, each representing the value of one of the shared features for the fMRI analysis. We then performed F-tests on high-level features and low-level features (a diagonal contrast matrix with an entry set to one for each feature of interest was constructed in SPM) in order to test whether a voxel is significantly modulated by any of the high- and/or low-level features. We then counted the number of voxels that are significantly correlated (p < 0.001) in each ROI (note that the F-value for significance is different for high and low features due to the difference in the number of constituent features). We then displayed the proportions of these two voxel counts in a given ROI. We performed a similar analysis using the DCNN hidden layers (GLM 3). We took the first three principal components of each convolutional and fully connected layer (three PCs times 15 layers = 45 parametric regressors). We then performed F-tests on PCs from layers 1 to 4, layers 5 to 9, layers 10 to 13, and the fully connected layers (layers 14 and 15). The proportions of the surviving voxels were computed for each ROI. In addition, we also performed the same analyses with GLMs to which we added liking ratings for each stimulus. We call these analyses GLM 2' and GLM 3', respectively. We note that, because in our LFS model the liking rating is a linear integration of features, adding a liking-rating regressor amounts to identifying neural correlates of the liking ratings that lie outside of the LFS model's prediction.
Regions of interest (ROI)
We constructed ROIs for visual topographic areas using a previously published probabilistic map65. We constructed 17 masks based on the 17 probabilistic maps taken from ref. 65, consisting of 8 ventral-temporal (V1v, V2v, V3v, hV4, VO1, VO2, PHC1, and PHC2) and 9 dorsal-lateral (V1d, V2d, V3d, V3A, V3B, LO1, LO2, hMT, and MST) masks.
In this atlas, ventral and dorsal regions for the early visual areas V1, V2, and V3 are defined separately. Each mask was constructed by thresholding the probability map at p > 0.01. We defined V123 as V1v + V2v + V3v + V1d + V2d + V3d + V3A + V3B, and Vhigh as hV4 + VO1 + VO2 + PHC1 + PHC2 + LO1 + LO2 + hMT + MST. (hV4: human V4, VO: ventral occipital cortex, PHC: posterior parahippocampal cortex, LO: lateral occipital cortex, hMT: human middle temporal area, MST: medial superior temporal area.) We also constructed ROIs for parietal and prefrontal cortices using the AAL database. Posterior parietal cortex (PPC) was defined by bilateral MNI-Parietal-Inf + MNI-Parietal-Sup. Lateral orbitofrontal cortex (lOFC) was defined by bilateral MNI-Frontal-Mid-Orb + MNI-Frontal-Inf-Orb + MNI-Frontal-Sup-Orb, and medial OFC (mOFC) was defined by bilateral MNI-Frontal-Med-Orb + bilateral MNI-Rectus. Dorsomedial PFC (dmPFC) was defined by bilateral MNI-Frontal-Sup-Medial + MNI-Cingulum-Ant, and dorsolateral PFC (dlPFC) was defined by bilateral MNI-Frontal-Mid + MNI-Frontal-Sup. Ventrolateral PFC (vlPFC) was defined by bilateral MNI-Frontal-Inf-Oper + MNI-Frontal-Inf-Tri. We also constructed lateral PFC (LPFC) as vlPFC + dlPFC + lOFC, and medial PFC (MPFC) as mOFC + dmPFC.
PPI analysis (GLM 4, 4')
We conducted a psychophysiological interaction (PPI) analysis. We took a seed from the cluster identified by GLM 1 as showing subjective value in MPFC (Supplementary Fig. 24), and a psychological regressor as a box function, which is set to one during the stimulus epoch and 0 otherwise. We added the time course of the seed and the PPI regressor to a variant of GLM 2' (in which the parametric regressors for feature values and liking values were constructed using a boxcar function over the Stimulus periods, instead of their onsets) and determined which voxels were correlated with the PPI regressor (GLM 4). Following ref. 38, boxcar functions were used because feature integration can take place throughout the duration of each stimulus presentation. We also conducted a control PPI analysis. For this we took the same seed, but now the psychological regressor was a box function which is one during the ITI and 0 otherwise. We added the time course of the seed, the box function for the ITI, and the PPI regressor to the same variant of GLM 2' (with the parametric regressors for feature and liking values constructed using boxcar functions over the Stimulus periods, instead of their onsets). We refer to this as GLM 4'.
Feature integration analysis
We conducted an F-test using GLM 2 to test whether any of the shared features were significantly correlated with a given voxel (a diagonal contrast matrix with ones for all features in SPM). The resulting F-map was thresholded at p < 0.05 cFWE at the whole-brain level, with a height threshold of p < 0.001. We then asked which of the surviving voxels were also significantly positively correlated with the PPI regressor in GLM 4, using a t-value threshold of p < 0.001 uncorrected. We then computed the fraction of voxels that survived this test in a given ROI.
Regression analysis with cross validation
In addition to the SPM GLM analysis, we also performed regression analyses with cross validation within each participant67. We first extracted beta estimates at the stimulus presentation time on each trial from a GLM with regressors at each stimulus onset, where the GLM also included other nuisance regressors, including framewise displacement, comp-cor, non-steady, trans, rot, and the onsets of Decision, Action, and feedback.
We then used these beta estimates at the stimulus presentation time as dependent variables in our regression analysis. In all fMRI analyses, we used a Lasso penalty unless otherwise stated. The hyperparameters were optimized using 12-fold cross validation. The Matlab lasso function was used. We note that each stimulus was presented only once to a given participant in our experiment. We performed a feature coding analysis analogous to the one we performed using SPM. We first estimated the weights of the LFS model features using lasso regression at each voxel. We then computed a sum of squared weights for low-level features and high-level features separately. In order to discard weight estimates that can be obtained by chance, we also performed the same lasso regression analysis using shuffled stimulus labels. We then constructed a null distribution with a sum of squared weights at each ROI using the weight estimates from this analysis. If the sum of squared weights of low- (or high-) level features obtained from the correct stimulus labels at a given voxel is significantly larger than the null distribution of low- (or high-) level features in the ROI (p < 0.001), we identified the voxel as encoding low-level (or high-level) features. We also ran a similar analysis with the LFS model's features where we also included 'nonlinear features' that are constructed by multiplying pairs of the LFS model's features. As described above, we grouped the nonlinear features into three groups: (1) features created from interactions between high-level features, (2) features created from interactions between low-level features, and (3) features created from interactions between high-level and low-level features. We took five PCs from each group to match the number of original high-level features in the model. When comparing predictive accuracy across different models, we calculated Pearson correlations between the data and each model's predictions, where the model's predictions were pooled over predictions from the testing sets across cross-validations. We performed a similar analysis using the DCNN's features, where the DCNN was trained to predict behavioral data. Using the obtained results, we computed the sum of squared features from layers one to four, layers five to nine, layers ten to thirteen, and layers fourteen to fifteen. Again, estimates that are significantly greater than the ones obtained by chance (at p < 0.001) were included in our results, using the same regression analysis with shuffled-label data. We performed analyses with 45 features (3 PCs from each layer) and 150 features (10 PCs from each layer). We also performed the same DCNN analysis using untrained, random weights. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. The data that support the findings of this study are available at https://github.com/kiigaya/Art or from the corresponding author upon request. Source data are provided with this paper. The code that supports the findings of this study is available at https://github.com/kiigaya/Art or from the corresponding author upon request. Fechner, G. T. Vorschule der aesthetik, vol. 1 (Breitkopf & Härtel, 1876). Ramachandran, V. S. & Hirstein, W. The science of art: a neurological theory of aesthetic experience. J. Conscious. Stud. 6, 15–51 (1999). Zeki, S. Inner vision: An exploration of art and the brain (2002). Leder, H., Belke, B., Oeberst, A. & Augustin, D. A model of aesthetic appreciation and aesthetic judgments.
Br. J. Psychol. 95, 489–508 (2004). Biederman, I. & Vessel, E. A. Perceptual pleasure and the brain: A novel theory explains why the brain craves information and seeks it through the senses. Am. Scientist 94, 247–253 (2006). Chatterjee, A. Neuroaesthetics: a coming of age story. J. Cogn. Neurosci. 23, 53–62 (2011). Shimamura, A. P. & Palmer, S. E. Aesthetic Science: Connecting Minds, Brains, and Experience (OUP USA, 2012). Palmer, S. E., Schloss, K. B. & Sammartino, J. Visual aesthetics and human preference. Annu. Rev. Psychol. 64, 77–107 (2013). Leder, H. & Nadal, M. Ten years of a model of aesthetic appreciation and aesthetic judgments: the aesthetic episode–developments and challenges in empirical aesthetics. Br. J. Psychol. 105, 443–464 (2014). Iigaya, K., O'Doherty, J. P. & Starr, G. G. Progress and promise in neuroaesthetics. Neuron 108, 594–596 (2020). Cela-Conde, C. J. et al. Activation of the prefrontal cortex in the human visual aesthetic perception. Proc. Natl Acad. Sci. USA 101, 6321–6325 (2004). Kawabata, H. & Zeki, S. Neural correlates of beauty. J. Neurophysiol. 91, 1699–1705 (2004). O'Doherty, J. P., Deichmann, R., Critchley, H. D. & Dolan, R. J. Neural responses during anticipation of a primary taste reward. Neuron 33, 815–826 (2002). Padoa-Schioppa, C. & Assad, J. A. Neurons in the orbitofrontal cortex encode economic value. Nature 441, 223–226 (2006). Hampton, A. N., Bossaerts, P. & O'doherty, J. P. The role of the ventromedial prefrontal cortex in abstract state-based inference during decision making in humans. J. Neurosci. 26, 8360–8367 (2006). Daw, N. D., O'Doherty, J. P., Dayan, P., Seymour, B. & Dolan, R. J. Cortical substrates for exploratory decisions in humans. Nature 441, 876–879 (2006). Kable, J. W. & Glimcher, P. W. The neural correlates of subjective value during intertemporal choice. Nat. Neurosci. 10, 1625 (2007). Lee, D., Rushworth, M. F., Walton, M. E., Watanabe, M. & Sakagami, M. Functional specialization of the primate frontal cortex during decision making. J. Neurosci. 27, 8170–8173 (2007). Rushworth, M. F., Buckley, M. J., Behrens, T. E., Walton, M. E. & Bannerman, D. M. Functional organization of the medial frontal cortex. Curr. Opin. Neurobiol. 17, 220–227 (2007). Buckley, M. J. et al. Dissociable components of rule-guided behavior depend on distinct medial and prefrontal regions. Science 325, 52–58 (2009). Kahnt, T., Heinzle, J., Park, S. Q. & Haynes, J.-D. The neural code of reward anticipation in human orbitofrontal cortex. Proc. Natl Acad. Sci. USA 107, 6010–6015 (2010). Kennerley, S. W., Behrens, T. E. & Wallis, J. D. Double dissociation of value computations in orbitofrontal and anterior cingulate neurons. Nat. Neurosci. 14, 1581 (2011). Grabenhorst, F. & Rolls, E. T. Value, pleasure and choice in the ventral prefrontal cortex. Trends Cogn. Sci. 15, 56–67 (2011). Stalnaker, T. A. et al. Orbitofrontal neurons infer the value and identity of predicted outcomes. Nat. Commun. 5, 1–13 (2014). Iigaya, K. et al. The value of what's to come: neural mechanisms coupling prediction error and the utility of anticipation. Sci. Adv. 6, eaba3828 (2020). Small, D. M., Zatorre, R. J., Dagher, A., Evans, A. C. & Jones-Gotman, M. Changes in brain activity related to eating chocolate: from pleasure to aversion. Brain 124, 1720–1733 (2001). Blood, A. J. & Zatorre, R. J. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl Acad. Sci. USA 98, 11818–11823 (2001). O'Doherty, J. et al. 
Beauty in a smile: the role of medial orbitofrontal cortex in facial attractiveness. Neuropsychologia 41, 147–155 (2003). Gottfried, J. A., O'Doherty, J. & Dolan, R. J. Encoding predictive reward value in human amygdala and orbitofrontal cortex. Science 301, 1104–1107 (2003). Cloutier, J., Heatherton, T. F., Whalen, P. J. & Kelley, W. M. Are attractive people rewarding? sex differences in the neural substrates of facial attractiveness. J. Cogn. Neurosci. 20, 941–951 (2008). Ishizu, T. & Zeki, S. The brain's specialized systems for aesthetic and perceptual judgment. Eur. J. Neurosci. 37, 1413–1420 (2013). Montague, P. R., Dayan, P. & Sejnowski, T. J. A framework for mesencephalic dopamine systems based on predictive hebbian learning. J. Neurosci. 16, 1936–1947 (1996). Schultz, W., Dayan, P. & Montague, P. R. A neural substrate of prediction and reward. Science 275, 1593–1599 (1997). O'Doherty, J. P., Dayan, P., Friston, K., Critchley, H. & Dolan, R. J. Temporal difference models and reward-related learning in the human brain. Neuron 38, 329–337 (2003). Barron, H. C., Dolan, R. J. & Behrens, T. E. Online evaluation of novel choices by simultaneous representation of multiple memories. Nat. Neurosci. 16, 1492 (2013). Lim, S.-L., O'Doherty, J. P. & Rangel, A. Stimulus value signals in ventromedial PFC reflect the integration of attribute value signals computed in fusiform gyrus and posterior superior temporal gyrus. J. Neurosci. 33, 8729–8741 (2013). Hare, T. A., Camerer, C. F. & Rangel, A. Self-control in decision-making involves modulation of the vmpfc valuation system. Science 324, 646–648 (2009). Suzuki, S., Cross, L. & O'Doherty, J. P. Elucidating the underlying components of food valuation in the human orbitofrontal cortex. Nat. Neurosci. 20, 1780 (2017). Howard, J. D. & Gottfried, J. A. Configural and elemental coding of natural odor mixture components in the human brain. Neuron 84, 857–869 (2014). Koechlin, E. Human decision-making beyond the rational decision theory. Trends Cogn. Sci. 24, 4–6 (2020). Farashahi, S., Donahue, C. H., Hayden, B. Y., Lee, D. & Soltani, A. Flexible combination of reward information across primates. Nat. Hum. Behav. 3, 1215–1224 (2019). Iigaya, K., Yi, S., Wahle, I. A., Tanwisuth, K. & O'Doherty, J. P. Aesthetic preference for art can be predicted from a mixture of low- and high-level visual features. Nat. Hum. Behav. 5, 743–755 (2021). O'Doherty, J. P., Rutishauser, U. & Iigaya, K. The hierarchical construction of value. Curr. Opin. Behav. Sci. 41, 71–77 (2021). Smith, P. L. & Little, D. R. Small is beautiful: In defense of the small-n design. Psychonomic Bull. Rev. 25, 2083–2101 (2018). Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78 (2013). Kar, K. & DiCarlo, J. J. Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. Neuron 109, 164–176.e5 (2021). Shams, L., Kamitani, Y. & Shimojo, S. What you see is what you hear. Nature 408, 788–788 (2000). Kay, K. N., Naselaris, T., Prenger, R. J. & Gallant, J. L. Identifying natural images from human brain activity. Nature 452, 352 (2008). Li, C. & Chen, T. Aesthetic visual quality assessment of paintings. IEEE J. Sel. Top. Signal Process. 3, 236–252 (2009). Rother, C., Kolmogorov, V. & Blake, A. Grabcut: interactive foreground extraction using iterated graph cuts. In ACM Transactions on Graphics (TOG), vol.
23, 309–314 (ACM, 2004). Chatterjee, A., Widick, P., Sternschein, R., Smith, W. B. & Bromberger, B. The assessment of art attributes. Empir. Stud. Arts 28, 207–222 (2010). Vaidya, A. R., Sefranek, M. & Fellows, L. K. Ventromedial frontal lobe damage alters how specific attributes are weighed in subjective valuation. Cereb. Cortex 1–11 (2017). Durkin, C., Hartnett, E., Shohamy, D. & Kandel, E. R. An objective evaluation of the beholder's response to abstract and figurative art based on construal level theory. Proc. Natl Acad. Sci. USA 117, 19809–19815 (2020). Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. Preprint at https://arxiv.org/abs/1409.1556 (2014). Deng, J. et al. Imagenet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255 (IEEE, 2009). Hasson, U., Nastase, S. A. & Goldstein, A. Direct fit to nature: an evolutionary perspective on biological and artificial neural networks. Neuron 105, 416–434 (2020). Gläscher, J., Hampton, A. N. & O'Doherty, J. P. Determining a role for ventromedial prefrontal cortex in encoding action-based value signals during reward-related decision making. Cereb. Cortex 19, 483–495 (2008). Hampton, A. N. & O'doherty, J. P. Decoding the neural substrates of reward-related decision making with functional mri. Proc. Natl Acad. Sci. USA 104, 1377–1382 (2007). Serences, J. T. Value-based modulations in human visual cortex. Neuron 60, 1169–1181 (2008). Chatterjee, A., Thomas, A., Smith, S. E. & Aguirre, G. K. The neural response to facial attractiveness. Neuropsychology 23, 135 (2009). Stănişor, L., van der Togt, C., Pennartz, C. M. & Roelfsema, P. R. A unified selection signal for attention and reward in primary visual cortex. Proc. Natl Acad. Sci. USA 110, 9136–9141 (2013). FitzGerald, T. H., Friston, K. J. & Dolan, R. J. Characterising reward outcome signals in sensory cortex. Neuroimage 83, 329–334 (2013). Bach, D. R., Symmonds, M., Barnes, G. & Dolan, R. J. Whole-brain neural dynamics of probabilistic reward prediction. J. Neurosci. 37, 3789–3798 (2017). O'Doherty, J. P. The problem with value. Neurosci. Biobehav. Rev. 43, 259–268 (2014). Wang, L., Mruczek, R. E., Arcaro, M. J. & Kastner, S. Probabilistic maps of visual topography in human cortex. Cereb. Cortex 25, 3911–3931 (2014). Barch, D. M. et al. Function in the human connectome: task-fmri and individual differences in behavior. Neuroimage 80, 169–189 (2013). Naselaris, T., Kay, K. N., Nishimoto, S. & Gallant, J. L. Encoding and decoding in fmri. Neuroimage 56, 400–410 (2011). Kell, A. J., Yamins, D. L., Shook, E. N., Norman-Haignere, S. V. & McDermott, J. H. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron 98, 630–644 (2018). Baizer, J. S., Ungerleider, L. G. & Desimone, R. Organization of visual inputs to the inferior temporal and posterior parietal cortex in macaques. J. Neurosci. 11, 168–190 (1991). Rao, S. C., Rainer, G. & Miller, E. K. Integration of what and where in the primate prefrontal cortex. Science 276, 821–824 (1997). Rigotti, M. et al. The importance of mixed selectivity in complex cognitive tasks. Nature 497, 585 (2013). Zhang, C. Y. et al. Partially mixed selectivity in human posterior parietal association cortex. Neuron 95, 697–708 (2017). Noonan, M. et al. Separate value comparison and learning mechanisms in macaque medial and lateral orbitofrontal cortex. Proc. Natl Acad. 
Sci. USA 107, 20547–20552 (2010). Kant, I. Critique of Judgment (Hackett Publishing, 1987). Chatterjee, A. Prospects for a cognitive neuroscience of visual aesthetics. Bull. Psychol. 4 (2003). Van Essen, D. C. & Maunsell, J. H. Hierarchical organization and functional streams in the visual cortex. Trends Neurosci. 6, 370–375 (1983). Felleman, D. J. & Van Essen, D. C. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex 1, 1–47 (1991). Hochstein, S. & Ahissar, M. View from the top: hierarchies and reverse hierarchies in the visual system. Neuron 36, 791–804 (2002). Konen, C. S. & Kastner, S. Two hierarchically organized neural systems for object information in human visual cortex. Nat. Neurosci. 11, 224 (2008). Kahnt, T., Heinzle, J., Park, S. Q. & Haynes, J.-D. Decoding different roles for vmpfc and dlpfc in multi-attribute decision making. Neuroimage 56, 709–715 (2011). Pelletier, G. & Fellows, L. K. A critical role for human ventromedial frontal lobe in value comparison of complex objects based on attribute configuration. J. Neurosci. 39, 4124–4132 (2019). Cadieu, C. F. et al. Deep neural networks rival the representation of primate it cortex for core visual object recognition. PLoS Comput. Biol. 10, e1003963 (2014). Khaligh-Razavi, S.-M. & Kriegeskorte, N. Deep supervised, but not unsupervised, models may explain it cortical representation. PLoS Comput. Biol. 10, e1003915 (2014). Güçlü, U. & van Gerven, M. A. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. J. Neurosci. 35, 10005–10014 (2015). Hong, H., Yamins, D. L., Majaj, N. J. & DiCarlo, J. J. Explicit information for category-orthogonal object properties increases along the ventral stream. Nat. Neurosci. 19, 613 (2016). Llera, A., Wolfers, T., Mulders, P. & Beckmann, C. F. Inter-individual differences in human brain structure and morphology link to variation in demographics and behavior. Elife 8, e44443 (2019). Hekkert, P. & van Wieringen, P. C. The impact of level of expertise on the evaluation of original and altered versions of post-impressionistic paintings. Acta Psychologica 94, 117–131 (1996). Chatterjee, A. & Vartanian, O. Neuroaesthetics. Trends Cogn. Sci. 18, 370–375 (2014). Esteban, O. et al. fmriprep. Software (2018). Gorgolewski, K. J. et al. Nipype. Software (2018). Tustison, N. J. et al. N4itk: improved n3 bias correction. IEEE Trans. Med. Imaging 29, 1310–1320 (2010). Avants, B., Epstein, C., Grossman, M. & Gee, J. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12, 26–41 (2008). Ke, Y., Tang, X. & Jing, F. The design of high-level features for photo quality assessment. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 1, 419–426 (IEEE, 2006). Salah, M. B., Mitiche, A. & Ayed, I. B. Multiregion image segmentation by parametric kernel graph cuts. IEEE Trans. Image Process. 20, 545–557 (2010). Nock, R. & Nielsen, F. Statistical region merging. IEEE Trans. Pattern Anal. Mach. Intell. 26, 1452–1458 (2004). Zhu, Q., Yeh, M.-C., Cheng, K.-T. & Avidan, S. Fast human detection using a cascade of histograms of oriented gradients. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 2, 1491–1498 (IEEE, 2006). Iigaya, K., Yi, S., Wahle, I. A., Tanwisuth, K. & O'Doherty, J. P.
Aesthetic preference for art emerges from a weighted integration over hierarchically structured visual features in the brain. Preprint at bioRxiv https://doi.org/10.1101/2020.02.09.940353 (2020). Hein, M. & Bühler, T. An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse pca. In Advances in Neural Information Processing Systems, 847–855 (2010). Murray, N., Marchesotti, L. & Perronnin, F. Ava: a large-scale database for aesthetic visual analysis. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, 2408–2415 (IEEE, 2012). Murray, N. & Gordo, A. A deep architecture for unified aesthetic prediction. Preprint at arXiv https://arxiv.org/abs/1708.04890 (2017). Huber, P. J. Robust estimation of a location parameter. Ann. Math. Stat. 35, 73–101 (1964). Gatys, L. A., Ecker, A. S. & Bethge, M. Image style transfer using convolutional neural networks. In Proc. IEEE conference on computer vision and pattern recognition, 2414–2423 (2016).
We thank Peter Dayan, Shin Shimojo, Pietro Perona, Lesley Fellows, Avinash Vaidya and Jeff Cockburn for discussions and suggestions. We also thank Ronan O'Doherty for drawing the bird and fruit-bowl paintings, and Seiji Iigaya and Erica Iigaya for drawing the color field painting presented in this manuscript. This work was supported by NIDA grant R01DA040011 and the Caltech Conte Center for Social Decision Making (P50MH094258) to J.O.D., the Japan Society for Promotion of Science, the Swartz Foundation and the Suntory Foundation to K.I., and the William H. and Helen Lang SURF Fellowship to I.A.W.
Division of Humanities and Social Sciences, California Institute of Technology, 1200 E California Blvd, Pasadena, CA, 91125, USA: Kiyohito Iigaya, Sanghyun Yi, Iman A. Wahle, Sandy Tanwisuth, Logan Cross & John P. O'Doherty. Department of Psychiatry, Columbia University Irving Medical Center, New York, NY, 10032, USA: Kiyohito Iigaya. Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA. Department of Computer Science, Stanford University, Stanford, CA, USA: Logan Cross.
K.I. and J.P.O. conceived and designed the project. K.I., S.Y., I.A.W., and S.T. performed experiments, and K.I., S.Y., I.A.W., L.C., and J.P.O. analyzed and discussed results. K.I., S.Y., I.A.W., and J.P.O. wrote the manuscript. Correspondence to Kiyohito Iigaya or John P. O'Doherty. Peer review information: Nature Communications thanks Oshin Vartanian and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Iigaya, K., Yi, S., Wahle, I.A. et al. Neural mechanisms underlying the hierarchical construction of perceived aesthetic value. Nat Commun 14, 127 (2023). https://doi.org/10.1038/s41467-022-35654-y
The surface area and the volume of pyramids, prisms, cylinders and cones
The surface area is the area that describes the material that will be used to cover a geometric solid. When we determine the surface area of a geometric solid we take the sum of the areas of each geometric form within the solid. The volume is a measure of how much a figure can hold and is measured in cubic units. The volume tells us something about the capacity of a figure.
A prism is a solid figure that has two parallel congruent sides, called bases, that are connected by lateral faces that are parallelograms. There are both rectangular and triangular prisms. To find the surface area of a prism (or any other geometric solid) we open the solid like a carton box and flatten it out to find all included geometric forms. To find the volume of a prism (it doesn't matter if it is rectangular or triangular) we multiply the area of the base, called the base area B, by the height h.
$$V=B\cdot h$$
A cylinder is a tube and is composed of two parallel congruent circles and a rectangle whose base is the circumference of the circle. For a cylinder with radius 2 and height 6, the area of one circle is:
$$A=\pi r^{2}$$ $$A=\pi \cdot 2^{2}$$ $$A=\pi \cdot 4$$ $$A\approx 12.6$$
The circumference of a circle:
$$C=\pi d$$ $$C=\pi \cdot 4$$ $$C\approx 12.6$$
The area of the rectangle:
$$A=C\cdot h$$ $$A=12.6 \cdot 6$$ $$A\approx 75.6$$
The surface area of the whole cylinder:
$$A=75.6+12.6+12.6=100.8\, units^{2}$$
To find the volume of a cylinder we multiply the base area (which is a circle) by the height h.
$$V=\pi r^{2}\cdot h$$
A pyramid consists of three or four triangular lateral surfaces and a three or four sided surface, respectively, at its base. When we calculate the surface area of the pyramid below we take the sum of the areas of the 4 triangles and the base square. The height of a triangle within a pyramid is called the slant height. The volume of a pyramid is one third of the volume of a prism.
$$V=\frac{1}{3}\cdot B\cdot h$$
The base of a cone is a circle and that is easy to see. The lateral surface of a cone is a parallelogram with a base that is half the circumference of the cone and with the slant height as the height. This can be a little bit trickier to see, but if you cut the lateral surface of the cone into sections and lay them next to each other it's easily seen. The surface area of a cone is thus the sum of the areas of the base and the lateral surface:
$$A_{base}=\pi r^{2}\: and\: A_{LS}=\pi rl$$ $$A=\pi r^{2}+\pi rl$$
For a cone with radius 3 and slant height 9:
$$\begin{matrix} A_{base}=\pi r^{2}\: \: &\, \, and\, \, & A_{LS}=\pi rl\: \: \: \: \: \: \: \\ A_{base}=\pi \cdot 3^{2} & & A_{LS}=\pi \cdot 3\cdot 9\\ A_{base}\approx 28.3\: \: && A_{LS}\approx 84.8\: \: \: \: \: \\ \end{matrix}$$
$$A=\pi r^{2}+\pi rl=28.3+84.8=113.1\, units^{2}$$
The volume of a cone is one third of the volume of a cylinder.
$$V=\frac{1}{3}\pi \cdot r^{2}\cdot h$$
Example: Find the volume of a prism that has a rectangular base with sides 3 and 5 and a height of 3.
$$B=3\cdot 5=15$$ $$V=15\cdot 3=45\: units^{3}$$
Find the surface area of a cylinder with the radius 4 and height 8.
Find the volume of a cone with height 5 and the radius 3.
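As a quick check of these formulas, the short Python sketch below (not part of the original lesson; the function names are our own) computes the quantities from the worked examples and the two exercises. Note that the lesson's value of 100.8 for the cylinder comes from rounding the intermediate results to 12.6 and 75.6.

```python
# Surface area and volume formulas from the lesson above.
import math

def prism_volume(base_area, height):
    return base_area * height                      # V = B * h

def cylinder_surface_area(radius, height):
    return 2 * math.pi * radius**2 + 2 * math.pi * radius * height

def cylinder_volume(radius, height):
    return math.pi * radius**2 * height            # V = pi * r^2 * h

def cone_surface_area(radius, slant_height):
    return math.pi * radius**2 + math.pi * radius * slant_height

def cone_volume(radius, height):
    return math.pi * radius**2 * height / 3        # one third of the cylinder volume

print(prism_volume(3 * 5, 3))                      # 45, as in the worked example
print(cylinder_surface_area(2, 6))                 # ~100.5 (the lesson's 100.8 uses rounded values)
print(cylinder_surface_area(4, 8))                 # exercise: cylinder with r = 4, h = 8
print(cone_volume(3, 5))                           # exercise: cone with r = 3, h = 5
```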
Fibrations in complex geometry
Let $X^n$ be a compact Kähler manifold with $K_X$ semi-ample, i.e., a sufficiently high power of $K_X$ is basepoint free. The associated pluricanonical system $| K_X^{\ell} |$ furnishes a birational map $$f : X \dashrightarrow \mathbb{P}^{\dim H^0(X, K_X^{\ell})-1}$$ onto some normal projective variety $Y \subset \mathbb{P}^{\dim H^0(X, K_X^{\ell})-1}$ of dimension $\kappa(X)$. Here, $\kappa(X)$ denotes the Kodaira dimension of $X$. The semi-ampleness of $K_X$ further implies that $K_X^{\ell} \simeq f^{\ast} \mathcal{O}(1)$. In particular, for every $y \in Y$ that is not contained in the discriminant locus of $f$, $K_X^{\ell} \vert_{f^{-1}(y)} \simeq \mathcal{O}_{X_y}$. Since $f$ is a submersion near $f^{-1}(y)$, we have the adjunction-type relation $K_{f^{-1}(y)} \simeq K_X \vert_{f^{-1}(y)}$ and therefore the fibres of $f$ are Calabi--Yau manifolds of dimension $n- \kappa(X)$. In the Kähler geometry literature, it is common to refer to this map $f$ as a Calabi--Yau fibration. My question may be extremely obvious, but nevertheless:
Question: Is this a fibration in the sense of homotopy theory, i.e., does this map satisfy the homotopy lifting property?
Tags: ag.algebraic-geometry, complex-geometry, kahler-manifolds (asked by AmorFati)
Answer (Francesco Polizzi, Aug 14, 2020): I would say that the answer is in general no. Think of an elliptic surface $X$ with Kodaira dimension $1$ and whose elliptic fibration contains a cuspidal curve. Then the general fibre is not homotopically equivalent to the special one (the former is homeomorphic to $S^1 \times S^1$, the latter to $S^2$; in particular their fundamental groups are different), whereas all the fibres of a Hurewicz fibration have the same homotopy type.
Edit. Actually, any elliptic surface $X$ with Kodaira dimension $1$ and whose elliptic fibration contains a nodal curve also provides a counterexample. In fact, a nodal cubic is homeomorphic to a torus "with one cycle shrunk away"; in particular, it has the homotopy type of $S^1 \vee S^2$ and the previous argument applies.
Comment (AmorFati): Thank you! Do you have a reference for these homeomorphisms? Admittedly, I have never thought about homotopy types of the cuspidal/nodal curves.
Comment (Francesco Polizzi): It is rather elementary. The cuspidal cubic has affine equation $y^2=x^3$, and the projection $(x, \, y) \mapsto y$ induces a homeomorphism onto $\mathbb{P}^1$, which is topologically $S^2$. Regarding the nodal cubic, it can be seen as $\mathbb{P}^1$ with two points identified. Hence, topologically, we are identifying two points on $S^2$; this is clearly the same thing as shrinking a cycle on a torus, and the corresponding quotient space is homotopically equivalent to a sphere with a circle attached at a point.
Relative Hausdorff distance for network analysis
Sinan G. Aksoy, Kathleen E. Nowak, Emilie Purvine & Stephen J. Young
Similarity measures are used extensively in machine learning and data science algorithms. The newly proposed graph Relative Hausdorff (RH) distance is a lightweight yet nuanced similarity measure for quantifying the closeness of two graphs. In this work we study the effectiveness of RH distance as a tool for detecting anomalies in time-evolving graph sequences. We apply RH to cyber data with given red team events, as well as to synthetically generated sequences of graphs with planted attacks. In our experiments, the performance of RH distance is at times comparable, and sometimes superior, to graph edit distance in detecting anomalous phenomena. Our results suggest that in appropriate contexts, RH distance has advantages over more computationally intensive similarity measures.
Similarity measures play a crucial role in many machine learning and data science algorithms such as image classification and segmentation, community detection, and recommender systems. A good deal of effort has gone into developing similarity measures for graphs, in particular, since they often provide a natural framework for representing unstructured data that accompanies many real-world applications. Some popular graph similarity measures currently used are graph edit distance (Sanfeliu and Fu 1983), iterative vertex-neighborhood identification (Blondel et al. 2004; Kleinberg 1999), and maximum common subgraph based distance (Fernández and Valiente 2001). However, as graph datasets grow larger and more complex, the need for tools that can both capture meaningful differences and scale well is becoming more critical. In this respect, a number of sophisticated yet costly graph similarity measures, such as those listed above, fall short. The recently proposed graph Relative Hausdorff (RH) distance (Simpson et al. 2015) is a promising measure for quantifying similarity between graphs via their degree distributions. Inspired by the Hausdorff metric from topology (Hausdorff 1914), RH distance was devised to capture degree distribution closeness at all scales, and hence is well-suited for comparing the heavy-tailed degree distributions frequently exhibited by real-world graphs. Furthermore, as recent work has shown (Aksoy et al. 2018), RH distance is extremely lightweight, with time complexity linear in the maximum degrees of the graphs being compared. However, as this metric is relatively new, it has not yet been extensively vetted. In particular, current research has not addressed its potential as an anomaly detection method for time-evolving graphs. In this work, we conduct a statistical and experimental study of RH distance in the context of dynamic graphs. While RH distance may be applied to arbitrary pairs of networks, we focus our attention on sequences of time-evolving networks arising from cyber-security applications. We begin by first applying RH distance to cyber-security logs recently released by Los Alamos National Laboratory, and investigate the extent to which it detects identified "red team" events. Then we follow up by studying RH distance in the more general and controlled context of random dynamic graph models. Here we generate sequences of correlated Chung-Lu random graphs using a simplified cyber-security model proposed by Hagberg et al. (2016), and test the extent to which RH distance detects several planted attack profiles.
Throughout our analysis, we compare the performance of RH to that of more well-known graph similarity measures, such as edit distance and Kolmogorov-Smirnov distance of degree distributions. With this work, we better clarify the range of differences captured by RH, and also highlight its practical advantages and disadvantages over other methods.
Graph similarity measures
Below we define the graph similarity measures we consider for anomaly detection in time-evolving graphs. We begin with the graph Relative Hausdorff distance, the primary focus of our study.
Relative Hausdorff distance
Originally introduced by Simpson et al. (2015), the Relative Hausdorff (RH) distance between graphs is a numerical measure of closeness between their complementary cumulative degree histograms (ccdh). More precisely, the (discrete) ccdh of a graph G is defined as \({(N(k))}_{k=1}^{\infty }\), where N(k) denotes the number of vertices of degree at least k. This is related to the commonly used degree distribution, which is defined as \((n(k))_{k=1}^{\infty }\) where n(k) denotes the number of vertices with degree exactly k. Note that the ccdh and degree distribution are equivalent in the sense that each can be uniquely obtained from the other; nonetheless, for the purpose of this exposition, it is more convenient to work with the ccdh. Slightly abusing notation, we write G(d) for a graph G to mean the value of the ccdh of G at d, and let Δ(G) denote the maximum degree of G. With these definitions in hand, the (discrete) RH distance between F and G is then defined as follows.
Definition ((Discrete) Relative Hausdorff distance (Simpson et al. 2015)). Let F,G be graphs. The discrete directional Relative Hausdorff distance from F to G, denoted \(\overrightarrow {\mathcal {R}\mathcal {H}}(F,G)\), is the minimum ε such that
$${}{\begin{aligned} \forall d \in \{1,\dots, \Delta(F)\}, \exists d' \in \{1, \dots, \Delta(G)+1\} \text{ such that } |d-d'| \leq \epsilon d \text{ and } |F(d)-G(d')|\leq \epsilon F(d), \end{aligned}} $$
and \(\mathcal {R}\mathcal {H}(F,G)=\max \{\overrightarrow {\mathcal {R}\mathcal {H}}(F,G),\overrightarrow {\mathcal {R}\mathcal {H}}(G,F)\}\) is the discrete Relative Hausdorff distance between F and G.
In this paper, we compute RH distance using smoothed ccdhs, in which successive points are connected via line segments, as recommended by Matulef (2017); Stolman and Matulef (2017). Specifically, the authors define the smooth ccdh of a graph G, \(G(d): \mathbb {R}_{\geq 1} \rightarrow \mathbb {R}_{\geq 0}\), as
$$G(d) = \left\{\begin{array}{ll} \text{\# of vertices of degree at least } d, &d \in \mathbb{Z}_{\geq 1} \\ (\lceil d\rceil-d)G(\lfloor d \rfloor)+(d-\lfloor d \rfloor)G(\lceil d \rceil), & d \in \mathbb{R}_{\geq 1}\setminus \mathbb{Z}. \end{array}\right. $$
In this case, the RH distance is defined much the same as before, except that the ccdh is piecewise linear. An illustration of the RH equivalent of an ε-ball at points on a smooth ccdh is given in Fig. 1 and the precise definition of smooth RH distance is given below. Henceforth, we focus exclusively on smooth RH distance so we will drop the qualifier.
Fig. 1: An illustration of the RH equivalent of an ε-ball (in green) on one graph ccdh (in blue) compared with another (in red) at different points.
Definition ((Smooth) Relative Hausdorff distance (Matulef 2017; Stolman and Matulef 2017)). Let F,G be graphs.
The smooth directional Relative Hausdorff distance from F to G, denoted \(\overrightarrow {\mathcal {R}\mathcal {H}}(F,G)\), is the minimum ε such that
$$\begin{array}{*{20}l} \forall d \in \{1,\dots, \Delta(F)\}, \exists d' \in \mathbb{R}_{\geq 1} \text{ such that } |d-d'| \leq \epsilon d \text{ and } |F(d)-G(d')|\leq \epsilon F(d), \end{array} $$
and \(\mathcal {R}\mathcal {H}(F,G)=\max \{\overrightarrow {\mathcal {R}\mathcal {H}}(F,G),\overrightarrow {\mathcal {R}\mathcal {H}}(G,F)\}\) is the smooth Relative Hausdorff distance between F and G.
By definition, \(\mathcal {R}\mathcal {H}(F,G)=\epsilon \) means that for every degree k in the graph F, F(k) is within ε-fractional error of G(k′) for some k′ within ε-fractional error of k. Hence, the RH measure is flexible in accommodating some error in both vertex degree values as well as their respective counts, yet strict in requiring that every point in F be ε-close to G (and vice versa). While RH distance was inspired by the Hausdorff distance metric (Hausdorff 1914) after which it is named, the concept underlying RH distance between graphs differs from Hausdorff distance in several important regards. Recall that the directional Hausdorff distance from a non-empty subset X to a non-empty subset Y of a metric space (M,d) is \(\sup_{x \in X} \inf_{y \in Y} d(x,y)\), or equivalently, \(\sup _{x \in X} \inf \{ \varepsilon >0: B(x;\varepsilon) \cap Y \not = \varnothing \}\), where B(x;ε) denotes the closed ε-ball centered at x. Crucially, RH distance replaces B(x;ε) with balls that are non-uniform in X, as illustrated in Fig. 1. This relative notion of ball, while no longer a true metric ball, is more appropriate for analyzing differences in highly-skewed degree distributions exhibited by complex networks. However, as discussed and analyzed in Aksoy et al. (2018), this also decouples the "distance" from the underlying topology, yielding a function \(\mathcal {R}\mathcal {H}(F,G)\) that does not satisfy the triangle inequality and hence is best viewed as a similarity measure rather than a bona-fide distance metric. However, to match existing literature we will still use the term "RH distance." Lastly, it is worth noting that \(\mathcal {R}\mathcal {H}(F,G)\) is extremely lightweight, and can be computed with run time \(\mathcal {O}(\Delta (F)+\Delta (G))\); for a linear-time algorithm and more on theoretical properties of RH distance, the reader is referred to Aksoy et al. (2018). While RH distance will be the focus of the present work, we also consider several other graph similarity measures in order to provide relevant context for its performance. First, we consider another comparably lightweight, ccdh-based measure called Kolmogorov-Smirnov (KS) distance. KS distance is a widely-used statistical measure of similarity between distributions, and serves as the test statistic for the two-sample KS hypothesis test (Gibbons and Chakraborti 2011; Young 1977). In what follows, we will not only compute KS distance directly between graph degree distributions, but also between distributions of graph similarity values, such as RH values. To avoid confusion, below we define both KS distance as well as the two-sample KS hypothesis test (for which KS distance is a test statistic) for general empirical distributions.
Definition (KS distance and two-sample KS test (Gibbons and Chakraborti 2011)). Let F and G be empirical cumulative distribution functions formed from n and m samples, respectively.
The Kolmogorov–Smirnov distance is
$$\mathcal{K}\mathcal{S}(F,G)=\max_{x} |F(x)-G(x)|.$$
For the asymptotic null distribution, the null hypothesis that F and G are samples of two identical probability density functions is rejected at the α confidence level if
$$\mathcal{K}\mathcal{S}(F,G) > c(\alpha) \sqrt{\frac{n+m}{nm}}, $$
where c(α) is given asymptotically by \(\sqrt {-\frac {1}{2} \log {\alpha }}\). See ((Gibbons and Chakraborti 2011), Ch. 6) and the references contained therein for a detailed overview of the two-sample KS test and derivation of the asymptotic expressions above. For small sample sizes, tables of critical values may be used in place of asymptotic estimates (e.g. for n,m≤25, see (Siegel and Castellan 1988), Tables LI–LII). The reader is referred to Marsaglia et al. (2003); Simard and L'Ecuyer (2011) for further discussion on exact vs. approximate methods for computing the Kolmogorov-Smirnov distribution. For clarity, we emphasize that smaller values of KS distance indicate greater similarity between distributions, whereas smaller p-values permit one to reject the null hypothesis of identical underlying distributions at a higher confidence level, thereby presenting stronger evidence that the empirical distributions were drawn from different underlying distributions. For the special case that F and G are the ccdhs of two graphs on n and m vertices, respectively, KS distance is given by \(\mathcal {K}\mathcal {S}(F,G)=\max _{x \in \mathbb {N}} |\widetilde {F}(x)-\widetilde {G}(x)|\), where \(\widetilde {F}=\tfrac {1}{n}\cdot F\) and \(\widetilde {G}=\tfrac {1}{m}\cdot G\). As argued in Matulef (2017); Stolman and Matulef (2017), KS distance between graph ccdhs can sometimes be large in graphs that are intuitively similar; furthermore, KS distance may also be insensitive to certain important differences between graph ccdhs, particularly in the tails of ccdhs (which correspond to high-degree vertices). On the other end of the computational spectrum, we also consider graph edit distance (GED). Arguably one of the most well-known graph similarity measures, GED has been widely used throughout machine learning, particularly in computer vision and pattern recognition contexts (Gao et al. 2009). An edit operation on a graph consists of either an insertion, deletion, or substitution of a single vertex or edge. An edit path of length k between F and G is a sequence of edit operations \(\mathcal {P}=(e_{1},\dots,e_{k})\) that takes F to a graph that is isomorphic to G. Edit distance is the total weight of the minimum-cost edit path, i.e.
Definition (Graph edit distance). Let F,G be graphs. The graph edit distance between F and G, denoted GED(F,G), is given by
$$\begin{array}{*{20}l} \text{GED}(F,G)&= \min_{\mathcal{P} \in \Upsilon(F,G)} \sum_{e_{i} \in \mathcal{P}} c(e_{i}), \end{array} $$
where Υ(F,G) denotes the set of possible edit paths from F to G, and \(c(e_{i}) \geq 0\) denotes a cost-function measuring the weight of edit operation \(e_{i}\). In what follows, we simply take \(c(e_{i})=1\) for any edit operation, in which case GED(F,G) is the minimum number of edit operations needed to transform F to G. While in this work we focus on the graph Relative Hausdorff distance, we note that a wide variety of graph similarity measures have been utilized for anomaly detection in time-evolving graphs. In Ishibashi et al. (2010), the authors propose detecting anomalies in communication network traffic data by measuring cosine similarity between the principal eigenvectors of graph adjacency matrices.
Akoglu and Faloutsos (2010) also take an eigenvector-based approach for measuring graph anomalousness. Matrix-analytic graph similarity measures have also been based on eigenvalue residuals (Giuseppe et al. 2011), non-negative matrix factorization (Tong and Lin 2011), and tensor decompositions (Sapienza et al. 2015). Other popular approaches for graph-based anomaly detection are via distance metrics, such as those based on edit distance, maximum common subgraph distance, or mean vertex eccentricity (Gaston et al. 2006), or via community-detection approaches that identify anomalies by tracking changes between clusters of well-connected vertices (Aggarwal et al. 2011; Wang and Paschalidis 2017). For a broader survey of graph-based anomaly detection techniques see (Akoglu et al. 2014; Ranshous et al. 2015; Sensarma and Sarma 2015) and the references contained therein. Lastly, we note that applications of graph similarity functions extend far beyond anomaly detection. Graph similarity functions are also ubiquitous in inexact graph matching and graph classification problems. For instance, graph edit distance is a key tool for error-tolerant pattern recognition and computer vision techniques (Gao et al. 2009). While in this work we explore Relative Hausdorff distance through the lens of anomaly detection, we note its application as a graph similarity measure in other contexts such as these remains unexplored. As our focus in this paper will be on cyber anomaly detection in network flow data, we also mention some of the existing graph-based methods specifically for cyber anomaly detection. In this domain, some researchers focus on detecting anomalies edge-by-edge or target specific types of behavior, e.g., (François et al. 2011; Noble and Adams 2018), while others look at the graph more globally or structurally and are agnostic to the type of anomalous behavior being detected, e.g., (Chen et al. 2016). Noble and Adams (2018) describe a real-time unsupervised framework for detecting anomalies in network data. They consider "edge activity" as the sequence of flows on a single edge and compute correlations between event inter-arrival times and other edge data (e.g., byte count or protocol). Statistically significant changes in those correlations are flagged as anomalies. Groups of adjacent anomalies can be combined to form larger anomalies, perhaps indicating coordinated behavior. The authors of François et al. (2011) use PageRank to perform linkage analysis followed by clustering techniques to identify groups of IPs with similar behavior. These groups are then compared with known bot behavior to detect botnets within the network. In the category of more structural and behavior-agnostic algorithms, Chen et al. (2016) introduce multi-centrality graph PCA and multi-centrality graph dictionary learning, which use structural properties of a graph, e.g., walk statistics and centrality measures, to learn normal structure and thus detect abnormal structure. This method is not tailored to the cyber use case, but the authors use network flow as one of their examples. Our work is similarly not targeted towards a specific cyber use case and is focused on detecting structural perturbations rather than clustering behavioral patterns. Finally, we note that others have measured network similarity using Hausdorff distance and the related Gromov-Hausdorff distance (Edwards 1975; Gromov 1981) on metric spaces.
Banič and Taranenko (2015) define the Hausdorff distance between two simple, connected graphs based on the lattice of all subgraphs of the graphs in question. In Lee et al. (2011), a quantity inspired by Gromov-Hausdorff distance is applied to analyze brain networks. This quantity relies on the embedding of the network in a geometric structure; in general, Gromov-Hausdorff distance is defined over all isometric embeddings of a metric space. Recent work in Choi (2019) proposes fundamental definitions toward a theory of Gromov-Hausdorff distances between graphs, and includes exact calculations for a few simple classes of graphs. The wider applicability of this notion of Gromov-Hausdorff distance to real graph data is likely to face significant theoretical obstacles, as (Nowak et al.) shows that only a few special classes of graphs can be isometrically embedded in the class of metric spaces arising from a Hilbert space. Furthermore, both exact computation as well as approximation of Gromov-Hausdorff distance present computational challenges (Agarwal et al. 2018). In contrast, we emphasize RH distance is applicable to any pair of graphs, admits linear-time computation, and (rather than requiring an associated graph embedding) is defined on graphs solely as abstract combinatorial objects. Los Alamos National Laboratory (LANL) cybersecurity data To begin our study of RH distance as an anomaly detection method for dynamic graphs, we will first consider a dataset recently released by LANL with known red team events from their internal corporate computer network (Kent 2015a; 2015b). The dataset represents 58 consecutive days of de-identified event data collected from four sources, namely: Windows-based authentication events from both individual computers and centralized active directory domain controller servers, Process start and stop events from individual Windows computers, Domain Name Service (DNS) lookups collected by internal DNS servers, and Network flow data collected at several key router locations. In total, the data set is approximately 87.4 gigabytes, spread across the four modalities, including 1,648,275,307 events coming from 12,425 users, 17,684 computers, and 62,974 processes. Ground truth for the red team events is given as a set of authentication events that are known red team compromise events. In this section, we will demonstrate that Relative Hausdorff distance is effectively able to identify anomalous behavior around the red team events in the LANL data. As stated above, LANL captured network evolution in four different modalities, namely authentication, process, network flow, and DNS events. In order to apply the RH distance, we first must convert these network event files into a time series of graphs. To do so, we consider 60 sec moving windows that advance 20 sec at a time. For each window we use the events in that window to construct a graph. For the 58 consecutive days, this yields a time sequence of 250,560 graphs for each modality. Further details for constructing each type of graph are given below. Authentication Graphs. The authentication data is a record of authentication events collected from individual Windows-based desktop computers, servers, and Active Directory servers. Each line of the data file reports a separate authentication event in the form time, sourceUser@domain, destUser@domain, source computer, dest computer, auth type, logon type, auth orientation, pass/fail. 
For a given window, we construct an unweighted graph with edges {sourceUser, destUser} for each user pair present in the logs within the window. Authentication Failure Graphs. These are constructed in the same manner as the Authentication Graphs, except we restrict the edge set to those corresponding to failed authentications only. Process Graphs. The process data is a record of process start and stop events collected from individual Windows-based desktop computers and servers. Each line of the data file reports a separate process start/stop in the form time, user@domain, computer, process name, start/end. For a given window, we construct an unweighted graph with edges {computer, process name} for each computer-process pair present in the logs within the window. DNS Graphs. The DNS data is a record of DNS lookup events collected from the central DNS servers within the network. Each line of the data file reports a separate lookup event in the form time, source computer, computer resolved, representing a DNS lookup at the given time by the source computer for the resolved computer. For a given window, we construct an unweighted graph with edges {source computer, computer resolved} for each source-resolved computer pair present in the logs within the window. Flow Graphs. The flow data is a record of the network flow events collected from central routers within the network. Each line of the data file reports a separate network flow event in the form time, duration, source computer, source port, dest computer, dest port, protocol, packet count, byte count. For a given window, we construct an unweighted graph with edges {source computer, dest computer} for each source-destination computer pair that communicate during that time. While working with real-world data often presents challenges, testing graph-based anomaly detection methods on the LANL dataset is particularly difficult for several reasons. First and foremost, the data only provides red team authentication attempt time stamps and does not specify the nature, extent or duration of the red team events. This makes it difficult to segregate benign from anomalous time periods. Additionally, without knowing the specific red team actions, it is difficult to determine which (if any) of the aforementioned modalities a red team signature may appear in. Finally, it is worth noting the data exhibited large periods of time in which no events occurred that did not correspond to regular lulls such as weekends and nighttime. In particular, the flow data has records from only the first 37 of the 58 days. To address some of these limitations, in "Simulated evolving networks" section we extend our analyses to a generalized dynamic network model (Hagberg et al. 2016) proposed by LANL scientists Hagberg, Mishra, and Lemons. While no synthetic model is a perfect substitute for real data, this model's conception and design was directly informed by direct access to the LANL cyber data (Kent 2014; 2016) and provides a framework under which we may draw more certain and rigorous conclusions regarding the behavior of RH distance. First, we present our analysis of the real LANL data. Experiment and results As a first-pass approach towards studying the sensitivity of RH distances to red team events in the LANL dataset, we test whether the distribution of pairwise RH distance values before a red team event differs significantly from the post red team event distribution. 
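Before describing this statistical comparison in detail, we note that the per-window graph construction above is straightforward to implement. The snippet below is a minimal sketch rather than our exact processing pipeline: it assumes the raw event records have already been parsed into time-sorted (time, source, destination) tuples, and the function and variable names are ours.

```python
def window_graphs(events, window=60, stride=20):
    """Build an unweighted, undirected graph (a set of vertex pairs) for each
    sliding window of `window` seconds, advanced `stride` seconds at a time.

    `events` is a time-sorted iterable of (time, source, destination) tuples,
    e.g. (t, sourceUser, destUser) for authentication records or
    (t, source_computer, dest_computer) for flow records.
    """
    events = list(events)
    if not events:
        return
    start, t_end = events[0][0], events[-1][0]
    i = 0  # index of the first event at or after `start`
    while start <= t_end:
        while i < len(events) and events[i][0] < start:
            i += 1
        edges = set()
        j = i
        while j < len(events) and events[j][0] < start + window:
            _, u, v = events[j]
            if u != v:
                edges.add(frozenset((u, v)))
            j += 1
        yield start, edges
        start += stride

# Toy example with three flow-style records (time, source, destination)
toy = [(0, "c1", "c2"), (15, "c2", "c3"), (75, "c1", "c3")]
for t, g in window_graphs(toy):
    print(t, sorted(tuple(sorted(e)) for e in g))
```

The same routine applies to any of the four modalities; only the choice of the two record fields forming the edge changes.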
Through this before/after comparison, we assess whether there is statistical evidence to support that red team events demarcate "change-points" in the RH distance distribution. To that end, for each red team event at time r, we associate a time window w of length ℓ centered at r, which we denote wℓ(r). Each such window can be naturally partitioned into a "before" period (i.e. the time interval (r−ℓ/2,r)) and an "after" period, (r,r+ℓ/2). To avoid overlapping windows and ensure the "before" periods are in fact devoid of red team events, we restrict attention to windows in which no red team event occurs in the time interval (r−ℓ/2,r). Put equivalently, we consider the set
$$W_{\ell}=\{w_{\ell}(r): \text{a red team event occurs at } r \text{ and no red team event occurs in } (r-\ell/2,\,r)\}.$$
We note that it is possible for the after period of a window in Wℓ to contain additional red team events. For each window in Wℓ, we compute the RH distances between pairs of graphs separated by δ seconds in the before period, as well as such pairs belonging to the after period. We then aggregate the RH distances over all before periods and all after periods. More precisely, if \(G_{0},G_{1},\dots \) denotes the time-ordered sequence of graphs for a particular mode in the LANL data, we compute the aggregate before and after distributions as
$$\begin{aligned} D_{b} &= \{\mathcal{R}\mathcal{H}(G_{t},G_{t+\delta}): t,\,t+\delta \in (r-\ell/2,\,r) \text{ and } w_{\ell}(r) \in W_{\ell} \},\\ D_{a} &= \{\mathcal{R}\mathcal{H}(G_{t},G_{t+\delta}): t,\,t+\delta \in (r,\, r+\ell/2) \text{ and } w_{\ell}(r) \in W_{\ell} \}, \end{aligned}$$
respectively. Recalling that we processed the LANL graph sequence for each modality by generating graphs for windows shifted by 20 sec, we may choose the parameter δ controlling the granularity of pairwise RH measurements to be as small as 20 sec and as large as ℓ/2−20 sec. Finally, we assess whether these aggregated before and after RH distance distributions differ significantly by conducting a two-sample Kolmogorov-Smirnov test. Table 1 presents the resulting p-values for δ=20,40,60,120,240 sec, under window lengths ℓ=30,60,120 min, for each LANL modality.

Table 1 The p-values of the two-sample KS test comparing RH distance distributions of aggregated before and after periods of time windows centered at red team events in the LANL data

The p-values in Table 1 suggest that whether the aggregated distribution of RH values before red team events differs significantly from the post red team distribution depends crucially on the cyber modality, window length, and granularity parameter δ. In the case of a 30 min window, almost none of the parameter settings for any modality result in statistical significance, while for a 2-h window, a majority of parameter settings are significant at a level of 0.05. That is, the before and after RH distance distributions over longer time windows surrounding red team events more frequently show significant differences, which is perhaps unsurprising. On the other hand, the changes in significance levels as the granularity parameter δ varies are more difficult to interpret. Even for a fixed window length and modality, the significance levels neither consistently increase nor decrease in δ. One plausible hypothesis for this experiment's sensitivity to δ is that RH distance values exhibit periodic behavior both within and across modalities, reflecting the natural circadian rhythms one might expect from temporal cyber data.
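Before examining that hypothesis, we pause to sketch the aggregation-and-test procedure just described. This is a minimal illustration under our own naming conventions, not the exact pipeline used to produce Table 1: it assumes a dictionary of per-window graphs keyed by start time and a function rh_distance computing the RH distance between two graphs.

```python
from scipy.stats import ks_2samp

def before_after_pvalue(graphs, red_times, ell, delta, rh_distance, step=20):
    """Two-sample KS comparison of RH distances aggregated over the 'before'
    and 'after' half-windows of length ell/2 around red team times.

    `graphs` maps a timestamp (a multiple of `step` seconds) to its graph,
    `red_times` is the list of red team event times, and `rh_distance` is a
    function computing the RH distance between two graphs.
    """
    before, after = [], []
    # W_ell: windows whose 'before' half contains no other red team event
    usable = [r for r in red_times
              if not any(r - ell / 2 < s < r for s in red_times)]
    for r in usable:
        t = r - ell / 2 + step
        while t + delta < r:                    # pairs inside (r - ell/2, r)
            if t in graphs and t + delta in graphs:
                before.append(rh_distance(graphs[t], graphs[t + delta]))
            t += step
        t = r + step
        while t + delta < r + ell / 2:          # pairs inside (r, r + ell/2)
            if t in graphs and t + delta in graphs:
                after.append(rh_distance(graphs[t], graphs[t + delta]))
            t += step
    if not before or not after:
        return None
    return ks_2samp(before, after).pvalue
```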
If such periodicity were present, the choice of δ could skew the RH values sampled when constructing the representative before and after distributions. To check whether such periodicity is indeed present in the RH distance measurements on LANL, we constructed heatmaps of RH distances between all pairs of graphs over given time windows. As this requires a quadratic number of comparisons, it is worth noting this analysis is crucially facilitated by the lightweight computational complexity of RH distance. We examined heatmaps not only for windows surrounding anomalies, but also for time windows away from red team events. Figure 2 (left column) presents sample heatmaps for the Authentication, Flow and Process modalities spanning a 2-h time period. In an effort to select a representative window for short-term nominal RH behavior within each modality, this time period was selected so as to not include any red team events nor be preceded or followed by any red team events for 20 h. It is also worth pointing out that the RH distance between pairs of flow graphs regularly exceeds one, indicating that the rough guide for detecting anomalous behavior given in Simpson et al. (2015) is inappropriate for the cyber-security context. We also transformed each heatmap of pairwise RH distance values into a similarity matrix by applying the Gaussian kernel with σ=1, and performed normalized Laplacian spectral clusteringFootnote 3, as described by Ng et al. (2002); a minimal sketch of this step is given below. Under the corresponding heatmap, Fig. 2 (right column) plots the pairs of graphs belonging to common clusters, using a different color for each cluster (and white for different clusters).

Fig. 2 Left Column: RH distance heatmaps for nominal two-hour period. Right Column: Spectral clustering of similar graphs based on RH distances. a Authentication Heatmap. b Authentication Clustering. c Flow Heatmap. d Flow Clustering. e Process Heatmap. f Process Clustering

A cursory examination of the heatmaps and their clustering suggests that the RH distance values for a given modality exhibit persistent and striking periodic patterns. Furthermore, in comparing the plots for Flow, Authentication, and Process, the differences in the periodic behavior of RH values are also apparent. While these periodicities are not entirely unexpected, they are likely network- and data-dependent. Detecting and visualizing periodic behavior in network data is an active area of research, e.g., (Gove and Deason 2018; Hubballi and Goyal 2013; Price-Williams et al. 2017). As a consequence of this experiment and examination of the heatmaps in Fig. 2, it is clear that a single choice of granularity parameter δ is likely insufficient in establishing a representative distribution of RH values within a time window for any given modality. Accordingly, we next refine our experiment to better account for the inherent multi-scale and multi-modal nature of the LANL data.

One of the many difficulties with investigating the effectiveness of RH distance in detecting anomalous behavior associated with red team events in the LANL data sets is that the red team process is inherently multi-modal and multi-scale. That is, the red team events identified in the data are simply the first step of the red team intrusion process, which could potentially affect all of the data modalities (process, DNS, flow, and authentication) and occur over multiple time scales.
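Returning to the heatmap-clustering step mentioned above, the following is a minimal sketch of converting a matrix of pairwise RH distances into a similarity matrix and clustering it. We use scikit-learn's SpectralClustering as a stand-in for the normalized-Laplacian procedure of Ng et al. (2002); the number of clusters is simply passed in here rather than chosen from the first gap in the Laplacian eigenvalues as in Footnote 3, and the exact kernel normalization is a convention choice on our part.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_rh_heatmap(D, n_clusters, sigma=1.0):
    """Turn a square matrix `D` of pairwise RH distances into a Gaussian-kernel
    similarity matrix and partition the graphs with spectral clustering."""
    D = np.asarray(D, dtype=float)
    # Gaussian kernel; the text specifies sigma = 1 but not the exact
    # normalization, so exp(-d^2 / (2 sigma^2)) is our assumption.
    S = np.exp(-(D ** 2) / (2.0 * sigma ** 2))
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(S)
    return labels
```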
To address this multi-modal, multi-scale difficulty, we craft an indicator for each time that considers the RH distance between graphs at multiple time differences and in multiple modalities. More concretely, let \(\mathcal{S}\) be the set of potential data sources and let \(\mathcal{G} = \{G_{t,s}\}\) be the collection of observed graphs indexed by the time t and data source \(s \in \mathcal{S}\). For any fixed timestamp t and collection of differences \(\mathcal{D}\), we will define the profile vector at time t, v(t), as the vector given by \((\mathcal{R}\mathcal{H}(G_{t,s},G_{t-\delta,s}))_{s\in \mathcal{S},\delta \in \mathcal{D}}\). Ideally, this profile could be used to aggregate the behavior across multiple modalities and multiple time scales and give a clearer picture of the overall state of the system. However, there is a further complication with this approach in that the LANL data represents a system which has a naturally evolving behavior based on various temporal patterns of human activity (i.e. weekday vs. weeknight, circadian rhythms, etc.). To adjust for these temporal patterns, for every time t, source \(s \in \mathcal{S}\), and difference \(\delta \in \mathcal{D}\), we define a baseline behavior random variable \(\mathcal{B}_{t,s,\delta}\), which represents the "typical" behavior of \(\mathcal{R}\mathcal{H}(G_{t,s},G_{t-\delta,s})\). This baseline behavior can then be combined with the profile vector v(t) to generate a temporal profile vector \(\hat{v}^{(t)}\), where for any \((s,\delta) \in \mathcal{S} \times \mathcal{D}\) we have \(\hat{v}^{(t)}_{s,\delta} = \mathbb{P}\left(v^{(t)}_{s,\delta} - \epsilon < \mathcal{B}_{t,s,\delta} < v^{(t)}_{s,\delta} + \epsilon \right)\). We define the temporal score of the time t as the geometric mean of the entries of \(\hat{v}^{(t)}\)Footnote 4. In what follows, we will calculate the temporal scores of time periods before and after a red team authentication event and show that there is a statistically significant difference between the behaviors. In fact, we will show that this temporal scoring methodology is more sensitive than using the raw RH scores evaluated in Fig. 1, indicating that there are significant potential gains to be found by considering multi-modal and multi-scale indicators for anomalous behaviors.

Before applying our results to the LANL data sets, it remains to address how to estimate the distribution of the random variable \(\mathcal{B}_{t,s,\delta}\) and how the ε term that defines the temporal profile vector is chosen. For a fixed t, to estimate the empirical distribution of \(\mathcal{B}_{t,s,\delta}\) we consider the RH distance between all pairs of graphs \((G_{t^{*},s},G_{t^{*}-\delta,s})\) where t∗ ranges over all times that differ from t by a multiple of a week, plus or minus 10 min. In order to avoid biasing this empirical estimate we exclude times t∗ where there is a red team event in the interval [t∗−δ,t∗] as well as those that are within 10 min of t. As we see in Fig. 2, the typical variation of the RH distance changes significantly based on the modality of the observation, both in source and in elapsed time between graphs. Thus, rather than fixing a particular value of ε, we choose ε as one twentieth of the range of the empirical distribution for \(\mathcal{B}_{t,s,\delta}\).
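A minimal sketch of this temporal-scoring computation, under our own naming conventions and with one explicitly flagged assumption about zero probabilities, is given below.

```python
import numpy as np

def temporal_score(current, baselines):
    """Temporal score at a single time t.

    `current`   maps (source, delta) -> the observed RH distance at time t.
    `baselines` maps (source, delta) -> a 1-D array of historical RH distances
                drawn from the same time of week (the empirical stand-in for
                the baseline random variable B_{t,s,delta}).
    Entries with no usable baseline are dropped, as described in Footnote 4.
    """
    probs = []
    for key, value in current.items():
        sample = np.asarray(baselines.get(key, []), dtype=float)
        if sample.size == 0:
            continue
        eps = (sample.max() - sample.min()) / 20.0   # one twentieth of the range
        p = np.mean((value - eps < sample) & (sample < value + eps))
        # The text does not say how zero probabilities are handled; flooring
        # at 1/n so the geometric mean stays finite is our assumption.
        probs.append(max(p, 1.0 / sample.size))
    if not probs:
        return None
    return float(np.exp(np.mean(np.log(probs))))     # geometric mean
```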
Finally, for each of the 712 red team times provided in the LANL data we calculate the temporal scores for each graph in the 30 min before and after the red team time and apply the two-sample Kolmogorov-Smirnov test; see Table 2. We further segregate this data by whether or not additional red team events occur during the 30 min prior to the red team time.

Table 2 Aggregate behavior of temporal scoring on a per event basis

It is clear from Table 2 that the temporal score is far from a perfect indicator, as a non-negligible fraction of the changes associated with a red team event are not detected. Nonetheless, it is also apparent that for a relatively lightweight measure the RH distance exhibits reasonable effectiveness in distinguishing between nominal and anomalous behaviors. However, our conclusions must be somewhat tempered by the challenging nature of real-world data and the LANL data in particular. Specifically, the lack of clear demarcation between anomalous and non-anomalous behavior as well as the limited time-scope of the investigation are significant caveats to any conclusions we make about the effectiveness of RH distance. In the following section, we attempt to address these caveats by analyzing a synthetic temporal graph model inspired by the LANL cyber data.

Simulated evolving networks

The study of temporal networks is concerned with the analysis and modeling of time-ordered sequences of graphs. In order to better understand temporal network dynamics, researchers have proposed a plethora of abstract models for their simulation (for a survey, see (Holme and Saramäki 2012)). In the present work, we consider a temporal graph model that belongs to the broader class of Markovian Evolving Graphs (MEGs) (Avin et al. 2008). Given a probability distribution over the set of all graphs on a fixed vertex set, MEGs have the defining property that the distribution at time t is completely determined by that at t−1, thereby forming a sequence of random variables which satisfy the Markov property. Because of their generality and flexibility, MEGs have been popularly used to study information spreading processes, such as file sharing on peer-to-peer networks, social network memes, and disease spreading (Clementi et al. 2014; Clementi et al. 2010).

Hagberg et al. (2016) proposed a new MEG model, the design of which was informed by their study of LANL centralized authentication system cyber data (Kent 2014; 2016). In particular, they observed that these sequences of graphs exhibit certain stable global properties, such as skewed degree distributions, while local dynamics such as individual vertex neighborhoods change rapidly. To capture these dynamics, they designed a temporal model that can be used to preserve certain random graph structure while affording tunable control over the rate of dynamics. We refer to their model as the HLM model. Ultimately, Hagberg et al. utilized the HLM model to study temporal reachability; that is, the expected time (number of evolutions or transitions) before a constant fraction of the vertices are reachable from an arbitrary vertex. We note that although the HLM model was developed to capture abstract dynamics exhibited by cyber data, it need not be limited to simulating cyber phenomena.
Although the experiment that follows is driven by cyber-security structures and data, there is no a priori reason that a similar experiment could not be applied across the variety of domains for which the evolving nature of the HLM model is appropriate, such as communication networks, social networks, and (on a much slower time scale) transportation networks. In the remainder of this section, we study the sensitivity of RH distance in detecting several planted attack profiles, utilizing the HLM model to simulate the natural time evolution of a generic cyber network graph topology. Before describing our experimental methodology, we begin by defining and briefly discussing the HLM model.

Hagberg-Lemons-Mishra (HLM) model

As the HLM model can be viewed as a time-evolving generalization of the Chung-Lu model \(\mathcal{G}(w)\), we will first briefly review the Chung-Lu model as introduced in Chung and Lu (2002); Chung and Lu (2004). The parameterization vector of the Chung-Lu model, w, is n-dimensional, where n is the number of vertices in the graph. Additionally, the vector w satisfies \(w_{v} \leq \sqrt{\rho}\) for all v, where \(\rho = \sum_{i=1}^{n} w_{i}\). From the parameter w the Chung-Lu model is generated by including each edge {u,v}, independently, with probability \(w_{u}w_{v}/\rho\). For an overview of many of the known properties of the Chung-Lu model, see the recent monograph (Chung and Lu 2006).

The HLM model generates an infinite sequence of graphs \(G_{0},G_{1},G_{2},\dots\) with the property that there is a fixed vector w such that for all i, \(G_{i} \overset{\mathcal{D}}{=} \mathcal{G}(w)\), where \(\overset{\mathcal{D}}{=}\) denotes equality in distribution. In order to generate this sequence an additional parameter, α, is introduced to tune the extent to which graph \(G_{i+1}\) is controlled by \(G_{i}\). Specifically, \(\alpha\in[0,1]^{n}\) and \(G_{i+1}\) is formed from \(G_{i}\) by generating a masking set M where each pair {u,v} is in M independently with probability \(\sqrt{\alpha_{u}\alpha_{v}}\). For an edge {u,v}∉M, \(\{u,v\}\in G_{i+1}\) if and only if \(\{u,v\}\in G_{i}\), while each potential edge {u,v} in M is present independently with probability \(w_{u}w_{v}/\rho\). In summary, we have that
$$\mathbb{P}(\{u,v\} \in G_{i+1} \mid G_{i}) = \left\{\begin{array}{ll} 1 - \sqrt{\alpha_{u}\alpha_{v}} + \sqrt{\alpha_{u}\alpha_{v}}\, w_{u}w_{v}/\rho & \{u,v\} \in G_{i} \\ \sqrt{\alpha_{u}\alpha_{v}}\, w_{u}w_{v}/\rho & \{u,v\} \not\in G_{i}.\end{array}\right.$$
The fact that \(G_{i+1} \overset{\mathcal{D}}{=} G_{i}\) follows by induction and the observation that
$$\frac{w_{u}w_{v}}{\rho}\left(1 - \sqrt{\alpha_{u}\alpha_{v}} + \sqrt{\alpha_{u}\alpha_{v}}\,\frac{w_{u}w_{v}}{\rho}\right) + \left(1-\frac{w_{u}w_{v}}{\rho}\right)\sqrt{\alpha_{u}\alpha_{v}}\,\frac{w_{u}w_{v}}{\rho} = \frac{w_{u}w_{v}}{\rho}.$$
We note that there is a natural, trivial generalization of the HLM model where the edge probability \(w_{u}w_{v}/\rho\) is replaced with arbitrary values \(p_{uv}\in[0,1]\). In this case, at each time step the network is distributed over graphs like \(\mathcal{G}(P)\), the generic independent edge graph model with parameter P. Similarly, the evolution parameter α can be generalized to a symmetric matrix \(A\in[0,1]^{n\times n}\). We note that several well studied models fall into this framework, including the stochastic block model, stochastic Kronecker graphs (Leskovec et al. 2005; Mahdian and Xu 2007), random dot product graphs (Young 2008; Young and Scheinerman 2008; 2007), and the inhomogeneous random graph model (Bollobás et al. 2007; Söderberg 2002).
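To make the transition rule concrete, the following is a small, dense (O(n²)) sketch of Chung-Lu sampling and one HLM evolution step with a uniform evolution parameter α, as used in the experiments below. It is an illustration only, not optimized for the 5000-vertex graphs considered later; the capping of edge probabilities at one follows Footnote 5, and the example weights are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def chung_lu(w):
    """Sample a Chung-Lu graph: edge {u,v} is present w.p. min(1, w_u w_v / rho)."""
    w = np.asarray(w, dtype=float)
    P = np.minimum(1.0, np.outer(w, w) / w.sum())
    A = np.triu(rng.random(P.shape) < P, k=1)        # upper triangle, no loops
    return A | A.T                                   # symmetric boolean adjacency

def hlm_step(A, w, alpha):
    """One HLM transition with a uniform evolution parameter alpha: each vertex
    pair is re-randomized independently w.p. alpha (the masking set M), and
    masked pairs are resampled from the Chung-Lu edge probabilities."""
    w = np.asarray(w, dtype=float)
    P = np.minimum(1.0, np.outer(w, w) / w.sum())
    mask = rng.random(A.shape) < alpha               # pairs in M
    resample = rng.random(A.shape) < P               # fresh Chung-Lu draw
    B = np.triu(np.where(mask, resample, A), k=1)
    return B | B.T

# Tiny example: 200 vertices with heavy-tailed expected degrees
w = rng.pareto(2.5, size=200) + 1.0
G0 = chung_lu(w)
G1 = hlm_step(G0, w, alpha=0.1)
print(G0.sum() // 2, G1.sum() // 2)                  # edge counts before/after
```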
In order to maintain consistent notation, we will specify all of the experiments in this work in terms of this generalized HLM model even though most of the generative matrices P come from the Chung-Lu model. Further, with the aim of having the minimum number of free parameters, we will only consider HLM evolutions where \(A_{uv}=A_{xy}\) for all u≠v and x≠y. We will further slightly abuse notation and refer to this common value as α. Finally, we note that this generalized framework can be further expanded by allowing the parameter matrix P to depend on the time step t. In particular, we have
$$\mathbb{P}(\{u,v\} \in G_{t+1} \mid G_{t}) = \left\{\begin{array}{ll} (1-\alpha) + \alpha p^{(t+1)}_{uv} & \{u,v\} \in G_{t} \\ \alpha p^{(t+1)}_{uv} & \{u,v\} \notin G_{t}.\end{array}\right.$$
It is worth mentioning that in this case \(G_{t+1}\) is not distributed like \(\mathcal{G}(P^{(t+1)})\) because of the possibility of edges being present from earlier timesteps. In fact, it is an easy exercise to show that the edges of \(G_{t}\) are distributed according to \((1-\alpha)^{t}P^{(0)} + \sum_{i=1}^{t} \alpha (1-\alpha)^{t-i}P^{(i)}.\)

Experimental setup

In our experimental setup, in keeping with the lightweight nature of the RH calculation, we focus on the detection of small anomalies in extremely sparse graphs, such as we observed in small time windows for the LANL data set and other proprietary network flow data. For the sparse graphs we consider two different fixed degree distributions. Both of these degree distributions are formed by choosing 5000 samples from some fixed probability distribution. For the first degree distribution, we estimate a degree density function using a smoothed median estimator from a selection of one minute graphs in the LANL network flow data set; see Fig. 3a. The resulting degree density function gives a "power-law"-like degree distribution with exponent approximately 3.5. Although the resulting degree density function is not truly a power-law distribution, we will abuse notation and refer to it as a "power-law" degree distribution. For a discussion of the difficulties and appropriateness of the power-law degree distribution for real data, the interested reader is referred to the recent work (Broido and Clauset 2018). The resulting distribution has 4742 edges in expectation as well as maximum expected degree 961.

The second distribution represents what we call a "bump power-law," that is, a power-law distribution coupled with an approximately binomially distributed "bump" at higher degrees. This can be thought of as a more hub-and-spoke style network where the degrees of the spoke vertices are approximately power-law distributed while the degrees of the hub vertices are approximately binomially distributed. For the bump power-law distribution, the degree probabilities were explicitly estimated from a collection of several thousand graphs generated from a proprietary enterprise boundary network flow data set (see Fig. 3b). The resulting distribution has 6067 edges in expectation as well as maximum expected degree 327.Footnote 5

Fig. 3 log–log Degree Distribution. a Power-Law. b Bump Power-Law

We will also consider two different styles of anomalies involving between 10 and 50 edges. The first anomaly involves three randomly chosen vertices adding some number of edges to the rest of the network uniformly at random. We view this as behavior consistent with a probe or scan of the network structure.
For the second anomaly, a random collection of vertices is chosen and a random spanning tree is added among those vertices. We view this as behavior consistent with a lateral movement scenario where an attacker is exploring the network by moving from machine to machine. They may backtrack and try different routes (thus a tree rather than just a path) as needed. For each of these 420 scenarios (two different degree distributions, two different anomaly types, five different anomaly sizes, and 21 different values of α) we produce 1000 different pairs of graphs (G,G′) where G is a random instance of the Chung-Lu model with the chosen degree distribution and G′ is formed from G by performing one step of the HLM evolution with the chosen parameter α, and then adding a random instance of the chosen anomaly of the chosen size. In this way, the anomaly occurs concurrently with the natural evolution of the network, as might be typical of real-world data. For each of our 420 scenarios, and 1000 pairs of graphs within the scenario, we compute the RH distance between G and G′ to get a distribution of RH distances for the anomalous transition.

Anomalous versus nominal relative Hausdorff distance

In this section we consider whether the anomalous transitions in the HLM model result in a different distribution of RH distances than a nominal transition. To this end, for each degree distribution and choice of α, we simulate 10,000 different HLM transitions to develop a baseline distribution of RH distances; see Fig. 4. For each of the 420 anomalous scenarios we calculate the 2-sample Kolmogorov-Smirnov p-value (Young 1977) between the previously calculated anomalous distribution and this baseline distribution. For each of the 420 different anomaly scenarios the KS test significance value is less than 0.01, indicating that we can reject the null hypothesis that the distribution of RH distances for an anomalous HLM transition is the same as the distribution for non-anomalous transitions. In particular, this means that in a statistical sense the RH distance is able to pick up on anomalous evolution of the degree distribution, even when the anomaly only consists of 10 edges.Footnote 6

Fig. 4 Distribution of RH distance under HLM evolution. a Power-Law. b Bump Power-Law

In the next subsection we will consider the effectiveness of the RH distance in detecting anomalous behavior directly, rather than statistically.

Anomaly detection

In this section we consider how RH distance could be used to detect anomalous behavior in a streaming environment and compare its performance with that of a similarly lightweight measure (KS distance) as well as an "ideal" measure (graph edit distance). To compare these three methods in a non-parametric way (i.e. without introducing an "anomaly threshold"), we introduce the idea of an anomaly score of an observation with respect to a theoretically or empirically observed baseline distribution. Specifically, let the random variable Z have theoretical or empirical cumulative distribution function \(f_{Z} \colon \mathbb{R} \rightarrow [0,1]\). We will then say that a particular observation z (not necessarily distributed as Z) has an anomaly score relative to \(f_{Z}\) of \(2\left|f_{Z}(z) - \frac{1}{2}\right|\). Note that this score takes on values from [0,1] with values closer to one being more "anomalous." This score can be thought of as measuring the deviation of the observation z from the bulk of the distribution of Z.
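A minimal implementation of this score against an empirical baseline might look as follows; the baseline values below are synthetic stand-ins, not RH distances from our experiments.

```python
import numpy as np

def anomaly_score(z, baseline):
    """Anomaly score 2*|F_Z(z) - 1/2| of an observation `z` relative to the
    empirical CDF of the 1-D array `baseline`; values lie in [0, 1]."""
    baseline = np.sort(np.asarray(baseline, dtype=float))
    F = np.searchsorted(baseline, z, side="right") / baseline.size
    return 2.0 * abs(F - 0.5)

# Illustration with a synthetic stand-in for a nominal baseline distribution
rng = np.random.default_rng(1)
nominal = rng.normal(0.3, 0.05, size=10_000)
print(anomaly_score(0.31, nominal))   # near the bulk of the baseline: close to 0
print(anomaly_score(0.55, nominal))   # far into the tail: close to 1
```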
Before turning to a direct comparison between anomaly scores for RH distance, KS p-valuesFootnote 7, and edit distance, we consider the performance of each of these anomaly scores in isolation via ROC-like curves, presented for a subset of our 20 scenarios (2 distributions, 2 anomaly types, 5 levels of each anomaly) in Figs. 5, 6, 7, and 8. Note that as the relative frequency of anomalous and non-anomalous behavior is unknown, these are not truly ROC curves but rather implicit plots (x(t),y(t)) where t is some threshold value. Specifically, y(t) is the fraction of the anomalous transitions that have anomaly score at least t, i.e. "true positives", while x(t) is the fraction of non-anomalous transitions that have anomaly score at least t, i.e. "false positives". At this point it is worth pointing out that if z is identically distributed with the random variable Z, then the anomaly score for z is uniformly distributed over [0,1]. As a consequence, we can explicitly define x(t)=1−t. To compute the ROC curve for one scenario we used the previously computed cumulative distribution function for the 10,000 non-anomalous transitions as fZ. Then, for each of the 1000 anomalous transitions we use the RH distance as z and compute the anomaly score for that value in the context of fZ.

Fig. 5 Power-Law Distribution, Scan, 10 edges. a Kolmogorov-Smirnov. b Relative Hausdorff. c Edit

Fig. 6 Power-Law, Lateral Movement, 50 edges. a Kolmogorov-Smirnov. b Relative Hausdorff. c Edit

Fig. 7 Bump Power-Law, Lateral Movement, 10 edges. a Kolmogorov-Smirnov. b Relative Hausdorff. c Edit

Fig. 8 Bump Power-Law, Scan, 50 edges. a Kolmogorov-Smirnov. b Relative Hausdorff. c Edit

Overall we can see that, for detecting anomalies, edit distance would be preferred to RH distance, which would in turn be preferred to the KS statistic. However, for the bump power-law, under the lateral movement anomaly with 10 edges, we see that the RH distance outperforms the edit distance; see Fig. 7.

Kolmogorov-Smirnov (KS)

For this section we will compare the anomaly score of the RH distance with the anomaly score of the KS p-value (significance value) between successive degree distributions, both for the 420 anomalous scenarios and the 40 baseline distributions. We note that since the degree distributions are discrete valued, the application of the KS test for hypothesis testing is not necessarily appropriate; however, as we are interested in the statistical behavior of the significance test and KS is widely used in the network analysis literature (see, for instance, Aliakbary et al. 2014; Broido and Clauset 2018; Simpson et al. 2015), we will ignore these technical issues. Figure 9 gives the relative performance of the KS and RH anomaly scores across all 420 anomaly scenarios. The y value counts in how many of the 1000 cases RH outperforms KS. We can see that the RH distance outperforms KS the most for the scenario where there is a 50-edge scan anomaly on the power-law distribution with an evolution rate of 0.23. In this case, the RH anomaly score is larger in 924 of the 1000 different trials. We see that overall, excepting cases with a low evolution rate and larger lateral movement anomalies, RH distance is clearly superior, especially for the power-law degree distribution. It is worth noting that the relative performance of the KS statistic improves when considering the lateral movement anomaly rather than the scan anomaly.
Since the degree change caused by lateral movement is spread across many vertices (as opposed to the scan, where the primary change is spread across only three vertices), this result can be explained by the well-known sensitivity of the KS test to variation away from the tails of the distribution (Simpson et al. 2015).

Fig. 9 Relative Performance of RH and KS Anomaly Scores. a RH versus KS. b RH versus Edit

We note that a direct binary comparison between the two measures may not tell the whole story of their relative performance. For instance, in an extreme case, one can imagine one of the two measures taking on a fixed large value (indicating an anomaly) while the other takes on both small values and, more frequently, a value that is slightly larger than the other anomaly score. We separate the anomalous pairs of graphs into two sets according to whether their RH or KS anomaly score is higher. Then, for each of these two classes we report in Fig. 10 (across all 420 anomaly scenarios) the mean difference between the scores with error bars representing one standard deviation of range around this mean. We note that the RH anomaly score typically exceeds the KS anomaly score by about 0.4, while the KS anomaly score typically exceeds the RH anomaly score by between 0.2 and 0.3. Further, across all cases the standard deviation of the gap between the RH and KS anomaly scores is fairly consistently in the range [0.2,0.3], essentially independent of all parameters. Together, the data in Figs. 9 and 10 indicate that the RH distance is significantly more sensitive than the KS distance to the broad range of anomalies we have investigated.

Fig. 10 Difference between KS anomaly scores and RH anomaly scores. a Power-Law Scan. b Bump Power-Law Scan. c Power-Law Lateral Movement. d Bump Power-Law Lateral Movement

Graph edit distance

In this section we compare the sensitivity of RH distance to a "perfect information" aggregate measure, in particular graph edit distance. Recall that the edit distance between two graphs F and G is the minimum "weight" of a sequence of edge/vertex additions/deletions needed to transform F to G. In general this quantity is \(\mathcal{NP}\)-complete to compute (see (Zeng et al. 2009)) and likely impractical to even approximate (Lin 1994). This complexity is driven by the difficulty in finding the optimal alignment between the vertices of F and G which maximizes the edge overlap between F and G. For the HLM model, this problem is mitigated by the natural alignment between the graphs generated at consecutive time steps. Thus, for the purposes of this section we will approximate the graph edit distance as the number of edges that "flip" during each evolution of the HLM model. The following lemma allows us to significantly simplify the calculation of the anomaly score for edit distance by approximating the baseline distribution with the large-n limit.

Lemma 1 Let G be a random graph distributed according to \(\mathcal{G}(P)\) and let G′ be the graph formed by one iteration of the Hagberg-Lemons-Mishra evolution with evolution parameter α and probability matrix P′. Let X be the random variable that counts the number of edges that differ between G and G′. If Var(X)→∞, then X is asymptotically normally distributed.

Proof Let \(X_{ij}\) be the indicator random variable for the event that edge {i,j} is present in precisely one of G′ and G, and observe that \(X = \sum_{i < j} X_{ij}\). We recall that by the Lyapunov Central Limit Theorem ((Billingsley 2008), p.
362), we have that
$$\frac{X - \mathbb{E}[X]}{\sqrt{\text{Var}(X)}} \overset{\mathcal{D}}{\longrightarrow} \mathcal{N}(0,1)$$
if there is some δ>0 such that
$$\lim_{n \rightarrow \infty} \frac{1}{\text{Var}(X)^{(2+\delta)/2}} \sum_{i < j} \mathbb{E}\left[\left|X_{ij} - \mathbb{E}[X_{ij}]\right|^{2+\delta}\right] = 0.$$
Fixing δ=1, we note that
$$\begin{aligned} \sum_{i < j} \mathbb{E}\left[|X_{ij} - \mathbb{E}[X_{ij}]|^{3}\right] &= \sum_{i < j} \mathbb{E}[X_{ij}] (1-\mathbb{E}[X_{ij}])^{3} + (1-\mathbb{E}[X_{ij}])\mathbb{E}[X_{ij}]^{3} \\ &= \sum_{i < j} \mathbb{E}[X_{ij}](1-\mathbb{E}[X_{ij}])\left(\mathbb{E}[X_{ij}]^{2} + (1-\mathbb{E}[X_{ij}])^{2}\right) \\ &\leq \sum_{i < j} \mathbb{E}[X_{ij}](1-\mathbb{E}[X_{ij}]) \\ &= \text{Var}(X). \end{aligned}$$
Thus, if Var(X)→∞, then
$$\lim_{n \rightarrow \infty} \frac{1}{\text{Var}(X)^{3/2}} \sum_{i < j} \mathbb{E}\left[|X_{ij} - \mathbb{E}[X_{ij}]|^{3}\right] = 0$$
and X is asymptotically normally distributed. □

It is worth mentioning that this, in principle, allows for an explicit formula for the distribution of the anomaly score for edit distance in a wide range of baseline and anomalous behaviors, namely
$$\mathbb{P}(S \leq s) = \Phi\left(\frac{\mu - \mu_{A}}{\sigma_{A}} + \frac{\sigma_{A}}{\sigma} \Phi^{-1}\left(\frac{s+1}{2}\right)\right) - \Phi\left(\frac{\mu - \mu_{A}}{\sigma_{A}} - \frac{\sigma_{A}}{\sigma} \Phi^{-1}\left(\frac{s+1}{2}\right)\right),$$
where (μ,σ) and (μA,σA) are the mean and standard deviation of the baseline and anomalous evolutions, respectively, and Φ is the cumulative distribution function of the standard normal distribution. However, given the correlated nature of the anomalies, the calculation of σA is tedious, so we will empirically estimate this distribution.

Figure 9b again presents the relative performance of the anomaly scores, this time for edit distance and RH distance, for all 420 anomaly trials. Again the y value counts in how many of the 1000 cases RH outperforms edit distance. We note that in the best case (bump power-law degree distribution, lateral movement anomaly, 10 edges, α=0.24), the RH distance anomaly score is larger than the edit distance anomaly score 646 times. However, for the power-law case the RH anomaly scores essentially never outperform the edit distance anomaly scores. This failure is mitigated by the fact that, as mentioned earlier, in many cases the edit distance is computationally infeasible, while the RH distance requires minimal computational overhead. It is also worth mentioning that we can see a clear degradation of performance for edit distance as the size of the anomaly decreases and the evolution rate increases. This phenomenon can be explained by observing that the anomaly score for edit distance is driven by a z-score of the anomaly, which is linearly correlated with the anomaly size and inversely correlated with the standard deviation of the baseline distribution. Additionally, the variance of the baseline distribution of edit distance is linearly related to the evolution rate, resulting in significantly decreased sensitivity at high evolution rates. We further compare the relative behavior of the edit distance anomaly scores and the RH distance anomaly scores, in the same way as we did for KS above, by considering the average difference between the anomaly scores in the cases where the RH anomaly score is larger (positive values) and in the cases where the edit distance anomaly score is larger (negative values).
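For completeness, a minimal sketch of the flipped-edge proxy for edit distance and of its anomaly score under the normal approximation justified by Lemma 1 is given below. The baseline parameters mu and sigma would be estimated from simulated non-anomalous transitions; the paper ultimately estimates the baseline distribution empirically, so this closed-form scoring is an approximation on our part.

```python
import numpy as np
from scipy.stats import norm

def flipped_edges(A, B):
    """Proxy for graph edit distance under the HLM model's natural vertex
    alignment: the number of vertex pairs whose edge state differs."""
    A = np.asarray(A, dtype=bool)
    B = np.asarray(B, dtype=bool)
    return int(np.triu(A ^ B, k=1).sum())

def edit_anomaly_score(x, mu, sigma):
    """Anomaly score of an observed flip count `x` against a baseline treated
    as normal with mean `mu` and standard deviation `sigma` (Lemma 1)."""
    return 2.0 * abs(norm.cdf(x, loc=mu, scale=sigma) - 0.5)
```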
As the RH distance anomaly score is essentially never larger than the edit distance anomaly score for the power-law distribution, we restrict our attention here to the bump power-law distribution. In Fig. 11, we again report the relative magnitude of the differences with the error bars representing an interval one standard deviation away from the mean. Again we can see a clear stratification of the behavior, with the RH anomaly scores performing better as the size of the anomaly decreases. We also note the mild improvement in the performance of RH distance as the evolution rate increases, likely reflecting the decreased sensitivity of edit distance (due to larger variance).

Fig. 11 Difference between edit distance and RH distance. a Bump Power-Law Scan. b Bump Power-Law Lateral Movement

Interestingly, the standard deviation is essentially constant over all choices of degree distribution, anomaly type, anomaly size, and evolution rate and is also roughly equal to the standard deviations shown in Fig. 10. Furthermore, the magnitude of the standard deviation is close to the maximal possible standard deviation given by the generalization of the Bhatia-Davis inequality for the variance of a bounded random variable (Agarwal et al. 2005). As the extremal distribution is given by point masses at the end points of the distribution, this indicates that there are three essentially distinct outcomes: the RH distance anomaly score is significantly larger than the edit distance anomaly score, the RH and edit distance anomaly scores are essentially the same, and the edit distance anomaly score is significantly larger than the RH distance anomaly score. Furthermore, this holds regardless of the size and nature of the anomaly or the evolution rate and also holds when replacing edit distance with KS distance (for both degree distributions).

In this work, we conducted an experimental and statistical study of Relative Hausdorff distance in the context of time-evolving sequences of graphs. Applying RH distance as an anomaly detection tool, we first tested its detection of red team events across multiple modalities in real cyber security data. We found evidence that RH distance values register statistical change-points at red team events, although these results were sensitive to window length and the granularity of pairwise RH measurements, and were subject to the limitations of the data. In order to test RH distance in a more controlled and rigorous manner, we then turned our attention to a temporal graph model inspired by cyber data. Using this temporal graph model to generate synthetic sequences of evolving graphs, we experimentally tested the sensitivity of RH distance to two attack profiles. To broaden the scope of our tests, we considered a multitude of parameter settings in which we varied the input degree distribution, temporal evolution rate, and intensity of the attack signal. In its own right, RH distance performed respectably, yielding ROC curves above the line of no-discrimination for every scenario tested. Compared with other similarity measures, RH distance consistently outperformed another lightweight similarity measure based on Kolmogorov-Smirnov distance, while its performance against the computationally intensive edit distance was more mixed: while edit distance clearly outperformed RH distance under scenarios featuring the power law degree distribution, RH distance was better able to detect the low-intensity lateral movement attack under the bump power law degree distribution.
Anomaly detection generally, and even specifically in cyber security, is not amenable to a "one method to rule them all" mentality. Indeed, there are many types of anomalies and one does not expect them all to be caught by the same detector. It is important to recognize that our analysis does not use all of the available information pertinent to real cyber data. Distilling a time interval of data down to a single graph and removing all metadata is likely to introduce many false positives. It could be that a graph is anomalous given the recent context, but the behavior is fully expected by cyber security operations analysts (e.g. a daily backup may appear to be an exfiltration if the IP addresses involved aren't considered). In the other direction, if the graph does not contain the metadata that would flag an anomaly, this may similarly introduce false negatives. By integrating metadata into our analysis, it is possible that, as anomalies are discovered, this metadata could be used to help classify them as benign or nefarious anomalies. Lastly, it is also worth noting that our analyses considered the entire data in a given period, as opposed to an online approach. Whether and how RH distance might be utilized in online anomaly detection frameworks remains another open topic for future research.

The data used in "Los Alamos National Laboratory (LANL) cybersecurity data" section is available at https://csr.lanl.gov/data/cyber1/. The proprietary enterprise boundary network flow data used to estimate the bump power-law distribution featured in "Simulated evolving networks" section is unavailable. The remaining data and code will be made available upon request to the corresponding author, subject to institutional approval.

Footnote 1: While others sometimes include merging and splitting edit operations, we restrict our attention to edit distance based on the three aforementioned operations.

Footnote 2: Note that we do not consider signature-based methods like those employed in intrusion detection/prevention systems (e.g., Snort) to be anomaly detection methods. Instead these are rule-based behavior identification tools.

Footnote 3: The prescribed number of clusters was chosen to coincide with the first observed gap in Laplacian eigenvalues, as in von Luxburg (2007).

Footnote 4: As a practical matter, for those entries (t,s,δ) where there is insufficient or no data to estimate \(\hat{v}^{(t)}_{s,\delta}\), the entry is dropped from the vector and ignored in the calculation of the temporal score.

Footnote 5: It is worth mentioning that both of these degree distributions violate the standard assumption for the Chung-Lu model, \(\max_{v} w_{v} \leq \sqrt{\rho}\). To deal with this, we replace the edge probabilities \(w_{u}w_{v}/\rho\) with \(\min\{1, w_{u}w_{v}/\rho\}\). However, as there are under 200 pairs {u,v} where \(w_{u}w_{v}>\rho\) for each of the degree distributions, this makes a minimal difference in the model.

Footnote 6: It is important to note that this is not always the case. For example, in an experiment that is not reported due to space limitations, we synthetically generated a degree distribution with a power-law exponent of 4 and average degree around 1.4; the anomalies resulted in a range of KS statistics, including several scenarios which were not statistically distinguishable.

Footnote 7: From this point on in this work, although we will be using the KS p-value, we will be treating it simply as a distance measure rather than a statistical quantity.
In particular, we will make no assumptions about the meaning of large or small values of the p-value other than as a means of measuring the "closeness" between two degree distributions. RH: Relative Hausdorff Kolmogorov-Smirnov GED: HLM: Hagberg-Lemons-Mishra MEG: Markovian evolving graph Agarwal, PK, Fox K, Nath A, Sidiropoulos A, Wang Y (2018) Computing the gromov-hausdorff distance for metric trees. ACM Trans Algoritm 14:1–20. Agarwal, R, Barnett NS, Cerone P, Dragomir SS (2005) A survey on some inequalities for expectation and variance. Comput Math Appl 49:429–480. MathSciNet MATH Article Google Scholar Aggarwal, CC, Zhao Y, Philip SY (2011) Outlier detection in graph streams. IEEE. https://doi.org/10.1109/icde.2011.5767885. Akoglu, L, Faloutsos C (2010) Event detection in time series of mobile communication graphs In: 27th Army science conference, 77–79, Orlando. Akoglu, L, Tong H, Koutra D (2014) Graph based anomaly detection and description: a survey. Data Min. Knowl. Discov. 29:626–688. Aksoy, S, Nowak K, Young S (2018) A linear-time algorithm and analysis of graph relative hausdorff distance. in preprint. 1906.04936. Aliakbary, S, Habibi J, Movaghar A (2014) Quantification and comparison of degree distributions in complex networks In: 7'th International Symposium on Telecommunications (IST'2014), 464–469.. IEEE. https://doi.org/10.1109/istel.2014.7000748. Avin, C, Koucký M, Lotker Z (2008) How to explore a fast-changing world (cover time of a simple random walk on evolving graphs) In: Automata, Languages and Programming, 121–132.. Springer, Berlin, Heidelberg. Banič, I, Taranenko A (2015) Measuring closeness of graphs—the hausdorff distance. Bull Malays Math Sci Soc 40:75–95. Billingsley, P (2008) Probability and measure. Wiley, Hoboken. Blondel, VD, Gajardo A, Heymans M, Senellart P, Dooren PV (2004) A measure of similarity between graph vertices: Applications to synonym extraction and web searching. SIAM Rev 46:647–666. Bollobás, B, Janson S, Riordan O (2007) The phase transition in inhomogeneous random graphs. Random Struct Algoritm 31:3–122. Broido, AD, Clauset A (2018) Scale-free networks are rare. arXiv preprint. arXiv:1801.03400. Chen, P, Choudhury S, Hero AO (2016) Multi-centrality graph spectral decompositions and their application to cyber intrusion detection In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4553–4557.. IEEE. https://doi.org/10.1109/icassp.2016.7472539. Choi, J (2019) Gromov-hausdorff distance between metric graphs. https://math.mit.edu/research/highschool/primes/materials/2018/Choi.pdf. Chung, F, Lu L (2002) The average distances in random graphs with given expected degrees. Proc Natl Acad Sci 99:15879–15882. Chung, F, Lu L (2004) The average distance in a random graph with given expected degrees. Internet Math 1:91–113. Chung, F, Lu L (2006) Complex graphs and networks, vol. 107 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC. Clementi, A, Silvestri R, Trevisan L (2014) Information spreading in dynamic graphs. Distrib Comput 28:55–73. Clementi, AEF, Macci C, Monti A, Pasquale F, Silvestri R (2010) Flooding time of edge-markovian evolving graphs. SIAM J Discrete Math 24:1694–1712. Edwards, DA (1975) The structure of superspace In: Studies in Topology, 121–133.. Elsevier. https://doi.org/10.1016/b978-0-12-663450-1.50017-7. 
Fernández, M-L, Valiente G (2001) A graph distance metric combining maximum common subgraph and minimum common supergraph. Pattern Recogn Lett 22:753–758. MATH Article Google Scholar François, J, Wang S, Engel T, et al. (2011) Bottrack: tracking botnets using netflow and pagerank In: International Conference on Research in Networking, 1–14.. Springer. Gao, X, Xiao B, Tao D, Li X (2009) A survey of graph edit distance. Pattern Anal Appl 13:113–129. Gaston, ME, Kraetzl M, Wallis WD (2006) Using graph diameter for change detection in dynamic networks. Australas J Comb 35:299–312. Gibbons, JD, Chakraborti S (2011) Nonparametric statistical inference. Springer, New York. Giuseppe, J, Roberto V, Cesare F (2011) An introduction to spectral distances in networks. Front Artif Intell Appl 226:227–234. Gove, R, Deason L (2018) Visualizing automatically detected periodic network activity In: Proceedings of the IEEE Symposium on Visualization for Cyber Security.. Center for Open Science. https://doi.org/10.31219/osf.io/xpwfe. Gromov, M (1981) Structures métriques pour les variétés riemanniennes. Textes Math Matiques Math Texts 1:iv+152. Hagberg, A, Lemons N, Misra S (2016) Temporal reachability in dynamic networks In: Dynamic Networks and Cyber-Security, WORLD SCIENTIFIC (EUROPE), 181–208.. WORLD SCIENTIFIC (EUROPE). https://doi.org/10.1142/9781786340757_0009. Hausdorff, F (1914) Grundzuge der Mengenlehre. Am Math Soc. Leipzig: Veit, ISBN 978-0-8284-0061-9 Reprinted by Chelsea in 1949. Holme, P, Saramäki J (2012) Temporal networks. Phys Rep 519:97–125. Hubballi, N, Goyal D (2013) Flowsummary: Summarizing network flows for communication periodicity detection In: International Conference on Pattern Recognition and Machine Intelligence, 695–700.. Springer. https://doi.org/10.1007/978-3-642-45062-4_98. Ishibashi, K, Kondoh T, Harada S, Mori T, Kawahara R, Asano S (2010) Detecting anomalous traffic using communication graphs In: Telecommunications: The Infrastructure for the 21st Century (WTC), 2010, 1–6.. VDE, Berlin. Kent, A (2014) Anonymized user-computer authentication associations in time, tech. report. Los Alamos National Lab.(LANL), Los Alamos. Kent, AD (2015) Comprehensive, Multi-Source Cyber-Security Events. Los Alamos National Laboratory, London. Kent, AD (2015) Cybersecurity Data Sources for Dynamic Network Research In: Dynamic Networks in Cybersecurity.. Imperial College Press. Kent, AD (2016) Cyber security data sources for dynamic network research In: Dynamic Networks and Cyber-Security, 37–65.. World Scientific, Singapore. Kleinberg, JM (1999) Authoritative sources in a hyperlinked environment. J. ACM 46:604–632. Lee, H, Chung MK, Kang H, Kim B-N, Lee DS (2011) Computing the shape of brain networks using graph filtration and gromov-hausdorff metric In: Lecture Notes in Computer Science, 302–309.. Springer, Berlin Heidelberg. Leskovec, J, Chakrabarti D, Kleinberg J, Faloutsos C (2005) Realistic, mathematically tractable graph generation and evolution, using kronecker multiplication In: Knowledge Discovery in Databases: PKDD 2005, 133–145.. Springer, Berlin Heidelberg. Lin, CL (1994) Hardness of approximating graph transformation problem In: Algorithms and Computation, 74–82.. Springer, Berlin Heidelberg. Mahdian, M, Xu Y (2007) Stochastic kronecker graphs In: International workshop on algorithms and models for the web-graph, 179–186.. Springer, Berlin Heidelberg. Marsaglia, G, Tsang WW, Wang J (2003) Evaluating kolmogorov's distribution. J. Stat. Softw. 8:1–4. 
Matulef, KM (2017) Final report: Sampling-based algorithms for estimating structure in big data. tech. report. Sandia National Laboratory, Livermore. Ng, AY, Jordan MI, Weiss Y (2002) On spectral clustering: Analysis and an algorithm In: NIPS'01 Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, 849–856.. MIT Press, Cambridge, MA. Noble, J, Adams N (2018) Real-time dynamic network anomaly detection. IEEE Intell. Syst. 33:5–18. Nowak, K, Marrero CO, Young SJOn the structure of isometrically embeddable metric spaces. arxiv:1808.10509. Price-Williams, M, Heard N, Turcotte M (2017) Detecting periodic subsequences in cyber security data In: 2017 European Intelligence and Security Informatics Conference (EISIC), 84–90.. IEEE. https://doi.org/10.1109/eisic.2017.40. Ranshous, S, Shen S, Koutra D, Harenberg S, Faloutsos C, Samatova NF (2015) Anomaly detection in dynamic networks: a survey. Wiley Interdiscip. Rev. Comput. Stat. 7:223–247. Sanfeliu, A, Fu K-S (1983) A distance measure between attributed relational graphs for pattern recognition In: IEEE Transactions on Systems, Man, and Cybernetics, 353–362.. SMC-13. https://doi.org/10.1109/tsmc.1983.6313167. Sapienza, A, Panisson A, Wu J, Gauvin L, Cattuto C (2015) Anomaly detection in temporal graph data: An iterative tensor decomposition and masking approach In: International Workshop on Advanced Analytics and Learning on Temporal Data.. AALTD 2015, New York. Söderberg, B (2002) General formalism for inhomogeneous random graphs. Phys Rev E 66. https://doi.org/10.1103/physreve.66.066121. Sensarma, D, Sarma SS (2015) A survey on different graph based anomaly detection techniques. Indian J Sci Technol 8. https://doi.org/10.17485/ijst/2015/v8i1/75197. Siegel, S, N.J.C. Jr (1988) Nonparametric Statistics for The Behavioral Sciences. McGraw-Hill Humanities/Social Sciences/Languages, New York. Simard, R, L'Ecuyer P (2011) Computing the two-sided kolmogorov-smirnov distribution. J Stat Softw 39. https://doi.org/10.18637/jss.v039.i11. Simpson, O, Seshadhri C, McGregor A (2015) Catching the head, tail, and everything in between: A streaming algorithm for the degree distribution In: 2015 IEEE International Conference on Data Mining.. IEEE. Stolman, A, Matulef K (2017) HyperHeadTail In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017 - ASONAM '17.. ACM Press. https://doi.org/10.1145/3110025.3119395. Tong, H, Lin C-Y (2011) Non-negative residual matrix factorization with application to graph anomaly detection In: Proceedings of the 2011 SIAM International Conference on Data Mining, Society for Industrial and Applied Mathematics.. Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9781611972818.13. von Luxburg, U (2007) A tutorial on spectral clustering. Stat Comput 17:395–416. Wang, J, Paschalidis I. C. (2017) Botnet detection based on anomaly and community detection. IEEE Trans Control Netw Syst 4:392–404. Young, IT (1977) Proof without prejudice: use of the kolmogorov-smirnov test for the analysis of histograms from flow systems and other sources. J Histochem Cytochem 25:935–941. Young, SJ (2008) Random dot product graphs: a flexible model for complex networks. PhD thesis. Georgia Institute of Technology. Young, SJ, Scheinerman E (2008) Directed random dot product graphs. Internet Math 5:91–111. 
Young, SJ, Scheinerman ER (2007) Random dot product graph models for social networks In: Algorithms and, Models for the Web-Graph, 138–149.. Springer, Berlin Heidelberg. Zeng, Z, Tung AK, Wang J, Feng J, Zhou L (2009) Comparing stars: On approximating graph edit distance. Proc VLDB Endowment 2:25–36. The authors would like to thank our colleague Carlos Ortiz-Marrero for reviewing the manuscript. PNNL is operated by Battelle for the United States Department of Energy under Contract DE-AC05-76RL01830. The design of the study and all data analysis and interpretation was done under the direction of the authors of this paper. Pacific Northwest National Laboratory, Richland, 99352, WA, United States Sinan G. Aksoy, Kathleen E. Nowak & Stephen J. Young Pacific Northwest National Laboratory, Seattle, 98109, WA, United States Emilie Purvine Sinan G. Aksoy Kathleen E. Nowak Stephen J. Young EHP and SGA conducted the literature review, KEN processed the LANL data, SGA and SJY implemented the experiments in "Los Alamos National Laboratory (LANL) cybersecurity data" section, SJY implemented the experiments in "Simulated evolving networks" section and proved Lemma 1. All authors jointly conceived and wrote the paper, designed and analyzed the experiments, and approved the final manuscript. At the preference of all authors, the authors are listed in alphabetical order. All authors read and approved the final manuscript. Correspondence to Sinan G. Aksoy. The authors have no competing interests to declare. Aksoy, S.G., Nowak, K.E., Purvine, E. et al. Relative Hausdorff distance for network analysis. Appl Netw Sci 4, 80 (2019). https://doi.org/10.1007/s41109-019-0198-0 Graph similarity measure Cyber anomaly detection Temporal graphs Machine Learning with Graphs
CommonCrawl
On a double penalized Smectic-A model
Blanca Climent-Ezquerra and Francisco Guillén-González, Dpto. Ecuaciones Diferenciales y Análisis Numérico, Universidad de Sevilla, Aptdo. 1160, 41080 Sevilla
Discrete & Continuous Dynamical Systems - A, December 2012, 32(12): 4171-4182. doi: 10.3934/dcds.2012.32.4171. Received September 2011; Published August 2012.
In smectic-A liquid crystals, a unit director vector $\boldsymbol{n}$ appears, modeling an average preferential direction of the molecules and also the normal vector of the layer configuration. In E's model [5], the Ginzburg-Landau penalization related to the constraint $|\boldsymbol{n}|=1$ is considered and, assuming the constraint $\nabla\times \boldsymbol{n}=0$, $\boldsymbol{n}$ is replaced by the so-called layer variable $\varphi$ such that $\boldsymbol{n}=\nabla\varphi$. In this paper, a double penalized problem related to smectic-A liquid crystal flows is introduced, considering a Cahn-Hilliard system to model the behavior of $\boldsymbol{n}$. Then, the global-in-time behavior of solutions is addressed, including the proof of the convergence of the whole trajectory towards a unique equilibrium state.
Keywords: Smectic-A liquid crystals, Cahn-Hilliard system, Navier-Stokes equations, coupled non-linear parabolic system, convergence to equilibrium.
Mathematics Subject Classification: Primary: 76A15; Secondary: 35A35, 35Q35, 35K30, 76D05, 76A10, 76D0.
Citation: Blanca Climent-Ezquerra, Francisco Guillén-González. On a double penalized Smectic-A model. Discrete & Continuous Dynamical Systems - A, 2012, 32(12): 4171-4182. doi: 10.3934/dcds.2012.32.4171
F. Bethuel, H. Brezis and F. Hélein, Asymptotics for the minimization of a Ginzburg-Landau functional, Calc. Var. Partial Differential Equations, 1 (1993), 123. doi: 10.1007/BF01191614.
B. Climent-Ezquerra, F. Guillén-González and M. J. Moreno-Iraberte, Regularity and time-periodicity for a nematic liquid crystal model, Nonlinear Analysis, 71 (2009), 539.
B. Climent-Ezquerra, F. Guillén-González and M. A. Rodríguez-Bellido, Stability for nematic liquid crystals with stretching terms, International Journal of Bifurcation and Chaos, 20 (2010), 2937. doi: 10.1142/S0218127410027477.
B. Climent-Ezquerra and F. Guillén-González, Global in time solutions and time-periodicity for a Smectic-A liquid crystal model, Communications on Pure and Applied Analysis, 9 (2010), 1473. doi: 10.3934/cpaa.2010.9.1473.
W. E, Nonlinear continuum theory of smectic-A liquid crystals, Arch. Rat. Mech. Anal., 137 (1997), 159. doi: 10.1007/s002050050026.
M. Grasselli and H. Wu, Long-time behavior for a nematic liquid crystal model with asymptotic stabilizing boundary condition and external force, preprint.
F. H. Lin and C. Liu, Non-parabolic dissipative systems modelling the flow of liquid crystals, Comm. Pure Appl. Math., 48 (1995), 501. doi: 10.1002/cpa.3160480503.
C. Liu, Dynamic theory for incompressible smectic liquid crystals: Existence and regularity, Discrete and Continuous Dynamical Systems, 6 (2000), 591. doi: 10.3934/dcds.2000.6.591.
A. Segatti and H. Wu, Finite dimensional reduction and convergence to equilibrium for incompressible Smectic-A liquid crystal flows, preprint.
H. Wu, Long-time behavior for nonlinear hydrodynamic system modeling the nematic liquid crystal flows, Discrete and Continuous Dynamical Systems, 26 (2010), 379. doi: 10.3934/dcds.2010.26.379.
S. Zheng, "Nonlinear Evolution Equations," Chapman & Hall/CRC Monographs and Surveys in Pure and Applied Mathematics, 133 (2004).
CommonCrawl
Highly responsive tellurium-hyperdoped black silicon photodiode with single-crystalline and uniform surface microstructure
Zixi Jia,1 Qiang Wu,1,* Xiaorong Jin,1 Song Huang,1 Jinze Li,1 Ming Yang,2 Hui Huang,3 Jianghong Yao,1,4 and Jingjun Xu1
1Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, TEDA Institute of Applied Physics and School of Physics, Nankai University, Tianjin 300457, China
2National Key Laboratory of Science and Technology on Power Sources, Tianjin Institute of Power Sources, Tianjin 300384, China
3Kunming Institute of Physics, Yunnan 650223, China
[email protected]
*Corresponding author: [email protected]
Zixi Jia, Qiang Wu, Xiaorong Jin, Song Huang, Jinze Li, Ming Yang, Hui Huang, Jianghong Yao, and Jingjun Xu, "Highly responsive tellurium-hyperdoped black silicon photodiode with single-crystalline and uniform surface microstructure," Opt. Express 28, 5239-5247 (2020)
Original Manuscript: December 17, 2019; Revised Manuscript: January 22, 2020; Manuscript Accepted: January 26, 2020
Femtosecond laser hyperdoped silicon, also known as black silicon (BS), has a large number of defects and damages, which results in unstable and undesirable optical and electronic properties in photonics platforms and optoelectronic integrated circuits (OEICs). We propose a novel method that elevates the substrate temperature during the femtosecond laser irradiation and fabricates tellurium (Te) hyperdoped BS photodiodes with high responsivity and low dark current. At 700 K, uniform microstructures with single-crystalline quality were formed in the hyperdoped layer. The velocity of cooling and resolidification is considered to play an important role in the formation of a high-quality crystal after irradiation by the femtosecond laser. Because of the high crystallinity and the Te hyperdoping, a photodiode made from BS processed at 700 K has a maximum responsivity of 120.6 A/W at 1120 nm, which is far beyond previously reported Te-doped silicon photodetectors. In particular, the responsivity of the BS photodiode at 1300 nm and 1550 nm is 43.9 mA/W and 56.8 mA/W with low noise, respectively, which is valuable for optical communication and interconnection. Our results prove that hyperdoping at a high substrate temperature has great potential for femtosecond-laser-induced semiconductor modification, especially for the fabrication of photodetectors in silicon-based photonic integration circuits.
The silicon photodetector is one of the most critical elements in a silicon photonics platform as an integrated optical receiver due to its low cost, low noise, and great compatibility with the current complementary metal-oxide-semiconductor (CMOS) fabrication processes [1–3]. It is well known that long wavelengths are used for information transmission in silicon photonic systems in optical communication and optical interconnection, particularly at 1300 nm and 1550 nm. However, due to its 1.12-eV bandgap, silicon is unable to respond in the characteristic near-infrared (IR) communication bands. To achieve silicon-based IR detection, narrow-gap semiconductors, such as Ge, InGaAs, and PbS, are usually embedded into the Si platform either through epitaxial growth or via bonding methods. This will, however, increase the processing complexity and compromise the CMOS compatibility [4]. Intriguingly, hyperdoping is an effective method to extend the absorption range of silicon. Many recent studies have reported that femtosecond laser hyperdoping can broaden the dopant energy level of the hyperdoped silicon (BS) into an intermediate band [5,6]. Then, two sub-bandgap IR photons can be used to excite an electron from the valence band to the conduction band [7,8]. Given its broadband absorption capability and low cost, hyperdoped silicon is of prime interest for silicon-based detectors [4,9,10] for security and telecommunication, and for highly-efficient photovoltaic cells that harvest additional energy from the sub-bandgap spectrum [11–13]. Despite the excellent IR absorption, the application of hyperdoped silicon in silicon-based detection still faces some problems. Inevitably, the crystal lattice is severely damaged by the high concentration of undesirable defects incorporated into the silicon [14–16]. These defects induce a high density of recombination centers and the resultant internal leakage, which greatly reduce the responsivity and increase the dark current of the device. To date, several studies have been devoted to improving the crystallinity of hyperdoped silicon. The approaches included femtosecond laser irradiation followed by thermal annealing [17], ion implantation followed by proper annealing [18,19], and the co-doping of supersaturated nitrogen and sulfur [20]. Nevertheless, defects and dislocations still formed in the sample, thereby hindering industrialization. In this work, we report an optimized method that elevates the silicon substrate temperature during the femtosecond laser irradiation and manufactures highly responsive Te-hyperdoped BS photodiodes. When the temperature increases, a uniform surface microstructure with an excellent single-crystal hyperdoped layer is formed on the sample surface, which helps to enhance the photoresponse and the stability. The n-n$^+$ BS photodiode prepared at 700 K through annealing has a responsivity of 120.6 A/W at 1120 nm, which is the highest reported value for a Te-doped silicon photodiode, to the best of our knowledge. Particularly, the sub-bandgap responsivity of the BS photodiode is 43.9 mA/W and 56.8 mA/W at 1300 nm and 1550 nm with low noise, respectively, which is valuable for optical communication and interconnection. Our research provides new ideas for the development of silicon-based detectors. 2. 
Experimental section A Te film with a thickness of 50 nm was thermally evaporated onto a 400-${\upmu }$m-thick high-resistivity (3-5 k$\Omega \cdot$cm) n-type silicon (100) wafer after cleaning by the RCA standard method. The Te-coated silicon wafers were mounted on a three-axis translation stage controlled by a computer in a vacuum chamber. A heating device was installed in the vacuum chamber to control the temperature of the substrate silicon wafer at 300 K, 500 K, and 700 K. 700 K was the highest temperature tested because of the melting point of Te (T$_{m}$=725 K). Heated wafers were processed in a nitrogen (N$_{2}$) atmosphere by a Ti:sapphire femtosecond laser (central wavelength: 800 nm; pulse duration: 120 fs; repetition rate: 1 kHz) doping system at a fluence of 1.3 kJ/m$^{2}$. The laser pulses were focused by a lens with a focal length of 50 cm at normal incidence on the sample. The femtosecond laser spot moved on the sample surface at a scan speed of 1 mm/s and a line spacing of 50 ${\upmu }$m. The laser processing area of each sample was a square of 8 mm $\times$ 8 mm. The spatial profile of the laser pulse was nearly Gaussian with a laser spot diameter around 164 ${\upmu }$m on the sample surface. A half-wave plate (HWP) and a Glan-Taylor polarizer (GTP) were used to continuously vary the incident energy. The polarization of the laser pulse was controlled by the GTP. Each spot on the sample surface was exposed to 200 laser pulses. Te-hyperdoped silicon photodiodes were fabricated with these BS samples. Rapid thermal annealing at 773 K for 10 minutes in a nitrogen flow was used to activate the hyperdoped carriers. Aluminum films were thermally evaporated on both surfaces of the sample to make electrodes for the ohmic contact. The morphology of the irradiated surface microstructures was obtained with field emission scanning electron microscopy (SEM) and atomic force microscopy (AFM). The crystalline form was analyzed by high-resolution transmission electron microscopy (HRTEM) and selected area electron diffraction (SAED) at Shanghai Doesun Energy Technology Co., Ltd. Secondary ion mass spectrometry (SIMS) was performed at Eurofins Scientific (Shanghai) Co., Ltd. to characterize the Te concentration profiles. The responsivity of the device was measured using a 250 W tungsten halogen lamp, and the light passed through a grating monochromator with a spectral resolution of 0.2 nm, a high-pass filter, and a chopper connected to a lock-in amplifier (Stanford SR830). The light was finally focused on the surface of the device with a size smaller than the active area of the device. The applied reverse bias voltage was 5 V before the annealing process and 2 V after it. We used a substitution method to determine the responsivity. Two calibrated commercial photodetectors with different response wavelength ranges from Thorlabs Inc. were used for comparison with the samples. One was a silicon photodetector (DET36A/M) (350 nm - 1100 nm) and the other was a germanium photodetector (DET50B/M) (700 nm - 1800 nm). The actual responsivity was calibrated by Thorlabs, Inc. The current-voltage (I-V) characteristics of the BS photodiodes were acquired with a Keithley 2410 source meter. Homogeneous microstructures are formed after irradiation by the femtosecond laser, as presented in Figs. 1(a)–1(c). Macroscopically, a uniform surface microstructure generally indicates a uniform surface hyperdoping, which creates a large n-n$^{+}$ junction area with a high quality. 
This leads to a uniform spectral absorption and responsivity of the BS photodiode. At 300 K and 500 K, stripes and droplets are observed in the microstructure at the edge and at the center of the laser spot, respectively. At 700 K, the stripes disappear. When the sample is irradiated by the femtosecond laser, ablation occurs when the surface temperature exceeds a critical value [21]. At low substrate temperatures like 300 K and 500 K, a higher laser fluence is required to reach the critical temperature for ablation to occur. At the center of the laser spot, droplets form because the fluence is above the ablation threshold. At the edge of the laser spot, stripes form where the fluence is near the ablation threshold. However, for a high substrate temperature (700 K), the decrease of the ablation threshold [22,23] brings the temperature over the critical threshold in the entire irradiated area, which makes the stripes disappear. Additionally, the depth of the different structures in Figs. 1(a)–1(c) can be analyzed statistically with AFM measurements, as shown in Fig. 1(d). When the substrate temperature increases, the distribution of the depth of the microstructures formed is narrower and the surface microstructures are more homogeneous. Previously, we established that modifications of the surface morphology induced by the femtosecond laser irradiation are the result of the competition between periodic surface structuring stimulated by the interference of the incident light with the surface plasmon polaritons (SPPs) and surface smoothing stimulated by the melting of the surface [24]. At elevated temperatures, surface smoothing dominates in the competition mechanism so that homogeneous surface microstructures are formed. Related studies have been carried out for various materials [25–28]. Fig. 1. (a)–(c) SEM images of silicon surfaces irradiated with the laser fluence of 1.3 kJ/m$^{2}$ at various substrate temperatures: (a) 300 K, (b) 500 K and (c) 700 K. The black arrow indicates the direction of the laser polarization. The yellow arrow represents the direction of the movement of the laser spot on the sample surface. (d) Box chart of the depth statistics of the structures in (a)–(c) obtained from the AFM measurements. A high-quality crystalline structure is desirable since it greatly reduces the density of recombination centers, which in turn improves the responsivity of the photodetectors. The high crystal quality of the samples obtained with a high substrate temperature was confirmed through TEM, HRTEM, and SAED, as summarized in Fig. 2. Cross-sections of a single droplet on the Te-hyperdoped silicon surface at 300 K, 500 K, and 700 K were analyzed. More specifically, bubbles with a diameter around 200 nm are formed at 300 K inside the core of the droplet and many potholes are formed on the surface of the droplets (Fig. 2(a)) as a result of the phase explosion [29] and resolidification after irradiation by the femtosecond laser. In addition, the highly disordered lattice and the blurred boundaries of the surface potholes are shown in Fig. 2(b). Meanwhile, the SAED pattern in Fig. 2(c) also indicates poor crystallinity. When the substrate temperature increases, the size of the bubbles and the potholes decreases and the crystal quality improves, as shown in Figs. 2(d)–2(i). Particularly, the surface profile of the droplet on the substrate kept at 700 K is smooth and the inside bubbles become smaller. Few potholes are formed on the surface of the droplet, which shows a clear surface boundary. 
Especially, in Figs. 2(h) and 2(i), perfect single crystals remain even after the extreme processing by the femtosecond laser. Fig. 2. Overview cross-sectional TEM images of a single droplet on the hyperdoped silicon surface with bubbles (blue dashed circle), high-resolution TEM images of a pothole on the droplet boundary (red square), and the SAED pattern of the surface of the Te-hyperdoped silicon at different substrate temperatures: (a)–(c) 300 K, (d)–(f) 500 K, and (g)–(i) 700 K. Our results indicate that hyperdoping with the femtosecond laser at an elevated temperature has a positive impact on the crystallinity due to the slower resolidification. After the femtosecond laser irradiation, the material is rapidly thermalized through electron-phonon scattering on the timescale of picoseconds. When the temperature of the irradiated area exceeds a critical value, the ablated material is removed, resulting in a decrease of the average energy of the target and a stabilization of the surface temperature around the critical temperature [21]. Subsequently, the processed area cools down from the critical value to the substrate temperature on the timescale of nanoseconds through thermal diffusion and ablation [30]. By heating the sample, the temperature difference between the molten surface and the substrate is decreased, thereby lowering the cooling rate of the irradiated area and the resolidification speed. Compared with traditional processes, where the rapidly resolidified lattice is dramatically disordered [31,32], the longer-lasting molten phase leads to a high-quality crystalline structure [33]. Generally, it is difficult to achieve both a single crystal and hyperdoping at the same time with femtosecond laser processing. We obtained a highly-ordered single crystal as well as a supersaturated concentration of the dopant in the surface layer of the sample using the high-substrate-temperature treatment. Figure 3 shows the Te concentration profiles measured by SIMS in the silicon samples irradiated at 300 K and 700 K. The doping concentration decreases as the depth increases. The reported equilibrium solid-solubility limit of Te in crystalline silicon is 3.5$\times 10^{16}$ cm$^{-3}$ at room temperature, which is indicated by a dash-dotted line [34]. However, the maximum Te concentration measured was 8.2$\times 10^{19}$ cm$^{-3}$ in the top 20 nm after femtosecond laser irradiation at 700 K, in contrast with a value of 1.2$\times 10^{20}$ cm$^{-3}$ at 300 K in the top 20 nm. Since the evaporative removal is more intense at an elevated temperature, the overall doping concentration of the sample at 700 K is slightly lower than at 300 K. However, the Te concentration is still more than 3 orders of magnitude above the solid-solubility limit. Meanwhile, a shallow n-n$^{+}$ heterojunction [10,35] is formed due to the steep decrease of the Te concentration with depth. This favors the rectification of the photodiode and reduces the dark current, owing to the larger barrier of a built-in electric field that is stronger over a short distance. This has great potential for the preparation of practical n-n$^{+}$ photodetectors. Fig. 3. SIMS profiles for the Te-hyperdoped silicon samples irradiated with the laser fluence of 1.3 kJ/m$^{2}$ at 300 K and 700 K. Given the uniform surface microstructures, the high single-crystal quality, and the n-n$^{+}$ heterojunction, the BS processed at 700 K has a high potential for photoelectric detection. 
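As a quick sanity check on the supersaturation quoted above, the peak concentrations reported in the text can be compared directly with the equilibrium solubility limit (a minimal sketch; the values are the quoted peaks, not the full SIMS profiles):

```python
import math

# Peak Te concentrations (top ~20 nm) quoted above, in cm^-3, compared with the
# equilibrium solid-solubility limit of Te in crystalline silicon.
solubility_limit = 3.5e16
peaks = {"300 K": 1.2e20, "700 K": 8.2e19}

for label, peak in peaks.items():
    ratio = peak / solubility_limit
    print(f"{label}: {ratio:.0f}x the solubility limit "
          f"({math.log10(ratio):.1f} orders of magnitude)")
# Both samples exceed the equilibrium limit by more than three orders of magnitude.
```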
For each substrate temperature of 300 K, 500 K, and 700 K, the responsivity curves of several photodiode samples are shown in Fig. 4. Aluminum films were thermally evaporated on both surfaces of the photodiodes to provide electrodes for the ohmic contacts. Figure 4 shows that the responsivity of higher-temperature samples is generally better than that of lower-temperature samples. Although observable noise appears in the short (400-600 nm) and IR (1200-1600 nm) wavelength ranges, the temperature-dependent responsivity can still be distinguished. The dark-current-voltage curves are shown in the inset, which indicate that all samples have good rectification characteristics with an n-n$^{+}$ heterojunction. Moreover, the elevated substrate temperature can increase the doping concentration gradient, improve the lattice quality, and reduce the defects, which suppresses the dark current of the photodiode. The dark current density of the 700-K sample was reduced by nearly an order of magnitude, with values of 5.0 ${\upmu }$A/cm$^{2}$, 9.1 ${\upmu }$A/cm$^{2}$, and 22.2 ${\upmu }$A/cm$^{2}$ at −2 V, −3 V, and −5 V, respectively. Therefore, compared to conventional (300-K processed) BS photodiodes, high-temperature (700-K processed) BS photodiodes have significantly reduced noise in the IR band beyond 1200 nm, as shown in Fig. 4. Fig. 4. Responsivity at room temperature of photodiode samples, made from Te-hyperdoped silicon obtained at 300 K, 500 K and 700 K, respectively, for a reverse bias voltage of 5 V. Four 700-K samples with identical processing conditions are displayed, and two samples of other temperatures are displayed as references. The inset shows one of the dark-current-voltage (I-V) curves of the Si:Te photodiodes for each substrate temperature. Based on the optimal substrate temperature (700 K), we further optimized the post-processing of the BS photodiodes. Rapid thermal annealing at 773 K for 10 minutes in a nitrogen flow was used to activate the hyperdoped carriers and to produce the high responsivity, as shown in Fig. 5. In detail, the responsivity after annealing is 41.7 A/W, 120.6 A/W, 43.9 mA/W and 56.8 mA/W at 600 nm, 1120 nm, 1300 nm, and 1550 nm, respectively. Such excellent properties are far beyond those of previously reported Si:Te photodetectors. Furthermore, it can be seen that the responsivity curve after annealing is smoother and more stable in the IR range (1200-1600 nm) than before annealing, which indicates lower noise. The applied optimized reverse bias is 5 V before annealing, and 2 V after annealing. Even though the bias voltage is smaller after annealing, the maximum responsivity is 7.3 times higher than the maximum of 16.5 A/W before annealing, and the IR responsivity reaches the same order of magnitude as before annealing. Because of the smaller dark current brought by the smaller optimized bias voltage after annealing, the noise can be greatly suppressed in the IR band. The detectivity (D*) value is about 2.54$\times 10^{9}$ Jones at 1550 nm. Fig. 5. Responsivity at room temperature of the 700-K Te-hyperdoped silicon photodiodes before and after rapid thermal annealing (773 K, 10 mins). The applied optimized reverse bias is 5 V before annealing, and 2 V after annealing. A commercial silicon photodetector with a reverse bias voltage of 12 V and a commercial germanium photodetector with a reverse bias voltage of 5 V are also shown, for reference. 
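The substitution method used above for the responsivity measurement reduces to a one-line calculation once the lock-in photocurrents are recorded. The following is a minimal sketch (not the authors' analysis code): it assumes that, at each wavelength, the photocurrent of the device under test, the photocurrent of the calibrated reference detector under the same illumination, and the manufacturer-calibrated reference responsivity are available.

```python
import numpy as np

def responsivity_by_substitution(i_dut, i_ref, r_ref):
    """Responsivity (A/W) of the device under test via the substitution method.

    The same monochromatic beam is measured first with a calibrated reference
    detector and then with the device under test, so the incident power
    P = i_ref / r_ref cancels out:  R_dut = i_dut / P = r_ref * i_dut / i_ref.
    """
    i_dut, i_ref, r_ref = map(np.asarray, (i_dut, i_ref, r_ref))
    return r_ref * i_dut / i_ref

# Illustrative numbers only (not measured data): a reference Ge detector with
# r_ref = 0.9 A/W reading 1.0 uA, and a photodiode reading 63 nA under the same
# illumination, give a responsivity of roughly 57 mA/W.
print(responsivity_by_substitution(63e-9, 1.0e-6, 0.9))  # ~0.0567 A/W
```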
In addition, a commercial Si photodetector with a reverse bias voltage of 12 V is displayed for comparison as a reference. For all photodiodes, the responsivity in 700-1600 nm was calibrated with a commercial Ge photodetector, which is indicated by a dash-dotted line in Fig. 5. The BS:Te photodiode after annealing is two orders of magnitude more responsive than the traditional commercial Si photodetector in the range of 600-1100 nm, and shows an even more significant improvement at longer wavelengths. The high responsivity, the low dark current, and the low noise of the single-crystal BS photodiode are attributed to the high degree of lattice order and the hyperdoping of Te. The highly-crystalline structure greatly reduces the density of carrier recombination centers, which reduces the scattering and increases the carrier mobility. The intermediate band formed by hyperdoping ensures an efficient IR photon absorption and photoelectric conversion. Moreover, the steep and shallow doping depth forms an n-n$^{+}$ junction, and the collection of carriers is enhanced by the near-surface depletion region. In summary, we developed a Te-hyperdoped BS photodiode with the highest responsivity yet reported for Si:Te photodiodes and with a low dark current, low noise and high stability. These excellent properties are achieved by elevating the substrate temperature during the femtosecond laser irradiation, which yields uniform surface microstructures, a highly crystalline surface layer, the hyperdoping of Te, and a steep, shallow n-n$^{+}$ heterojunction. After annealing, at a reverse bias voltage of 2 V, the photodiode has a responsivity of 41.7 A/W, 120.6 A/W, 43.9 mA/W and 56.8 mA/W at 600 nm, 1120 nm, 1300 nm, and 1550 nm, respectively. The noise is highly suppressed, especially in the IR band. In addition, we analyzed how the elevated substrate temperature affects the performance of the BS photodiode by slowing down the cooling velocity after irradiation by the femtosecond laser and reducing the ablation threshold. Our results prove that hyperdoping at a high substrate temperature has great potential for the laser-induced modification of semiconductors, especially for the fabrication of photodetectors in silicon-based photonic integration circuits. Through further optimization of the post-processing parameters, better comprehensive properties of photodetectors can be expected. Funding: National Natural Science Foundation of China (11574158, 11874227, 11974192); Higher Education Discipline Innovation Project (B07013); Changjiang Scholar Program of Chinese Ministry of Education (IRT_13R29). 1. M. He, M. Xu, Y. Ren, J. Jian, Z. Ruan, Y. Xu, S. Gao, S. Sun, X. Wen, L. Zhou, L. Liu, C. Guo, H. Chen, S. Yu, L. Liu, and X. Cai, "High-performance hybrid silicon and lithium niobate Mach–Zehnder modulators for 100 Gbit s−1 and beyond," Nat. Photonics 13(5), 359–364 (2019). [CrossRef] 2. R. Marchetti, C. Lacava, L. Carroll, K. Gradkowski, and P. Minzioni, "Coupling strategies for silicon photonics integrated chips," Photonics Res. 7(2), 201–239 (2019). [CrossRef] 3. R. Soref, "The past, present, and future of silicon photonics," IEEE J. Sel. Top. Quantum Electron. 12(6), 1678–1687 (2006). [CrossRef] 4. J. P. Mailoa, A. J. Akey, C. B. Simmons, D. Hutchinson, J. Mathews, J. T. Sullivan, D. Recht, M. T. Winkler, J. S. Williams, J. M. Warrender, P. D. Persans, M. J. Aziz, and T. Buonassisi, "Room-temperature sub-band gap optoelectronic response of hyperdoped silicon," Nat. Commun. 5(1), 3011 (2014). [CrossRef] 5. E. Ertekin, M. T. 
Winkler, D. Recht, A. J. Said, M. J. Aziz, T. Buonassisi, and J. C. Grossman, "Insulator-to-metal transition in selenium-hyperdoped silicon: Observation and origin," Phys. Rev. Lett. 108(2), 026401 (2012). [CrossRef] 6. M. J. Sher and E. Mazur, "Intermediate band conduction in femtosecond-laser hyperdoped silicon," Appl. Phys. Lett. 105(3), 032103 (2014). [CrossRef] 7. A. Luque and A. Martí, "Increasing the efficiency of ideal solar cells by photon induced transitions at intermediate levels," Phys. Rev. Lett. 78(26), 5014–5017 (1997). [CrossRef] 8. J. Olea, A. Del Prado, D. Pastor, I. Mártil, and G. González-Díaz, "Sub-bandgap absorption in Ti implanted Si over the Mott limit," J. Appl. Phys. 109(11), 113541 (2011). [CrossRef] 9. J. E. Carey, C. H. Crouch, M. Shen, and E. Mazur, "Visible and near-infrared responsivity of femtosecond-laser microstructured silicon photodiodes," Opt. Lett. 30(14), 1773–1775 (2005). [CrossRef] 10. Z. Huang, J. E. Carey, M. Liu, X. Guo, E. Mazur, and J. C. Campbell, "Microstructured silicon photodetector," Appl. Phys. Lett. 89(3), 033506 (2006). [CrossRef] 11. B. K. Nayak, V. V. Iyengar, and M. C. Gupta, "Efficient light trapping in silicon solar cells by ultrafast-laser-induced self-assembled micro/nano structures," Prog. Photovoltaics 19(6), 631–639 (2011). [CrossRef] 12. S. Kontermann, T. Gimpel, A. L. Baumann, K. M. Guenther, and W. Schade, "Laser processed black silicon for photovoltaic applications," Energy Procedia 27, 390–395 (2012). [CrossRef] 13. D. Recht, M. J. Smith, S. Charnvanichborikarn, J. T. Sullivan, M. T. Winkler, J. Mathews, J. M. Warrender, T. Buonassisi, J. S. Williams, S. Gradečak, and M. J. Aziz, "Supersaturating silicon with transition metals by ion implantation and pulsed laser melting," J. Appl. Phys. 114(12), 124903 (2013). [CrossRef] 14. C. Wu, C. H. Crouch, L. Zhao, J. E. Carey, R. Younkin, J. A. Levinson, E. Mazur, R. M. Farrell, P. Gothoskar, and A. Karger, "Near-unity below-band-gap absorption by microstructured silicon," Appl. Phys. Lett. 78(13), 1850–1852 (2001). [CrossRef] 15. C. Crouch, J. Carey, M. Shen, E. Mazur, and F. Génin, "Infrared absorption by sulfur-doped silicon formed by femtosecond laser irradiation," Appl. Phys. A 79(7), 1635–1641 (2004). [CrossRef] 16. T. F. Lee, R. D. Pashley, T. C. McGill, and J. W. Mayer, "Investigation of tellurium-implanted silicon," J. Appl. Phys. 46(1), 381–388 (1975). [CrossRef] 17. X. Qiu, X. Yu, S. Yuan, Y. Gao, X. Liu, Y. Xu, and D. Yang, "Trap assisted bulk silicon photodetector with high photoconductive gain, low noise, and fast response by Ag hyperdoping," Adv. Opt. Mater. 6(3), 1700638 (2018). [CrossRef] 18. Y. Berencén, S. Prucnal, F. Liu, I. Skorupa, R. Hübner, L. Rebohle, S. Zhou, H. Schneider, M. Helm, and W. Skorupa, "Room-temperature short-wavelength infrared Si photodetector," Sci. Rep. 7(1), 43688 (2017). [CrossRef] 19. T. G. Kim, J. M. Warrender, and M. J. Aziz, "Strong sub-band-gap infrared absorption in silicon supersaturated with sulfur," Appl. Phys. Lett. 88(24), 241902 (2006). [CrossRef] 20. H. Sun, C. Liang, G. Feng, Z. Zhu, J. Zhuang, and L. Zhao, "Improving crystallinity of femtosecond-laser hyperdoped silicon via co-doping with nitrogen," Opt. Mater. Express 6(4), 1321–1328 (2016). [CrossRef] 21. C. Kerse, H. Kalaycıoğlu, P. Elahi, B. Çetin, D. K. Kesim, Ö. Akçaalan, S. Yavaş, M. D. Aşık, B. Öktem, H. Hoogland, R. Holzwarth, and F. Ö. Ilday, "Ablation-cooled material removal with ultrafast bursts of pulses," Nature 537(7618), 84–88 (2016). [CrossRef] 22. K. 
P. O'Donnell and X. Chen, "Temperature dependence of semiconductor band gaps," Appl. Phys. Lett. 58(25), 2924–2926 (1991). [CrossRef] 23. W. Bludau, A. Onton, and W. Heinke, "Temperature dependence of the band gap of silicon," J. Appl. Phys. 45(4), 1846–1848 (1974). [CrossRef] 24. M. Yang, Q. Wu, Z. Chen, B. Zhang, B. Tang, J. Yao, I. Drevensek-Olenik, and J. Xu, "Generation and erasure of femtosecond laser-induced periodic surface structures on nanoparticle-covered silicon by a single laser pulse," Opt. Lett. 39(2), 343–346 (2014). [CrossRef] 25. Q. Li, Q. Wu, Y. Li, C. Zhang, Z. Jia, J. Yao, J. Sun, and J. Xu, "Femtosecond laser-induced periodic surface structures on lithium niobate crystal benefiting from sample heating," Photonics Res. 6(8), 789–793 (2018). [CrossRef] 26. Y. Li, Q. Wu, M. Yang, Q. Li, Z. Chen, C. Zhang, J. Sun, J. Yao, and J. Xu, "Uniform deep-subwavelength ripples produced on temperature controlled LiNbO3:Fe crystal surface via femtosecond laser ablation," Appl. Surf. Sci. 478, 779–783 (2019). [CrossRef] 27. G. Deng, G. Feng, K. Liu, and S. Zhou, "Temperature dependence of laser-induced micro/nanostructures for femtosecond laser irradiation of silicon," Appl. Opt. 53(14), 3004–3009 (2014). [CrossRef] 28. G. Deng, G. Feng, and S. Zhou, "Experimental and FDTD study of silicon surface morphology induced by femtosecond laser irradiation at a high substrate temperature," Opt. Express 25(7), 7818–7827 (2017). [CrossRef] 29. P. Lorazo, L. J. Lewis, and M. Meunier, "Short-pulse laser ablation of solids: From phase explosion to fragmentation," Phys. Rev. Lett. 91(22), 225502 (2003). [CrossRef] 30. S. K. Sundaram and E. Mazur, "Inducing and probing non-thermal transitions in semiconductors using femtosecond laser pulses," Nat. Mater. 1(4), 217–224 (2002). [CrossRef] 31. J. Solis, C. N. Afonso, J. F. Trull, and M. C. Morilla, "Fast crystallizing GeSb alloys for optical data storage," J. Appl. Phys. 75(12), 7788–7794 (1994). [CrossRef] 32. E. N. Glezer, Y. Siegal, L. Huang, and E. Mazur, "Behavior of χ(2) during a laser-induced phase transition in GaAs," Phys. Rev. B 51(15), 9589–9596 (1995). [CrossRef] 33. J. A. Kittl, P. G. Sanders, M. J. Aziz, D. P. Brunco, and M. O. Thompson, "Complete experimental test of kinetic models for rapid alloy solidification," Acta Mater. 48(20), 4797–4811 (2000). [CrossRef] 34. M. A. Sheehy, "Femtosecond-laser microstructuring of silicon: dopants and defects," Ph.D. thesis, Harvard University (2004). 35. C. H. Li, J. H. Zhao, Q. D. Chen, J. Feng, and H. B. Sun, "Sub-bandgap photo-response of non-doped black-silicon fabricated by nanosecond laser irradiation," Opt. Lett. 43(8), 1710–1713 (2018). [CrossRef]
Heinke, "Temperature dependence of the band gap of silicon," J. Appl. Phys. 45(4), 1846–1848 (1974). M. Yang, Q. Wu, Z. Chen, B. Zhang, B. Tang, J. Yao, I. Drevensek-Olenik, and J. Xu, "Generation and erasure of femtosecond laser-induced periodic surface structures on nanoparticle-covered silicon by a single laser pulse," Opt. Lett. 39(2), 343–346 (2014). Q. Li, Q. Wu, Y. Li, C. Zhang, Z. Jia, J. Yao, J. Sun, and J. Xu, "Femtosecond laser-induced periodic surface structures on lithium niobate crystal benefiting from sample heating," Photonics Res. 6(8), 789–793 (2018). Y. Li, Q. Wu, M. Yang, Q. Li, Z. Chen, C. Zhang, J. Sun, J. Yao, and J. Xu, "Uniform deep-subwavelength ripples produced on temperature controlled LiNbO3:Fe crystal surface via femtosecond laser ablation," Appl. Surf. Sci. 478, 779–783 (2019). G. Deng, G. Feng, K. Liu, and S. Zhou, "Temperature dependence of laser-induced micro/nanostructures for femtosecond laser irradiation of silicon," Appl. Opt. 53(14), 3004–3009 (2014). G. Deng, G. Feng, and S. Zhou, "Experimental and FDTD study of silicon surface morphology induced by femtosecond laser irradiation at a high substrate temperature," Opt. Express 25(7), 7818–7827 (2017). P. Lorazo, L. J. Lewis, and M. Meunier, "Short-pulse laser ablation of solids: From phase explosion to fragmentation," Phys. Rev. Lett. 91(22), 225502 (2003). S. K. Sundaram and E. Mazur, "Inducing and probing non-thermal transitions in semiconductors using femtosecond laser pulses," Nat. Mater. 1(4), 217–224 (2002). J. Solis, C. N. Afonso, J. F. Trull, and M. C. Morilla, "Fast crystallizing GeSb alloys for optical data storage," J. Appl. Phys. 75(12), 7788–7794 (1994). E. N. Glezer, Y. Siegal, L. Huang, and E. Mazur, "Behavior of χ(2) during a laser-induced phase transition in GaAs," Phys. Rev. B 51(15), 9589–9596 (1995). J. A. Kittl, P. G. Sanders, M. J. Aziz, D. P. Brunco, and M. O. Thompson, "Complete experimental test of kinetic models for rapid alloy solidification," Acta Mater. 48(20), 4797–4811 (2000). M. A. Sheehy, "Femtosecond-laser microstructuring of silicon: dopants and defects," Ph.D. thesis, Harvard University (2004). C. H. Li, J. H. Zhao, Q. D. Chen, J. Feng, and H. B. Sun, "Sub-bandgap photo-response of non-doped black-silicon fabricated by nanosecond laser irradiation," Opt. Lett. 43(8), 1710–1713 (2018). Afonso, C. N. Akçaalan, Ö. Akey, A. J. Asik, M. D. Aziz, M. J. Baumann, A. L. Berencén, Y. Bludau, W. Brunco, D. P. Buonassisi, T. Cai, X. Campbell, J. C. Carey, J. Carey, J. E. Carroll, L. Çetin, B. Charnvanichborikarn, S. Chen, K. P. Chen, Q. D. Crouch, C. Crouch, C. H. Del Prado, A. Deng, G. Drevensek-Olenik, I. Elahi, P. Ertekin, E. Farrell, R. M. Feng, G. Feng, J. Gao, S. Gao, Y. Génin, F. Gimpel, T. Glezer, E. N. González-Díaz, G. Gothoskar, P. Gradecak, S. Gradkowski, K. Grossman, J. C. Guenther, K. M. Guo, C. Guo, X. Gupta, M. C. Heinke, W. Helm, M. Hoogland, H. Huang, L. Huang, Z. Hübner, R. Hutchinson, D. Iyengar, V. V. Jia, Z. Jian, J. Kalaycioglu, H. Karger, A. Kerse, C. Kesim, D. K. Kim, T. G. Kittl, J. A. Kontermann, S. Lacava, C. Lee, T. F. Levinson, J. A. Lewis, L. J. Li, C. H. Liang, C. Liu, F. Liu, L. Liu, M. Liu, X. Lorazo, P. Luque, A. Mailoa, J. P. Marchetti, R. Martí, A. Mártil, I. Mathews, J. Mayer, J. W. Mazur, E. McGill, T. C. Meunier, M. Minzioni, P. Morilla, M. C. Nayak, B. K. Ö. Ilday, F. O'Donnell, K. Öktem, B. Olea, J. Onton, A. Pashley, R. D. Pastor, D. Persans, P. D. Prucnal, S. Qiu, X. Rebohle, L. Recht, D. Ren, Y. Ruan, Z. Said, A. J. Sanders, P. G. 
CommonCrawl
Performance evaluation of data-driven techniques for the softwarized and agnostic management of an N×N photonic switch
Ihtesham Khan,1,* Lorenzo Tunesi,1 Muhammad Umar Masood,1 Enrico Ghillino,2 Paolo Bardella,1 Andrea Carena,1 and Vittorio Curri1
1Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129, Torino, Italy
2Synopsys, Inc., 400 Executive Blvd Ste 101, Ossining, NY 10562, USA
*Corresponding author: [email protected]
Ihtesham Khan, Lorenzo Tunesi, Muhammad Umar Masood, Enrico Ghillino, Paolo Bardella, Andrea Carena, and Vittorio Curri, "Performance evaluation of data-driven techniques for the softwarized and agnostic management of an N×N photonic switch," Opt. Continuum 1, 1-15 (2022)
Original Manuscript: May 3, 2021; Revised Manuscript: August 30, 2021; Manuscript Accepted: November 8, 2021
The emerging Software Defined Networking (SDN) paradigm paves the way for flexible and automatized management at each layer. The SDN-enabled optical network requires a software abstraction of each network element to enable complete control by the centralized network controller. Nowadays, silicon photonics, due to its low energy consumption, low latency, and small footprint, is a promising technology for implementing photonic switching topologies, enabling transparent lightpath routing in re-configurable add-drop multiplexers. To this aim, a model for the complete management of photonic switching systems' control states is fundamental for network control. Typically, photonics-based switches are structured by exploiting the modern technology of Photonic Integrated Circuits (PICs), which enables complex elementary cell structures to be driven individually. Thus, PIC switches' control states are combinations of a large set of elementary controls, and their definition is a challenging task. In this scenario, we propose the use of several data-driven techniques based on Machine Learning (ML) to model the control states of a PIC N×N photonic switch in a completely blind manner. The proposed ML-based techniques are trained and tested in a completely topology- and technology-agnostic way, and we envision their application in a real-time control plane. The proposed techniques' scalability and accuracy are validated by considering three different switching topologies: the Honey-Comb Rearrangeable Optical Switch (HCROS), the Spanke-Beneš network, and the Beneš network. 
Excellent results in terms of predicting the control states are achieved for all of the considered topologies. Recently, the remarkable increase in global internet traffic [1], driven by the introduction of evolving connectivity technologies such as 5G, the Internet of Things (IoT) and cloud services, has created high demands for flexible and dynamic network management at each layer. The latest optical SDN concept can provide the required degree of flexibility by implementing the complete virtualization of each Network Element (NE) and function within the network control system. Modern technologies like coherent optical transmission for Wavelength-Division Multiplexed (WDM) optical transport and re-configurable optical switches for transparent wavelength routing provide a path to extend the SDN paradigm down to the physical layer [2–4]. To this aim, optical NEs and transmission functions must be abstracted in terms of Quality-of-Transmission (QoT) impairments and control, to enable their complete management by the optical control plane within the network controller [5,6], as illustrated in Fig. 1(a). The present effort follows the fast-expanding trend in optical networks towards disaggregation and the application of the SDN paradigm down to the WDM transport layer. This work mainly focuses on providing the abstraction of the control states of re-configurable optical switches based on PICs with a completely structure-agnostic methodology based on several ML techniques. Fig. 1. (a) Abstraction of the optical switch in an SDN-controlled optical network. (b) Generic $N{\times }N$ optical switch fabric. In recent times, smart optical NEs have extensively utilized PICs to execute complex functions at the photonic level. Specifically, in core optical networks and data centers, large-scale photonic switching systems play a significant role, with the key benefits of wide-band capabilities together with low latency and low power consumption. These photonic switches are mainly based on the principle that electrical control signals can steer the flow of light: using this mechanism, optical signals can be routed to different paths. Before the development of PIC solutions, different switching technologies had been proposed, such as three-dimensional Micro-Electro-Mechanical Systems (MEMS) [7] and beam-steering techniques [8]. These technologies give stable optical switching and a reasonable degree of scalability, but the need for precise calibration and installation of discrete components makes them much more costly and bulky. PIC-based systems mainly depend on elementary units such as Mach-Zehnder Interferometers (MZIs) [9] or optical Micro Ring Resonators (MRRs) [10]. The standard $N{\times }N$ optical switch architectures are assembled by cascading multiple stages of elementary units following a distinct switching topology, where $N$ input signals can be routed to any of the $N$ output ports by varying $M$ control signals, as depicted in Fig. 1(b). To scale up $N{\times }N$ switching fabrics, the underlying requirement is to efficiently define the control states of the internal switching elements that produce the requested signal permutation at the output of the integrated circuit. Research on the control/routing states of photonic switching systems is still scarce at present. 
Unlike electronic switch routing algorithms [11], where the performance of all paths is equal, optical switches usually offer path-dependent performance [12]. Variations in performance can be fundamentally due to the topology and physical behavior of the elementary units. On the other hand, they can derive from fabrication and design defects, which may alter the different switching states of the elementary units and their cascading effect on the entire component. Deterministic routing algorithms can effectively determine the control state of the internal switches for any requested output permutation. The effectiveness of these algorithms originates in their topology dependence, which allows a faster and more effective assessment of the multistage networks; however, this creates the necessity to implement different algorithmic solutions depending on the network structure, at the cost of a loss of generality. In contrast, general-purpose routing and path-finding algorithms do not provide scalable solutions, as the computational complexity increases rapidly [13–15]. This is caused by the exponential growth of the number of control states $N_{st}$ in the network, which depends on the number of switches $M$ as $N_{st}=2^M$. This makes the generation and evaluation of the complete routing space unfeasible, as well as the evaluation of the weighted penalties for all configurations. In contrast with conventional topology-dependent methods, we propose a data-driven method based on ML techniques to predict the control/routing states of the photonic integrated switch. Numerous ML-based methods have already been evaluated for managing PICs, such as in [16], where an algorithm driven by an artificial neural network is proposed to calibrate 2$\times$2 dual-ring assisted-MZI switches. In [17], the authors experimentally demonstrated a complete self-learning and reconfigurable photonic signal processor based on an optical neural network chip. The proposed chip performs various functions by self-learning, such as multi-channel optical switching, optical multiple-input-multiple-output de-scrambling and tunable optical filtering. In [18], the authors proposed to use the Deep Reinforcement Learning (DRL) technique to drive the reconfiguration of the silicon photonic Flexible Low-latency Interconnect Optical Network Switch (Flex-LIONS) according to the traffic characteristics in High-Performance Computing (HPC) systems. Furthermore, in [19] a novel reinforcement-learning-based framework called DeepConf is introduced for automatically learning and implementing a range of data center networking techniques. DeepConf simplifies configuring and training deep learning agents by using an intermediate representation to learn different tasks. In this work, we introduce a unique topology-agnostic, blind approach exploiting several ML techniques for predicting the control states of an $N{\times }N$ photonic switch with arbitrary and potentially unspecified internal configuration. The models are trained using a dataset obtained from the component under test, used as a black box. The training dataset can be obtained either experimentally or synthetically by relying on a device software simulator. Initial and partial results for this approach are presented in [20]. 
In this paper, besides describing the methodology in detail, we extend the framework by suggesting several ML techniques and presenting a detailed performance assessment of these data-driven approaches to model the control states of the photonic switching system. The optimization of these ML models in terms of prediction accuracy and complexity is also performed. Furthermore, the ML models' error distribution in the predicted control states is verified and analyzed. The error analysis aims at assessing the quantitative effectiveness of the trained ML agent in predicting the proper internal switching routing, given the PIC topology. Future evolution of the proposed framework will target the inclusion of transmission penalties in the set of ML agent predictions, to enable a full component virtualization within a software-defined optical transport network scenario. The remainder of the paper is organized as follows. In Sec. 2, we describe the specific architecture of the Beneš, Spanke-Beneš and HCROS switches used for the demonstration of the proposed ML techniques. In Sec. 3, we describe the simulation environment used to generate datasets, presenting its structure and various statistics. In Sec. 4, we illustrate the architecture of the proposed ML models used to predict the control state of the PIC-based switching system. Then, in Sec. 5, we describe the structure of the proposed ML agent, showing how it is trained on datasets of different controls and output signal permutations, in order to predict the control signals of the internal switching elements. In this work, we do not aim to develop specific ML models; instead, our focus is to show the general effectiveness of ML in this scenario. Therefore, we exploit extensively tested open-source projects, namely the TensorFlow and scikit-learn libraries [21,22]. Results of our proposed approach are shown in detail in Sec. 6. We demonstrate that the trained ML models enable the correct estimation of the internal switching elements' control states for different $N{\times }N$ architectures. We also show that a heuristically enhanced ML model can further improve the prediction accuracy in the present scenario. Finally, conclusions are presented in Sec. 7. 2. Multistage switching networks The routing operation can be carried out through a variety of switching devices, with diverse structures and implementations depending on the transmission requirements and constraints. Instead of designing ad-hoc switches for the desired number of inputs N, a standard class of implemented structures is based around the multistage network paradigm. In these components, the routing is achieved through a cascade of multiple stages, made from smaller switching elements, so as to reduce the overall complexity of the circuit, as well as the footprint and the number of switching sections. The configurations of such a switching network are defined by the control signals applied to each of the M Optical Switching Elements (OSEs), which determine the output configuration of the device. 2.1 2$\times$2 Crossbar switches In optical transmission these elements are typically implemented as 2$\times$2 elementary switching devices, represented as black-box modelled crossbar switches, as shown in Fig. 2. The 2$\times$2 crossbar switch is defined as a two-state device, piloted by a control signal M, which toggles between the two configurations. 
The bar state, defined for $M=0$, represents the straight-through propagation of the two input signals ($\left (\begin {smallmatrix}{\lambda _1} \\ {\lambda _2}\end {smallmatrix}\right )\to \left (\begin {smallmatrix}{\lambda _1} \\ {\lambda _2}\end {smallmatrix}\right )$), while in the cross state, for $M=1$, the order of the output signals is reversed ($\left (\begin {smallmatrix}{\lambda _1} \\ {\lambda _2}\end {smallmatrix}\right )\to \left (\begin {smallmatrix}{\lambda _2} \\ {\lambda _1}\end {smallmatrix}\right )$). As previously stated, this fundamental block can be physically implemented through a variety of approaches, the two most prominent being the Microring Resonator (MRR) and the Mach-Zehnder Interferometer (MZI). These implementations offer different performances based on the physical design and can be tailored for both colorless and chromatic-dependent applications. The binary control signal present in the black-box model is typically provided to the OSE through an electrical signal, with a dependency on the device implementation. Nonetheless, for a virtual abstraction of the component aimed at routing path evaluation, the binary model is suitable while maintaining a general, device-independent scope.
Fig. 2. Illustration of Bar and Cross states of a 2$\times$2 elementary switching element (crossbar switch).
2.2 Multistage rearrangeable non-blocking networks
To design an N $\times$ N switch, the crossbar elements must be placed in a suitable topology, which determines the properties of the switch, both in terms of routing capabilities and number of elements required. The focus of this approach is directed toward a sub-class of switching networks defined as rearrangeable non-blocking networks. Switching networks are non-blocking if all possible permutations of the input signals can be routed to the output ports: any input-output (I/O) request targeted at an unoccupied port can be established without creating conflicts inside the network, taking into account the already established I/O links. In rearrangeable networks this property holds only partially, as the routing of all permutations is still achievable, although it may require a reconfiguration of previously established I/O connections. In this class of multistage networks the reconfiguration of the switches is required because the topology does not guarantee path availability when traffic is already present in the device. This is a clear disadvantage with respect to strict-sense non-blocking structures, as it requires a more complex control unit to evaluate the routing and the conflicts inside the network. The trade-off is acceptable in most applications, as the rearrangeable property allows most topologies to implement an N $\times$ N switch with fewer elements than strict-sense devices, with a clear reduction of transmission losses, power consumption and footprint.
2.3 Problem complexity
Considering a generic device in this class, the two main parameters determining the model complexity are the number of inputs $N$, or network size, and the number of required OSEs $M$, which corresponds to the number of variables to set to establish a target routing path. The solution space of all the output permutations ($N!$), as well as of the state configurations ($2^M$), grows as a Non-Polynomial (NP) function as the network size increases. This is due to the dependency of the number of OSEs on the number of inputs.
Typically, for these networks the relationship follows $M=O(N\cdot \log (N))$, with small variations between the different available topologies. This leads to scalability issues for traditional topology-independent path-finding algorithms, as the NP complexity increase cannot be tackled directly. Topology-specific routing algorithms exist for each class, although this solution bears two main disadvantages: it requires a rigid control unit which cannot pilot a device with a different topology, and it does not allow searching for an optimal solution. Multistage switching networks, especially in the optical domain, are prone to path-dependent degradation of performance, with a wide range of QoT among the equivalent paths available for the same output configuration. A deterministic topology-dependent routing algorithm would need to evaluate all the equivalent paths, whose number depends on N as $N_{eq}=O(2^M / N!)$, leading to severe scalability issues. ML-based methods can overcome this limitation, as they can be trained on performance-aware datasets. Under this scenario the NP size of the solution space becomes an asset rather than a disadvantage, allowing the generation of large datasets for training the specific ML agent.
2.4 Topologies under analysis
In order to test the performance of the proposed method, both its scalability and its robustness with respect to topology variations must be assessed. To this end, three main topological structures were tested, depicted in Fig. 3.
Fig. 3. Multistage switching network topologies under analysis.
The first network under analysis is the Beneš switch. This device follows a recursive structure based on the Clos network paradigm, with a number of OSEs $M=N\log _2(N)-\frac {N}{2}$ for power-of-two sizes. The Beneš network is a common approach to multistage switching networks, as it is characterized by a low number of 2$\times$2 switching elements, implying reduced footprint and power consumption with respect to larger topologies. Three instances of Beneš structures have been tested, with network sizes $N=8,\;10,\;15$ and $M=20,\;26,\;49$. An alternative to the recursive Beneš structure is the Spanke-Beneš network: this topology is distinguished by its planarity, as no crossing interconnection is used between the switching stages. The planarity comes at a cost in terms of number of OSEs, equal to $M=\frac {N\cdot (N-1)}{2}$, increasing the already severe effect of the NP complexity growth. This topology is still considered for applications where the crossing technology cannot be relied upon to guarantee the needed QoT; the sizes $N=8$ ($M=28$) and $N=10$ ($M=45$) are chosen so as to allow a complexity similar to the Beneš networks under analysis. The third considered topology is an optimized structure with equal footprint with respect to its Beneš counterpart. The device is based neither on the planarity constraint of the Spanke-Beneš nor on the recursive generation of the traditional Beneš networks. This structure, referred to as the HoneyComb Rearrangeable Optical Switch (HCROS) [23], shows an asymmetric topology with respect to the traditional implementations, acting as an extremely useful benchmark to test the robustness of the proposed ML method with respect to irregular and uncommon structures. The proposed device was extended to a 12$\times$12 structure so as to offer a valid comparison to the other switching devices under analysis.
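As a quick numeric cross-check of the element counts quoted above, the following sketch computes M for the Beneš and Spanke-Beneš cases; the recursive construction used for the non-power-of-two Beneš sizes is an assumption (it reproduces the values reported later in Table 1), and the HCROS count, being irregular, is not derivable from a simple formula here.

```python
from math import factorial

def benes_switches(n: int) -> int:
    """2x2 element count of a recursively built (arbitrary-size) Benes network:
    floor(n/2) input switches + floor(n/2) output switches + two inner sub-networks."""
    if n <= 1:
        return 0
    if n == 2:
        return 1
    return 2 * (n // 2) + benes_switches((n + 1) // 2) + benes_switches(n // 2)

def spanke_benes_switches(n: int) -> int:
    """The planar Spanke-Benes network uses N(N-1)/2 elements."""
    return n * (n - 1) // 2

for n in (8, 10, 15):
    m = benes_switches(n)
    print(f"Benes {n}x{n}:        M = {m:2d}, 2^M = {float(2**m):.3e}, N! = {float(factorial(n)):.3e}")
for n in (8, 10):
    m = spanke_benes_switches(n)
    print(f"Spanke-Benes {n}x{n}: M = {m:2d}, 2^M = {float(2**m):.3e}, N! = {float(factorial(n)):.3e}")
```

Running this gives M = 20, 26, 49 for the Beneš instances and M = 28, 45 for the Spanke-Beneš instances, consistent with the dataset statistics of Table 1.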
All three considered structures belong to the same class with respect to non-blocking properties, as they require rearrangement of the control states to route new input-output links when traffic already occupies part of the circuit. Similarly, each topology has more OSE configuration states than signal output permutations, so each device can obtain the same output configuration through a number of equivalent paths.
3. Simulation and dataset generation
The dataset has a vital role in the training and verification of data-driven models. In the current scenario, the required dataset is retrieved by implementing an abstraction of the considered topologies. Each 2$\times$2 switching element is driven by a control bit, with 0 characterizing the BAR state and 1 the CROSS state: each configuration of the network can thus be described by a bit-array of length equal to the number of switches $M$. The considered architectures are implemented as a cascade of permutation matrices, representing each switch and crossing stage. In multistage structures like the Beneš, Spanke-Beneš and HCROS, the network's output for a given control vector can be calculated by applying the permutation matrices sequentially for each stage [11], without resorting to a more complex and computationally expensive graph structure. In non-blocking switching topologies, the variation of the $M$ control signals generates $2^{M}$ total combinations, whereas $N!$ is the number of distinct permutations of the $N$ input signals, as shown in Fig. 4.
Fig. 4. Graphical representation of all possible $N!$ output states for an $N{\times }N$ fabric.
A realistic training dataset should be much smaller than the full-sized look-up table; if such a table could be evaluated in a realistic environment, no other algorithm would be needed to route the inputs. Here, the training dataset is assembled from unique random control vectors to avoid any possible bias toward specific switch configurations or preferential paths within the network. The total train and test datasets for the considered Beneš, Spanke-Beneš and HCROS architectures are a subset of the total $2^{M}$ control combinations, as reported in Table 1. After sufficient training of the ML module, testing is performed for each of the randomly selected permutations of the output channels; the output permutation realized by the combination of switches returned by the ML framework is determined by calculating the product of the permutation matrices corresponding to the predicted switch states. Finally, the ML agent's accuracy is determined by comparing the combination obtained with the original permutation requested as input.
Table 1. Dataset statistics
4. Analysis of machine learning models
The standard ML framework allows the estimation of system attributes that cannot be easily or directly measured. Generally, ML models develop their cognition capability by exploiting a series of intelligent algorithms that can extract the intrinsic information of the training data. The information inherited by these algorithms is then abstracted into the decision models that manage the testing phase. These well-trained cognitive models provide real-time operational improvements by enabling the system to draw smart conclusions and react autonomously. In the current work, five distinct ML models are proposed to model the control states of a PIC-based N$\times$N photonic switching system. The proposed ML framework consists of three basic units: pre-processing, training, and testing. A sketch of how the raw dataset that feeds these units can be generated is given below.
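To make the dataset-generation procedure of Sec. 3 concrete, the following minimal sketch assumes the planar Spanke-Beneš wiring (the Beneš and HCROS cases additionally need the fixed crossing permutations between stages, omitted here). It applies unique random control bit-arrays to the fabric and stores the resulting output permutation as the features and the control bits as the labels; all names are illustrative and not taken from the paper's code.

```python
import random

def spanke_benes_stages(n):
    """Planar Spanke-Benes wiring: even columns pair ports (0,1),(2,3),...,
    odd columns pair ports (1,2),(3,4),...; N columns in total."""
    stages = []
    for c in range(n):
        start = 0 if c % 2 == 0 else 1
        stages.append([(p, p + 1) for p in range(start, n - 1, 2)])
    return stages

def route(stages, controls):
    """Propagate the input labels 0..N-1 through the fabric for a given control
    bit-array. Bit 0 = BAR (pass through), bit 1 = CROSS (swap the two ports)."""
    n = max(b for stage in stages for (_, b) in stage) + 1
    signal = list(range(n))          # signal[k] = which input currently sits on port k
    bits = iter(controls)
    for stage in stages:
        for (a, b) in stage:
            if next(bits) == 1:      # CROSS state: swap the two lines
                signal[a], signal[b] = signal[b], signal[a]
    return signal                    # output permutation of the N inputs

def make_dataset(n, n_samples, seed=0):
    """Features = output permutation, labels = control bits, from unique random controls."""
    rng = random.Random(seed)
    stages = spanke_benes_stages(n)
    m = sum(len(s) for s in stages)  # equals N(N-1)/2
    seen, features, labels = set(), [], []
    while len(features) < n_samples:             # assumes n_samples << 2**m
        ctrl = tuple(rng.randint(0, 1) for _ in range(m))
        if ctrl in seen:
            continue
        seen.add(ctrl)
        features.append(route(stages, ctrl))
        labels.append(list(ctrl))
    return features, labels

X, y = make_dataset(n=8, n_samples=1000)
print(len(X[0]), len(y[0]))   # 8 output ports, 28 control bits
```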
The pre-processing section standardizes the dataset before it is used in the training section. The training section exploits the standardized training set to train the proposed models. After training, the testing unit uses the test portion of the data to run the testing phase. The proposed ML models are developed using the high-level Python Application Program Interfaces (APIs) of two open-source ML libraries, TensorFlow [21] and scikit-learn [22]. Both of these libraries provide a vast range of APIs for data-driven models, as well as different functions to pre-process the dataset and clean it from noise before applying it as an input to the ML model.
4.1 Decision tree regression
The Decision Tree Regressor (DTR) model is developed to model the control states of a PIC-based N$\times$N photonic switching system. A DTR provides direct relationships between the input and response variables [24] by constructing a tree based on decisions made by exploring several dimensions of the provided features, ultimately providing the desired response variable. The proposed DTR has two key tuning parameters: min_samples_leaf and max_depth. The optimum values of these two main parameters are obtained by tuning them so as to achieve the best trade-off between precision and computational time in the proposed simulation environment.
4.2 Random forest regression
The considered Random Forest Regressor (RFR) uses ensemble learning based on the bagging-tree technique [25]. In this technique, various subsets of data from the training set are chosen with replacement. Each extracted subset of training samples is used to train an individual decision tree that runs independently. All the individual decision trees (without giving importance to any particular tree) are averaged to give the final output. In contrast to the classical bagging mechanism, the proposed RFR adds a further step, as it chooses a random subset of training samples and a random selection of features, rather than using all the features, to train the several decision trees. The final prediction of the developed RFR is made by simply averaging the predictions of each decision tree. Similar to the DTR, the RFR also has two key parameters, min_samples_leaf and max_depth. The tuning of these two parameters is performed so as to obtain the best trade-off between accuracy and complexity.
4.3 Boosted tree regression
The proposed Boosted Tree Regressor (BTR) also uses ensemble learning, but in contrast to the RFR, it is based on the gradient-boosting technique. The developed BTR works by combining various regression tree models, in particular decision trees, using the gradient-boosting technique [26]. The mathematical representation of this class of models can be written as in Eq. (1), where the final regressor $f$ is the sum of simple base regressors $r_{i}$:
(1)$$f(x) = r_{0} + r_{1} + r_{2}+\cdots+r_{i}$$
As for the other tree regressors, we also tune the key parameters of the BTR to obtain the best trade-off between precision and complexity.
4.4 Linear regression
Linear Regression (LR) is a kind of ML model which utilizes a statistical method to learn the linear relationship between the input features (x) and the output response variable (y). Generally, the mathematical description of LR is as follows:
(2)$$y = B_0+ B_1x$$
where $y$ is the output variable, $B_0$ is the intercept, $B_1$ is the co-efficient of each variable, and $x$ is the input feature set. The model estimates the values of the intercept ($B_0$) and the co-efficient ($B_1$).
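As an illustrative sketch, the four regressors above could be instantiated on the (X, y) pairs produced by the dataset sketch in Sec. 3 as follows. For brevity, scikit-learn equivalents are used throughout, whereas the paper builds the boosted-tree and linear models with TensorFlow; the parameter values follow the spirit of Table 2 but are otherwise assumptions.

```python
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

# X: output permutations (features), y: control bit-arrays (labels), as generated above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    # Decision tree and random forest natively handle the M-dimensional label vector
    "DTR": DecisionTreeRegressor(min_samples_leaf=4, max_depth=100),
    "RFR": RandomForestRegressor(min_samples_leaf=4, n_estimators=100),
    # Gradient boosting is single-output, so wrap it to get one regressor per control bit
    "BTR": MultiOutputRegressor(GradientBoostingRegressor(learning_rate=0.01)),
    "LR":  LinearRegression(),   # ordinary least squares
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "train R^2:", round(model.score(X_train, y_train), 3))
```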
LR has a different kind of optimization strategy: in our work, to model the control states of a PIC-based N$\times$N photonic switching system, we applied the ordinary least squares method, which takes more than one input feature and requires no weighting function.
4.5 Deep neural network
The Deep Neural Network (DNN) is one of the most frequently used ML models, inspired by the way the human nervous system processes information. A DNN is not a single artificial neuron with multiple layers, but multiple artificial neurons arranged in multiple layers. Typically, a DNN consists of an input layer, hidden layers, and an output layer, where each layer has a set of neurons [27]. To model the control states of a PIC-based N$\times$N photonic switching system, the considered DNN is configured by several parametric values that have been optimized (such as the number of training steps), and is trained with the Adaptive Gradient Algorithm (ADAGRAD), a tuned learning rate and $L_{1}$ regularization [28]. Moreover, several non-linear activation functions such as ReLU, tanh and sigmoid have been tested during model building. After testing, ReLU has been selected to implement the DNN, as it outperforms the others in terms of prediction and computational load [29]. Another important DNN parameter is the number of hidden layers. The model has been tuned over several numbers of hidden layers and neurons to achieve the best trade-off between precision and computational time. Although an increase in the number of layers and neurons improves the accuracy of the DNN up to a certain extent, a further increase in these values introduces diminishing returns and causes over-fitting, while simultaneously increasing the computational time. After this trade-off analysis, we settled on a DNN with three hidden layers, with the number of neurons per hidden layer optimized for each dimension N. To improve prediction accuracy, we propose a parallel architecture for the DNN, as shown in Fig. 5(a): in practice, we have an independent DNN for the prediction of each of the control states.
Fig. 5. (a) Parallel architecture of DNN with hidden layers. (b) Description of the ML agent.
5. Machine learning framework
The proposed ML-based methods operate in a complete black-box setup, requiring a sufficiently large amount of training data to develop cognitive models without considering the internal structural design of the photonic circuit. We evaluate five ML techniques and compare their prediction performance within the proposed framework. Like all other supervised ML methods, to perform the training and prediction processes the proposed models require the definition of features and labels, representing the system inputs and outputs, respectively. The features comprise the numerous permutations of the input signals ($\lambda _{1}$, $\lambda _{2}$, $\lambda _{3}\ldots \lambda _{n}$) observed at the output ports of the switch, while the $M$ control signals are used as labels, as shown in Fig. 5(b). Initially, the training of the ML models is performed; the trained models are then tested on an independent subset of the dataset, with the standard 70%/30% split used to set the train/test ratio.
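Continuing the sketch, a minimal illustration of the parallel DNN of Fig. 5(a) follows, with the stated choices (three hidden ReLU layers, Adagrad, L1 regularization, 70%/30% split); the layer width, learning rate and epoch count are illustrative assumptions rather than the tuned values of Table 3.

```python
import numpy as np
import tensorflow as tf

X = np.asarray(X, dtype=np.float32)      # output permutations (features)
y = np.asarray(y, dtype=np.float32)      # control bits (labels), shape (samples, M)

split = int(0.7 * len(X))                # 70% train / 30% test
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

def make_net(n_inputs, width=35):
    """One small DNN per control state: 3 hidden ReLU layers, L1 penalty, Adagrad, MSE loss."""
    reg = tf.keras.regularizers.l1(1e-4)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_inputs,)),
        tf.keras.layers.Dense(width, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(width, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(width, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1), loss="mse")
    return model

# Parallel architecture: an independent network predicts each of the M control bits
nets = [make_net(X.shape[1]) for _ in range(y.shape[1])]
for m, net in enumerate(nets):
    net.fit(X_tr, y_tr[:, m], epochs=20, verbose=0)

pred = np.stack([net.predict(X_te, verbose=0)[:, 0] for net in nets], axis=1)
pred_bits = (pred > 0.5).astype(int)     # threshold the regression output to control bits
```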
In order to avoid over-fitting the models, for each particular $M$ we set the tree size (for all tree regression models) and the number of training steps (for LR and DNN) as the stopping factor, and use the Mean Square Error (MSE) as the loss function, given by:
(3)$$\mathrm{MSE} =\frac{1}{n}\sum_{i=0}^{n}\left(\frac{1}{M}\sum_{m=1}^{M}\left(\mathrm{Ctrl\:State}^{p}_{i,\mathrm{m}} - \mathrm{Ctrl\:State}^{a}_{i,\mathrm{m}}\right)^2\right)$$
where $n$ is the number of test realizations, $M$ is the total number of switching elements in the specific $N{\times }N$ switching system, and, for each tested case $i$, $\mathrm {Ctrl\:State}^{p}_{i,\mathrm {m}}$ and $\mathrm {Ctrl\:State}^{a}_{i,\mathrm {m}}$ are the predicted and actual control bits of the $m$-th switching element of the considered configuration. The general tuning parameters and the APIs used to build the proposed models are reported in Table 2.
Table 2. Machine learning models detail
6. Results
This section describes the performance evaluation of the ML models developed using the higher-level Python APIs of the scikit-learn and TensorFlow libraries. The numerical assessment of the proposed data-driven methods is illustrated in Fig. 6(a) for the different $N{\times }N$ photonic switching configurations (i.e., Beneš, Spanke-Beneš, and HCROS). Figure 6(a) reports the MSE achieved by each of the proposed ML models for the different considered $N{\times }N$ architectures. The MSE in the current simulation environment follows the order LR>DTR>RFR>BTR>DNN for almost all the considered $N{\times }N$ configurations. The LR and DTR show the worst performance in terms of MSE, as they cannot uncover the underlying relationships and irregularities. On the other hand, the RFR benefits from averaging numerous decision trees, trained on randomly selected subsets of the training samples, instead of a single decision tree. Moreover, the BTR exploits the boosting technique, merging various regression tree models and picking the new tree that best reduces the loss function instead of choosing randomly; therefore, the overall performance of the BTR is better than that of the RFR. Finally, the DNN performs remarkably well compared to the RFR and BTR because of the cognition capability supplied by its internally connected artificial neurons.
Fig. 6. (a) Mean Square Error of ML models. (b) Training time of ML models.
Besides this analysis, the training time for a single control state $M$ is also shown in Fig. 6(b) for all the proposed ML models. The training time shows the reverse order with respect to the MSE analysis: LR<DTR<RFR<BTR<DNN. The proposed DNN takes a longer time during training than the other suggested models due to its internal hidden layers containing numerous neuron units. The RFR and BTR take a slightly longer time than the LR and DTR because of their dependency on the bagging and boosting techniques. The proposed models are simulated on a workstation with 32 GB of 2133 MHz RAM and an Intel Core i7 6700 3.4 GHz CPU; the proposed simulations are performed without considering quantum computing. Typically, in all the scenarios where data-driven models are exploited, the main objective of using these learning methods is their high accuracy, weighed against the training time: the ML-based models only need an initial training phase that takes a long time, while the testing can be done in real-time once the models are adequately trained.
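For completeness, the per-configuration loss of Eq. (3) can be written as a small helper; the arrays below are the ones produced by the DNN sketch above.

```python
import numpy as np

def control_state_mse(pred, actual):
    """Eq. (3): average the squared control-bit error over the M elements of each
    configuration, then over the n test realizations."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean(np.mean((pred - actual) ** 2, axis=1))

print(control_state_mse(pred_bits, y_te))
```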
To this aim, we selected the DNN as the preferred ML model for further investigation. In the rest of the paper, all results are obtained using the proposed parallel DNN approach. To further verify our selection, we observed the complete trend of the loss function (i.e., the MSE) with respect to the training steps for the proposed DNN, shown in Fig. 7(a) for the single case of the Beneš 8x8; similar behavior is observed for all the other considered switching architectures.
Fig. 7. (a) DNN loss function vs. the training steps for the Beneš 8x8 architecture. (b) Percentage of correct predictions vs. normalized training dataset size. The normalization is performed with respect to the total generated dataset dimension for the considered N $\times$ N fabric, see data in Table 1. (c) Percentage of correct predictions vs. hidden layer size for the considered switching configurations. (d) Single switch training time vs. hidden layer size.
The first assessment we performed concerns the dependency of the prediction accuracy on the dimension of the training dataset and on the size of the hidden layers, shown in Fig. 7(b) and Fig. 7(c). In Fig. 7(b), the effect of increasing the training dataset size is described: the trend reveals that the prediction capability of the proposed DNN improves as the training dataset size increases. Likewise, in Fig. 7(c), the effect of increasing the number of neurons per hidden layer is shown: the prediction ability of the DNN improves when increasing the hidden layer size, until a diminishing or constant trend is encountered. The lowest number of required neurons per layer depends on the structure under examination; the values selected for the following analysis are listed in Table 3. Finally, the correct prediction percentage for the optimized DNN is summarized for the Beneš (8x8, 10x10 and 15x15) and Spanke-Beneš (8x8, 10x10) networks, along with the 12x12 HCROS, in Table 3. For the Beneš networks, we notice an excellent accuracy level (>96%), although with reduced prediction efficiency when expanding $N$: correct predictions reach 100%, 99.72%, and 96.25% for $N$ equal to 8, 10, and 15, respectively. Similar results were obtained for the Spanke-Beneš and HCROS networks: in these architectures as well, we observe a high level of accuracy (97.47%, 96.51%, and 97.83% for the Spanke-Beneš 8x8, Spanke-Beneš 10x10 and 12x12 HCROS, respectively).
Table 3. Summary of ML prediction results
Algorithm 1. Heuristic to correct single-ring errors
After assessing the accuracy, we studied the distribution of errors in the predicted states, as shown in Fig. 8, where the number of errors in the prediction of each switching element's control state over the test set is encoded in the color heatmap. A non-uniform distribution is observed, in which errors are clustered on a small number of switch elements of the total fabric. Based on this observation, we analyzed the number of switch elements for which the prediction fails. The results in Table 3 show that, in all the considered architecture instances, a single error in one of the switch controls is responsible for the incorrect routing in most of the wrong predictions. Observing this phenomenon, we formulate a simple heuristic that can further improve the DNN prediction performance (see Algorithm 1). The heuristic we suggest requires several device properties, such as the topological graph ($\mathcal {G}$), the ${M}$ control signals, and the ${N}$ input/output signals; a minimal sketch of the procedure is given below.
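The following sketch mirrors the single-error correction idea of Algorithm 1, reusing the route and spanke_benes_stages helpers from the dataset sketch in Sec. 3; the exact inputs and interfaces of Algorithm 1 are therefore an assumption here. It flips one switching element at a time and keeps a flip only if the resulting output permutation matches the requested one.

```python
def correct_single_errors(stages, predicted_ctrl, target_permutation):
    """Single-switch correction heuristic: if the predicted control vector does not
    realize the requested output permutation, try flipping each of the M elements
    in turn (at most M extra routings) and accept the first flip that fixes it."""
    ctrl = list(predicted_ctrl)
    if route(stages, ctrl) == list(target_permutation):
        return ctrl, True                    # prediction already correct
    for m in range(len(ctrl)):
        ctrl[m] ^= 1                         # toggle element m
        if route(stages, ctrl) == list(target_permutation):
            return ctrl, True                # single-bit error corrected
        ctrl[m] ^= 1                         # undo and try the next element
    return ctrl, False                       # more than one element was wrong

# Example: correct the thresholded DNN predictions on the test set of the 8x8 fabric
fixed = [correct_single_errors(spanke_benes_stages(8), p, t)[1]
         for p, t in zip(pred_bits.tolist(), X_te.astype(int).tolist())]
print("accuracy with heuristic:", sum(fixed) / len(fixed))
```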
Additionally, the test set ($\mathcal {TS}$) and the ML-predicted set ($\mathcal {PS}$) are loaded as inputs. The proposed heuristic corrects single-switch errors by switching the state of one element at a time while comparing the resulting output sequence against the target output permutation of wavelengths. This heuristic requires only ${M}$ iterations, a reasonably small number, so it can be considered feasible for real-time operation. Besides this, it is also topologically and technologically agnostic: for all the considered architectures, the DNN assisted by the heuristic reaches an accuracy of 100%.
Fig. 8. Heatmap showing normalized error in prediction of control signals using DNN.
7. Conclusions
In this work we propose and analyze several data-driven ML techniques to control and manage photonic switching systems in the framework of SDN-enabled optical networks. The proposed scheme demonstrates that the ML-based softwarized system is both topologically and technologically agnostic and can be operated in real-time. The proposed ML approaches can effectively determine the control states for a generic $N{\times }N$ photonic switch without any required knowledge of the structure topology. The presented ML framework is trained and tested treating the N$\times$N photonic switch as a black box: the ML-based models only need a sufficient amount of training instances to develop the model cognition, without taking into consideration the device's internal architecture. The techniques we propose are also scalable to larger input sizes $N$, since a high level of accuracy can be reached with limited-size datasets. Moreover, we have shown that the DNN achieves better accuracy than the other proposed models. Finally, a simple heuristic approach can increase the prediction accuracy of the DNN to 100%, with a small increase of the computational cost, for the considered switching architectures.
References
1. Cisco, "Cisco Visual Networking Index: Forecast and Trends, 2017–2022," Tech. rep., Cisco (2017).
2. C.-S. Li and W. Liao, "Software defined networks," IEEE Commun. Mag. 51(2), 113 (2013). [CrossRef]
3. M. Jinno, T. Ohara, Y. Sone, A. Hirano, O. Ishida, and M. Tomizawa, "Elastic and adaptive optical networks: possible adoption scenarios and future standardization aspects," IEEE Commun. Mag. 49(10), 164–172 (2011). [CrossRef]
4. A. Ferrari, M. Filer, K. Balasubramanian, Y. Yin, E. Le Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "Gnpy: an open source application for physical layer aware open optical networks," J. Opt. Commun. Netw. 12(6), C31–C40 (2020). [CrossRef]
5. V. Curri, "Software-defined WDM optical transport in disaggregated open optical networks," in 2020 22nd International Conference on Transparent Optical Networks (ICTON), (2020), pp. 1–4.
6. L. Velasco, A. Sgambelluri, R. Casellas, L. Gifre, J.-L. Izquierdo-Zaragoza, F. Fresi, F. Paolucci, R. Martínez, and E. Riccardi, "Building autonomic optical whitebox-based networks," J. Lightwave Technol. 36(15), 3097–3104 (2018). [CrossRef]
7. J. Kim, C. J. Nuzman, B. Kumar, D. F. Lieuwen, J. S. Kraus, A. Weiss, C. P. Lichtenwalner, A. R. Papazian, R. E. Frahm, N. R. Basavanhally, D. A. Ramsey, V. A. Aksyuk, F. Pardo, M. E. Simon, V. Lifton, H. B. Chan, M. Haueis, A. Gasparyan, H. R. Shea, S. Arney, C. A. Bolle, P. R. Kolodner, R. Ryf, D. T. Neilson, and J. V. Gates, "1100 x 1100 port MEMS-based optical crossconnect with 4-dB maximum loss," IEEE Photon. Technol. Lett. 15(11), 1537–1539 (2003). [CrossRef]
8. A. N. Dames, "Beam steering optical switch," (2008).
US Patent 7, 389, 016. 9. K. Suzuki, R. Konoike, J. Hasegawa, S. Suda, H. Matsuura, K. Ikeda, S. Namiki, and H. Kawashima, "Low-insertion-loss and power-efficient 32× 32 silicon photonics switch with extremely high-δ silica PLC connector," JLT 37, 116–122 (2019). [CrossRef] 10. Q. Cheng, L. Y. Dai, N. C. Abrams, Y.-H. Hung, P. E. Morrissey, M. Glick, P. O'Brien, and K. Bergman, "Ultralow-crosstalk, strictly non-blocking microring-based optical switch," Photonics Res. 7(2), 155–161 (2019). [CrossRef] 11. D. Opferman and N. Tsao-Wu, "On a class of rearrangeable switching networks part I: Control algorithm," The Bell Syst. Tech. J. 50(5), 1579–1600 (1971). [CrossRef] 12. Y. Huang, Q. Cheng, Y.-H. Hung, H. Guan, X. Meng, A. Novack, M. Streshinsky, M. Hochberg, and K. Bergman, "Multi-stage 8 × 8 silicon photonic switch based on dual-microring switching elements," J. Lightwave Technol. 38(2), 194–201 (2020). [CrossRef] 13. M. Ding, Q. Cheng, A. Wonfor, R. V. Penty, and I. H. White, "Routing algorithm to optimize loss and IPDR for rearrangeably non-blocking integrated optical switches," in CLEO, (OSA, 2015), pp. JTh2A–60. 14. Y. Qian, H. Mehrvar, H. Ma, X. Yang, K. Zhu, H. Fu, D. Geng, D. Goodwill, P. Dumais, and E. Bernier, "Crosstalk optimization in low extinction-ratio switch fabrics," in OFC, (OSA, 2014), pp. Th1I–4. 15. Q. Cheng, Y. Huang, H. Yang, M. Bahadori, N. Abrams, X. Meng, M. Glick, Y. Liu, M. Hochberg, and K. Bergman, "Silicon photonic switch topologies and routing strategies for disaggregated data centers," IEEE J. Select. Topics Quantum Electron. 26(2), 1–10 (2020). [CrossRef] 16. W. Gao, L. Lu, L. Zhou, and J. Chen, "Automatic calibration of silicon ring-based optical switch powered by machine learning," Opt. Express 28(7), 10438–10455 (2020). [CrossRef] 17. H. Zhou, Y. Zhao, X. Wang, D. Gao, J. Dong, and X. Zhang, "Self-learning photonic signal processor with an optical neural network chip," arXiv preprint arXiv:1902.07318 (2019). 18. R. Proietti, X. Chen, Y. Shang, and S. J. B. Yoo, "Self-driving reconfiguration of data center networks by deep reinforcement learning and silicon photonic flex-lion switches," in 2020 IPC, (2020), pp. 1–2. 19. S. Salman, C. Streiffer, H. Chen, T. Benson, and A. Kadav, "Deepconf: Automating data center network topologies management with machine learning," in Proceedings of the 2018 Workshop on Network Meets AI & ML, (Association for Computing Machinery, New York, NY, USA, 2018), NetAI'18, p. 8–14. 20. I. Khan, L. Tunesi, M. Chalony, E. Ghillino, M. U. Masood, J. Patel, P. Bardella, A. Carena, and V. Curri, "Machine-learning-aided abstraction of photonic integrated circuits in software-defined optical transport," in Next-Generation Optical Communication: Components, Sub-Systems, and Systems X, vol. 11713G. Li and K. Nakajima, eds., International Society for Optics and Photonics (SPIE, 2021), pp. 146–151. 21. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, "Tensorflow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), (USENIX Association, Savannah, GA, 2016), pp. 265–283. 22. G. Hackeling, Mastering Machine Learning with scikit-learn (Packt Publishing Ltd, 2017). 23. M. R. Yahya, N. Wu, G. Yan, T. Ahmed, J. Zhang, and Y. 
Zhang, "Honeycomb ROS: A 6 × 6 non-blocking optical switch with optimized reconfiguration for ONoCs," Electronics 8(8), 844 (2019). [CrossRef] 24. L. Rokach and O. Z. Maimon, Data mining with decision trees: theory and applications, vol. 69 (WS, 2008). 25. L. Breiman, "Random forests," Mach. learning 45(1), 5–32 (2001). [CrossRef] 26. J. Elith, J. R. Leathwick, and T. Hastie, "A working guide to boosted regression trees," J. Anim. Ecol. 77(4), 802–813 (2008). [CrossRef] 27. I. Khan, M. Bilal, and V. Curri, "Assessment of cross-train machine learning techniques for qot-estimation in agnostic optical networks," OSA Continuum 3(10), 2690–2706 (2020). [CrossRef] 28. J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," JMLR 12, 2121–2159 (2011). 29. C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, "Activation functions: Comparison of trends in practice and research for deep learning," arXiv preprint arXiv:1811.03378 (2018). Cisco, "Cisco Visual Networking Index: Forecast and Trends, 2017–2022," Tech. rep., Cisco (2017). C.-S. Li and W. Liao, "Software defined networks," IEEE Commun. Mag. 51(2), 113 (2013). M. Jinno, T. Ohara, Y. Sone, A. Hirano, O. Ishida, and M. Tomizawa, "Elastic and adaptive optical networks: possible adoption scenarios and future standardization aspects," IEEE Commun. Mag. 49(10), 164–172 (2011). A. Ferrari, M. Filer, K. Balasubramanian, Y. Yin, E. Le Rouzic, J. Kundrát, G. Grammel, G. Galimberti, and V. Curri, "Gnpy: an open source application for physical layer aware open optical networks," J. Opt. Commun. Netw. 12(6), C31–C40 (2020). V. Curri, "Software-defined WDM optical transport in disaggregated open optical networks," in 2020 22nd International Conference on Transparent Optical Networks (ICTON), (2020), pp. 1–4. L. Velasco, A. Sgambelluri, R. Casellas, L. Gifre, J.-L. Izquierdo-Zaragoza, F. Fresi, F. Paolucci, R. Martínez, and E. Riccardi, "Building autonomic optical whitebox-based networks," J. Lightwave Technol. 36(15), 3097–3104 (2018). J. Kim, C. J. Nuzman, B. Kumar, D. F. Lieuwen, J. S. Kraus, A. Weiss, C. P. Lichtenwalner, A. R. Papazian, R. E. Frahm, N. R. Basavanhally, D. A. Ramsey, V. A. Aksyuk, F. Pardo, M. E. Simon, V. Lifton, H. B. Chan, M. Haueis, A. Gasparyan, H. R. Shea, S. Arney, C. A. Bolle, P. R. Kolodner, R. Ryf, D. T. Neilson, and J. V. Gates, "1100 x 1100 port MEMS-based optical crossconnect with 4-dB maximum loss," IEEE Photon. Technol. Lett. 15(11), 1537–1539 (2003). A. N. Dames, "Beam steering optical switch," (2008). US Patent 7, 389, 016. K. Suzuki, R. Konoike, J. Hasegawa, S. Suda, H. Matsuura, K. Ikeda, S. Namiki, and H. Kawashima, "Low-insertion-loss and power-efficient 32× 32 silicon photonics switch with extremely high-δ silica PLC connector," JLT 37, 116–122 (2019). Q. Cheng, L. Y. Dai, N. C. Abrams, Y.-H. Hung, P. E. Morrissey, M. Glick, P. O'Brien, and K. Bergman, "Ultralow-crosstalk, strictly non-blocking microring-based optical switch," Photonics Res. 7(2), 155–161 (2019). D. Opferman and N. Tsao-Wu, "On a class of rearrangeable switching networks part I: Control algorithm," The Bell Syst. Tech. J. 50(5), 1579–1600 (1971). Y. Huang, Q. Cheng, Y.-H. Hung, H. Guan, X. Meng, A. Novack, M. Streshinsky, M. Hochberg, and K. Bergman, "Multi-stage 8 × 8 silicon photonic switch based on dual-microring switching elements," J. Lightwave Technol. 38(2), 194–201 (2020). M. Ding, Q. Cheng, A. Wonfor, R. V. Penty, and I. H. 
White, "Routing algorithm to optimize loss and IPDR for rearrangeably non-blocking integrated optical switches," in CLEO, (OSA, 2015), pp. JTh2A–60. Y. Qian, H. Mehrvar, H. Ma, X. Yang, K. Zhu, H. Fu, D. Geng, D. Goodwill, P. Dumais, and E. Bernier, "Crosstalk optimization in low extinction-ratio switch fabrics," in OFC, (OSA, 2014), pp. Th1I–4. Q. Cheng, Y. Huang, H. Yang, M. Bahadori, N. Abrams, X. Meng, M. Glick, Y. Liu, M. Hochberg, and K. Bergman, "Silicon photonic switch topologies and routing strategies for disaggregated data centers," IEEE J. Select. Topics Quantum Electron. 26(2), 1–10 (2020). W. Gao, L. Lu, L. Zhou, and J. Chen, "Automatic calibration of silicon ring-based optical switch powered by machine learning," Opt. Express 28(7), 10438–10455 (2020). H. Zhou, Y. Zhao, X. Wang, D. Gao, J. Dong, and X. Zhang, "Self-learning photonic signal processor with an optical neural network chip," arXiv preprint arXiv:1902.07318 (2019). R. Proietti, X. Chen, Y. Shang, and S. J. B. Yoo, "Self-driving reconfiguration of data center networks by deep reinforcement learning and silicon photonic flex-lion switches," in 2020 IPC, (2020), pp. 1–2. S. Salman, C. Streiffer, H. Chen, T. Benson, and A. Kadav, "Deepconf: Automating data center network topologies management with machine learning," in Proceedings of the 2018 Workshop on Network Meets AI & ML, (Association for Computing Machinery, New York, NY, USA, 2018), NetAI'18, p. 8–14. I. Khan, L. Tunesi, M. Chalony, E. Ghillino, M. U. Masood, J. Patel, P. Bardella, A. Carena, and V. Curri, "Machine-learning-aided abstraction of photonic integrated circuits in software-defined optical transport," in Next-Generation Optical Communication: Components, Sub-Systems, and Systems X, vol. 11713G. Li and K. Nakajima, eds., International Society for Optics and Photonics (SPIE, 2021), pp. 146–151. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, "Tensorflow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), (USENIX Association, Savannah, GA, 2016), pp. 265–283. G. Hackeling, Mastering Machine Learning with scikit-learn (Packt Publishing Ltd, 2017). M. R. Yahya, N. Wu, G. Yan, T. Ahmed, J. Zhang, and Y. Zhang, "Honeycomb ROS: A 6 × 6 non-blocking optical switch with optimized reconfiguration for ONoCs," Electronics 8(8), 844 (2019). L. Rokach and O. Z. Maimon, Data mining with decision trees: theory and applications, vol. 69 (WS, 2008). L. Breiman, "Random forests," Mach. learning 45(1), 5–32 (2001). J. Elith, J. R. Leathwick, and T. Hastie, "A working guide to boosted regression trees," J. Anim. Ecol. 77(4), 802–813 (2008). I. Khan, M. Bilal, and V. Curri, "Assessment of cross-train machine learning techniques for qot-estimation in agnostic optical networks," OSA Continuum 3(10), 2690–2706 (2020). J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," JMLR 12, 2121–2159 (2011). C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, "Activation functions: Comparison of trends in practice and research for deep learning," arXiv preprint arXiv:1811.03378 (2018). Abadi, M. Abrams, N. Abrams, N. C. Ahmed, T. Aksyuk, V. A. Arney, S. Bahadori, M. Balasubramanian, K. Bardella, P. Barham, P. Basavanhally, N. R. Benson, T. Bergman, K. Bernier, E. 
Table 1. Dataset statistics
Network type (size N×N): Beneš 8×8 | Beneš 10×10 | HCROS 12×12 | Beneš 15×15 | Spanke-Beneš 8×8 | Spanke-Beneš 10×10
Permutations (N!): 40,320 | 3,628,800 | 479,001,600 | 1.307×10^12 | 40,320 | 3,628,800
Switches (M): 20 | 26 | 36 | 49 | 28 | 45
Combinations (2^M): 1,048,576 | 67,108,864 | 68×10^9 | 562×10^12 | 268,435,456 | 35×10^12
Dataset size: 100,000 | 300,000 | 300,000 | 1,000,000 | 300,000 | 1,000,000

Table 2. Machine learning models detail
Decision Tree Regressor (scikit-learn): min samples leaf 4, max depth 100
Random Forest Regressor (scikit-learn): method 'Bagging', min samples leaf 4
Boosted Tree Regressor (TensorFlow): method 'Gradient Boosting', learning rate 0.01, L1 regularization 0.001
Linear Regressor (TensorFlow): equation linear, training steps 1000, method Ordinary Least Squares
Deep Neural Network (TensorFlow): hidden layers 3, Keras optimizer 'ADAGRAD', activation function 'ReLU'

Table 3. Summary of ML prediction results (columns as in Table 1)
Neurons per hidden layer: 15 | 35 | 35 | 35 | 35 | 45
Accuracy (no heuristic): 100% | 99.72% | 97.83% | 96.25% | 97.47% | 96.51%
Single switch control error: 0% | 0.28% | 2.17% | 3.75% | 2.53% | 3.49%
Multiple switch control error: 0% | 0% | 0% | 0% | 0% | 0%
Accuracy (with heuristic): 100% | 100% | 100% | 100% | 100% | 100%
How would you calculate a Hohmann transfer going towards the sun?
If you were going from a lower orbit to a higher orbit, such as an Earth-Mars transfer, you would use the periapsis of the transfer orbit in the vis-viva equation to calculate the delta V. But if you were going from Mars back to Earth, would you use the apoapsis to calculate the delta V, because you are trying to slow the spacecraft down when going around the sun?
orbit orbital-mechanics orbital-elements – Jack Pryde
Are you assuming that at the destination, the transferring spacecraft will aerobrake, flyby, or impact, instead of using its engines to brake into orbit or land? – notovny
I would just like to know the velocity to get into the transfer orbit. Don't need to know the circularisation DeltaV. – Jack Pryde
Going from Mars to Earth, you start at the aphelion of the Hohmann ellipse & finish at its perihelion. The Wikipedia article explains the delta V calculations, and how they're derived from the vis viva equation. But if there's something specific you need help with, please edit that into your question. – PM 2Ring
Also the main point I want to know is if there are any differences in calculations between going away from and going towards the sun in a transfer. What replaces 2/r in the vis-viva equation because the burn is starting from apoapsis not periapsis? – Jack Pryde
A Hohmann transfer requires two burns. You already appear to know how to calculate the burns going out. For coming in, do exactly the same calculations, just reverse the order and signs on the burns. – Loren Pechtel
How would I reverse the order of the burns? Is the equation the same or is it different? As I said in a previous comment, I only want to know the DeltaV required to get into the transfer orbit and don't need to know about the second circularisation burn. – Jack Pryde
Figure out the burns to move from the lower orbit to the higher orbit. To move from the higher orbit to the lower orbit you do the exact reverse. Point the engine the other way, do the second burn first. – Loren Pechtel
Newtonian mechanics in general works the same going forwards and backwards, this comes in handy from time to time. However, it does not work when friction or other irreversible processes are involved (e.g. atmospheric reentry).
A burn is required to make up a difference in velocity. In the case you have already figured out, the difference in velocity is between the velocity of the lower circular orbit and the periapsis velocity of the transfer orbit (which you get from vis-viva). In this case, the difference in velocity is between the higher circular orbit and the apoapsis velocity of the transfer orbit (which you also get from vis-viva). Quick warning: what you are currently calculating is a Hohmann transfer between two orbits at Earth-distance and Mars-distance from the Sun. That is, escaping the gravity of Earth and Mars is not accounted for! For proper interplanetary transfers, the transfer orbit insertion burn and planetary escape burns are combined into a single manoeuvre. – SE - stop firing the good guys
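As a numeric illustration of the two answers above, here is a small Python sketch; μ is Earth's standard gravitational parameter (an assumed standard value) and the radii match the worked example in the next answer, so this is the Earth-orbit analogue rather than the heliocentric Mars-Earth case.

```python
from math import sqrt

MU_EARTH = 398_600.4418   # km^3/s^2, standard gravitational parameter of Earth

def vis_viva(mu, r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return sqrt(mu * (2.0 / r - 1.0 / a))

def hohmann_burns(mu, r_from, r_to):
    """Signed delta-v of the two Hohmann burns, in the order they are performed.
    Positive = prograde (speed up), negative = retrograde (slow down)."""
    a_transfer = 0.5 * (r_from + r_to)
    dv1 = vis_viva(mu, r_from, a_transfer) - vis_viva(mu, r_from, r_from)  # enter transfer orbit
    dv2 = vis_viva(mu, r_to, r_to) - vis_viva(mu, r_to, a_transfer)        # circularize at target
    return dv1, dv2

print(hohmann_burns(MU_EARTH, 7_000.0, 20_000.0))   # ~(+1.639, +1.249) km/s, going up
print(hohmann_burns(MU_EARTH, 20_000.0, 7_000.0))   # ~(-1.249, -1.639) km/s, going down
```

Going down, the same two magnitudes appear in reverse order with reversed signs, which is exactly the point of the accepted reasoning above.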
I'll adapt ai-solutions.com/_freeflyeruniversityguide/hohmann_transfer.htm: I'll go from 20,000 km to 7,000 km. First find the velocity of the starting orbit: $$\begin{align} v_{starting} &= \sqrt{\mu \left({2\over r}-{1 \over a}\right)}\\ &= \sqrt{\mu \left({2\over 20000km}-{1 \over 20000km}\right)}\\ &=4.464 km/s\\ \end{align} $$ (Unsurprisingly, this is the velocity of the end orbit in the original.) Next find the semi-major axis of the transfer orbit: $$ \begin{align} a &= {a_{from} + a_{to}\over 2}\\ &={a_{starting} + a_{transfer}\over2}\\ &={7000km+20000km\over2}\\ &=13,500 km\\ \end{align} $$ (Unsurprisingly, the semi-major axis of the transfer orbit is the same as in the original example.) Now, we can find the velocity at apoapsis (because we're going high to low) of this transfer orbit. In this case r = 20,000 km, and a = 13,500 km. We plug these into the Vis-Viva equation to get: $$\begin{align} v_{transfer\_apo}&=\sqrt{\mu \left({2\over 20000km}-{1 \over 13500km}\right)}\\ &=3.215 km/s \end{align}$$ (NB: unsurprisingly, this is the same value for the transfer orbit apoapsis as calculated in the original version.) Then, we can calculate the $\Delta v$ of the first maneuver: $$\begin{align}\\ \Delta v_1 &= v_{to} - v_{from}\\ &= v_{transfer\_apo} - v_{starting}\\ &= 3.215 km/s - 4.464 km/s\\ &= -1.249 km/s\\ \end{align}$$ (As others have indicated, this is the same as $\Delta v_2$ from the original example, but in the opposite direction.) This first burn will put our spacecraft into its transfer orbit. Next, we need to calculate the speed at the transfer orbit's periapsis (because we're going from high to low). For this calculation, r = 7,000 km, and a = 13,500 km. We plug these into the Vis-Viva equation to get: $$\begin{align} v_{transfer\_peri}&=\sqrt{\mu \left({2\over 7000km}-{1 \over 13500km}\right)}\\ &= 9.185 \text{km/s} \end{align}$$ (Unsurprisingly, this is the same $v_{transfer\_peri}$ as in the original example.) Now, we must calculate the velocity of the target orbit. For the variables, r = 7,000 km, and a = 7,000 km. $$\begin{align} v_{ending} &= \sqrt{\mu \left({2\over r}-{1 \over a}\right)}\\ &= \sqrt{\mu \left({2\over 7000km}-{1 \over 7000km}\right)}\\ &= 7.546 km/s\\ \end{align} $$ (Unsurprisingly, this is the same as the parking orbit in the original example.) Finally, we need to work out the second burn: $$\begin{align}\\ \Delta v_2 &= v_{to} - v_{from}\\ &= v_{ending} - v_{transfer\_peri} \\ &= 7.546 km/s - 9.185 km/s \\ &= -1.639 km/s \\ \end{align}$$ (As others have indicated, this is the same as $\Delta v_1$ from the original example, but in the opposite direction.)
I believe I know what I have done wrong. I have used a different equation given on this website, ai-solutions.com/_freeflyeruniversityguide/hohmann_transfer.htm, which is simpler than the one on Wikipedia but only works when going up, e.g. going to a geosynchronous orbit from a parking orbit. That equation, I think, doesn't work in reverse, going from a higher orbit to a lower orbit. The Wikipedia version allows for the swapping of r1 and r2 to change between going to higher or lower orbits.
So long as you don't touch an atmosphere it's reversible.
The equations on wikipedia will give a negative value for delta-V if the destination radius is smaller than the source radius.
This is something to watch out for if you're chaining maneuvers together algebraically/in code without looking at the results from each maneuver. I'd just take the absolute value of the equation and be done with it. – Ingolifs
Voltage Drops Are Falling on My Head: Operating Points, Linearization, Temperature Coefficients, and Thermal Runaway
Jason Sachs ● January 19, 2015
Today's topic was originally going to be called "Small Changes Caused by Various Things", because I couldn't think of a better title. Then I changed the title. This one's not much better, though. Sorry. What I had in mind was the Shockley diode equation and some other vaguely related subjects.
My Teachers Lied to Me
My introductory circuits class in college included a section about diodes and transistors. The ideal diode equation is this: $$\begin{array}{ll} V = 0 & \text{if } I > 0 \cr I = 0 & \text{if } V < 0 \end{array} $$ In other words, a diode acts like a short circuit with positive current, to prevent any voltage drop, and it acts like an open circuit with negative voltage, to prevent any current flow. But that's not realistic. So the next best thing is that we just assume there's a diode drop of 0.7V or so: $$ \begin{array}{ll} V = 0.7 & \text{if } I > 0 \cr I = 0 & \text{if } V < 0.7 \end{array} $$ But that's not much better. So then we learned that the p-n junction equation, which applies to things like diodes and npn transistors and solar cells, has an exponential relationship: $$ V = \frac{kT}{q} \ln \frac{I}{I_s} $$ where \( k \) is Boltzmann's constant (\( k \approx 1.380 \times 10^{-23} J/K \)), \( T \) is the temperature in degrees Kelvin, \( q \) is the charge of an electron (\( q \approx 1.602 \times 10^{-19} C \)), and \( I_s \) is some characteristic current of the junction in question. What's "characteristic" here is really the current density, so for diodes and transistors \( I_s \) increases linearly with the area of the junction. Double the area and you double \( I_s \). At 25°C, \( \frac{kT}{q} \approx 25.7 \)mV, so when you hear "26 millivolts" batted around a lot in semiconductor theory, that's where it comes from. And actually, that's not quite true either; in reality there's a "+1" in the equation: $$ V = \frac{kT}{q} \ln \left(\frac{I}{I_s} + 1\right) $$ But for all practical purposes you can forget about the "+1", since for real devices, \( I_s \) tends to be in the sub-picoampere range. So we're left with \( V = \frac{kT}{q} \ln I/I_s \): at room temperature, double the current, and you increase the junction voltage by about 18 millivolts; increase the current by a factor of 10, and you increase the junction voltage by about 59 millivolts. Great. That's more or less what I've been using in my mental model of diodes and bipolar transistors for the last 20 years. This gives you the equation for \( V_{BE} \) in terms of base current, or if you fold the gain β into \( I_s \), in terms of collector current. Except that's not true either. My teachers lied to me! In doing research for this article, I found they left out a factor of n (or η if you like Greek letters): $$ V = \frac{nkT}{q} \ln \left(\frac{I}{I_s} + 1\right) $$ This factor \( n \) is called the ideality factor, and it's apparently between 1 and 2 for most devices. (Though if you've got your head in the sand, and don't want to consider devices that have areas of operation with \( n > 1 \), then by definition \( n \approx 1 \) and everything is hunky-dory.) And that's not true either, because there are parasitic series resistances and other odd effects, which you learn about in more advanced areas of study. It's not unusual for teachers to lie to you. They have to!
Everything we use in scientific modeling is an approximation; reality has all these ugly factors that would give you information overload and send you running away, if you heard about them all at once. For example, we think of dice as cubes, but really they have the edges and corners rounded off, with little indentations for the pips, and the surfaces aren't perfectly parallel or perfectly smooth due to manufacturing limitations, and anyway you have quantum physics coming into the mix telling you that the atoms are moving around unpredictably, doing whatever funny stuff they do in quantumland. So if you want to know more about why p-n junctions work the way they do, you take courses on device physics, whereas if you just want things to get done accurately, you deal with these effects empirically, like measure the ideality factor (Microchip has a good appnote on usage of diode-connected transistors; many of the commercially-available 2N3904 transistors, if used in a diode-connected manner, have ideality factors around the 1.004-1.005 range) and just deal with it. For the rest of us, just forget about \( n \) and pretend it's equal to 1. But that's not what this article is about.
Small-Signal Analysis: Operating Points and Linearization
So let's say that you have an electrical circuit put together, and all the currents and voltages are constant, and everything's happy. You measure it and diligently figure out what all those currents and voltages are. That's called an operating point. Now you change one of the input currents or voltages, and add a really small signal \( x(t) \) to it, and measure one of the other signals \( y(t) \) and see how it relates to \( x(t) \). This is called small-signal analysis, and it generally relies on the assumption that for small changes in any of the variables or parameters, systems are linear in a small region around any particular operating point. It's the same technique used to define derivatives: it's just the limit of the ratio of one variable to another when deviations are small. This idea is also called linearization: for some vector of inputs \( X \) and vector of outputs \( Y \) around any given operating point \( (X_0, Y_0) \), we can approximate the outputs by \( Y \approx Y_0 + J(X_0, Y_0) \times (X - X_0) \), where J is the Jacobian, which is just a big matrix of partial derivatives \( J_{ij} = \partial y_i / \partial x_j \). Blah blah blah. It's hard to look at this abstract stuff and see what's going on, so let's look at a more concrete and useful example. Let's say that we have a diode with a 0.70V drop when it conducts 1 mA from a current source. In parallel with the diode is a 1μF capacitor, and we give the capacitor a little whack by discharging it from, say, 0.70V to 0.69V and we want to know the dynamics for it to recover. What does the voltage look like? Aside from just doing it, or running a simulation in SPICE, the linearization approach says, Hmm, well, I have this approximate equation — lemme hear it again: $$ V = \frac{kT}{q} \ln \frac{I}{I_s} $$ Yeah! — and we'll take the derivative (in case you don't remember your calculus, \( \ln a/b = \ln a - \ln b \) and \( \frac{d}{dx} \ln x = \frac{1}{x} \)), and get $$ \frac{\partial V}{\partial I} = \frac{kT}{q} \times \frac{1}{I} $$ At room temperature, this is approximately 26mV divided by 1mA = 26Ω. That's it! No \( I_s \) in the equation, not even the diode drop V. It only depends on \( \frac{kT}{q} \) and the diode current: really simple.
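If you want to sanity-check these numbers yourself, here's a quick sketch in Python (standard physical constants, ideality factor of 1 assumed):

```python
from math import log

K_BOLTZMANN = 1.380649e-23   # J/K
Q_ELECTRON  = 1.602177e-19   # C

def thermal_voltage(temp_c=25.0):
    """kT/q in volts."""
    return K_BOLTZMANN * (temp_c + 273.15) / Q_ELECTRON

vt = thermal_voltage(25.0)
print(f"kT/q at 25 C        : {vt*1e3:.1f} mV")          # ~25.7 mV
print(f"Delta V per doubling: {vt*log(2)*1e3:.1f} mV")    # ~17.8 mV ("about 18")
print(f"Delta V per decade  : {vt*log(10)*1e3:.1f} mV")   # ~59.2 mV ("about 59")

def incremental_resistance(i_amps, temp_c=25.0):
    """Small-signal resistance dV/dI = (kT/q)/I of an ideal (n = 1) p-n junction."""
    return thermal_voltage(temp_c) / i_amps

print(f"r at 1 mA           : {incremental_resistance(1e-3):.1f} ohms")   # ~25.7 ohms
```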
That's the incremental resistance of any p-n junction carrying 1mA of current, if it comes close to the Shockley equation with ideality factor of 1. So what's going to happen is the capacitor will decay back to its final voltage with a time constant of about 26Ω × 1μF = 26μs. If we had 5mA flowing through the diode instead of 1mA, the incremental resistance would be 5.2Ω, and if we had 200μA rather than 1mA, the incremental resistance would be 130Ω. Got it? In npn transistors we can do the same thing: the collector current \( I_C \) is an exponential function of the base-emitter voltage \( V_{BE} = \frac{kT}{q} \ln \frac{I_C}{I_s} \); if we had an amplifier and were modulating the base-emitter voltage, the collector current variations can be considered \( \Delta I_C = g_m \Delta V_{BE} \), where \( g_m \) is called the transconductance, and it's just equal to \( I_C / \frac{kT}{q} \). The higher the collector current, the higher the transconductance. This kind of analysis illustrates one important relationship in bipolar transistors. If you're willing to bump up the current in the transistor by a factor of K, the transconductance also goes up by a factor of K, whereas the parasitic capacitances generally don't change. So if you run through the equations, you'll find a circuit time constant proportional to \( C/g_m \), and the time constant will go down by a factor of K. In other words, there's a strongly-correlated relationship between circuit speed and quiescent power. Up to the point when circuit dynamics are determined by other factors, if you're willing to double the power, you can just about double the speed. If you want lower power, you have to tolerate a slower response. This comes into play with op-amps as well; the micropower op-amps generally have a much smaller gain-bandwidth product than op-amps with higher quiescent current. Here's another example: let's say we have a differential pair with 2mA of current, and the voltage across the differential pair is 0. Oh, and everything is nicely at room temperature, so \( \frac{kT}{q} \approx 26{\rm mV} \). If the transistors are perfectly matched, each one has 1mA flowing, and some voltage across the \( V_{BE} \) junction. Let's say it's 0.7V. Now let's apply 1mV across the differential pair: one transistor will have \( V_{BE} \approx 0.6995V \) and the other will have \( V_{BE} \approx 0.7005V \). If you run through the math, this raises the current in one transistor by a factor of \( e^{0.5{\rm mV} / 26 {\rm mV}} \approx 1.0194 \) and the other will decrease by about the same factor. The difference between the currents is approximately 38.5μA. That's what we get if we solve the exponential equations. Or we could use a linearization approach. Look at each of the transistors: they each have 1mA flowing through them, and therefore the transconductance \( g_m \approx 1/26 \Omega \), so a change of 0.5mV × \( g_m \) in each of them is \( 0.5{\rm mV} \times 1/26 \Omega = 19.2\mu A \), so one goes up by about 19.2 μA, the other goes down by about 19.2 μA, and the difference changes by 38.4 μA. The linear approximation is easy and gives us essentially the same result. Tempco! OK, back to the pn-junction equation for a transistor: $$V_{BE} = \frac{kT}{q} \ln \frac{I_C}{I_s}$$ Remember I said you could fold the gain β into \( I_s \) so you could write VBE in terms of collector current? The VBE drop depends on the base current, but since the collector current \( I_C = \beta I_B \) I just lumped that factor of β in with the constant \( I_s \). 
After all, we don't really care what \( I_s \) is, just that it's some constant for any given transistor. Except that β isn't completely constant; it's a function of temperature, and also of current. And for all I know, \( I_s \) isn't exactly a constant either. So let's just rewrite this way: $$V_{BE} = (1+\delta)\frac{kT}{q} \ln \frac{I_C}{I_s} + \epsilon(T - 25^\circ, \ln \frac{I_C}{I_s})$$ where \( 1+\delta \) is our ideality factor, and that ε is some function; its magnitude is relatively small and it sweeps the error all into one bucket that says we don't really know how this behaves. Because it's small, we can take a linear approximation of \( \epsilon(T - 25^\circ, \ln \frac{I_C}{I_s}) \) again, and say $$V_{BE} = \left(\frac{kT}{q} + \frac{kT}{q}\delta + \epsilon_I\right) \ln \frac{I_C}{I_s} + \epsilon_T (T - 25^\circ) + \epsilon_2(T - 25^\circ, \ln \frac{I_C}{I_s})$$ where \( \epsilon_2 \) is REALLY small, because it just handles all the quadratic and higher terms. The value \( \epsilon_T \) here produces a term that is proportional to temperature change. This is called a temperature coefficient, or tempco for short. Usually temperature coefficients are constants of proportionality, measured in units of 1/°C, so they describe how much something changes in relative terms, but sometimes they represent absolute deviation, like in the equation above. We can't really predict very well what \( \delta \) and \( \epsilon_I \) and \( \epsilon_T \) are, but if we tested a whole bunch of transistors, we could get a statistical idea of how they behave for a given semiconductor process. This is called characterization, and maybe we can determine that 99.9999% of all transistors of a given type are expected to have \( 0.003 < \delta < 0.005 \), \( |\epsilon_I| < 10\mu V \), and \( |\epsilon_T| < 5\mu V / ^\circ C \). (These aren't real numbers; I'm just making them up.) If we're confident enough, and it makes sense from a marketing and business standpoint, we might decide to put this information in the datasheet to help out customers, or at least publish some characterization graphs. Transistor databooks used to publish lots of useful characterization data. Compare the 2N2222A datasheets from ON Semiconductor and Fairchild. Fairchild doesn't publish any characterization graphs in their 2N2222A datasheet. Phooey. Whereas ON Semi does. ON Semi was once part of Motorola, which in its heyday used to publish some really helpful information in transistor databooks, and ON Semi has retained this for the most common transistors. (If you're at a garage sale and happen to spot an old copy of a transistor databook from GE or RCA or Motorola, snap it up! They don't write 'em like that anymore.) There are ten characterization graphs, everything from a graph of the DC current gain hFE (essentially a synonym for β) over the 100μA to 500mA range, to turnon and turnoff times, to current gain-bandwidth product as a function of the operating point current, to a graph of voltage temperature coefficients: The bottom curve, RθVB for VBE, is essentially the same thing as what I described as \( \epsilon_T \); it's given as a function of collector current, and is in absolute terms: mV/°C. The way you'd read the graph here is that at 15mA collector current, the tempco is -1.75mV/°C, so if the temperature at the npn junction went up by 10°C, you would expect the VBE drop to decrease by about 17.5mV. There are temperature coefficients for lots of things in electronics. 
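Before looking at other components, here's that graph-reading exercise as a couple of lines of Python (nothing beyond the arithmetic already stated above; the -1.75 mV/°C figure is the one read off the 2N2222A curve at 15 mA):

```python
tempco_mV_per_C = -1.75   # V_BE tempco at 15 mA, read off the curve above
dT = 10.0                 # junction temperature rise in degrees C

print(f"Expected V_BE shift: {tempco_mV_per_C * dT:+.1f} mV")   # -> -17.5 mV

# Relative to a ~0.7 V drop this is roughly -2500 ppm/degC, which is handy
# for comparing against the ppm/degC figures quoted for other components.
print(f"Relative tempco: {tempco_mV_per_C / 700 * 1e6:+.0f} ppm/degC")
```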
Op-amp datasheets will almost always give you the typical temperature coefficient for offset voltage: in the MCP6022, for example, it's ±3.5μV/°C. Voltage references are another component: the LM4041 has a spec of no more than ±100ppm/°C, whereas the TL431 doesn't give a tempco directly, just an allowed voltage deviation over the rated temperature range. Resistors will tell you the temperature coefficient of resistance; Yageo's garden variety thick-film chip resistors have a tempco of ±100ppm/°C for values in the 10Ω - 10MΩ range, and ±200ppm/°C for the low- and high-value resistors. That's pretty typical, and what you need to keep in perspective is that over a 50°C range, for instance, ±100ppm/°C will turn into a ±5000ppm = ±0.5% change, which is in addition to the ±1% base tolerance. So those ±1% resistors are really only ±1% if you treat them nicely and keep them at a constant temperature. Temperature coefficients of electronic components usually fall into two categories. The first category is when the temperature coefficient is centered around some known number. An example is the base-emitter voltage tempco of the 2N2222. We're stuck with it, and if it matters to us, we have to care about it, and design our circuit to handle that behavior. Another is the resistance of copper wire, with a tempco of approximately 3930ppm/°C. The other category occurs when the temperature coefficient is centered around zero, as in the LM4041 or in resistors. In this case, someone has done the work to use materials in a clever manner (e.g. manganin for resistors) or has designed an integrated circuit in such a way to cancel out the temperature coefficient as much as reasonably possible. So when you see ±100ppm/°C, it means the manufacturer has tried to produce a zero tempco, but some parts might be a little below zero or a little above zero, and if they hadn't been clever, the tempco might be nonzero on average, and much higher. Quartz crystals are particularly interesting in this respect. The temperature coefficient of the resonant frequency depends on the alignment of the crystal surfaces with the crystal lattice. Scientists and engineers have known this, and as a result, many of the quartz crystals used in electronic oscillators are of the AT cut variety, where the tempco is near zero around room temperature. The 32.768kHz crystals used for timekeeping are XY cut, with a tempco also around zero at room temperature. In these cases, there are still variations with temperature, they are just minimized around zero, and because this variation is predictable, the tempco is given as a parabolic temperature coefficient, in ppm/°C2, so the XY-cut tempco yields a frequency vs. temperature curve that looks like $$ f = f_0 \left(1 + b(T-T_0)^2\right)$$ In an XY-cut timing crystal like the Epson FC1610AN, f0 = 32.768kHz, T0 = 25°C, and b = -0.04ppm/°C2. The Wah-wah-wah-wah-wonder of Operational Amplifiers And I wonder, I wah-wah-wah-wah-wonder, why, why why why why why she ran away where she will stay, my little runaway a-run-run-run-run-runaway — Del Shannon, Runaway One of my electronics classes in college was an analog electronics lab. We studied all sorts of stuff you could do with bipolar transistors. In one of the labs we had to design an op-amp out of discrete transistors. In practice, you would never do this, because there's no way you could come close to the performance of even the lousy 741 op-amp. 
The point was for us to learn something about how commercial op-amps work in practice, and to see that we could make it work even if we were stuck with discrete transistors. Here's the circuit equivalent from the LM741 opamp, the one we like to hate: Doesn't it look a lot like hieroglyphics? To understand how an op-amp works, it helps to ignore the details and focus on the big picture. Here I've annotated the circuit diagram: The input stage is made up of a differential pair of NPN and PNP transistors Q1-Q4; through the magic of transistor circuits, the difference in voltage between the inputs is turned into a current signal proportional to that voltage difference, and sent into the single-ended Darlington amplifier made up of Q15 and Q17. Resistors R7, R8, and the unlabeled transistor (Q16?) form a level-shifting circuit; C1 is the internal compensation capacitor that feeds back into the base of Q15; and transistors Q14 and Q20 form a push-pull output stage, with Q15 (the second Q15? Come on, National/TI, proofread your datasheets!) acting as a current limit. The rest of the circuitry is used to setup bias currents and act as an active load for the input stage. The level-shifting circuit is kind of interesting, and you have to understand something about push-pull output stages to appreciate it. Let's say you have this circuit: When the output stage formed by Q1 and Q2 is sourcing current, Q1 carries current and Vout is one VBE drop below Vin. When the output stage formed by Q1 and Q2 is sinking current, Q2 carries current and Vout is one VBE drop above Vin. This voltage shift, which depends on the direction of output current, is called crossover distortion. Right around zero current, the output voltage has to shift up or down by two VBE drops, or somewhere in the 1.2-1.5V range. In the LM741 schematic, the two resistors R7 and R8 form a kind of adjustable voltage regulator across the unnamed transistor. The voltage across each resistor is roughly proportional to the resistance (current into the base terminal is small), so R8 sees a VBE drop and R7 sees about 4.5K/7.5K = 0.6VBE, for a total voltage drop of 1.6VBE or about 1.0-1.2V. This voltage shift pulls apart the transistors Q14 and Q20 so the crossover distortion is reduced by about 80%. Our lab op-amp was much simpler than the LM741 schematic. We had some room to design our own circuit, but it had to be based on 2N3904 and 2N3906 NPN and PNP transistors. I seem to remember it had a limit of only 6 or 8 transistors, and it didn't have to be as nicely-behaved as the 741 (I can't believe I'm saying that; it's like saying someone didn't have to behave as nicely as Dick Cheney), but it had particular gain requirements up to a few megahertz, which can be challenging on a solderless breadboard. I had got my circuit working, and was making measurements for my lab writeup, when I heard a POP on the other side of the lab, along with a few choice four-letter words, and maybe 30 seconds later that distinctive smell wafted through the room. You know, That Smell. Every electronics student should experience it at least once, but hopefully not very often. Yes, a component has overheated and lets the magic smoke out, sharing volatile organic compounds with the whole room. I looked over and saw the guy in question turn the power off, throw some transistors away, and replace them with new ones. A few minutes after that, I heard another POP. I looked over and it was the same guy, and he replaced the transistors again. 
I knew the guy; he was a really smart student, but that day he was being stupid. (While writing this, I got curious and looked him up on Google. He's now a successful partner in a business consulting firm. I guess EE just wasn't his thing.) It dawned on me what the problem was. Here is what he used as an output stage in his circuit: There were other components besides the ones shown here (something has to provide base current to the transistors; I think he had resistors from the diodes to VCC and VEE), but ignore that for a moment. The diodes essentially cancel out the VBE drops and remove all of the crossover distortion, making the output voltage identical to the input voltage. Very clever. His circuit did, in fact, work for a while, but then, after a minute or two, would go POP and die in a gasp of pungent smoke. Now, there's an issue here. Let's look at one of the graphs we saw earlier, from the 2N2222A datasheet: You will note that the VBE temperature coefficient is negative. That means that for a fixed amount of collector current, if the transistor heats up, the base-to-emitter voltage goes down. But what happens if you keep the voltage across the base-emitter junction fixed? Well, let's say you were providing 0.7V and got 10mA. Now the transistor heats up by 1 degree C, so the base-emitter voltage goes down by about 1.8mV, so you only need 0.6982V to get 10mA. But you have 0.7V. And we said that every 18mV just about doubles the current. If you run the numbers, increasing base-emitter voltage by 1.8mV should increase the current by about 7%, to 10.7mA. So a 1 degree C rise in temperature increased collector current by 7%, from 10mA to 10.7mA. When the transistor conducts more current, it dissipates power and heats up more. So maybe this causes it to rise another degree. And this, in turn, causes the required VBE drop to go down by 1.8mV, and increase the current another 7%, to about 11.5mA. What we've got is a situation where more current causes the junction temperature to go up, which causes more current to flow. A positive feedback loop. This is called thermal runaway. And eventually, one of three things happens: The extra power dissipation that heats up the transistor is balanced by environmental cooling (convection if the transistor is just sitting in air), and causes the junction temperature to stabilize. The current increases enough that the tempco decreases in magnitude (at 100mA, the tempco is only about -1.4mV/°C), and this causes the current to stabilize. Though if you run the numbers, at 100mA, a 1.4mV decrease in required VBE causes about a 5.5% increase in current, to 105.5mA. This is still a pretty significant increase. Something else happens (POP!) that disrupts the feedback loop. So as long as the guy's output current draw was low, the transistors had a chance of surviving. But as soon as there was enough current, the VBE voltage dropped enough to cause both output transistors to conduct, and thermal runaway took over, until the junction overheated and cracked the package, letting the magic smoke out and stopping the flow of current. The 741 op-amp design has three design features to prevent thermal runaway: The level-shifting circuit to reduce crossover distortion doesn't completely eliminate it, so both transistors are never on at the same time. There are emitter resistors in the output stage. These are called emitter ballast resistors, and they're used to soften the knee of the transistor's current vs. VBE curve at high currents. 
Emitter resistor R9 is connected to a transistor that eventually conducts and robs base current from the NPN output device, causing an active current limit. (I'm not sure why there isn't one on the PNP side.) Other Types of Thermal Runaway There are plenty of other mechanisms for thermal runaway, so you should keep an eye on power dissipation in your circuit design, as well as the temperature coefficients. Three important ones in power electronics are the following: The base-emitter voltage tempco is usually negative for bipolar power transistors. This means you can't put them directly in parallel, or a similar kind of thing will happen: transistor A and B each carry 1A of current, but then B heats up a little more than A, so it tells A, "Hey, look, I can carry more current now, I'll take 1.1A and you take 0.9A", and then B heats up a little more, and it says, "Hey, I can carry more current now, I'll take 1.2A and you take 0.8A" and eventually B takes almost all of the current. This is called current hogging. Even if you put resistors in series with the base, the negative collector-emitter voltage tempco for bipolar power transistors means that current hogging will occur. If you want to parallel bipolar transistors, you have to add emitter ballast resistors. The on-resistance tempco is usually positive for MOSFETs, typically increasing by a factor of 1.5-2.5 between room temperature and the maximum operating temperature of the transistor. While this means you can parallel MOSFETs (if one heats up, its on-resistance will go up and that will reduce the current it conducts, compared to the other MOSFETs in parallel), it has a negative consequence for designs where the MOSFET load is a constant current, like a power converter or a motor controller. Let's say the MOSFET carries 10A of current, so it heats up, and its on-resistance increases, so it heats up more, which makes its on-resistance increase further… until things either stabilize or you hear a loud POP. What you basically have to do is plan on the MOSFET resistance being its maximum value. If the thermal management of your system keeps the MOSFET junction temperature below its maximum limit, you're OK, and in the end you'll be conservative: instead of it getting to 150°C, it might only get to 125°C, so its on-resistance is a little less, which means the power dissipation is going to be less than you planned for. The temperature coefficient of magnetic saturation is sometimes negative — I'm not 100% sure this is the case for all magnetic materials, but Murphy's Law says it is — which means that when your inductors and transformers heat up, their inductance can drop. And if you're using them in a switching power converter, this means their ripple current will increase, which means they will heat up more, until you get smoke and/or arcing. (I mentioned this in an earlier article.) So don't you wonder why, WHY WHY WHY WHY WHY they ran away — be vigilant and you'll avoid thermal runaway. We covered some miscellaneous circuit design topics today: The base-emitter voltage in a bipolar transistor \( V_{BE} = \frac{nkT}{q} \ln \frac{I_C}{I_s} \) where n is the ideality factor, usually slightly greater than 1.0 for commercially-available transistors. This makes the current an exponential function of base-emitter voltage. Linearization can help you understand the dynamic resistance of a component with a nonlinear V/I relationship, and solve circuit analysis problems more easily. 
Many electronic components have parameters that change with temperature; the temperature coefficient tells how much they vary with temperature, and if you're lucky it's specified in the component datasheet. Certain temperature coefficients can cause a positive feedback loop that causes components to heat up more when they get hotter, which is called thermal runaway. Thanks for reading, and don't let the magic smoke out! © 2015 Jason M. Sachs, all rights reserved.
Constructing Steiner Triple Systems Algorithmically I want to create STS(n) algorithmically. I know there are STS(n)s for $n \equiv 1, 3 \pmod{6}$. But it is difficult to actually construct the triples. For STS(7) it is pretty easy, but for larger n I end up using trial and error. Is there a general algorithm that can be used? co.combinatorics algorithms combinatorial-designs steiner-triple-system Ricardo Andrade JarvisP In general, books that deal with the subject are replete with methods of construction. The proof of existence is in fact constructive. – Mariano Suárez-Álvarez Aug 4 '11 at 15:52 The following is Bose's construction for the $6k+3$ case: Elements of the STS are labeled by ordered pairs $(x, i)$ where $x$ is in $\mathbb{Z}/(2k+1)$ and $i$ is in $\mathbb{Z}/3$. The triples are of two forms: $$\{ (x,0),\ (x,1),\ (x,2) \}\quad \mbox{for}\ x \in \mathbb{Z}/(2k+1)$$ $$\{ (x,i),\ (y,i),\ ((x+y)/2, i+1)\}\quad \mbox{for}\ x, y \in \mathbb{Z}/(2k+1),\ \mbox{with}\ x \neq y,\ i \in \mathbb{Z}/3$$ For the $6k+1$ case, one uses a messier variant due to Skolem. See Combinatorial Designs: Constructions and Analysis by Stinson for details. David E Speyer I coded that in Sage if you want to use it immediately (see the patch, see the documentation): sage: from sage.combinat.designs.block_design import steiner_triple_system sage: list(steiner_triple_system(7)) [[0, 1, 3], [0, 2, 4], [0, 5, 6], [1, 2, 6], [1, 4, 5], [2, 3, 5], [3, 4, 6]] sage: list(steiner_triple_system(9)) [[0, 1, 5], [0, 2, 4], [0, 3, 6], [0, 7, 8], [1, 2, 3], [1, 4, 7], [1, 6, 8], [2, 5, 8], [2, 6, 7], [3, 4, 8], [3, 5, 7], [4, 5, 6]] sage: list(steiner_triple_system(13)) [[0, 1, 6], [0, 2, 5], [0, 3, 7], [0, 4, 8], [0, 9, 11], [0, 10, 12], [1, 2, 7], [1, 3, 4], [1, 5, 9], [1, 8, 10], [1, 11, 12], [2, 3, 6], [2, 4, 12], [2, 8, 9], [2, 10, 11], [3, 5, 12], [3, 8, 11], [3, 9, 10], [4, 5, 10], [4, 6, 9], [4, 7, 11], [5, 6, 11], [5, 7, 8], [6, 7, 10], [6, 8, 12], [7, 9, 12]] Otherwise, it turns out the proof of their existence is highly constructive -- just check the given constructions are valid -- which makes it really easy to implement (see the ebook A short course in Combinatorial Designs, by Ian Anderson and Iiro Honkala). Nathann Cohen Since this thread just got bumped to the front page, historically the very first proof (by T. P. Kirkman, On a Problem in Combinatorics, Cambridge Dublin Math. J. 2 (1847) 191-204) of the existence of an ${\rm STS}(v)$ for all $v \equiv 1, 3 \pmod{6}$ is completely algorithmic, where you start with a singleton as the point set and an empty set as its block set (i.e., the trivial design ${\rm STS}(1)$) and successively construct an ${\rm STS}(3)$, ${\rm STS}(7)$, ${\rm STS}(9)$, and so forth by applying the same algorithms recursively to the smaller ${\rm STS}$s you have at hand. So, you conjure up ${\rm STS}$s from thin air one after another algorithmically for all admissible orders. A modernized version of this technique is called the doubling construction. This construction can be found in a very accessible textbook "Design Theory" by C. C. Lindner and C. A. Rodger from CRC Press (in Section 1.8 of the second edition). The doubling construction actually consists of two separate construction techniques to cover all $v \equiv 1, 3 \pmod{6}$. If you want a single algorithm to cover all orders, the same textbook also explains such a technique (originally by R. M.
Wilson, Some partitions of all triples into Steiner triple systems, Lecture Notes in Math., Springer, Berlin, 411 (1974) 267-277) in Section 1.6 (in either edition). Edit: Here's the first half of the doubling construction: Assume that you have an ${\rm STS}(v)$ with point set $V$ and block set $\mathcal{B}$. First, you copy all points; if $a \in V$, you make a new point $a' \not\in V$ so you have another set $V'$ of the same size. You add one extra point, say $\infty$, and use $$W = \{\infty\}\cup V\cup V'$$ as the new point set. Now, for each block $\{a,b,c\} \in \mathcal{B}$, you create new blocks $\{a',b',c\}$, $\{a',b,c'\}$ and $\{a,b',c'\}$. Then you join $\mathcal{B}$ and all these pseudo-copied new blocks as well as $v$ new blocks $\{\infty, a, a'\}$, where $a \in V$. So, the new block set $\mathcal{B}'$ is $$\mathcal{B}' = \mathcal{B}\cup\{\{a',b',c\},\ \{a',b,c'\},\ \{a,b',c'\}\ \vert\ \{a,b,c\} \in \mathcal{B}\} \cup \{\{\infty, a, a'\} \ \vert \ a \in V\}.$$ You can easily check that the ordered pair $(W, \mathcal{B}')$ is an ${\rm STS}(2v+1)$. The latter half of the construction produces an ${\rm STS}(2v+7)$ from an ${\rm STS}(v)$ in a little more complicated way. Applying these two algorithms recursively gives you an ${\rm STS}(v)$ for all $v \equiv 1, 3 \pmod{6}$, covering all $v$ satisfying the necessary conditions for the existence of an ${\rm STS}(v)$. Yuichiro Fujiwara Your nice answer makes me glad I bumped it (I fixed a broken link). – Peter Shor Aug 18 '13 at 12:53 @Peter Thank you for your kind comment! – Yuichiro Fujiwara Aug 19 '13 at 17:18 One standard algorithm for constructing Steiner triple systems is the "hill climbing" procedure. You will find it described in "Combinatorial algorithms: generation, enumeration, and search" by Kreher and Stinson, and in many papers. This procedure allows you to construct large families of triple systems on the same number of points, in comparison to the standard recursive constructions which give just one for each size. Chris Godsil
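For concreteness, here is a small Python sketch (my own, not code from the thread or from Sage) of Bose's construction for $n = 6k+3$ together with the doubling step described above, plus a brute-force check that every pair of points lies in exactly one triple:

```python
from itertools import combinations

def bose_sts(n):
    """STS(n) for n = 6k+3 via Bose's construction described above."""
    assert n % 6 == 3, "Bose's construction needs n = 6k+3"
    m = n // 3                      # m = 2k+1 is odd
    half = (m + 1) // 2             # multiplicative inverse of 2 modulo m
    points = [(x, i) for x in range(m) for i in range(3)]
    triples = [{(x, 0), (x, 1), (x, 2)} for x in range(m)]
    for i in range(3):
        for x in range(m):
            for y in range(x + 1, m):
                triples.append({(x, i), (y, i), ((x + y) * half % m, (i + 1) % 3)})
    return points, triples

def double_sts(points, triples):
    """STS(2v+1) from an STS(v) via the first half of the doubling construction."""
    INF = "inf"
    prime = lambda p: (p, "'")
    new_points = [INF] + list(points) + [prime(p) for p in points]
    new_triples = [set(t) for t in triples]
    for t in triples:
        a, b, c = tuple(t)
        new_triples += [{prime(a), prime(b), c},
                        {prime(a), b, prime(c)},
                        {a, prime(b), prime(c)}]
    new_triples += [{INF, p, prime(p)} for p in points]
    return new_points, new_triples

def is_sts(points, triples):
    """Every pair of points must appear in exactly one triple."""
    seen = set()
    for t in triples:
        for pair in combinations(t, 2):
            key = frozenset(pair)
            if key in seen:
                return False
            seen.add(key)
    return len(seen) == len(points) * (len(points) - 1) // 2

pts9, trs9 = bose_sts(9)
print(len(trs9), is_sts(pts9, trs9))                  # 12 True
pts19, trs19 = double_sts(pts9, trs9)
print(len(pts19), len(trs19), is_sts(pts19, trs19))   # 19 57 True
```

Starting from the Bose system of order 9, one doubling step gives an STS(19); as explained above, combining the $2v+1$ and $2v+7$ steps recursively reaches every admissible order.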
Is band-inversion a 'necessary and sufficient' condition for Topological Insulators? According to my naive understanding of topological insulators, an inverted band structure in the bulk (inverted with respect to the vacuum/trivial insulator surrounding it) implies the existence of a gapless state at the surface (interface with the trivial insulator). Is this a sufficient condition as well? What role does Time Reversal Symmetry play in the existence of these surface states? topological-field-theory topological-insulators topological-phase negligible_singularity Band inversion is a necessary but not sufficient condition for topological insulators (TIs). For band TIs you need to evaluate the topological (or $\mathbb{Z}_{2}$) invariant defined by Fu, Kane and Mele in Eq. (2) of: Liang Fu, Charles L. Kane, and Eugene J. Mele. "Topological insulators in three dimensions." Physical Review Letters 98, no. 10 (2007): 106803. (arXiv) This approach involves computation of the Pfaffian of a matrix whose components are the expectation values of the time-reversal operator between different Bloch bands. Personally, I don't find this approach very intuitive. So let me pick a simpler case where the material has inversion symmetry; in this case Eq. (2) of the previous paper reduces to Eq. (1.2) of: Liang Fu and Charles L. Kane. "Topological insulators with inversion symmetry." Physical Review B 76, no. 4 (2007): 045302. (arXiv) where $\delta_{i}$ is defined in Eq. (3.10) in terms of the band parities $\xi_{2n}$. Combining Eq. (1.2) and (3.10) we get $$(-1)^{\nu}=\prod_{i=1}^{8}\prod_{n=1}^{N}\xi_{2n}(\Lambda_{i})$$ for a 3D topological insulator (we have 8 values of $\Lambda_{i}$ in 3D). The above equation basically says that the overall parity of all the filled Bloch bands, evaluated at the Time-Reversal Invariant Momentum (TRIM) values ($\Lambda_{i}$), determines the topological invariant ($\nu$) of the insulator. For example, if (say) the conduction band (say $s$-like) and valence band (say $p$-like) have even ($\xi_{2n} = +1$) and odd ($\xi_{2n} = -1$) parities at a TRIM respectively, and a band inversion occurs between only those bands, then the product of the band parities of the filled bands would pick up an extra minus sign. In this case the band inversion signifies a topological phase transition. In other words, the number of band inversions occurring between bands of opposite parity has to be odd to get a TI. Once again, this is true for inversion symmetric insulators; we have no choice but to use the first (more complicated) formula for an inversion asymmetric insulator. However, the simpler (inversion symmetric) case hopefully illustrated the subtleties in band inversion. Now, as for your second question, the answer is that time-reversal symmetry guarantees at least one pair of gapless edge states (or at least one Dirac cone) on one edge (face) of a 2D (3D) topological insulator. This has to do with the circumvention of the fermion doubling theorem on a particular edge (face) of a 2D (3D) topological insulator. For simplicity, let me only stick to the 2D case. Due to the nontrivial bulk topology in a TI, a 2D TI will have an odd number of pairs (Kramers pairs) of edge states. For example, see the figure below. The purple blocks are the bulk states whereas the red and blue lines are the in-gap (i.e. bulk gap) edge states. To the left, you can see that the insulator has (say) five pairs of gapless edge states. The mirror symmetric red and blue bands are Kramers pairs of each other.
Here's a nice trick: time-reversal symmetry exists if and only if the red and blue bands are mirror images of each other. Due to this constraint, edge states can only be gapped out in pairs of Kramers pairs (i.e. four states get gapped out simultaneously). Now, it's easy to see that if we had an odd number of pairs of edge states to begin with, it is possible to gap out all except the last pair of edge states without breaking time-reversal symmetry. This is shown in the figure on the right. It is in this sense that the pair of edge states is protected by time-reversal symmetry. NanoPhys I wouldn't say so. The "topological" part of this kind of insulator comes from a different Chern number. For a simple naive picture I suggest the paper of Haldane published in Phys. Rev. Lett. 61, 2015 (1988). PinkFloyd
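As a toy numerical illustration of the parity-product formula quoted in the first answer (this sketch is mine, not from the thread, and the parity assignments are invented purely for illustration): the $\mathbb{Z}_2$ index is just a product of parity eigenvalues, and a single band inversion between an even-parity and an odd-parity band at one TRIM point flips it.

```python
def z2_invariant(parities):
    """nu from (-1)^nu = product over TRIM points and occupied Kramers pairs of xi.
    `parities` holds one list per TRIM point (8 in 3D), each containing the
    parity eigenvalues (+1 or -1) of the occupied Kramers pairs there."""
    product = 1
    for xi_list in parities:
        for xi in xi_list:
            product *= xi
    return 0 if product == 1 else 1

# One occupied Kramers pair: the trivial case has the odd-parity (p-like) band
# filled at all 8 TRIM points; the inverted case has the even-parity (s-like)
# band pushed below the gap at a single TRIM point.
trivial = [[-1]] * 8
inverted = [[+1]] + [[-1]] * 7
print(z2_invariant(trivial), z2_invariant(inverted))   # 0 1
```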
Nano Express | Open | Published: 17 April 2019 Nanofabrication of High-Resolution Periodic Structures with a Gap Size Below 100 nm by Two-Photon Polymerization Lei Zheng (ORCID: orcid.org/0000-0002-0717-4180)1,2, Kestutis Kurselis1,3, Ayman El-Tamer2,3, Ulf Hinze2,3, Carsten Reinhardt2,5, Ludger Overmeyer2,4 & Boris Chichkov3 In this paper, approaches for the realization of high-resolution periodic structures with gap sizes at the sub-100 nm scale by two-photon polymerization (2PP) are presented. The impact of laser intensity on the feature sizes and surface quality is investigated. The influence of different photosensitive materials on the structure formation is compared. Based on the elliptical geometry of the voxel, the authors present an idea to realize high-resolution structures with feature sizes of less than 100 nm by controlling the laser focus position with respect to the glass substrate. This investigation covers structures fabricated in the planes along and perpendicular to the major axis of the voxel, respectively. The authors also provide a useful approach to manage the fabrication of the proposed periodic structure with a periodic distance of 200 nm and a gap size of 65 nm. The demand for the downscaling of devices grows rapidly with the continuous progress of nanotechnology in recent years. Miniaturized structures with feature sizes below the diffraction limit can be applied in various fields like plasmonics [1], micro- and nanooptics [2], nanophotonics [3, 4], and biomedicine [5, 6]. Moreover, structures with sub-wavelength dimensions are also able to enhance characterization performance at the micro- and nanoscale [7, 8]. For example, tips [9] and nanoantennas [10] can be used to improve the characterization performance of high-resolution structures by enhancing the light confinement in the near field, and gratings [11] are able to transform optical information from the near field to the far field. As to the realization of high-resolution structures, two-photon polymerization (2PP) is popularly utilized due to its capabilities of achieving high resolution and 3D fabrication [12]. Two-photon polymerization is a manufacturing method based on two-photon absorption (2PA), which is a nonlinear process that theoretically enables the achievement of resolution below the diffraction limit. Various 2PP-based methods, such as adding a photoinitiator with high initiation efficiency [13], shaping the spatial phase of the deactivation beam [14], using sub-10 fs [15] and 520-nm femtosecond laser pulses [16], combining 2PP with hybrid optics [17], and a sub-diffraction optical beam lithography technique [18], have been applied to realize feature sizes at the sub-100 nm scale. However, these sizes are mostly achieved on suspended lines or a single line. It still remains challenging to experimentally realize feature sizes and gap sizes beyond the diffraction limit in periodic structures due to the radical diffusion exchange effect in the gap region when the center-to-center distance between adjacent features gets very close [19]. Nevertheless, a few strategies have been demonstrated for achieving periodic structures with a nanoscale gap distance. Photonic crystals with a periodic distance of 400 nm were realized by adding a quencher molecule into the photoresist [20]. With this approach, the gap size between adjacent lines of the photonic crystals is around 300 nm.
Moreover, grating lines with a periodic distance of 175 nm and a gap size of 75 nm were achieved by a STED lithography technique [19]. Recently, it was presented that a straight forward thermal post-treatment process of samples by calcination is able to realize feature sizes down to approximately 85 nm [21]. The above approaches have afforded for the realization of periodic structures with gap sizes below the diffraction limit. However, they are quite special with higher cost, more complicated operations and procedures comparing to 2PP. In this paper, an experimental investigation on the realization of a periodic device (Fig. 1) with both feature sizes and gap sizes below the diffraction limit using 2PP is carried out. The high-resolution periodic structure, composed of grating lines with pillars periodically located between them, was proposed for the enhancement of characterization resolution of interferometric Fourier transform scatterometry (IFTS) [22, 23], which is a method for the characterization of micro- and nanostructures. It is known that the spatial resolution of structures is mainly determined by the photosensitive materials, optical system, and processing parameters [15]. Specifically, researchers have reported that the orientation of laser beam polarization can affect the structure dimensions [24]. When a laser is linearly polarized parallel to its scanning direction, a minimum feature dimension can be realized. Therefore, the laser employed in the experiments is equipped with a linear polarization parallel to the laser scanning direction for the purpose of obtaining smaller feature sizes. Based on this configuration, the effect of laser intensity on the feature sizes is investigated first. Then, the influence of different photosensitive materials on the structure formation is compared. When laser directly writes structures on a glass substrate, only part of the voxel polymerizes the photoresist because the other part of the voxel is inside glass substrate. Benefiting from the elliptical geometry of voxel, an idea of reducing the feature size and gap size by controlling laser focus position with respect to the glass substrate is specially presented. The feature sizes of grating lines (fabricated in the plane perpendicular to the major axis of voxel) and pillars (fabricated in the plane along the major axis of voxel) depending on relative laser focus positions are respectively investigated. As a result, grating lines with a minimum width of 78 nm and pillars with the diameter of 110 nm are realized. In addition, the proposed structure with an area size of 20×20 μm, a periodic distance of 200 nm, and a gap size of 65 nm is demonstrated by separately fabricating grating lines and pillars. Schematic illustration of the proposed periodic structure. The periodic distance between adjacent features is represented by PD Fabrication Method The structures presented in this paper were fabricated using two-photon polymerization. A schematic illustration of the experimental setup is shown in Fig. 2. This 2PP fabrication system, which is also available commercially [25, 26], is able to coordinate all axis simultaneously and reach the velocity over the full travel range without stepping and stitching at a speed of up to 50 mm/s. A linear polarized femtosecond laser with a frequency doubled output at 513 nm, a pulse width of 60 fs and a repetition rate of 76 MHz is used. Laser power is controlled by a half wave plate and a polarizing beam splitter cube. 
Highly accurate air-bearing translation stages with a travel range of 15 cm are employed as well. A CCD camera is mounted for online monitoring. The polymerization process can be monitored by a CCD camera due to the refractive index variation of photoresist induced by the polymerization. The sample consists of a droplet of photosensitive material on the glass substrate, which is fixed to the translation stage with photoresist on the bottom side. Laser beam is focused into the photoresist by a 100 × oil immersion microscope objective with a high numerical aperture (NA) of 1.4. Schematic diagram of 2PP fabrication system The performance of different photoresists in structure fabrication can be diverse due to their own unique chemical compositions and physical properties. In this work, photoresists called sol-gel organic-inorganic Zr-hybrid material [27] and E-shell 300 (Envisiontec) are applied respectively for the structuring. Zr-hybrid material is a high-viscosity zirconium-based sol-gel organic-inorganic hybrid polymer which is well known for its low shrinkage and high stability for 2PP fabrication. The preparation procedures and other optical properties of this photoresist can be found in ref [27]. E-shell 300 is a dimethacrylate-based liquid photoresist with a viscosity of 339.8 MP a·s. It can be used for 3D printing and fabrication of hearing aid and medical devices, as well as structures with high resolution, strength, stiffness, and chemical resistance. The processing parameters play an important role in determining the feature sizes of structures. Among them, laser intensity is one parameter that is able to effectively influence the structure formation and can be controlled accurately and conveniently. This parameter can be obtained using the formula given in ref [28] $$ {I=\frac{2 P T M^{2}}{\pi w_{0}^{2} f\tau}} $$ where P represents the average laser power [4, 28], T the transmission coefficient of the objective/system (T=15% [4]), M2 the beam quality with M2=1.1, f the repetition rate, τ the pulse duration, and w0 the spot radius with $w_{0}=0.61 \frac {\lambda }{NA}$ (w0≈223.5 nm). In this formula, $\frac {P}{f}$ and $\frac {P}{f\tau }$ indicate the energy per pulse and average power per pulse, respectively. The intensity unit kW/ μm2 is used instead of TW/cm2 (1 TW/cm 2=10 kW/ μm2) for the purpose of straightforward displaying how much power is really focused in the spot area, which also has a range at microscale ($\pi w_{0}^{2} \approx 0.16$ μm2). Here, an investigation about the effect of laser intensity on single line dimensions was carried out. Both Zr-hybrid material and E-shell 300 were applied for the study. The line width and height made of both materials with respect to the laser intensity I is shown respectively in Fig. 3a (Zr-hybrid material) and Fig. 3b (E-shell 300). A speed of 7 μm/s was used for the fabrication. The laser intensity I is in the range 0.67–0.78 kW/ μm2 (with a corresponding laser power range 1.44–1.69 mW) for Zr-hybrid material and 0.78–1.02 kW/ μm2 (laser power range 1.69–2.20 mW) for E-shell 300. It can be seen that the feature sizes (both diameter and height) go up with the increase of laser intensity. In the case of Zr-hybrid material (Fig. 3a), with the laser intensity of approximately 0.67 kW/ μm2, the lateral dimension of a voxel can be reduced to around 115 nm, which is below the diffraction limit (the diffraction limit $\frac {\lambda }{2NA}=185$ nm). It can also be calculated that the aspect ratio (height to width) is in the range 2.5–4. 
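As a rough cross-check (added for illustration, not part of the original analysis), plugging the stated parameters (T = 15%, M2 = 1.1, f = 76 MHz, τ = 60 fs, and w0 = 0.61λ/NA ≈ 223.5 nm) into the intensity formula above closely reproduces the quoted power-to-intensity conversions:

```python
import math

lam = 513e-9        # wavelength (m)
NA = 1.4
M2 = 1.1            # beam quality
T = 0.15            # transmission of the objective/system
f = 76e6            # repetition rate (Hz)
tau = 60e-15        # pulse duration (s)

w0 = 0.61 * lam / NA                # ~223.5 nm
spot_area = math.pi * w0 ** 2       # ~0.16 um^2 (value held here in m^2)

def intensity_kW_per_um2(P_avg_W):
    """I = 2 P T M^2 / (pi w0^2 f tau), converted from W/m^2 to kW/um^2."""
    return 2 * P_avg_W * T * M2 / (spot_area * f * tau) * 1e-15

for P_mW in (1.44, 1.69, 2.20, 2.70):
    print(f"P = {P_mW:.2f} mW  ->  I = {intensity_kW_per_um2(P_mW * 1e-3):.2f} kW/um^2")
```

For the average powers of 1.44, 1.69, 2.20, and 2.70 mW this yields approximately 0.66, 0.78, 1.01, and 1.24 kW/μm2, in agreement with the values quoted in the text to within rounding.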
For E-shell 300 (Fig. 3b), a line width of 178 nm was realized when laser intensity was 0.78 kW/ μm2. This feature dimension is below the diffraction limit (185 nm). Based on the above investigation, it can be concluded that the feature sizes are effectively influenced by the applied laser intensity. A smaller feature size can be realized by reducing the laser intensity. Line dimensions versus the laser intensity I. The speed used for the structuring is 7 μm/s. The red and blue lines are linear fit results of voxel width and height, respectively. a The width and height of a single line made of Zr-hybrid material. b The width and height of a single line made of E-shell 300 Influence of Different Materials on the Structure Formation by 2PP For the investigation on the impact of materials on structure formation, various periodic grating lines were fabricated using the materials introduced in "Materials" section. A writing speed of 7 μm/s was applied. Figure 4a and b are respectively the SEM images of periodic grating lines made of Zr-hybrid material and E-shell 300 with the periodic distance (PD, illustrated in Fig. 1) of 1 μm. Laser intensity applied for the fabrication was 1.25 kW/ μm2 (corresponding to laser power 2.7 mW) for Zr-hybrid material and 1.02 kW/ μm2 (corresponding to laser power 2.2 mW) for E-shell 300. It can be seen that the grating lines made of both materials are smooth. Figure 4c and d indicate the SEM images of periodic grating lines made of Zr-hybrid material and E-shell 300 with PD=400 nm, respectively. With the decrease of periodic distance, laser intensity used for the fabrication is reduced as well in order to achieve high resolution and simultaneously avoid overpolymerization inside the space between adjacent features. In this investigation, laser intensity of 0.69 kW/ μm2 was applied for the fabrication with both materials. With the reduced PD, the grating lines made of Zr-hybrid material are grainy (Fig. 4c), while that made of E-shell 300 have less roughness (Fig. 4d). The graininess of grating lines made of Zr-hybrid material might result from an unstable polymerization, which happens due to the proximity of reduced laser power to the polymerization threshold of the material. This comparison reveals that E-shell 300 is more suitable for the fabrication of structures with a nanoscale periodic distance. In addition, all of the structures observed by SEM are deposited with a 20-nm-thick gold layer. SEM images of grating lines fabricated with different materials. The speed for the fabrication is 7 μm/s. a Material: Zr-hybrid material; PD=1 μm; Laser intensity: 1.25 kW/ μm2. b Material: E-shell 300; PD=1 μm; Laser intensity: 1.02 kW/ μm2. c Material: Zr-hybrid material; PD=400 nm; Laser intensity: 0.69 kW/ μm2. d Material: E-shell 300; PD=400 nm; Laser intensity: 0.69 kW/ μm2 Investigation of Structure Formation with Respect to the Laser Focus Position To place the nanostructures on the surface of the glass substrate, the laser beam has to be focused at the substrate/photoresist interface during the 2PP process. Thus, only part of the voxel is able to initiate the polymerization of photoresist. The other part of the voxel is in glass substrate to ensure the adhesion of structures. Since the voxel geometry is elliptical, a variation of its cross-section size exists along the major axis. 
In high-resolution micro- and nanofabrication, the variation of voxel cross-section size at the interface of substrate and photoresist is of much concern in affecting the structure formation as well as its feature size. Figure 5 is a schematic illustration of laser focus adjustment along z direction. The position at the interface between the photoresist and the substrate is defined as a reference focus position z0 (Fig. 5a). Since photoresist droplet is on the bottom side of the glass substrate, laser focus spot moves down from the reference position z0 into the photoresist. The distance between the current laser focus position z and the reference position z0 is represented by Δz=∣z−z0∣. The region indicated with dark green color in Fig. 5b and c represents the laser focus region inside the photoresist, which enables the polymerization with light intensity above the polymerization threshold. Different feature sizes can be realized by placing laser focus at different z positions. Feature size w is characterized by the average full width half maximum (FWHM, Fig. 5c) of the features that are fabricated at the same z position in one array. Illustration of the variation of laser focus position along z direction Periodic grating lines fabricated with different laser focus positions were obtained as presented in Fig. 6. The periodic distance (PD) between grating lines is 1 μm. With this close PD, the adjacent features begin to connect to each other through extra polymerization in the gap region when laser is focused with Δz=500 nm (Fig. 6a). The clusters out of the grating lines result from additional polymerization. During the 2PP process, free radicals are generated through the laser-induced bond cleavage in the photoinitiator molecules. Those radicals are accumulated in the small gaps between the adjacent features, which results in the increase of the radical concentration. This high radical concentration can exceed the polymerization threshold and thus lead to undesired polymerization. Moreover, an unstable adhesion of polymerized structures to the substrate can also be resulted. In this case, the structures can be easily washed away during the development process. When the focus of laser beam is more inside the substrate, less photoresist is polymerized. As presented in Fig. 6b, grating lines with the width of 78 nm was achieved in this case. However, weak visibility of the structure can also be seen. Therefore, it is of great importance to have a proper laser focus position during the polymerization process not only for a higher resolution but also for a better adhesion of structure to the substrate. The influence of laser focus positions on structure formation. Material: E-shell 300. a Vertical grating lines fabricated with laser focus more inside the photoresist. The laser intensity for fabrication I=0.71 kW/ μm2 (corresponding to laser power 1.55 mW), the relative laser focus distance Δz=500 nm. Extra polymerization between the features is generated, and the adjacent features are connected. b Vertical grating lines fabricated with laser focus more inside the substrate. The laser intensity for fabrication I=0.65 kW/ μm2 (corresponding to laser power 1.4 mW), the relative laser focus position Δz=0 nm As to the influence of laser focus position on the feature sizes, an investigation of its effect on the grating lines that are fabricated in the x−y plane was conducted. By increasing the relative distance Δz, grating lines fabricated under different laser focus positions were obtained. 
The measured width of grating lines wl depending on the relative laser focus positions is plotted as the dots presented in Fig. 7a. The laser intensity used for the fabrication is 0.85 kW/ μm2 (corresponding to laser power 1.84 mW). The red curve indicates an elliptical fit in which the major axis is aligned with the z axis. The corresponding ellipse was reconstructed (see the lower right corner of Fig. 7a) using the elliptical formula $\left (\frac {x-400}{a}\right)^{2}+\left (\frac {y}{b}\right)^{2}=1$, where (400,0) is the center of the ellipse, b=90 is the semi-minor axis, a=5.65b is the semi-major axis, x represents the relative distance Δz along the major axis, and y represents half of the focus size L, which is along the minor axis. The result reveals that the line width follows the laser focus cross-section size, which changes along the major axis of the voxel's elliptical geometry. When the relative position Δz=50 nm, grating lines with a feature size of wl=130 nm were realized (Fig. 7b). Additionally, by reducing the laser intensity, grating lines with wl=100 nm were obtained at the same laser focus position, as presented in Fig. 7c. Grating lines fabricated in the x−y plane with respect to different relative laser focus distances Δz. Material: E-shell 300. A writing speed of 7 μm/s was applied. a Measured line width and fitted curve with respect to different Δz. The figure in the lower right corner is a reconstruction of the ellipse corresponding to the fitted line. b Grating lines fabricated with the laser intensity of I=0.85 kW/ μm2 (with the laser power P=1.84 mW). The relative laser focus distance is Δz=50 nm. c Grating lines fabricated with the laser intensity of I=0.78 kW/ μm2 (with the laser power P=1.69 mW). The relative laser focus distance is Δz=50 nm The influence of the laser focus position on the feature sizes of pillars was also investigated. The pillars are realized by moving the focal spot orthogonally to the substrate plane, which is in the plane of the major axis of the voxel (x−z or y−z plane). A single pillar was fabricated by moving the laser beam along the z direction over a distance of 1 μm. Figure 8a is the SEM image of pillars manufactured with different laser intensities and relative distances Δz. The distance between the centers of adjacent pillars is 400 nm along the x direction and 500 nm along the y direction. The laser intensity was increased from left to right with a step of approximately 0.23 kW/ μm2 (corresponding to laser power 0.5 mW). The relative distance between the laser focus position z and the reference position z0 was increased from the bottom to the top along the vertical direction. Figure 8b shows the measured pillar diameters wp as a function of the laser intensity and the relative distance Δz. The diameter of a pillar wp is obtained by measuring its FWHM. The laser intensity is in the range 0.74–0.96 kW/ μm2. It can be observed that wp is reduced with the decrease of both Δz and the laser intensity. When Δz=150 nm, a pillar diameter of wp≈110 nm was achieved over a relatively large laser intensity range (0.74–0.81 kW/ μm2). There is also a relatively stable window for the pillar sizes when an array of pillars is fabricated, as presented in Fig. 8c–d, which are SEM images of a pillar array fabricated with a laser intensity of I=0.74 kW/ μm2 and a relative distance of Δz=300 nm. The aspect ratio of the pillars is around 2. This indicates that the reproducibility of the pillars is very good.
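Reading the elliptical fit back as a prediction, the full line width at a given Δz is simply twice the ellipse ordinate at that offset; the following short sketch (added for illustration, using only the fit parameters quoted above) reproduces, for example, the approximately 130 nm width observed at Δz = 50 nm:

```python
import math

b = 90.0            # semi-minor axis (nm): half the maximum line width
a = 5.65 * b        # semi-major axis (nm), along z
z_center = 400.0    # center of the fitted ellipse along delta-z (nm)

def line_width_nm(dz):
    """Full width 2*y predicted by the elliptical fit; None outside the ellipse."""
    u = (dz - z_center) / a
    if abs(u) > 1.0:
        return None
    return 2.0 * b * math.sqrt(1.0 - u * u)

for dz in (0, 50, 150, 300, 400):
    print(f"dz = {dz:3d} nm  ->  predicted w_l = {line_width_nm(dz):.0f} nm")
```

At Δz = 50 nm the fit predicts about 131 nm, close to the measured 130 nm for the 0.85 kW/μm2 exposure.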
Pillar arrays fabricated with different laser intensity and laser focus relative distance Δz. Material: E-shell 300. a SEM image of pillars fabricated with different laser intensity and relative laser focus positions. b Measured pillars diameter wp with respect to the laser intensity I and the relative distance Δz. Laser intensity is respectively 0.74 kW/ μm2, 0.81 kW/ μm2, 0.90 kW/ μm2, and 0.96 kW/ μm2 with the correspondence of laser power 1.59 mW, 1.75 mW, 1.94 mW, and 2.07 mW. c Top view of the pillar array. d SEM image of the pillar array viewed with 45 ∘ Fabrication of Periodic Structures with the Feature Sizes and Gap Size Below the Diffraction Limit Based on the respective investigations on the feature sizes of periodic grating lines (fabricated at x−y plane) and pillars, the proposed high-resolution periodic structure composed of grating lines and pillars was fabricated. Its size is 20×20 μm with a periodic distance of 200 nm between the center of the grating line and the pillar. In this work, the strategy of achieving high-resolution structures with a periodic distance of 200 nm by separately fabricating grating lines and pillars is put forward. In this case, the periodic distance PD between adjacent grating lines and adjacent pillars is 400 nm. During the polymerization process, a larger gap region exists between the features when grating lines and pillars are fabricated separately. This temporarily broaden gap region enables to reduce the accumulation of radicals, which might lead to the undesired polymerization in the gap region. It has to be noted that the laser focus position also has to be adjusted during the fabrication process. Structures fabricated with improper laser focus position are presented in Fig. 9a and b. It can be seen that the lines and pillars are connected when the laser focus is too much inside the photoresist. Figure 9c–f are the SEM images of structures with well-positioned laser focus [23]. By placing the laser focus position properly and utilizing the fabrication strategy provided above, a structure with dimensions below the diffraction limit (a line width of 110 nm, a pillar diameter of 135 nm and a gap size of 65 nm) was realized as shown in Fig. 9e. SEM images of 2PP fabricated periodic structure with PD=200 nm. Material: E-shell 300. Intensity used for the fabrication of grating lines: I=0.83 kW/ μm2; pillars: I=0.6 kW/ μm2. The relative laser focus distance for fabrication of grating lines and pillars is 300 nm. a–b Periodic structures fabricated with laser focus position setting inside the photoresist. c–d SEM images of periodic structures with proper laser focus position. e Top view of the structure fabricated with proper laser focus position. f SEM image of the whole array In conclusion, we compared the influence of different photoresists and processing parameters on the structure formation and presented the way of improving the spatial resolution and reducing the gap size between adjacent features by controlling the laser focus position along z direction. E-shell 300 was experimentally proved to be a more suitable material for the fabrication of structures with a spatial resolution less than 200 nm. We also succeeded to achieve a periodic structure with the gap size of 65 nm and the feature size of 110 nm. The sizes are far below the Abbe diffraction limit. The further investigation on the optical performance (e.g., signal enhancement of optical images) of this high-resolution structure will be attractive. 
2PA: Two-photon absorption 2PP: Two-photon polymerization FWHM: Full width half maximum IFTS: Interferometric Fourier transform scatterometry Numerical aperture Periodic distance Seidel A, Ohrt C, Passinger S, Reinhardt C, Kiyan R, Chichkov BN (2009) Nanoimprinting of dielectric loaded surface-plasmon-polariton waveguides using masters fabricated by 2-photon polymerization technique. JOSA B 26(4):810–812. Malinauskas M, Gilbergs H, žukauskas A, Purlys V, Paipulas D, Gadonas R (2010) A femtosecond laser-induced two-photon photopolymerization technique for structuring microlenses. J Opt 12(3):035204. Sun HB, Matsuo S, Misawa H (1999) Three-dimensional photonic crystal structures achieved with two-photon-absorption photopolymerization of resin. Appl Phys Lett 74(6):786–788. Serbin J, Egbert A, Ostendorf A, Chichkov B, Houbertz R, Domann G, Schulz J, Cronauer C, Fröhlich L, Popall M (2003) Femtosecond laser-induced two-photon polymerization of inorganic–organic hybrid materials for applications in photonics. Opt Lett 28(5):301–303. Gittard SD, Nguyen A, Obata K, Koroleva A, Narayan RJ, Chichkov BN (2011) Fabrication of microscale medical devices by two-photon polymerization with multiple foci via a spatial light modulator. Biomed Opt Express 2(11):3167–3178. Raimondi MT, Eaton SM, Nava MM, Laganà M, Cerullo G, Osellame R (2012) Two-photon laser polymerization: from fundamentals to biomedical application in tissue engineering and regenerative medicine. J Appl Biomater Biomech 10(1):55–65. Adams W, Sadatgol M, Güney DÖ (2016) Review of near-field optics and superlenses for sub-diffraction-limited nano-imaging. AIP Adv 6(10):100701. Maznev A, Wright O (2017) Upholding the diffraction limit in the focusing of light and sound. Wave Motion 68:182–189. Heinzelmann H, Pohl D (1994) Scanning near-field optical microscopy. Appl Phys A 59(2):89–101. Bharadwaj P, Deutsch B, Novotny L (2009) Optical antennas. Adv Opt Photon 1(3):438–483. Ropers C, Neacsu C, Elsaesser T, Albrecht M, Raschke M, Lienau C (2007) Grating-coupling of surface plasmons onto metallic tips: a nanoconfined light source. Nano Lett 7(9):2784–2788. Ostendorf A, Chichkov BN (2006) Two-photon polymerization: a new approach to micromachining. Photonics Spectra 40(10):72. Xing JF, Dong XZ, Chen WQ, Duan XM, Takeyasu N, Tanaka T, Kawata S (2007) Improving spatial resolution of two-photon microfabrication by using photoinitiator with high initiating efficiency. Appl Phys Lett 90(13):131106. Li L, Gattass RR, Gershgoren E, Hwang H, Fourkas JT (2009) Achieving λ/20 resolution by one-color initiation and deactivation of polymerization. Science 324(5929):910–913. Emons M, Obata K, Binhammer T, Ovsianikov A, Chichkov BN, Morgner U (2012) Two-photon polymerization technique with sub-50 nm resolution by sub-10 fs laser pulses. Opt Mater Express 2(7):942–947. Haske W, Chen VW, Hales JM, Dong W, Barlow S, Marder SR, Perry JW (2007) 65 nm feature sizes using visible wavelength 3-D multiphoton lithography. Opt Express 15(6):3426–3436. Burmeister F, Steenhusen S, Houbertz R, Zeitner UD, Nolte S, Tünnermann A (2012) Materials and technologies for fabrication of three-dimensional microstructures with sub-100 nm feature sizes by two-photon polymerization. J Laser Applic 24(4):042014. Gan Z, Cao Y, Evans RA, Gu M (2013) Three-dimensional deep sub-diffraction optical beam lithography with 9 nm feature size. Nat Commun 4:2061. Fischer J, Wegener M (2011) Three-dimensional direct laser writing inspired by stimulated-emission-depletion microscopy. 
Opt Mater Express 1(4):614–624. Sakellari I, Kabouraki E, Gray D, Purlys V, Fotakis C, Pikulin A, Bityurin N, Vamvakaki M, Farsari M (2012) Diffusion-assisted high-resolution direct femtosecond laser writing. ACS Nano 6(3):2302–2311. Gailevičius D, Padolskytė V, Mikoliūnaitė L, Šakirzanovas S, Juodkazis S, Malinauskas M (2019) Additive-manufacturing of 3D glass-ceramics down to nanoscale resolution. Nanoscale Horiz. Paz VF, Peterhänsel S, Frenner K, Osten W, Ovsianikov A, Obata K, Chichkov B (2011) Depth sensitive Fourier-scatterometry for the characterization of sub-100 nm periodic structures. In: Modeling Aspects in Optical Metrology III, vol. 8083. International Society for Optics and Photonics, 80830. Reinhardt C, Paz VF, Zheng L, Kurselis K, Birr T, Zywietz U, Chichkov B, Frenner K, Osten W (2015) Design and fabrication of near- to far-field transformers by sub-100 nm two-photon polymerization. In: Optically Induced Nanostructures: Biomedical and Technical Applications, 73. Walter de Gruyter GmbH & Co KG. Rekštytė S, Jonavičius T, Gailevičius D, Malinauskas M, Mizeikis V, Gamaly EG, Juodkazis S (2016) Nanoscale precision of 3D polymerization via polarization control. Adv Opt Mater 4(8):1209–1214. Farsari M, Chichkov BN (2009) Materials processing: two-photon fabrication. Nat Photon 3(8):450–452. Zheng L, Kurselis K, Reinhardt C, Kiyan R, Evlyukhin A, Hinze U, Chichkov B (2017) Fabrication of sub-150 nm structures by two-photon polymerization for plasmon excitation. In: Progress In Electromagnetics Research Symposium-Spring (PIERS), 2017, 3402–3405. IEEE. Ovsianikov A, Viertl J, Chichkov B, Oubaha M, MacCraith B, Sakellari I, Giakoumaki A, Gray D, Vamvakaki M, Farsari M, et al (2008) Ultra-low shrinkage hybrid photosensitive material for two-photon polymerization microfabrication. ACS Nano 2(11):2257–2262. Jonušauskas L, Juodkazis S, Malinauskas M (2018) Optical 3D printing: bridging the gaps in the mesoscale. J Opt 20(5):053001.

Acknowledgements: The financial support from the Deutsche Forschungsgemeinschaft (DFG, RE3012/4-1 and RE3012/2-1) is gratefully acknowledged.

Availability of data and materials: Please contact the corresponding author for data requests.

Author affiliations: Laboratory of Nano and Quantum Engineering, Leibniz Universität Hannover, Schneiderberg 39, Hannover, 30167, Germany (Lei Zheng, Kestutis Kurselis); Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover, 30419, Germany (Ayman El-Tamer, Ulf Hinze, Carsten Reinhardt, Ludger Overmeyer); Institute of Quantum Optics, Leibniz Universität Hannover, Welfengarten 1, Hannover, 30167, Germany (Kestutis Kurselis, Boris Chichkov); Institute of Transport and Automation Technology, Leibniz Universität Hannover, An der Universität 2, Garbsen, 30823, Germany (Ludger Overmeyer); Hochschule Bremen, Neustadtswall 30, Bremen, 28199, Germany (Carsten Reinhardt).

Correspondence to Lei Zheng.

Author contributions: LZ, CR, AE, and UH conceived the idea of the paper. LZ performed the experiments and carried out the data analysis. KK and AE assisted in the experimental work. BC, LO, and CR guided and supervised the work. All authors contributed to the general discussion and approved the final manuscript.

Keywords: Nanofabrication; Sub-100 nm; Periodic structures
March 2020, 2(1): 19-33. doi: 10.3934/fods.2020002

Semi-supervised classification on graphs using explicit diffusion dynamics

Robert L. Peach 1,2, Alexis Arnaudon 2,†, and Mauricio Barahona 2,*

1 Department of Mathematics and Imperial College Business School, Imperial College London, London SW7 2AZ, UK
2 Department of Mathematics, Imperial College London, London SW7 2AZ, UK
* Corresponding author: Mauricio Barahona
† Current address: Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland.

Published February 2020. Fund Project: All authors acknowledge funding through EPSRC award EP/N014529/1 supporting the EPSRC Centre for Mathematics of Precision Healthcare at Imperial.

Classification tasks based on feature vectors can be significantly improved by including within deep learning a graph that summarises pairwise relationships between the samples. Intuitively, the graph acts as a conduit to channel and bias the inference of class labels. Here, we study classification methods that consider the graph as the originator of an explicit graph diffusion. We show that appending graph diffusion to feature-based learning as an a posteriori refinement achieves state-of-the-art classification accuracy. This method, which we call Graph Diffusion Reclassification (GDR), uses overshooting events of a diffusive graph dynamics to reclassify individual nodes. The method uses intrinsic measures of node influence, which are distinct for each node, and allows the evaluation of the relationship and importance of features and graph for classification. We also present diff-GCN, a simple extension of Graph Convolutional Neural Network (GCN) architectures that leverages explicit diffusion dynamics, and allows the natural use of directed graphs. To showcase our methods, we use benchmark datasets of documents with associated citation data.

Keywords: Semi-supervised learning, graph convolutional neural networks, deep learning, Laplacian dynamics, graph diffusion.

Mathematics Subject Classification: Primary: 05C81, 05C85, 05C21, 68R10, 62M45; Secondary: 34B45, 60J60.

Citation: Robert L. Peach, Alexis Arnaudon, Mauricio Barahona. Semi-supervised classification on graphs using explicit diffusion dynamics. Foundations of Data Science, 2020, 2 (1) : 19-33. doi: 10.3934/fods.2020002

A. Arnaudon, R. L. Peach and M. Barahona, Graph centrality is a question of scale, arXiv e-prints, arXiv: 1907.08624. Google Scholar K. A. Bacik, M. T. Schaub, M. Beguerisse-Díaz, Y. N. Billeh and M. Barahona, Flow-Based Network Analysis of the Caenorhabditis elegans Connectome, PLoS Computational Biology, 12 (2016), e1005055, http://arXiv.org/abs/1511.00673. doi: 10.1371/journal.pcbi.1005055. Google Scholar M. Beguerisse-Díaz, B. Vangelov and M. Barahona, Finding role communities in directed networks using Role-Based Similarity, Markov Stability and the Relaxed Minimum Spanning Tree, in 2013 IEEE Global Conference on Signal and Information Processing, 2013, 937–940, http://arXiv.org/abs/1309.1795. Google Scholar M. Beguerisse-Díaz, G. Garduno-Hernández, B. Vangelov, S. N. Yaliraki and M. Barahona, Interest communities and flow roles in directed networks: The Twitter network of the UK riots, Journal of The Royal Society Interface, 11 (2014), 20140940, https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2014.0940.
Google Scholar C. M. Bishop, Pattern Recognition and Machine Learning, New York: Springer, 2006. doi: 10.1007/978-0-387-45528-0. Google Scholar M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam and P. Vandergheynst, Geometric deep learning: Going beyond euclidean data, IEEE Signal Processing Magazine, 34 (2017), 18-42. doi: 10.1109/MSP.2017.2693418. Google Scholar J. Bruna, W. Zaremba, A. Szlam and Y. Lecun, Spectral networks and locally connected networks on graphs, in International Conference on Learning Representations (ICLR2014), CBLS, April 2014, 2014, 1–14, http://arXiv.org/abs/1312.6203. Google Scholar O. Chapelle and A. Zien, Semi-supervised classification by low density separation, in Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS 2005), 2005, 57–64. Google Scholar J. Chen, J. Zhu and L. Song, Stochastic Training of Graph Convolutional Networks with Variance Reduction, arXiv e-prints, arXiv: 1012.2726, http://arXiv.org/abs/1710.10568. Google Scholar F. Chung, Laplacians and the Cheeger inequality for directed graphs, Annals of Combinatorics, 9 (2005), 1-19. doi: 10.1007/s00026-005-0237-z. Google Scholar R. R. Coifman and S. Lafon, Diffusion maps, Applied and Computational Harmonic Analysis, 21 (2006), 5-30. doi: 10.1016/j.acha.2006.04.006. Google Scholar K. Cooper and M. Barahona, Role-based similarity in directed networks, arXiv e-prints, arXiv: 1012.2726, http://arXiv.org/abs/1012.2726. Google Scholar M. Defferrard, X. Bresson and P. Vandergheynst, Convolutional neural networks on graphs with fast localized spectral filtering, in Advances in neural information processing systems, 2016, 3844–3852. Google Scholar J.-C. Delvenne, S. N. Yaliraki and M. Barahona, Stability of graph communities across time scales., Proceedings of the National Academy of Sciences of the United States of America, 107 (2010), 12755–12760, http://arXiv.org/abs/0812.1811. doi: 10.1073/pnas.0903215107. Google Scholar F. Fouss, A. Pirotte, J. Renders and M. Saerens, Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation, IEEE Transactions on Knowledge and Data Engineering, 19 (2007), 355-369. Google Scholar H. Gao, Z. Wang and S. Ji, Large-scale learnable graph convolutional networks, in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 8, Association for Computing Machinery, New York, NY, USA, 2018, 1416–1424, http://arXiv.org/abs/1808.03965. doi: 10.1145/3219819.3219947. Google Scholar [17] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT press, 2016. Google Scholar D. K. Hammond, P. Vandergheynst and R. Gribonval, Wavelets on graphs via spectral graph theory, Applied and Computational Harmonic Analysis, 30 (2011), 129-150. doi: 10.1016/j.acha.2010.04.005. Google Scholar D. P. Kingma, S. Mohamed, D. J. Rezende and M. Welling, Semi-supervised learning with deep generative models, in Advances in Neural Information Processing Systems, 2014, 3581–3589. Google Scholar T. N. Kipf and M. Welling, Semi-Supervised Classification with Graph Convolutional Networks, arXiv: 1609.02907v4, 1–14, http://arXiv.org/abs/1609.02907. Google Scholar R. Lambiotte, J.-C. Delvenne and M. Barahona, Random walks, markov processes and the multiscale modular organization of complex networks, IEEE Transactions on Network Science and Engineering, 1 (2014), 76–90, http://arXiv.org/abs/1502.04381, http://arXiv.org/abs/0812.1770. doi: 10.1109/TNSE.2015.2391998. 
Google Scholar Y. LeCun, Y. Bengio and G. Hinton, Deep learning, Nature, 521 (2015), 436-444. doi: 10.1038/nature14539. Google Scholar R. Levie, F. Monti, X. Bresson and M. M. Bronstein, CayleyNets: Graph convolutional neural networks with complex rational spectral filters, IEEE Transactions on Signal Processing, 67 (2019), 97-109. doi: 10.1109/TSP.2018.2879624. Google Scholar Z. Liu and M. Barahona, Geometric multiscale community detection: Markov stability and vector partitioning, Journal of Complex Networks, 6 (2018), 157-172. doi: 10.1093/comnet/cnx028. Google Scholar Z. Liu and M. Barahona, Graph-based data clustering via multiscale community detection, Applied Network Science, 5 (2020), 16pp, http://arXiv.org/abs/1909.04491. Google Scholar Z. Liu, C. Chen, L. Li, J. Zhou, X. Li, L. Song and Y. Qi, GeniePath: Graph neural networks with adaptive receptive paths, AAAI Technical Track: Machine Learning, 33 (2019), http://arXiv.org/abs/1802.00910. doi: 10.1609/aaai.v33i01.33014424. Google Scholar N. Masuda, M. A. Porter and R. Lambiotte, Random walks and diffusion on networks, Physics Reports, 716/717 (2017), 1-58. doi: 10.1016/j.physrep.2017.07.007. Google Scholar L. Page, S. Brin, R. Motwani and T. Winograd, The PageRank Citation Ranking: Bringing Order to the Web, Technical Report 1999-66, Stanford InfoLab, 1999, http://ilpubs.stanford.edu:8090/422/. Google Scholar B. Perozzi, R. Al-Rfou and S. Skiena, Deepwalk: Online learning of social representations, in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 4, Association for Computing Machinery, New York, NY, USA, 2014, 701–710. doi: 10.1145/2623330.2623732. Google Scholar Y. Qian, P. Expert, T. Rieu, P. Panzarasa and M. Barahona, Quantifying the alignment of graph and features in deep learning, arXiv e-prints, arXiv: 1905.12921. Google Scholar M. T. Schaub, J.-C. Delvenne, R. Lambiotte and M. Barahona, Multiscale dynamical embeddings of complex networks, Phys. Rev. E, 99 (2019), 062308. doi: 10.1103/PhysRevE.99.062308. Google Scholar P. Sen, G. M. Namata, M. Bilgic, L. Getoor, B. Gallagher and T. Eliassi-Rad, Collective classification in network data, AI Magazine, 29 (2008), 93–106, http://www.cs.iit.edu/ ml/pdfs/sen-aimag08.pdf. doi: 10.1609/aimag.v29i3.2157. Google Scholar P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò and Y. Bengio, Graph attention networks, Machine Learning, 3 (2018), 1–12, http://arXiv.org/abs/1710.10903. Google Scholar J. Weston, F. Ratle, H. Mobahi and R. Collobert, Deep learning via semi-supervised embedding, ICML '08: Proceedings of the 25th International Conference on Machine Learning, 2008, 1168–1175. doi: 10.1145/1390156.1390303. Google Scholar Z. Yang, W. W. Cohen and R. Salakhutdinov, Revisiting semi-supervised learning with graph embeddings, arXiv: 1603.08861v2, 48, http://arXiv.org/abs/1603.08861. Google Scholar J. Zhang, X. Shi, J. Xie, H. Ma, I. King and D.-Y. Yeung, GaAN: Gated attention networks for learning on large and spatiotemporal graphs, arXiv e-prints, http://arXiv.org/abs/1803.07294. Google Scholar X. Zhu, Z. Ghahramani and J. Lafferty, Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions, in Proceedings of the Twentieth International Conference on International Conference on Machine Learning, ICML 3, AAAI Press, 2003, 912-919. Google Scholar C. Zhuang and Q. 
Ma, Dual graph convolutional networks for graph-based semi-supervised classification, in Proceedings of the 2018 World Wide Web Conference, Lyon, France, 2018, 499–508. doi: 10.1145/3178876.3186116. Google Scholar

Table 1. Statistics of datasets as reported in [35] and [30]
Dataset | Nodes | Edges | Classes | Features
Citeseer | 3,327 | 4,732 | 6 | 3,703
Cora | 2,708 | 5,429 | 7 | 1,433
Pubmed | 19,717 | 44,338 | 3 | 500
Wikipedia | 20,525 | 215,056 | 12 | 100

Table 2. Percentage classification accuracy before and after application of relabelling by GDR for various classifiers. We present the improvement of GDR on the uniform prediction (which ignores features). We also consider four supervised classifiers (which learn from features without the graph): projection, RF, SVM and MLP. For RF, we use a maximum depth of 20; for SVM, we set C = 50; for MLP, we implement the same architecture as GCN (d1 = 16-unit hidden layer, 0.5 dropout, 200 epochs, 0.01 learning rate, L2 loss function). Finally, we compare with two semi-supervised graph classifiers: GCN [20] and Planetoid [35]. The numbers in brackets record the change in accuracy accomplished by applying GDR on the corresponding prior classifier. Boldface indicates the method with highest accuracy for each dataset.
Method | Citeseer | Cora | Pubmed | Wikipedia
Uniform | 7.7 | 13.0 | 18.0 | 28.7
GDR (Uniform) | 50.6 (+42.9) | 71.8 (+58.8) | 73.2 (+55.2) | 31.4 (+2.7)
Projection | 61.8 | 59.0 | 72.0 | 32.5
RF | 60.3 | 58.9 | 68.8 | 50.8
SVM | 61.1 | 58.0 | 49.9 | 31.0
MLP | 57.0 | 56.0 | 70.7 | 43.0
GDR (Projection) | 70.4 (+8.7) | 79.7 (+20.7) | 75.8 (+3.8) | 36.9 (+4.4)
GDR (RF) | 70.5 (+10.2) | 78.7 (+19.8) | 72.2 (+3.2) | 50.8 (+0.0)
GDR (SVM) | 70.3 (+9.2) | 81.2 (+23.2) | 52.4 (+2.5) | 41.9 (+10.8)
GDR (MLP) | 69.7 (+12.7) | 78.5 (+22.5) | 75.5 (+4.8) | 40.5 (-2.5)
Planetoid | 64.7 | 75.7 | 72.2 | -
GCN | 70.3 | 81.1 | 79.0 | 39.2
GDR (GCN) | 70.8 (+0.5) | 82.2 (+1.1) | 79.4 (+0.4) | 39.5 (+0.3)

Table 3. Percentage classification accuracy of GCN and its extension diff-GCN, which has an explicit diffusion operator (16)
Model | Citeseer | Cora | Pubmed | Wikipedia
diff-GCN | 71.9 | 82.3 | 79.3 | 45.9

Table 4. Accuracy of GDR using the undirected, directed, and reverse directed graphs of the Cora dataset
Method | Undirected ($A$) | Directed, fw ($A_\text{dir}$) | Directed, bw ($A_\text{dir}^T$)
GDR (Projection) | 79.7 | 62.1 | 64.6
GDR (RF) | 78.7 | 58.0 | 57.6
GDR (SVM) | 81.2 | 63.6 | 62.1
GDR (MLP) | 78.5 | 57.3 | 56.4

Table 5. Accuracy of GCN and diff-GCN using the undirected, directed, reverse directed, and bidirectional (augmented) graphs of the Cora dataset. The highest accuracy is achieved by diff-GCN with the augmented graph (boldface)
Method | Undirected ($A$) | Directed, fw ($A_\text{dir}$) | Directed, bw ($A_\text{dir}^T$) | Augmented, fw+bw ($[A_\text{dir}\ A_\text{dir}^T]$)
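As a minimal illustration of the explicit Laplacian diffusion underlying GDR and diff-GCN, the sketch below propagates a node signal with the heat kernel exp(−tL) on a toy undirected graph. It is a sketch of the general idea only: the graph, the initial signal, and the diffusion times are invented for the example, and the code is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

# Toy undirected graph (5 nodes): adjacency matrix A and one signal per node.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
x0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # initial "heat" / class indicator

# Combinatorial graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# Explicit diffusion dynamics dx/dt = -L x, solved as x(t) = expm(-t L) x(0).
for t in (0.1, 0.5, 2.0):
    xt = expm(-t * L) @ x0
    print(f"t = {t:>3}: {np.round(xt, 3)}")
```

Short diffusion times keep the signal near its source node, while long times spread it toward the graph-wide average; node-by-node comparisons of such diffused signals are the kind of intrinsic influence measure that the reclassification step exploits.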
2011, 8(3): 689-694. doi: 10.3934/mbe.2011.8.689

A note for the global stability of a delay differential equation of hepatitis B virus infection

Bao-Zhu Guo 1 and Li-Ming Cai 1
1 Academy of Mathematics and Systems Science, Academia Sinica, Beijing 100190, China

Received September 2010; Revised October 2010; Published June 2011.

The global stability for a delayed HIV-1 infection model is investigated. It is shown that the global dynamics of the system can be completely determined by the reproduction number, and the chronic infected equilibrium of the system is globally asymptotically stable whenever it exists. This improves the related results presented in [S. A. Gourley, Y. Kuang and J. D. Nagy, Dynamics of a delay differential equation model of hepatitis B virus infection, Journal of Biological Dynamics, 2 (2008), 140-153].

Keywords: HBV model, time delay, Lyapunov functional, global stability.

Mathematics Subject Classification: Primary: 92D25, 34D23, 34K2.

Citation: Bao-Zhu Guo, Li-Ming Cai. A note for the global stability of a delay differential equation of hepatitis B virus infection. Mathematical Biosciences & Engineering, 2011, 8 (3) : 689-694. doi: 10.3934/mbe.2011.8.689

S. A. Gourley, Y. Kuang and J. D. Nagy, Dynamics of a delay differential equation model of hepatitis B virus infection, J. Biol. Dyn., 2 (2008), 140. doi: 10.1080/17513750701769873. G. Huang, W. Ma and Y. Takeuchi, Global properties for virus dynamics model with Beddington-DeAngelis functional response, Appl. Math. Lett., 22 (2009), 1690. doi: 10.1016/j.aml.2009.06.004. G. Huang, Y. Takeuchi and W. Ma, Lyapunov functional for delay differential equations model of viral infections, SIAM J. Appl. Math., 70 (2010), 2693. doi: 10.1137/090780821. G. Huang, Y. Takeuchi, W. Ma and D. Wei, Global stability for delay SIR and SEIR epidemic models with nonlinear incidence rate, Bull. Math. Biol., 72 (2010), 1192. doi: 10.1007/s11538-009-9487-6. A. Korobeinikov, Global properties of basic virus dynamics models, Bull. Math. Biol., 66 (2004), 879. doi: 10.1016/j.bulm.2004.02.001. A. Korobeinikov, Global properties of infectious disease models with nonlinear incidence, Bull. Math. Biol., 69 (2007), 1871. doi: 10.1007/s11538-007-9196-y. A. Korobeinikov, Lyapunov functions and global properties for SEIR and SEIS epidemic models, Math. Med. Biol., 21 (2004), 75. doi: 10.1093/imammb/21.2.75. A. Korobeinikov and G. C. Wake, Lyapunov functions and global stability for SIR, SIRS, and SIS epidemiological models, Appl. Math. Lett., 15 (2002), 955. doi: 10.1016/S0893-9659(02)00069-1. M. Y. Li and H. Shu, Global dynamics of an in-host viral model with intracellular delay, Bull. Math. Biol., 72 (2010), 1492. doi: 10.1007/s11538-010-9503-x. M. Y. Li and H. Shu, Impact of intracellular delays and target-cell dynamics on in vivo viral infections, SIAM J. Appl. Math., 70 (2010), 2434. doi: 10.1137/090779322. C. C. McCluskey, Global stability for an SEIR epidemiological model with varying infectivity and infinite delay, Math. Biosci. Eng., 6 (2009), 603. doi: 10.3934/mbe.2009.6.603. C. C.
McCluskey, Complete global stability for an SIR epidemic model with delay-distributed or discrete, Nonlinear Anal. Real World Appl., 11 (2010), 55. doi: 10.1016/j.nonrwa.2008.10.014. L. Min, Y. Su and Y. Kuang, Mathematical analysis of a basic virus infection model with application to HBV infection, Rocky Mount. J. Math., 38 (2008), 1573. doi: 10.1216/RMJ-2008-38-5-1573. M. A. Nowak, S. Bonhoeffer, A. M. Hill, R. Boehme, H. C. Thomas and H. McDade, Viral dynamics in hepatitis B virus infection, Proc. Nat. Acad. Sci. USA, 93 (1996), 4398. doi: 10.1073/pnas.93.9.4398. M. A. Nowak and C. R. M. Bangham, Population dynamics of immune responses to persistent viruses, Science, 272 (1996), 74. doi: 10.1126/science.272.5258.74.
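For readers who want to experiment with this class of models, the sketch below integrates a generic target cell–infected cell–virus system with a single intracellular delay (a Nowak–Bangham-type model) using a simple fixed-step scheme, and evaluates the corresponding basic reproduction number. The equations and parameter values are standard illustrations of the model class, not the exact system or numbers analysed in this note.

```python
import numpy as np

# Generic delayed virus-dynamics model (illustrative parameters only):
#   x' = lam - d*x - beta*x*v
#   y' = beta*exp(-m*tau)*x(t-tau)*v(t-tau) - a*y
#   v' = k*y - u*v
lam, d, beta = 10.0, 0.01, 5e-4
a, k, u = 0.5, 20.0, 3.0
m, tau = 0.05, 1.0

# Basic reproduction number for this model class.
R0 = lam * beta * k * np.exp(-m * tau) / (d * a * u)
print(f"basic reproduction number R0 = {R0:.2f}")

dt, T = 0.01, 200.0
n_tau = int(round(tau / dt))
steps = int(T / dt)

# Constant history equal to the initial condition: uninfected steady state
# plus a small virus inoculum.
x_hist = np.full(steps + 1, lam / d)
y_hist = np.zeros(steps + 1)
v_hist = np.zeros(steps + 1)
v_hist[0] = 1e-3

for n in range(steps):
    x, y, v = x_hist[n], y_hist[n], v_hist[n]
    # Delayed terms: fall back to the initial values while t < tau.
    x_d = x_hist[max(n - n_tau, 0)]
    v_d = v_hist[max(n - n_tau, 0)]
    x_hist[n + 1] = x + dt * (lam - d * x - beta * x * v)
    y_hist[n + 1] = y + dt * (beta * np.exp(-m * tau) * x_d * v_d - a * y)
    v_hist[n + 1] = v + dt * (k * y - u * v)

print(f"final state: x = {x_hist[-1]:.1f}, y = {y_hist[-1]:.3f}, v = {v_hist[-1]:.3f}")
```

With these illustrative parameters R0 > 1, so the trajectory moves away from the infection-free state toward the chronic infected equilibrium, which is the regime whose global stability the note addresses.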
SANDEEP SINGH Volume 130 All articles Published: 5 February 2021 Article ID 0031 Research article Vorticity patterns along the Main Central Thrust Zone, Alaknanda–Dhauli Ganga Valleys (Garhwal), Uttarakhand Himalaya LAWRENCE KANYAN ARVIND K JAIN SANDEEP SINGH The Greater Himalayan Sequence (GHS), constituting the anatectic core of the Himalaya, is generally modelled as a mid-crustal southward extruding channel or wedge. Movements along the Main Central Thrust (MCT) in the south and the South Tibetan Detachment System (STDS) in the north and exhumation along the Himalayan front played an important role in the extrusion of the GHS from beneath the Tibetan plateau during the Miocene. To understand the kinematics of these orogen-scale shear zones, it is important to constrain the percentage of pure shear associated with them. In this paper, we present the kinematic vorticity data from the Main Central Thrust Zone (MCTZ), Alaknanda and Dhauli Ganga Valleys (Garhwal), Uttarakhand Himalaya. The mean kinematic vorticity number (W$_{m} $), which can be used to calculate the percentage of pure shear, has been estimated by analysing the rotational behaviour of rigid grains in a ductile matrix. The analysis reveals that pure shear provides significant contribution (30–52%) to the deformation associated with southward ductile shearing along the MCT, with the highest mean kinematic vorticity number (W$_{m} $) values close to the MCT. The results provide important quantitative constraints for the boundary conditions in the extrusion models. The Wm values from within the anatectic core have not been reported as most of the vorticity gauges fail due to increased deformation temperatures in this region. $\bf{Highlights}$ $\bullet$ Orogen-scale mid-crustal southward extruding channel or wedge models deformation of the Great Himalayan Sequence (GHS) of the anatectic core, whose kinematics is to be understood by constraining the percentage of pure shear. $\bullet$ Vorticity estimation near the Main Central Thrust Zone (MCTZ) is performed along the Alaknanda–Dhauli Ganga Valleys, Uttarakhand Himalaya along with critical analysis of published vorticity data from the other areas. $\bullet$ Mean kinematic vorticity number (Wm), a quantitative estimator of pure shear percentage during non-coaxial deformation in a shear zone, varies between 0.675 and 0.875 within the MCTZ, corresponding to a pure shear percentage between 30% and 52%. $\bullet$ A general trend of decreasing pure shear component towards the channel boundaries is explained by velocity profile within an extruding channel of hot and low-viscosity mid-crustal rocks and observed from the compiled vorticity data from other Himalayan traverses. $\bullet$ Our results agree with the channel flow conceptual model and provide quantitative constraints on the percentage of pure shear associated with deformation within the GHS. Volume 131 All articles Published: 3 September 2022 Article ID 0192 Research article Rb–Sr and Sm–Nd ages of the basement from Cauvery Basin: Crustal linkage to the Madurai Block, Peninsular India PIYUSH GUPTA SHAKTI SINGH RATHORE SANDEEP SINGH The Cauvery Basin, formed as a result of the fragmentation of Gondwana during the Late Jurassic/Early Cretaceous period, is located on the eastern continental margin of India. The basin, covered under thick Phanerozoic sediments, has a geochronological lesser understood basement that forms the easternmost extremity of the Madurai Block in the Southern Granulite Terrane. 
This paper reports Rb–Sr and Sm–Nd ages between 2173–2307 Ma from northern onshore and Rb–Sr ages between 1223–983 Ma from southern offshore parts of the basin. The studied samples include basement core samples of hornblende–gneisses, granites, and metapelites (chlorite–biotite and garnet–biotite schists). These ages represent at least two episodes of tectonothermal events during the Early Paleoproterozoic and Late Neoproterozoic, suggesting a polymetamorphic history of the basement, and are correlatable with the reported events. The study also yielded Early Paleozoic Cambro–Ordovician whole-rock–biotite mineral Rb–Sr ages of 443–487 Ma, coinciding with the cooling stages of the Pan-African tectonothermal event following the thermal resetting in the studied basement of the Cauvery Basin. Further, the Sm–Nd systematics yielded two groups of model ages (2.1–3.4 Ga and 1.5 Ga), based on which two distinct crustal domains have been identified in the basement, viz., a Paleoarchean to Early Paleoproterozoic northern domain and an Early Mesoproterozoic southern domain, respectively. This has also been supported by the distinct radiometric ages obtained from these domains.
Volume 131 All articles Published: 28 October 2022 Article ID 0232 Research article Three-dimensional attenuation tomography of Garhwal Himalaya, India obtained from strong motion data MOHIT PANDEY A JOSHI SAURABH SHARMA JYOTI SINGH SANDEEP SINGH This paper investigates the three-dimensional frequency-dependent attenuation structure of the Garhwal Himalaya in the Indian subcontinent. Based on the distribution of earthquakes and recording stations in the Garhwal Himalaya, the entire region of 152 × 94 km² is divided into 108 three-dimensional uniform rectangular blocks. These blocks are assumed to be 5 km thick, extending down to a depth of 15 km. Each block represents rock with a different attenuation coefficient. The S-phase of strong motion records has been used to estimate the shear wave quality factor in each block by the inversion of spectral acceleration data. The inversion of spectral acceleration data is based on the modified technique of Joshi (2007) and Joshi et al. (2010), which was initially given by Hashida and Shimazaki (1984). The earthquake data of 19 events digitally recorded by 33 stations of the strong motion network between 2005 and 2017 have been used in this paper. The outcome of the inversion process is the shear wave quality factor at all frequencies present in the records. The three-dimensional attenuation structure at various frequencies is presented in this paper and is correlated with the regional tectonics of the Garhwal Himalaya. The correlation of the attenuation structure at 10, 12 and 15 Hz with the tectonics of the region indicates that the shear wave quality factor is strongly related to the regional tectonics. The values of the shear wave quality factor at different frequencies obtained from inversion have been used to derive a shear wave quality factor relation Q$_{\beta}$(f) = 107f$^{0.82}$ for the Garhwal Himalaya region for frequencies of 10–16 Hz. Comparison of the obtained shear wave quality factor with relations from other studies indicates that it is close to earlier results, supporting the reliability of the three-dimensional shear wave attenuation structure obtained from the inversion of spectral data.
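The two quantitative statements quoted in the abstracts above are easy to evaluate directly. The sketch below (i) converts a mean kinematic vorticity number Wm into an approximate pure-shear percentage, assuming the common convention that Wm is the cosine of the angle between the flow apophyses (so Wm ≈ 0.71 marks equal pure- and simple-shear contributions), and (ii) evaluates the reported quality-factor relation Qβ(f) = 107 f^0.82 over 10–16 Hz. The conversion in (i) is a standard approximation, not necessarily the gauge-specific relation used in the vorticity study, so it only roughly reproduces the quoted 30–52% range.

```python
import numpy as np

def pure_shear_percent(Wm):
    """Approximate pure-shear contribution from the mean kinematic vorticity
    number, assuming Wm = cos(nu) with nu the angle between the flow
    apophyses (Wm = 1: simple shear, Wm = 0: pure shear)."""
    nu = np.degrees(np.arccos(Wm))
    return 100.0 * nu / 90.0

def Q_beta(f):
    """Frequency-dependent shear-wave quality factor, Q(f) = 107 f^0.82."""
    return 107.0 * f ** 0.82

for Wm in (0.675, 0.875):
    print(f"Wm = {Wm:.3f} -> ~{pure_shear_percent(Wm):.0f}% pure shear")

for f in (10.0, 13.0, 16.0):
    print(f"f = {f:4.1f} Hz -> Q_beta ≈ {Q_beta(f):.0f}")
```

For Wm = 0.675 and 0.875 this gives roughly 53% and 32% pure shear, close to the 52% and 30% end-members quoted in the vorticity abstract.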
With regard to your mental well-being, nootropics are not antidepressants, and mental care is important. They are not a replacement for other ways of treating mental difficulties. That being said, they can help boost your happiness, for instance by helping you sleep better. Melatonin is a synthetic version of a naturally occurring hormone and could help you sleep better.

Clearly, the hype surrounding drugs like modafinil and methylphenidate is unfounded. These drugs are beneficial in treating cognitive dysfunction in patients with Alzheimer's, ADHD or schizophrenia, but it's unlikely that today's enhancers offer significant cognitive benefits to healthy users. In fact, taking a smart pill is probably no more effective than exercising or getting a good night's sleep. Only two of the eight experiments reviewed in this section found that stimulants enhanced performance, on a nonverbal fluency task in one case and in Raven's Progressive Matrices in the other. The small number of studies of any given type makes it difficult to draw general conclusions about the underlying executive function systems that might be influenced.

That left me with 329 days of data. The results are that (correcting for the magnesium citrate self-experiment I was running during the time period, which did not turn out too great) days on which I happened to use my LED device for LLLT were much better than regular days. Below is a graph showing the entire MP dataseries with LOESS-smoothed lines showing LLLT vs non-LLLT days.

If this is the case, this suggests some thoughtfulness about my use of nicotine: there are times when use of nicotine will not be helpful, but times when it will be helpful. I don't know what makes the difference, but I can guess it relates to over-stimulation: on some nights during the experiment, I had difficulty concentrating on n-backing because it was boring and I was thinking about the other things I was interested in or working on - in retrospect, I wonder if those instances were nicotine nights.
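For readers who keep similar daily logs, the sketch below shows the general shape of the LLLT comparison described above: given a table with a daily self-rating and an on/off flag for LED use, compare the group means and smooth each group's trend. The column names and the synthetic data are made up, a rolling mean stands in for the LOESS smoother, and the magnesium-citrate correction mentioned above is omitted.

```python
import numpy as np
import pandas as pd

# Hypothetical daily log: 'mp' is a 1-5 mood/productivity self-rating,
# 'lllt' flags days the LED device was used. Column names are illustrative.
rng = np.random.default_rng(0)
days = pd.date_range("2013-01-01", periods=329, freq="D")
lllt = rng.random(329) < 0.4
mp = np.clip(np.round(rng.normal(3.0 + 0.3 * lllt, 0.8)), 1, 5)
log = pd.DataFrame({"date": days, "mp": mp, "lllt": lllt})

# Simple comparison of LLLT vs non-LLLT days.
print(log.groupby("lllt")["mp"].agg(["mean", "count"]))

# Smoothed trend per group (30-day centered rolling mean in place of LOESS).
for flag, grp in log.groupby("lllt"):
    trend = grp["mp"].rolling(30, min_periods=5, center=True).mean()
    print(f"LLLT={flag}: latest smoothed rating ≈ {trend.iloc[-1]:.2f}")
```

On real data one would also want a blinded design or at least a randomization check, since "days I happened to use the device" is exactly the kind of self-selected comparison that confounds can creep into.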
But should it be, just as the use of certain performance-enhancing drugs is regulated for professional athletes? Should universities consider dope testing to check that students aren't gaining an unfair advantage through drug use? Several new medications are on the market and in development for Alzheimer's disease, a progressive neurological disease leading to memory loss, language deterioration, and confusion that afflicts about 4.5 million Americans and is expected to strike millions more as the baby boom generation ages. Yet the burning question for those who aren't staring directly into the face of Alzheimer's is whether these medications might make us smarter. Stayed up with the purpose of finishing my work for a contest. This time, instead of taking the pill as a single large dose (I feel that after 3 times, I understand what it's like), I will take 4 doses over the new day. I took the first quarter at 1 AM, when I was starting to feel a little foggy but not majorly impaired. Second dose, 5:30 AM; feeling a little impaired. 8:20 AM, third dose; as usual, I feel physically a bit off and mentally tired - but still mentally sharp when I actually do something. Early on, my heart rate seemed a bit high and my limbs trembling, but it's pretty clear now that that was the caffeine or piracetam. It may be that the other day, it was the caffeine's fault as I suspected. The final dose was around noon. The afternoon crash wasn't so pronounced this time, although motivation remains a problem. I put everything into finishing up the spaced repetition literature review, and didn't do any n-backing until 11:30 PM: 32/34/31/54/40%. All clear? Try one (not dozens) of nootropics for a few weeks and keep track of how you feel, Kerl suggests. It's also important to begin with as low a dose as possible; when Cyr didn't ease into his nootropic regimen, his digestion took the blow, he admits. If you don't notice improvements, consider nixing the product altogether and focusing on what is known to boost cognitive function – eating a healthy diet, getting enough sleep regularly and exercising. "Some of those lifestyle modifications," Kerl says, "may improve memory over a supplement." Another ingredient used in this formula is GABA or Gamma-Aminobutyric acid; it's the second most common neurotransmitter found in the human brain. Being an inhibitory neurotransmitter it helps calm and reduce neuronal activity; this calming effect makes GABA an excellent ingredient in anti-anxiety medication. Lecithin is another ingredient found in Smart Pill and is a basic compound found in every cell of the body, with cardiovascular benefits it can also help restore the liver. Another effect is that it works with neurological functions such as memory or attention, thus improving brain Effectiveness. The principal metric would be mood, however defined. Zeo's web interface & data export includes a field for Day Feel, which is a rating 1-5 of general mood & quality of day. I can record a similar metric at the end of each day. 1-5 might be a little crude even with a year of data, so a more sophisticated measure might be in order. The first mood study is paywalled so I'm not sure what they used, but Shiotsuki 2008 used State-Trait of Anxiety Inventory (STAI) and Profiles of Mood States Test (POMS). The full POMS sounds too long to use daily, but the Brief POMS might work. 
In the original 1987 paper A brief POMS measure of distress for cancer patients, patients answering this questionnaire had a mean total mean of 10.43 (standard deviation 8.87). Is this the best way to measure mood? I've asked Seth Roberts; he suggested using a 0-100 scale, but personally, there's no way I can assess my mood on 0-100. My mood is sufficiently stable (to me) that 0-5 is asking a bit much, even. I almost resigned myself to buying patches to cut (and let the nicotine evaporate) and hope they would still stick on well enough afterwards to be indistinguishable from a fresh patch, when late one sleepless night I realized that a piece of nicotine gum hanging around on my desktop for a week proved useless when I tried it, and that was the answer: if nicotine evaporates from patches, then it must evaporate from gum as well, and if gum does evaporate, then to make a perfect placebo all I had to do was cut some gum into proper sizes and let the pieces sit out for a while. (A while later, I lost a piece of gum overnight and consumed the full 4mg to no subjective effect.) Google searches led to nothing indicating I might be fooling myself, and suggested that evaporation started within minutes in patches and a patch was useless within a day. Just a day is pushing it (who knows how much is left in a useless patch?), so I decided to build in a very large safety factor and let the gum sit for around a month rather than a single day. These days, nootropics are beginning to take their rightful place as a particularly powerful tool in the Neurohacker's toolbox. After all, biochemistry is deeply foundational to neural function. Whether you are trying to fix the damage that is done to your nervous system by a stressful and toxic environment or support and enhance your neural functioning, getting the chemistry right is table-stakes. And we are starting to get good at getting it right. What's changed? One item always of interest to me is sleep; a stimulant is no good if it damages my sleep (unless that's what it is supposed to do, like modafinil) - anecdotes and research suggest that it does. Over the past few days, my Zeo sleep scores continued to look normal. But that was while not taking nicotine much later than 5 PM. In lieu of a different ml measurer to test my theory that my syringe is misleading me, I decide to more directly test nicotine's effect on sleep by taking 2ml at 10:30 PM, and go to bed at 12:20; I get a decent ZQ of 94 and I fall asleep in 16 minutes, a bit below my weekly average of 19 minutes. The next day, I take 1ml directly before going to sleep at 12:20; the ZQ is 95 and time to sleep is 14 minutes. (People aged <=18 shouldn't be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers' sleep. Changes in effects with age are real - amphetamines' stimulant effects and modafinil's histamine-like side-effects come to mind as examples.) As far as anxiety goes, psychiatrist Emily Deans has an overview of why the Kiecolt-Glaser et al 2011 study is nice; she also discusses why fish oil seems like a good idea from an evolutionary perspective. There was also a weaker earlier 2005 study also using healthy young people, which showed reduced anger/anxiety/depression plus slightly faster reactions. The anti-stress/anxiolytic may be related to the possible cardiovascular benefits (Carter et al 2013). 
So what about the flip side: a drug to erase bad memories? It may have failed Jim Carrey in Eternal Sunshine of the Spotless Mind, but neuroscientists have now discovered an amnesia drug that can dull the pain of traumatic events. The drug, propranolol, was originally used to treat high blood pressure and heart disease. Doctors noticed that patients given the drug suffered fewer signs of stress when recalling a trauma. …Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent. Another common working memory task is the n-back task, which requires the subject to view a series of items (usually letters) and decide whether the current item is identical to the one presented n items back. This task taxes working memory because the previous items must be held in working memory to be compared with the current item. The easiest version of this is a 1-back task, which is also called a double continuous performance task (CPT) because the subject is continuously monitoring for a repeat or double. Three studies examined the effects of MPH on working memory ability as measured by the 1-back task, and all found enhancement of performance in the form of reduced errors of omission (Cooper et al., 2005; Klorman et al., 1984; Strauss et al., 1984). Fleming et al. (1995) tested the effects of d-AMP on a 5-min CPT and found a decrease in reaction time, but did not specify which version of the CPT was used. When Giurgea coined the word nootropic (combining the Greek words for mind and bending) in the 1970s, he was focused on a drug he had synthesized called piracetam. Although it is approved in many countries, it isn't categorized as a prescription drug in the United States. That means it can be purchased online, along with a number of newer formulations in the same drug family (including aniracetam, phenylpiracetam, and oxiracetam). Some studies have shown beneficial effects, including one in the 1990s that indicated possible improvement in the hippocampal membranes in Alzheimer's patients. But long-term studies haven't yet borne out the hype. The real-life Limitless Pill? One of the newer offerings in the nootropic industry, Avanse Laboratories' new ingenious formula has been generating quite much popularity on the internet, and has been buzzing around on dedicated nootropic forums. Why do we pick this awesome formula to be the #1 nootropic supplement of 2017 and 2018? Simple, name another supplement that contains "potent 1160mg capsule" including 15 mg of world's most powerful nootropic agent (to find out, please click on Learn More). It is cheap, in our opinion, compared to what it contains. And we don't think their price will stay this low for long. Avanse Laboratories is currently playing… Learn More... The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. 
(These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: \frac{1.2 - 0.93}{0.076}=3.55.) I ultimately mixed it in with the 3kg of piracetam and included it in that batch of pills. I mixed it very thoroughly, one ingredient at a time, so I'm not very worried about hot spots. But if you are, one clever way to get accurate caffeine measurements is to measure out a large quantity & dissolve it since it's easier to measure water than powder, and dissolving guarantees even distribution. This can be important because caffeine is, like nicotine, an alkaloid poison which - the dose makes the poison - can kill in high doses, and concentrated powder makes it easy to take too much, as one inept Englishman discovered the hard way. (This dissolving trick is applicable to anything else that dissolves nicely.) For proper brain function, our CNS (Central Nervous System) requires several amino acids. These derive from protein-rich foods. Consider amino acids to be protein building blocks. Many of them are dietary precursors to vital neurotransmitters in our brain. Epinephrine (adrenaline), serotonin, dopamine, and norepinephrine assist in enhancing mental performance. A few examples of amino acid nootropics are: The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU35, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140nmol/l and reading anecdotes online suggest that 5k IU daily doses tend to put people well below that (around 70-100nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia. For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular. "One of my favorites is 1, 3, 7-trimethylxanthine," says Dr. Mark Moyad, director of preventive and alternative medicine at the University of Michigan. He says this chemical boosts many aspects of cognition by improving alertness. It's also associated with some memory benefits. "Of course," Moyad says, "1, 3, 7-trimethylxanthine goes by another name—caffeine." 
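As a quick check of the back-of-the-envelope effect size quoted above, the snippet below recomputes it both the naive way used in the text (mean difference divided by one group's standard deviation) and with a pooled standard deviation; the group sizes are not given in the passage, so equal groups are assumed purely for illustration.

```python
import math

# Means and SDs quoted above (Beck Anxiety, log-transformed covariates).
m_placebo, sd_placebo = 1.2, 0.075
m_treat, sd_treat = 0.93, 0.076

# Naive version used in the text: difference over a single SD.
d_naive = (m_placebo - m_treat) / sd_treat

# Pooled-SD version (assuming equal group sizes, which the passage omits).
sd_pooled = math.sqrt((sd_placebo**2 + sd_treat**2) / 2)
d_pooled = (m_placebo - m_treat) / sd_pooled

print(f"naive d  ≈ {d_naive:.2f}")   # ≈ 3.55, matching the text
print(f"pooled d ≈ {d_pooled:.2f}")
```

Either way the standardized difference is implausibly large for a nutrition trial, which is consistent with the suspicion in the text that the reported numbers are transformed quantities rather than raw scale scores.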
Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human. During the 1920s, Amphetamine was being researched as an asthma medication when its cognitive benefits were accidentally discovered. In many years that followed, this enhancer was exploited in a number of medical and nonmedical applications, for instance, to enhance alertness in military personnel, treat depression, improve athletic performance, etc. Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below. Weyandt et al. (2009) Large public university undergraduates (N = 390) 7.5% (past 30 days) Highest rated reasons were to perform better on schoolwork, perform better on tests, and focus better in class 21.2% had occasionally been offered by other students; 9.8% occasionally or frequently have purchased from other students; 1.4% had sold to other students The next morning, four giant pills' worth of the popular piracetam-and-choline stack made me... a smidge more alert, maybe? (Or maybe that was just the fact that I had slept pretty well the night before. It was hard to tell.) Modafinil, which many militaries use as their "fatigue management" pill of choice, boasts glowing reviews from satisfied users. But in the United States, civilians need a prescription to get it; without one, they are stuck using adrafinil, a precursor substance that the body metabolizes into modafinil after ingestion. Taking adrafinil in lieu of coffee just made me keenly aware that I hadn't had coffee. A week later: Golden Sumatran, 3 spoonfuls, a more yellowish powder. (I combined it with some tea dregs to hopefully cut the flavor a bit.) Had a paper to review that night. No (subjectively noticeable) effect on energy or productivity. I tried 4 spoonfuls at noon the next day; nothing except a little mental tension, for lack of a better word. I think that was just the harbinger of what my runny nose that day and the day before was, a head cold that laid me low during the evening. How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement? Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? Of course, some of these categories are not entirely distinct. 
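The n-back task described a few paragraphs above is simple enough to generate in a few lines. The sketch below produces a letter stream and marks the positions where the current item matches the one n items back; the alphabet, match rate, and stream length are arbitrary choices for the example.

```python
import random

def nback_stream(length, n=1, alphabet="ABCD", match_rate=0.3, seed=0):
    """Generate a letter stream for an n-back task plus the correct answers."""
    rng = random.Random(seed)
    stream, targets = [], []
    for i in range(length):
        if i >= n and rng.random() < match_rate:
            letter = stream[i - n]          # force a match (a "double" when n=1)
        else:
            letter = rng.choice(alphabet)
        stream.append(letter)
        targets.append(i >= n and stream[i] == stream[i - n])
    return stream, targets

stream, targets = nback_stream(15, n=2)
print("".join(stream))
print("match positions:", [i for i, t in enumerate(targets) if t])
```

Scoring a session is then just a matter of comparing the subject's yes/no responses against the generated target list, with omissions and false alarms counted separately, which is how the studies cited above report their error rates.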
For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use. Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported. Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shams are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant at normal levels of gastric pH, nicotine, like other weak bases, is not significantly absorbed. 3 days later, I'm fairly miserable (slept poorly, had a hair-raising incident, and a big project was not received as well as I had hoped), so well before dinner (and after a nap) I brew up 2 wooden-spoons of Malaysia Green (olive-color dust). I drank it down; tasted slightly better than the first. I was feeling better after the nap, and the kratom didn't seem to change that. Modafinil is not addictive, but there may be chances of drug abuse and memory impairment. This can manifest in people who consume it to stay up for way too long; as a result, this would probably make them ill. Long-term use of Modafinil may reduce plasticity and may harm the memory of some individuals. Hence, it is sold only on prescription by a qualified physician. The Nootroo arrives in a shiny gold envelope with the words "proprietary blend" and "intended for use only in neuroscience research" written on the tin. It has been designed, says Matzner, for "hours of enhanced learning and memory". The capsules contain either Phenylpiracetam or Noopept (a peptide with similar effects and similarly uncategorised) and are distinguished by real flakes of either edible silver or gold. They are to be alternated between daily, allowing about two weeks for the full effect to be felt. Also in the capsules are L-Theanine, a form of choline, and a types of caffeine which it is claimed has longer lasting effects. The evidence? Ritalin is FDA-approved to treat ADHD. It has also been shown to help patients with traumatic brain injury concentrate for longer periods, but does not improve memory in those patients, according to a 2016 meta-analysis of several trials. A study published in 2012 found that low doses of methylphenidate improved cognitive performance, including working memory, in healthy adult volunteers, but high doses impaired cognitive performance and a person's ability to focus. (Since the brains of teens have been found to be more sensitive to the drug's effect, it's possible that methylphenidate in lower doses could have adverse effects on working memory and cognitive functions.) 
That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩ Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I'd have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I'm not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn't notice any large change in emotional affect or energy levels. And it may've helped my motivation (though I am also trying out the tyrosine). Much better than I had expected. One of the best superhero movies so far, better than Thor or Watchmen (and especially better than the Iron Man movies). I especially appreciated how it didn't launch right into the usual hackneyed creation of the hero plot-line but made Captain America cool his heels performing & selling war bonds for 10 or 20 minutes. The ending left me a little nonplussed, although I sort of knew it was envisioned as a franchise and I would have to admit that showing Captain America wondering at Times Square is much better an ending than something as cliche as a close-up of his suddenly-opened eyes and then a fade out. (The movie continued the lamentable trend in superhero movies of having a strong female love interest… who only gets the hots for the hero after they get muscles or powers. It was particularly bad in CA because she knows him and his heart of gold beforehand! What is the point of a feminist character who is immediately forced to do that?)↩ Discussions of PEA mention that it's almost useless without a MAOI to pave the way; hence, when I decided to get deprenyl and noticed that deprenyl is a MAOI, I decided to also give PEA a second chance in conjunction with deprenyl. Unfortunately, in part due to my own shenanigans, Nubrain canceled the deprenyl order and so I have 20g of PEA sitting around. Well, it'll keep until such time as I do get a MAOI. Disclaimer: None of the statements made on this website have been reviewed by the Food and Drug Administration. The products and supplements mentioned on this site are not intended to diagnose, treat, cure, alleviate or prevent any diseases. All articles on this website are the opinions of their respective authors who do not claim or profess to be medical professionals providing medical advice. This website is strictly for the purpose of providing opinions of the author. You should consult with your doctor or another qualified health care professional before you start taking any dietary supplements or engage in mental health programs. Any and all trademarks, logos brand names and service marks displayed on this website are the registered or unregistered Trademarks of their respective owners. along with the previous bit of globalization is an important factor: shipping is ridiculously cheap. The most expensive S&H in my modafinil price table is ~$15 (and most are international). 
To put this in perspective, I remember in the 90s you could easily pay $15 for domestic S&H when you ordered online - but it's 2013, and the dollar has lost at least half its value, so in real terms, ordering from abroad may be like a quarter of what it used to cost, which makes a big difference to people dipping their toes in and contemplating a small order to try out this 'nootropics thing they've heard about. When you hear about nootropics, often called "smart drugs," you probably picture something like the scene above from Limitless, where Bradley Cooper's character becomes brilliant after downing a strange pill. The drugs and supplements currently available don't pack that strong of a punch, but the concept is basically the same. Many nootropics have promising benefits, like boosting memory, focus, or motivation, and there's research to support specific uses. But the most effective nootropics, like Modafinil, aren't intended for use without a prescription to treat a specific condition. In fact, recreational use of nootropics is hotly-debated among doctors and medical researchers. Many have concerns about the possible adverse effects of long-term use, as well as the ethics of using cognitive enhancers to gain an advantage in school, sports, or even everyday work. Taken together, these considerations suggest that the cognitive effects of stimulants for any individual in any task will vary based on dosage and will not easily be predicted on the basis of data from other individuals or other tasks. Optimizing the cognitive effects of a stimulant would therefore require, in effect, a search through a high-dimensional space whose dimensions are dose; individual characteristics such as genetic, personality, and ability levels; and task characteristics. The mixed results in the current literature may be due to the lack of systematic optimization. So is there a future in smart drugs? Some scientists are more optimistic than others. Gary Lynch, a professor in the School of Medicine at the University of California, Irvine argues that recent advances in neuroscience have opened the way for the smart design of drugs, configured for specific biological targets in the brain. "Memory enhancement is not very far off," he says, although the prospects for other kinds of mental enhancement are "very difficult to know… To me, there's an inevitability to the thing, but a timeline is difficult." Poulin (2007) 2002 Canadian secondary school 7th, 9th, 10th, and 12th graders (N = 12,990) 6.6% MPH (past year), 8.7% d-AMP (past year) MPH: 84%: 1–4 times per year; d-AMP: 74%: 1–4 times per year 26% of students with a prescription had given or sold some of their pills; students in class with a student who had given or sold their pills were 1.5 times more likely to use nonmedically Deficiencies in B vitamins can cause memory problems, mood disorders, and cognitive impairment. B vitamins will not make you smarter on their own. Still, they support a wide array of cognitive functions. Most of the B complex assists in some fashion with brain activity. Vitamin B12 (Methylcobalamin) is the most critical B vitamin for mental health. QUALITY : They use pure and high quality Ingredients and are the ONLY ones we found that had a comprehensive formula including the top 5 most proven ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine, Bacopin and N-Acetyl L-Tyrosine. Thrive Natural's Super Brain Renew is fortified with just the right ingredients to help your body fully digest the active ingredients. 
No other brand came close to their comprehensive formula of 39 proven ingredients. The "essential 5" are the most important elements to help improve your memory, concentration, focus, energy, and mental clarity. But, what also makes them stand out above all the rest was that they have several supporting vitamins and nutrients to help optimize brain and memory function. A critical factor for us is that this company does not use fillers, binders or synthetics in their product. We love the fact that their capsules are vegetarian, which is a nice bonus for health conscious consumers. …researchers have added a new layer to the smart pill conversation. Adderall, they've found, makes you think you're doing better than you actually are….Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job….But the results of the new University of Pennsylvania study, funded by the U.S. Navy and not yet published but presented at the annual Society for Neuroscience conference last month, are consistent with much of the existing research. As a group, no overall statistically-significant improvement or impairment was seen as a result of taking Adderall. The research team tested 47 subjects, all in their 20s, all without a diagnosis of ADHD, on a variety of cognitive functions, from working memory-how much information they could keep in mind and manipulate-to raw intelligence, to memories for specific events and faces….The last question they asked their subjects was: How and how much did the pill influence your performance on today's tests? Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job on the tasks they'd been given, even though their performance did not show an improvement over that of those who had taken the placebo. According to Irena Ilieva…it's the first time since the 1960s that a study on the effects of amphetamine, a close cousin of Adderall, has asked how subjects perceive the effect of the drug on their performance. Enhanced learning was also observed in two studies that involved multiple repeated encoding opportunities. Camp-Bruno and Herting (1994) found MPH enhanced summed recall in the Buschke Selective Reminding Test (Buschke, 1973; Buschke & Fuld, 1974) when 1-hr and 2-hr delays were combined, although individually only the 2-hr delay approached significance. Likewise, de Wit, Enggasser, and Richards (2002) found no effect of d-AMP on the Hopkins Verbal Learning Test (Brandt, 1991) after a 25-min delay. Willett (1962) tested rote learning of nonsense syllables with repeated presentations, and his results indicate that d-AMP decreased the number of trials needed to reach criterion. A quick search for drugs that make you smarter will lead you to the discovery of piracetam. Piracetam is the first synthetic smart drug of its kind. All other racetams derive from Piracetam. Some are far more potent, but they may also carry more side effects. Piracetam is an allosteric modulator of acetylcholine receptors. In other words, it enhances acetylcholine synthesis which boosts cognitive function. This article is for informational purposes only and does not constitute medical advice. Quartz does not recommend or endorse any specific products, studies, opinions, or other information mentioned in this article. This article is not intended to be used for, or as a substitute for, professional medical advice, diagnosis, or treatment. 
Always seek the advice of a physician or other qualified health provider with any questions you may have before starting any new treatment or discontinuing any existing treatment.Reliance on any information provided in this article or by Quartz is solely at your own risk. A Romanian psychologist and chemist named Corneliu Giurgea started using the word nootropic in the 1970s to refer to substances that improve brain function, but humans have always gravitated toward foods and chemicals that make us feel sharper, quicker, happier, and more content. Our brains use about 20 percent of our energy when our bodies are at rest (compared with 8 percent for apes), according to National Geographic, so our thinking ability is directly affected by the calories we're taking in as well as by the nutrients in the foods we eat. Here are the nootropics we don't even realize we're using, and an expert take on how they work. This doesn't fit the U-curve so well: while 60mg is substantially negative as one would extrapolate from 30mg being ~0, 48mg is actually better than 15mg. But we bought the estimates of 48mg/60mg at a steep price - we ignore the influence of magnesium which we know influences the data a great deal. And the higher doses were added towards the end, so may be influenced by the magnesium starting/stopping. Another fix for the missingness is to impute the missing data. In this case, we might argue that the placebo days of the magnesium experiment were identical to taking no magnesium at all and so we can classify each NA as a placebo day, and rerun the desired analysis: More than once I have seen results indicating that high-IQ types benefit the least from random nootropics; nutritional deficits are the premier example, because high-IQ types almost by definition suffer from no major deficiencies like iodine. But a stimulant modafinil may be another such nootropic (see Cognitive effects of modafinil in student volunteers may depend on IQ, Randall et al 2005), which mentions: Bacopa Monnieri is probably one of the safest and most effective memory and mood enhancer nootropic available today with the least side-effects. In some humans, a majorly extended use of Bacopa Monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement which mostly constitutes of Bacopa Monnieri as one of the main ingredients. It is at the top of the supplement snake oil list thanks to tons of correlations; for a review, see Luchtman & Song 2013 but some specifics include Teenage Boys Who Eat Fish At Least Once A Week Achieve Higher Intelligence Scores, anti-inflammatory properties (see Fish Oil: What the Prescriber Needs to Know on arthritis), and others - Fish oil can head off first psychotic episodes (study; Seth Roberts commentary), Fish Oil May Fight Breast Cancer, Fatty Fish May Cut Prostate Cancer Risk & Walnuts slow prostate cancer, Benefits of omega-3 fatty acids tally up, Serum Phospholipid Docosahexaenonic Acid Is Associated with Cognitive Functioning during Middle Adulthood endless anecdotes. Sarter is downbeat, however, about the likelihood of the pharmaceutical industry actually turning candidate smart drugs into products. Its interest in cognitive enhancers is shrinking, he says, "because these drugs are not working for the big indications, which is the market that drives these developments. Even adult ADHD has not been considered a sufficiently attractive large market." 
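Returning to the imputation idea a few paragraphs back (reclassify each missing magnesium entry as a placebo day, then rerun the analysis), here is a minimal pandas sketch; the column names and numbers are hypothetical stand-ins rather than the real self-experiment data, and a simple group mean stands in for whatever the intended analysis actually is:

```python
import pandas as pd

# Hypothetical daily log: a mood/productivity score (MP) and the magnesium dose;
# NaN means the magnesium experiment was not running that day.
df = pd.DataFrame({
    "MP": [3, 4, 2, 5, 3, 4],
    "Magnesium.citrate": [None, 136, 0, None, 136, 0],
})

# Impute: treat every missing day as a 0 mg (placebo) day, then rerun the analysis.
df["Magnesium.citrate"] = df["Magnesium.citrate"].fillna(0)
print(df.groupby("Magnesium.citrate")["MP"].mean())
```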
Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA Nootropic is to reduce the additional activity of the nerve cells and helps calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropic for anxiety that you can find in the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors which reduce anxiety and craving levels in the absence of addictive substances. I took the first pill at 12:48 pm. 1:18, still nothing really - head is a little foggy if anything. later noticed a steady sort of mental energy lasting for hours (got a good deal of reading and programming done) until my midnight walk, when I still felt alert, and had trouble sleeping. (Zeo reported a ZQ of 100, but a full 18 minutes awake, 2 or 3 times the usual amount.) (On a side note, I think I understand now why modafinil doesn't lead to a Beggars in Spain scenario; BiS includes massive IQ and motivation boosts as part of the Sleepless modification. Just adding 8 hours a day doesn't do the world-changing trick, no more than some researchers living to 90 and others to 60 has lead to the former taking over. If everyone were suddenly granted the ability to never need sleep, many of them would have no idea what to do with the extra 8 or 9 hours and might well be destroyed by the gift; it takes a lot of motivation to make good use of the time, and if one cannot, then it is a curse akin to the stories of immortals who yearn for death - they yearn because life is not a blessing to them, though that is a fact more about them than life.) Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive. These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds." The evidence? Found helpful in reducing bodily twitching in myoclonus epilepsy, a rare disorder, but otherwise little studied. Mixed evidence from a study published in 1991 suggests it may improve memory in subjects with cognitive impairment. 
A meta-analysis published in 2010 that reviewed studies of piracetam and other racetam drugs found that piracetam was somewhat helpful in improving cognition in people who had suffered a stroke or brain injury; the drugs' effectiveness in treating depression and reducing anxiety was more significant. Known widely as 'Brahmi,' the Bacopa Monnieri or Water Hyssop, is a small herb native to India that finds mention in various Ayurvedic texts for being the best natural cognitive enhancer. It has been used traditionally for memory enhancement, asthma, epilepsy and improving mood and attention of people over 65. It is known to be one of the best brain supplement in the world. First off, overwhelming evidence suggests that smart drugs actually work. A meta-analysis by researchers at Harvard Medical School and Oxford showed that Modafinil has significant cognitive benefits for those who do not suffer from sleep deprivation. The drug improves their ability to plan and make decisions and has a positive effect on learning and creativity. Another study, by researchers at Imperial College London, showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions. The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal. "They're not regulated by the FDA like other drugs, so safety testing isn't required," Kerl says. What's more, you can't always be sure that what's on the ingredient label is actually in the product. Keep in mind, too, that those that contain water-soluble vitamins like B and C, she adds, aren't going to help you if you're already getting enough of those vitamins through diet. "If your body is getting more than you need, you're just going to pee out the excess," she says. "You're paying a lot of money for these supplements; maybe just have orange juice." An additional complexity, related to individual differences, concerns dosage. This factor, which varies across studies and may be fixed or determined by participant body weight within a study, undoubtedly influences the cognitive effects of stimulant drugs. Furthermore, single-unit recordings with animals and, more recently, imaging of humans indicate that the effects of stimulant dose are nonmonotonic; increases enhance prefrontal function only up to a point, with further increases impairing function (e.g., Arnsten, 1998; Mattay et al., 2003; Robbins & Arnsten, 2009). Yet additional complexity comes from the fact that the optimal dosage depends on the same kinds of individual characteristics just discussed and on the task (Mattay et al., 2003). Sleep itself is an underrated cognition enhancer. It is involved in enhancing long-term memories as well as creativity. For instance, it is well established that during sleep memories are consolidated-a process that "fixes" newly formed memories and determines how they are shaped. Indeed, not only does lack of sleep make most of us moody and low on energy, cutting back on those precious hours also greatly impairs cognitive performance. 
Exercise and eating well also enhance aspects of cognition. It turns out that both drugs and "natural" enhancers produce similar physiological changes in the brain, including increased blood flow and neuronal growth in structures such as the hippocampus. Thus, cognition enhancers should be welcomed, but not at the expense of our health and well-being. A big part is that we are finally starting to apply complex systems science to psycho-neuro-pharmacology and a nootropic approach. The neural system is awesomely complex, and old-fashioned reductionist science has a really hard time with that complexity. Big companies spend hundreds of millions of dollars trying to separate the effects of just a single molecule from placebo – and nootropics invariably show up as "stacks" of many different ingredients (ours, Qualia, currently has 42 separate synergistic nootropic ingredients, from alpha GPC to bacopa monnieri and L-theanine). That kind of complex, multi-pathway input requires a different methodology to understand well, one that goes beyond simply what's put in capsules.
MathOverflow Meta MathOverflow is a question and answer site for professional mathematicians. Join them; it only takes a minute: Are there any mathematical objects that exist but have no concrete examples? [closed] I am curious as to whether there exists a mathematical object in any field that can be proven to exist but has no concrete examples? I.e., something completely non-constructive. The closest example I know of are ultrafilters, which only have one example that can be written down. MathOverflow user Harrison Brown mentioned to me that there are examples in Ramsey theory of objects that are proven to exist but have no known deterministic construction (but there might be), which is close to what I'm looking for. He also mentioned that the absolute Galois group of the rationals has only two elements that you can write down - the identity element and complex conjugation. I am worried that this might be a terribly silly question, since typically there is a trivial example of an object, and a definition that specifically did not include the trivial case would be 'cheating' as far as I'm concerned. My motivation for this question is purely out of curiosity. Also, this is my first question on MO, so I probably need help with tags and such (I'm not terribly sure what this would belong to). I think that this should be a community wiki, but I do not have the reputation to make it so as far as I can tell. constructive-mathematics Jon PaprockiJon Paprocki closed as too localized by Ryan Budney, Pete L. Clark, Dan Petersen, Dmitri Pavlov, Andrés E. Caicedo Apr 23 '11 at 4:22 This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question. $\begingroup$ Lots of things proved to exist by Zorn's lemma are non-constructive, like a basis for R as a Q-vector space, a transcendence basis for R as a field extension of Q, a well-ordering of R, a nontrivial non-archimedean absolute value on C, a field isomorphism between the algebraic closure of Q_p and C,... $\endgroup$ – KConrad Apr 22 '11 at 16:47 $\begingroup$ "No concrete examples" does not imply "non-constructive". For example, in the game of hex the first player has a winning strategy, which can be constructed by marking the (finite) game graph. However, I don't think anyone has described a winning strategy for the game. en.wikipedia.org/wiki/Hex_%28board_game%29 $\endgroup$ – Harald Hanche-Olsen Apr 22 '11 at 17:46 $\begingroup$ @Qiaochu: Pretty sure the intention was that only the principal ultrafilters can be "explicitly" described. $\endgroup$ – Andrés E. Caicedo Apr 22 '11 at 19:03 $\begingroup$ @Jon: You need to specify better what you mean by "concrete", or the question becomes too vague to be useful. $\endgroup$ – Andrés E. Caicedo Apr 22 '11 at 19:04 $\begingroup$ This question is, at time of writing, likely to be closed. Since the comments section is already long, please bring discussion to tea.mathoverflow.net/discussion/1019 . And please vote up this comment so that it appears "above the fold". 
$\endgroup$ – Theo Johnson-Freyd Apr 22 '11 at 21:14 If I remember correctly, there is a theorem that asserts that all but possibly zero, one or two prime numbers generate infinitely many of the (cyclic) multiplicative groups $\mathbb{Z}/q\mathbb{Z}^{\times}$ where $q$ varies among the primes. Yet not even one such prime is known, not even $2$ or $3$. Thus, among $2,3$ and $5$, at least one of them has the property, but no one knows which do. Olivier BégassatOlivier Bégassat $\begingroup$ It is due to Heath-Brown: qjmath.oxfordjournals.org/content/37/1/27.full.pdf $\endgroup$ – S. Carnahan♦ May 11 '11 at 4:28 $\begingroup$ But note also that each of the numbers $2$, $3$ and $5$ is concrete individually and can be given as explicitly as anything can be in mathematics. So this is not really a case where we prove something exists but there is no concrete example; rather, it is a case where among several concrete examples, we can prove that one of them has a certain property, but we don't know which one. $\endgroup$ – Joel David Hamkins May 11 '11 at 12:21 You should look at Handbook of Analysis and its Foundations by Eric Schecter. Here is an excerpt from the preface: Students and researchers need examples; it is a basic precept of pedagogy that every abstract idea should be accompanied by one or more concrete examples. Therefore, when I began writing this book (originally a conventional analysis book), I resolved to give examples of everything. However, as I searched through the literature, I was unable to find explicit examples of several important pathological objects, which I now call intangibles: finitely additive probabilities that are not countably additive, elements of $(l_\infty)^*- l_1$(a customary corollary of the Hahn- Banach Theorem), universal nets that are not eventually constant, free ultrafilters (used very freely in nonstandard analysis!), well orderings for R, inequivalent complete norms on a vector space, etc. In analysis books it has been customary to prove the existence of these and other pathological objects without constructing any explicit examples, without explaining the omission of examples, and without even mentioning that anything has been omitted. Typically, the student does not consciously notice the omission, but is left with a vague uneasiness about these unillustrated objects that are so difficult to visualize. I could not understand the dearth of examples until I accidentally ventured beyond the traditional confines of analysis. I was surprised to learn that the examples of these mysterious objects are omitted from the literature because they must be omitted: Although the objects exist, it can also be proved that explicit constructions do not exist. That may sound paradoxical, but it merely reflects a peculiarity in our language: The customary requirements for an "explicit construction" are more stringent than the customary requirements for an "existence proof." In an existence proof we are permitted to postulate arbitrary choices, but in an explicit construction we are expected to make choices in an algorithmic fashion. (To make this observation more precise requires some definitions, which are given in 14.76 and 14.77.) Though existence without examples has puzzled some analysts, the relevant concepts have been a part of logic for many years. The nonconstructive nature of the Axiom of Choice was controversial when set theory was born about a century ago, but our understanding and acceptance of it has gradually grown. 
An account of its history is given by Moore [1982]. It is now easy to observe that nonconstructive techniques are used in many of the classical existence proofs for pathological objects of analysis. It can also be shown, though less easily, that many of those existence theorems cannot be proved by other, constructive techniques. Thus, the pathological objects in question are inherently unconstructible. The paradox of existence without examples has become a part of the logicians' folklore, which is not easily accessible to nonlogicians. Most modern books and papers on logic are written in a specialized, technical language that is unfamiliar and nonintuitive to outsiders: Symbols are used where other mathematicians are accustomed to seeing words, and distinctions are made which other mathematicians are accustomed to blurring -- e.g., the distinction between first-order and higher-order languages. Moreover, those books and papers of logic generally do not focus on the intangibles of analysis. On the other hand, analysis books and papers invoke nonconstructive principles like magical incantations, without much accompanying explanation and -- in some cases -- without much understanding. One recent analysis book asserts that analysts would gain little from questioning the Axiom of Choice. I disagree. The present work was motivated in part by my feeling that students deserve a more "honest" explanation of some of the non-examples of analysis -- especially of some of the consequences of the Hahn- Banach Theorem. When we cannot construct an explicit example, we should say so. The student who cannot visualize some object should be reassured that no one else can visualize it either. Because examples are so important in the learning process, the lack of examples should be discussed at least briefly when that lack is first encountered; it should not be postponed until some more advanced course or ignored altogether. Willie Wong Justin HilburnJustin Hilburn $\begingroup$ Very interesting! Definitely looks like something I will jack from the library. $\endgroup$ – Jon Paprocki Apr 23 '11 at 15:53 I think the meaning of the term "exist" needs to be clarified. All of the examples you describe except the Ramsey-theoretic one depend on axioms independent of ZF (e.g. the ultrafilter lemma). On the other hand, the probabilistic method can prove, in ZF, that plenty of objects exist (e.g. efficient sphere packings, families of graphs realizing bounds on the Ramsey numbers) for which we do not have efficient deterministic constructions. I assume this is what Harrison is referring to (the use of the probabilistic method in Ramsey theory). $\begingroup$ I was going to mention error-correcting codes saturating the Shannon limit, but this answer implicitly covers that. $\endgroup$ – Steve Huntsman Apr 22 '11 at 21:10 $\begingroup$ "for which we do not have deterministic constructions" is an exaggeration. Whenever finite objects are concerned, one can always produce one by enumeration. You wouldn't say that we do not have a deterministic way of finding prime decompositions, would you? $\endgroup$ – Ori Gurel-Gurevich May 10 '11 at 20:49 $\begingroup$ @Ori: sorry, I guess I meant "efficient deterministic constructions." $\endgroup$ – Qiaochu Yuan May 10 '11 at 23:48 In the game which is just like chess, except each player makes two moves in a row, the first player has a strategy that draws at least, but no explicit such strategy is known. Igor RivinIgor Rivin $\begingroup$ Do you have a reference for that? 
$\endgroup$ – Mariano Suárez-Álvarez Apr 23 '11 at 0:10 $\begingroup$ If the second player has a winning strategy, the first player starts by moving a knight and then undoing their move. More generally, this applies to any game in which "do nothing" is a legal move. $\endgroup$ – Eric Wofsey Apr 23 '11 at 0:16 $\begingroup$ Eric, I don't quite follow. White plays Nf3, Black plays, White plays Ng1, Black plays. Now Black has made <i>two</i> moves. $\endgroup$ – Todd Trimble♦ May 11 '11 at 0:12 $\begingroup$ Todd, did you miss the part about each player making two moves in a row? White plays Nf3 and Ng1 before Black gets to do anything. $\endgroup$ – Gerry Myerson May 11 '11 at 0:37 $\begingroup$ Of course, if the best opening for the first player is to do nothing, effectively reversing roles, then the best follow-up for the second player is also to do nothing, reversing roles back. So two ideal players would just keep infinitely passing the buck back and forth; the game would stall out with no action. Which, yeah, in some sense is still a validation of this as a guaranteed non-loss strategy, but it's alas a possibility which is not very interesting. $\endgroup$ – Sridhar Ramesh May 11 '11 at 12:15 -The Robertson-Thomas-Seymour Graph Minor theorem says there exists of a polynomial time algorithm for determining if a graph has a heritable property P. http://www.google.com/search?client=ubuntu&channel=fs&q=Graph+Minor+Theorem&ie=utf-8&oe=utf-8#q=Graph+Minor+Theorem+Algorithms&bav=on.2,or.r_gc.r_pw.&channel=fs&fp=f26a11cf684416b&hl=en -Banach-Tarski decomposition of a ball into two balls of the same volume is another example. -The proposition which is true but not provable in Godel's incompleteness theorem. -The linear PDE which admits no solution (like in the last Chapter of John Fritz's book). Pretty much any proof that uses the axiom of choice to construct something has this problem. I'll post more if I can come up with other examples. There are tons. Taylor DupuyTaylor Dupuy $\begingroup$ Taylor, the last comment you make is a bit delicate. In set theory there is what we call "canonical structures", whose existence needs the axiom of choice, but are 'explicit' infinitary objects, not dependent on any well-ordering or choice function used in their construction. $\endgroup$ – Andrés E. Caicedo Apr 22 '11 at 19:31 $\begingroup$ "-The proposition which is true but not provable in Godel's incompleteness theorem." I'll just mention that for Peanos' axioms such a proposition had been found: for any infinite sequence of binary words (the words of symbols of 0 and 1) there are two of them, s.t. the first contains the second as a subword. $\endgroup$ – zroslav Apr 22 '11 at 19:37 $\begingroup$ Godel's proof provided a perfectly explicit construction of a statement (in a given formal system) which is true but unprovable in the system if the system is consistent. Godel's second incompleteness theorem shows that one such statement expresses the consistency of the system. $\endgroup$ – Robert Israel Apr 22 '11 at 19:39 $\begingroup$ What I think is a nicer example from the graph minor theorem, is that there "exists" a finite set of forbidden minors for toroidal graphs (or any family of graphs which is closed under minors). $\endgroup$ – Daniel Mehkeri Apr 22 '11 at 21:17 $\begingroup$ It's actually Fritz John. $\endgroup$ – David Hansen Apr 23 '11 at 1:58 I think the best known example is the subset of the plane, s.t. its intersection with any line has exactly 2 points in it. 
This can be proven by the axiom of choice but there are no constructions of it. zroslavzroslav $\begingroup$ In what sense is that a better than, say, a basis of $\mathbb R$ as a rational vector space? It's not like your set obviously exists! :) $\endgroup$ – Mariano Suárez-Álvarez Apr 22 '11 at 20:04 $\begingroup$ This sense is my own sense of beauty :) My set is the most argued to be existing. The existing of this set is the main reason for elementary geometers to beleive that the axiom of choice is not true :) $\endgroup$ – zroslav Apr 22 '11 at 22:45 $\begingroup$ I'm pretty confident that if/when those "elementary geometers" become algebraic geometers, though, they'll want their commutative rings to have lots of maximal ideals! :) $\endgroup$ – Mariano Suárez-Álvarez Apr 23 '11 at 0:13 Eigenvalues of the Laplacian $\Delta$ acting on $L^2 (G/ \Gamma)$, where $G = SL_2 (\mathbb{R})$ and $\Gamma = SL_2 (\mathbb{Z}) < G$ (one can consider more general groups $G$ and take any lattice $\Gamma$ in $G$), or the so called Maass forms. It is known, by Selberg's trace formula and other related results, that such eigenvalues do exist, and we even have theorems describing their asymptotic count, but not a single, concrete example of a Maass form is known, even for this specific choice of $G$ and $\Gamma$. Quoting from Goldfeld's "Automorphic forms and L-functions for the group GL(n,R)": "Up to now no one has found a single example of a Maass form for $SL_2 (\mathbb{Z})$". MarkMark $\begingroup$ This quote must be interpreted carefully. Numerical values of these functions' Laplace eigenvalues and Hecke eigenvalues have been computed to over one hundred decimal places. Goldfeld is referring to the fact that they will never be constructed explicitly "from simpler functions", just as the solutions to any moderately complicated PDE will never be constructed explicitly in this way. $\endgroup$ – David Hansen Apr 23 '11 at 2:02 A lot of existence proofs use arguments such as Cantor's diagonal argument, Baire category etc. Unlike the Zorn's lemma arguments, they can "in principle" yield examples. For instance, we could construct a transcendental number by enumerating the algebraic numbers and picking a number that differs from the nth algebraic number in the nth decimal digit. We can compute this number to as many digits as we want. Of course, this is not a transcendental number that anyone wants to know about. Michael RenardyMichael Renardy $\begingroup$ You mean the Baire category theorem which assumes local compactness? Because the one assuming completeness sounds very constructive to me. $\endgroup$ – darij grinberg Apr 22 '11 at 23:55 $\begingroup$ Unfortunately for much of mathematical physics, Arzela-Ascoli is unconstructive. Maybe you meant that? $\endgroup$ – darij grinberg Apr 22 '11 at 23:57 $\begingroup$ Why is Arzela-Ascoli unconstructive? The proof gives a construction of the convergent subsequence of functions, once we know how to get a convergent subsequence for given argument. In any case, the point of my comment was that some existence proofs may appear to be unconstructive (such as proving existence of transcendental numbers based on uncountability), but in principle allow a construction. $\endgroup$ – Michael Renardy Apr 23 '11 at 0:46 $\begingroup$ "once we know how to get a convergent subsequence for given argument" This. $\endgroup$ – darij grinberg Apr 23 '11 at 8:27 Not the answer you're looking for? Browse other questions tagged constructive-mathematics or ask your own question. 
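As an aside on the diagonal construction described in the last answer, here is a rough numerical sketch (floating point only, so it is trustworthy for just the first several digits, and the enumeration order of the algebraic numbers is an arbitrary choice of mine):

```python
import numpy as np
from itertools import count, islice, product

def real_algebraic_numbers():
    """Enumerate real algebraic numbers (with repetitions), ordered by coefficient size."""
    for size in count(1):
        for degree in range(1, size + 1):
            for coeffs in product(range(-size, size + 1), repeat=degree + 1):
                if coeffs[0] == 0:
                    continue  # leading coefficient must be nonzero
                for root in np.roots(coeffs):
                    if abs(root.imag) < 1e-9:
                        yield float(root.real)

def nth_decimal_digit(value: float, n: int) -> int:
    return int(abs(value) * 10**n) % 10

# Diagonalize: make the n-th digit differ from the n-th algebraic number's n-th digit.
digits = []
for n, alpha in enumerate(islice(real_algebraic_numbers(), 8), start=1):
    digits.append(5 if nth_decimal_digit(alpha, n) != 5 else 6)  # avoid 0/9 edge cases
print("0." + "".join(map(str, digits)))
```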
Are there any good nonconstructive "existential metatheorems"? How to make Ext and Tor constructive? Is there a constructive proof that in four dimensions, the PL and the smooth category are equivalent?
Micro and Nano Systems Letters LIDAR system with electromagnetic two-axis scanning micromirror based on indirect time-of-flight method Seung-Han Chung1, Sung-Woo Lee1, Seung-Ki Lee1 and Jae-Hyoung Park1Email authorView ORCID ID profile Micro and Nano Systems Letters20197:3 Accepted: 8 March 2019 This paper presents a light detection and ranging (LIDAR) system using electromagnetically actuated two-axis scanning micromirror. The distance measurement with the LIDAR is based on the indirect time-of-flight method using the relative ratio of the accumulated charges in capacitors connected to photodiode pixels, which is determined by the time difference between the transmitted and reflected light pulse. The micromirror has double gimbaled structure for two-axis actuation and circular reflection plate with the diameter of 3 mm. The horizontal scan angle of 49.13° was obtained by the resonant actuation at 28 kHz, and the vertical scan of 29.23° was achieved by the sinusoidal forced actuation at 60 Hz. The distance to multiple targets could be measured at once by laser scanning using the micromirror, and the distance profile LIDAR image was constructed by the measurement results. LIDAR system Micromirror Indirect time-of-flight method The LIDAR system has been one of the important research topics in the remote sensing technologies for three-dimensional imaging based on the distance measurement and diverse environmental monitoring. LIDAR has been studied intensively to be applied to the various applications such as autonomous vehicles, obstacle detection and security [1–5]. In the conventional LIDAR system, a motorized laser scanning unit such as galvanometer scanner is widely used for the detection of large area. In addition, LIDAR system has the disadvantages of having a large and heavy units with high cost. Recently, many research activities have been directed toward the development of LIDAR system using the micromirror, which has small size, light weight and low power consumption [6, 7]. In the LIDAR systems, the phase shift and time-of-flight method have been applied for the distance measurement. Laser beam is emitted to the target and reflected through receiving lens. In the phase shift method, the intensity modulated beam at a particular frequency is emitted to the target, and phase shift is produced in the reflected light depending on the distance to target [2, 8]. Even with the precise measurement of distance, however, the phase shift method requires a complicated system including laser beam modulation and data processing system. In addition, the phase shift method has disadvantage to be applied to real-time measurement because long processing time is required for precise distance ranging [1, 9]. Time-of-flight method calculates the distance using time difference between transmitted and reflected light beam [10]. Time-of-flight method enables a simple system setup and distance calculation, but can make relatively large measurement error compared with the phase shift method [9]. The indirect time-of-flight calculates the distance to a target by measuring the phase difference between the emitted and the reflected pulse. Since the relative ratio of the phase difference determined by two pulse signals is used for the measurement of distance, the high-precision time measurement sensor is not required for the indirect time-of-flight method [11–14]. 
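For comparison with the phase-shift approach mentioned in the background above, here is a minimal sketch of that ranging formula (assuming a single modulation frequency and an already-unwrapped phase; the function name is illustrative, not from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_shift_distance(delta_phi_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase shift of an intensity-modulated beam.

    Unambiguous only within half the modulation wavelength, c / (2 * f).
    """
    return C * delta_phi_rad / (4.0 * math.pi * mod_freq_hz)

# A 10 MHz modulation gives a 15 m unambiguous range; a quarter-turn phase shift:
print(phase_shift_distance(delta_phi_rad=math.pi / 2, mod_freq_hz=10e6))  # ~3.75 m
```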
In this paper, we present the LIDAR system using electromagnetically actuated two-axis scanning micromirror based on the indirect time-of-flight method. The indirect time-of-flight method uses the relative ratio of the accumulated charges in capacitors connected to image sensor pixels, which is determined by the time difference between transmitted and reflected light beam. The distance measurement and imaging was demonstrated using the two-dimensional laser scanning with micromirror. Since the micromirror has a high scanning speed with small volume, it can make the system more compact and simple with high measurement rate compared to the system using conventional motor-based laser scanning units [6, 15, 16]. Design and experimental setup Figure 1 shows the schematic diagram of the LIDAR system using two-axis scanning micromirror for distance measurement. The LIDAR system is composed of a pulsed laser (Toptica laser beam, 640 nm, 150 mW), scanning micromirror, receiving lens (TF 8 M, 8 mm focal length, 1/3 inch), image sensor made of avalanche photodiode (Hamamatsu, S11963-01CR, 160 × 120 pixels), and data processing board. Figure 2 shows the LIDAR system setup on the optical table. The pulsed laser is transmitted and reflected at the two-axis scanning micromirror, and emitted to the targets. The laser is scanned to the objects through the micromirror. The beam scattered from the object enters the receiving lens and detected at the sensor. The data processing board converts the collected light information into the distance and image information. In the LIDAR system, the laser diode having 640 nm wavelength and 150 mW average power was used as light emitting source with the adjustable pulse duration from 3 ns to continuous wave. The image sensor with 160 × 120 pixels photodiode array and receiving lens with the field of view of 37.5° × 27.7° were used. Schematic diagram of the LIDAR system using two-axis scanning micromirror LIDAR system setup on the optical table Figure 3 shows the basic circuit structure at the photosensitive area of the photodiode to obtain the distance with indirect time-of-flight method. The distance measurement is using the relative ratio of the accumulated charges in each capacitor connected to photodiode output. The pulse timing chart at the circuit is shown in Fig. 4. The transmitted and reflected light pulses are shown. T0 is transmitted light pulse width, and Td is time difference between transmitted and reflected light pulse, which is dependent on the distance from source to target. The pulses to turn on the switch 1 and 2 are also shown, which are connected to the capacitor 1 (C1) and 2 (C2), respectively. The switch 1 turn-on pulse is synchronously generated with the transmitted light pulse, and then the switch 2 pulse is generated at the end of switch 1 pulse. Therefore, Q1 represents the amount of charge accumulated in C1 during the switch 1 turn-on period, which is generated at the photodiode due to the reflected light pulse. Q2 is the accumulated charge in C2 from the reflected light pulse. The ratio of the charge is determined by the time difference between the transmitted and reflected light pulse. Therefore, the distance D is calculated by Eq. (1); $$D = \frac{1}{2} \times c \times T_{0} \times \frac{{Q_{2} }}{{Q_{1} + Q_{2} }}$$ where c is light velocity, T0 is transmitted light pulse width, Q1 and Q2 are charges accumulated in C1 and C2, respectively. The switch 3 is to discharge unneeded charges caused by ambient light during the non-emission period. 
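To make Eq. (1) concrete, here is a minimal sketch of the charge-ratio calculation (idealized: it assumes the ambient-light charge has already been drained by switch 3, and the function name is mine):

```python
C = 299_792_458.0  # speed of light, m/s

def itof_distance(q1: float, q2: float, t0: float) -> float:
    """Indirect time-of-flight distance (m) from accumulated charges Q1 and Q2
    and transmitted pulse width T0 (s), following Eq. (1)."""
    total = q1 + q2
    if total <= 0:
        raise ValueError("no reflected-light charge was accumulated")
    return 0.5 * C * t0 * (q2 / total)

# With the 90 ns pulse used here, the maximum measurable range is 0.5 * c * T0 = 13.5 m;
# equal charges in C1 and C2 put the target at half of that range.
print(itof_distance(q1=1.0, q2=1.0, t0=90e-9))  # ~6.74 m
```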
In the experiment, the switch 3 turn-on pulse width was set to 40 ns. The image sensor signals from reflected light and charge-to-voltage conversion on accumulated charges are processed to calculate distance through the data processing board (S11963-01CR, Hamamatsu) [17]. In the LIDAR system, the transmitted light pulse width (T0) was set to 90 ns, which means the measureable distance to an object is 13.5 m. Indirect time-of-flight circuit structure at the photosensitive area of the photodiode Pulse timing chart for the distance measurement of indirect time-of-flight method Electromagnetic two-axis scanning micromirror The electromagnetically actuated two-axis scanning micromirror was employed in the LIDAR system, which was developed in our previous study [18]. The scanning micromirror used in the LIDAR system is shown in Fig. 5. The micromirror device consists of a microfabricated double gimbaled structure and a set of permanent magnets for high magnetic field generation. A current path using electroplated copper coil is formed on the backside to generate torque under magnetic field. The size of the circular mirror plate is 3 mm in diameter, on which an aluminum reflective surface is formed. The fast horizontal scan is obtained by the resonant mode actuation, while the tilting for slow vertical scan is achieved by forced actuation. The micromirror is assembled with permanent magnets using plastic housing, and the volume of the packaged device is 6.6 mm × 11 mm × 4.72 mm. For horizontal scan, maximum optical scan angle of 49.13° at the resonance frequency of 28.0 kHz was obtained. The vertical scan angle was 29.23° at 60 Hz as shown in Fig. 6. Two-axis scanning micromirror and assembly housing with permanent magnets a Frequency response of the micromirror for the horizontal scan, b Measured scan angle LIDAR image and distance measurements Figure 7a shows target object setup to test the distance measurement using the LIDAR system. White panels were used for the targets, which are placed at 200, 400, 600, and 800 cm from the laser. The laser scan image to the targets using the two-axis scanning micromirror is shown in Fig. 7b. All the targets are in the field of view of micromirror optical scan angle, and the distance to multiple targets were measured at the same time with the laser scan. The experiment was performed in the dark environment, where the ambient light was blocked. a Target object setup to test the distance measurement using the LIDAR system, b laser scan image to the targets using two-axis scanning micromirror Figure 8a shows the distance profile image in the scanned area, which is constructed by the distance measurement results at each pixel of photodiode image sensor. At each pixel, the distance was obtained by the average value of calculation using the charge ratio of Q1 and Q2 at a frame rate of 100/s. In addition, the distance values of 15 × 15 pixels were taken at the center of each target panel and averaged to determine the measured distance to each target. Figure 8b shows the comparison of the measured distance with the actual distance to each target. The measurement distances are the average values for 100 frame. For 200, 400, 600, and 800 cm targets, the distance was measured to be 204.8 ± 17.8, 404.7 ± 37.6, 625.4 ± 23.8, and 840.6 ± 12.1 cm, respectively. The measurement errors tend to increase as increasing the distance. 
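A minimal sketch of the patch-averaging step described above (the array shape, the random test data, and the chosen center pixel are assumptions for illustration, not values taken from the measurement):

```python
import numpy as np

def target_distance(frames: np.ndarray, center: tuple, half: int = 7):
    """Mean and standard deviation of the distance over a (2*half+1)^2 pixel patch,
    pooled across all frames. `frames` has shape (n_frames, height, width)."""
    r, c = center
    patch = frames[:, r - half:r + half + 1, c - half:c + half + 1]
    return patch.mean(), patch.std()

# Simulated example: 100 frames of a 120 x 160 sensor with a target near 400 cm.
rng = np.random.default_rng(0)
frames = rng.normal(loc=400.0, scale=37.6, size=(100, 120, 160))
mean, std = target_distance(frames, center=(60, 80))
print(f"{mean:.1f} +/- {std:.1f} cm")
```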
The distance resolution of the system is obtained to be about 36 cm using 3 standard deviation rule, which is the general method to calculate the limit of detection. It is considered that the main factor related to distance resolution is synchronization of the pulsed laser with the control signal. Laser diode is controlled and synchronized by the pulse signal from the processing board. The control signal pulse width was set to 90 ns, but the laser pulse synchronization error was about 2 ns. The results show that two-axis scanning micromirror could be applied for the LIDAR system based on the indirect time-of-flight method with the distance measurement. a Distance profile image in the scanned area, b comparison of the measured distance with the actual distance to each target In this paper, the feasibility of the LIDAR system using electromagnetic two-axis scanning micromirror has been successfully demonstrated based on indirect time-of-flight method. By using the transmitted laser scan with the micromirror and the ratio of the accumulated charges in capacitors determined by the reflected light, the distance to multiple targets could be measured at the same time, and the distance profile LIDAR image could be obtained. The distance measurement results show that the scanning micromirror can be readily applied to the indirect time-of-flight LIDAR system. The proposed LIDAR system is expected to be used in various application areas with further improvement of the LIDAR optics combined with the scanning micromirror. JHP devised the idea and supervised the project. JHP, SHC, SWL, and SKL discussed the design and experimental setup. SHC performed the LIDAR experiment using the scanning micromirror. JHP and SHC drafted the manuscript. All authors read and approved the final manuscript. This research was supported by the Space Core Technology Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2013M1A3A3A02042410). Department of Electronics and Electrical Engineering, Dankook University, Yongin, 16890, South Korea Hu P, Tan J, Yang H, Zhao X, Liu S (2011) Phase-shift laser range finder based on high speed and high precision phase-measuring techniques. In: Proceedings of the 10th international symposium of measurement technology and intelligent instrument, Daejeon, 1–5 July 2011Google Scholar Lefsky MA, Cohen WB, Parker GG, Harding DJ (2002) Lidar remote sensing for ecosystem studies: Lidar, an emerging remote sensing technology that directly measures the three dimensional distribution of plant conopies, can accurately estimate vegetation structural attributes and should be of particular interest to forest, landscape, and global ecologists. Bioscience 52:19–30View ArticleGoogle Scholar Asvadi A, Premebida C, Peixoto P, Nunes U (2016) 3D Lidar-based static and moving obstacle detection in driving environments: an approach based on voxels and multi-region ground planes. Rob Auton Syst 83:299–311View ArticleGoogle Scholar Gietelink O, Ploeg J, Schutter BD, Verhaegen M (2006) Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations. Vehicle Syst Dyn 44:569–590View ArticleGoogle Scholar Takai I, Matsubara H, Soga M, Ohta M, Ogawa M, Yamashita T (2016) Single-photon avalanche diode with enhanced NIR-sensitivity for automotive LIDAR Systems. 
Sensors 16:459–467
Niclass C, Ito K, Soga M, Matsubara H, Aoyagi I, Kato S, Kagami M (2012) Design and characterization of a 256 × 64-pixel single-photon imager in CMOS for a MEMS-based laser scanning time-of-flight sensor. Opt Express 20:11863–11881
Brigante CMN, Abbate N, Basile A, Faulisi AC, Sessa S (2011) Towards miniaturization of a MEMS-based wearable motion capture system. IEEE Trans Ind Electron 58:3234–3241
Bamji CS, O'Connor P, Elkhatib T, Mehta S, Thompson B, Prather LA, Snow D, Akkaya OC, Daniel A, Payne AD, Perry T, Fenton M, Chan VH (2015) A 0.13 μm CMOS system-on-chip for a 512 × 424 time-of-flight image sensor with multi-frequency photo-demodulation up to 130 MHz and 2 GS/s ADC. IEEE J Solid-State Circuits 50:303–319
Amann M-C, Bosch TM, Lescure M, Myllylae RA, Rioux M (2001) Laser ranging: a critical review of unusual techniques for distance measurement. Opt Eng 40:10–19
Gokturk SB, Yalcin H, Bamji C (2004) A time-of-flight depth sensor—system description, issues and solutions. In: Proceedings of the 4th IEEE Computer vision and pattern recognition workshop, Washington DC, 35–43 May 2006
Perenzoni M, Stoppa D (2011) Figures of merit for indirect time-of-flight 3D cameras: definition and experimental evaluation. Remote Sens 3:2461–2472
Bellisai S, Villa F, Tisa S, Bronzi D (2012) Indirect time-of-flight 3D ranging based on SPADs. In: Proceedings of quantum sensing and nanophotonics devices IX, California, Jan 2012
Jang J, Hwang S, Park K (2013) Design of indirect time-of-flight based lidar for precise three-dimensional measurement under various reflection conditions. In: Proceedings of the 4th international conference on sensor device technologies and applications, Barcelona, 25–29 Aug 2013
Yasutomi K, Usui T, Han S-M, Takasawa T, Kagawa K, Kawahito S (2014) An indirect time-of-flight measurement technique with impulse photocurrent response for sub-millimeter range resolved imaging. Opt Express 22:18904–18913
Kasturi A, Milanovic V, Atwood BH, Yang J (2016) UAV-borne lidar with MEMS mirror-based scanning capability. In: Proceedings of the laser radar technology and applications XXI, Maryland, May 2016
Hu Q, Pedersen C, Rodrigo PJ (2016) Eye-safe diode laser doppler lidar with a MEMS beam-scanner. Opt Express 24:1934–1942
http://www.hamamatsu.com/us/en/product/category/3100/4005/4148/S11963-01CR/index.html. Accessed 02 Feb 2018
Ju S, Jeong H, Park J-H, Ji C-H (2018) Electromagnetic 2D scanning micromirror for high definition laser projection displays. IEEE Photonic Tech Lett 30:2072–2075
How can I find the eigenvectors for this matrix? Here is the matrix A: \begin{pmatrix} a & b\\ 0 & d \\ \end{pmatrix} I've been able to find the eigenvalues ($a$ and $d$), however when you put these eigenvalues into the matrix $A - \lambda I$ \begin{pmatrix} a - \lambda & b\\ 0 & d - \lambda \\ \end{pmatrix} the matrix reduces to either a single row or a single column. How can I get around this problem? linear-algebra eigenvalues-eigenvectors – Mich2908
For $\lambda = a$, it reduces to $S = \pmatrix{0 & b \\ 0 & d-a}$. That's not a "problem". Can you solve $S\pmatrix{x \\y} = \pmatrix{0 \\ 0}$? Because the solution is an eigenvector for the eigenvalue $a$. – John Hughes Feb 4 at 19:23
To find eigenvectors $v_1$ and $v_2$, solve $Av_1=av_1$ and $Av_2=dv_2$ – J. W. Tanner Feb 4 at 19:27
Do you want left or right eigenvectors? – robjohn♦ Feb 4 at 19:46
You should be able to find an eigenvector of $a$ at a glance. Recall the meaning of the columns of a matrix. – amd Feb 5 at 0:56
$\lambda_1 = a$: $$ \begin{pmatrix} 0 & b \\ 0 & d-a \end{pmatrix}\cdot \begin{pmatrix} x\\ y \end{pmatrix}=\begin{pmatrix} 0\\ 0 \end{pmatrix}$$ $$by=0 \quad\wedge\quad (d-a)y=0$$ which is solved by $y=0$ given that $b\neq 0$ or $a\neq d$. Because $x$ is not present in the above system of equations at all, it can be any number. So the eigenvector $v_1$ corresponding to $\lambda_1$ is $$v_1 = \begin{pmatrix} x\\ 0 \end{pmatrix},$$ which after normalisation gives $$v_1 = \begin{pmatrix} 1\\ 0 \end{pmatrix}.$$ For $\lambda_2=d$ you'll get one equation involving both $x$ and $y$, meaning that the solution $x$ will be given in terms of $y$ (or vice versa). Then you insert it into $v_2 = \begin{pmatrix} x\\ y \end{pmatrix}$, take out the common factor (i.e., $x$ or $y$) and normalise to get the second eigenvector. – answered Feb 4 at 19:37 by corey979
When $d \neq a$ we do get two eigenvectors, which I put as the columns of $$ E = \left( \begin{array}{cc} 1 & b \\ 0 & d-a \end{array} \right) $$ Indeed, we get $$ E^{-1} = \frac{1}{d-a} \left( \begin{array}{cc} d-a & -b \\ 0 & 1 \end{array} \right) $$ and $E^{-1}AE = .....$ – Will Jagy
Yes, of course that matrix reduces: that's the whole point of an "eigenvector". That's because the set of all eigenvectors corresponding to a given eigenvalue forms a subspace. There are necessarily an infinite number of vectors. The definition of an "eigenvector" of matrix A corresponding to eigenvalue $\lambda$ is a vector v such that $Av= \lambda v$. Here, $A= \begin{bmatrix}a & b \\ 0 & d \end{bmatrix}$. The eigenvalues are a and d. Any eigenvector, $v= \begin{bmatrix}x \\ y \end{bmatrix}$, corresponding to eigenvalue a, must satisfy $Av= \begin{bmatrix}a & b \\ 0 & d \end{bmatrix}\begin{bmatrix}x \\ y \end{bmatrix}= \begin{bmatrix}ax+ by\\ dy\end{bmatrix}= \begin{bmatrix}ax \\ ay\end{bmatrix}$. I would write that as the pair of equations ax+ by= ax and dy= ay. The first of those reduces to by= 0 so y= 0, which also satisfies dy= ay. But that does not say anything about x. In fact x can be anything. The set of all eigenvectors corresponding to eigenvalue a is the set of all eigenvectors of the form $\begin{bmatrix} x \\ 0\end{bmatrix}$. Similarly the set of all eigenvectors corresponding to eigenvalue b is the set of all vectors of the form $\begin{bmatrix}0 \\ y \end{bmatrix}$. Added after Christoph's comment: Boy, I really bollixed that up, didn't I?
I should have said that the eigenvalues are a and d, not a and b. And an eigenvector, $\begin{bmatrix}x \\ y \end{bmatrix}$, corresponding to eigenvalue d must satisfy $\begin{bmatrix}a & b \\ 0 & d \end{bmatrix}\begin{bmatrix}x \\ y \end{bmatrix}= \begin{bmatrix}ax+ by \\ dy\end{bmatrix}= \begin{bmatrix}dx \\ dy \end{bmatrix}$, so we have the equations ax+ by= dx and dy= dy. The second equation is true for any y, and we can solve the first equation for x= by/(d- a). Every eigenvector corresponding to eigenvalue d is of the form $\begin{bmatrix} by/(d-a) \\ y\end{bmatrix}$, so is a multiple of $\begin{bmatrix} b/(d-a) \\ 1\end{bmatrix}$. But my basic concept is still true: there is not a single "eigenvector" but an infinite number of them, an entire subspace.
Neither is $b$ an eigenvalue nor is $\begin{bmatrix}0\\y\end{bmatrix}$ an eigenvector (in general). – Christoph Feb 4 at 19:51
Assuming you want right eigenvectors, we are looking for vectors perpendicular to the rows of $A-I\lambda$. A row of the cofactor matrix is perpendicular to all the other rows of the original matrix. Since the rows of $A-I\lambda$ are dependent, such a row is in fact perpendicular to every row, and the highlighted rows below give the eigenvectors: $$ \operatorname{cof}\begin{bmatrix}0&b\\0&d-a\end{bmatrix}=\begin{bmatrix}\color{#C00}{d-a}&\color{#C00}{0}\\-b&0\end{bmatrix} $$ $$ \operatorname{cof}\begin{bmatrix}a-d&b\\0&0\end{bmatrix}=\begin{bmatrix}0&0\\\color{#C00}{-b}&\color{#C00}{a-d}\end{bmatrix} $$ Therefore, $\begin{bmatrix}d-a\\0\end{bmatrix}$ and $\begin{bmatrix}-b\\a-d\end{bmatrix}$ are right eigenvectors. – robjohn♦
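As a quick addendum to the thread (not from the original page), the two eigenpairs claimed above can be checked symbolically. The sketch below assumes SymPy is available and that $a \neq d$.

```python
# Symbolic sanity check of the eigenpairs of A = [[a, b], [0, d]],
# assuming a != d so that the two eigenvalues are distinct.
import sympy as sp

a, b, d = sp.symbols('a b d')
A = sp.Matrix([[a, b], [0, d]])

# eigenvects() returns (eigenvalue, multiplicity, basis of eigenvectors)
for lam, mult, vecs in A.eigenvects():
    for v in vecs:
        # A v - lam v should simplify to the zero vector
        assert sp.simplify(A * v - lam * v) == sp.zeros(2, 1)
        print(lam, v.T)

# Expected (up to scaling): eigenvalue a with (1, 0),
# eigenvalue d with (b/(d - a), 1), equivalently (-b/(a - d), 1).
```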
Quiz: On the works of mathematician Srinivasa Ramanujan
What is the Hardy–Ramanujan number, the smallest number that can be expressed as the sum of two cubes in two different ways? Select: 1 | 1729 | 65 | 1000
Given a positive integer $x$, the prime counting function ($f$) outputs the number of primes that are less than $x$. Now, the value $f(x) - f(x/2)$ will only change if we obtain another prime, which means that $x$ itself is prime. Therefore, $f(x) - f(x/2) \geq n$ if $x \geq R_{n}$; these $R_{n}$ are all primes and are referred to as Ramanujan primes. (http://en.wikipedia.org/wiki/Ramanujan_prime) The smallest Ramanujan prime greater than 10 is
Who is the first Indian to be elected as a fellow of the Royal Society?
Who is the first mathematician to be elected as a fellow of the Royal Society?
The problem of finding the solutions for $a$, $b$, $m$, $n$ and $x$, $y$ such that $a^{3} + b^{3} = m^{3} + n^{3} = x^{3} + y^{3}$ is known as Select: TaxiCab(2) problem | Box Counting Problem | TaxiCab(3) problem | House Counting problem
Ramanujan was awarded a Bachelor of Science by Research in 1916 (later referred to as a Ph.D.) for his work on Select: Smooth Numbers | Highly Composite Numbers | Amicable Numbers | Ramanujan Numbers
A positive integer which has more divisors than any smaller positive integer is called Select: Fermat Numbers | Highly Composite Numbers | Ramanujan Numbers | Lychrel Numbers
The first five Highly Composite Numbers are Select: 1, 2, 4, 8, 10 | 1, 2, 4, 6, 12 | 1, 3, 6, 8, 9 | None of these
Which one is not a highly composite number?
Apart from Number Theory and Mathematical Analysis, Ramanujan made great contributions in the area of Select: Partial Fractions | Continued Fractions | Dyadic Fractions | None of these
Ramanujan worked extensively with the following two English mathematicians. Select: J. E. Littlewood and G. H. Hardy | Charles Babbage and S. L. Loney | Paul Dirac and John Venn | None of these
A positive integer in a given base, if it is divisible by the sum of its digits in the same base, is called a Harshad number. Which one is not a Harshad number? Select: 7 | 19 x 91 | 11 | 55440
In the December 1914 issue of the English magazine 'Strand', a King's College student saw the following puzzle and narrated it to Ramanujan. In a long street there are $n$ houses, where $n$ is greater than 50 and less than 500, and the houses are numbered from left to right as 1, 2, 3, and so on up to $n$. The problem is to find a house with number $x$ such that the sum of the house numbers to the left of it equals the sum of the house numbers to its right. Ramanujan, while cooking vegetables in a frying pan, gave the most general solution to the whole class of problems, not just the one with the constraint $50 < n < 500$. Who is the King's College student who narrated the problem to Ramanujan? Select: Meghnad Saha | G. H. Hardy | P. C. Mahalanobis | S. L. Loney
In the December 1914 issue of the English magazine 'Strand', a King's College student saw the following puzzle and narrated it to Ramanujan. In a long street there are $n$ houses, where $n$ is greater than 50 and less than 500, and the houses are numbered from left to right as 1, 2, 3, and so on up to $n$. The problem is to find a house with number $x$ such that the sum of the house numbers to the left of it equals the sum of the house numbers to its right.
Ramanujan, while cooking vegetables in a frying pan, gave the most general solution to the whole class of problems, not just the one with the constraint $50 < n < 500$. What is the solution for this specific instance of the problem? Select: x = 200, n = 343 | x = 299, n = 499 | x = 204, n = 288 | No Solution Exists
The number of positive integers that are less than a given integer $x$ and can be expressed as the sum of two square numbers is proportional to $x$ divided by the square root of $\ln(x)$, i.e., $\frac{x}{\sqrt{\ln(x)}}$. The constant of proportionality as $x$ grows to infinity is called the Landau–Ramanujan constant, and its value is Select: 0.76422 | 3.141 | 1.5 | 0.65413
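As an aside (not part of the original quiz), the house-number puzzle above is easy to check by brute force: the balancing condition $1+\dots+(x-1) = (x+1)+\dots+n$ reduces to $x^2 = n(n+1)/2$. The short script below, with arbitrary variable names, scans the stated range $50 < n < 500$.

```python
# Brute-force check of the house puzzle: find n with 50 < n < 500 and a house x
# such that 1 + 2 + ... + (x-1) equals (x+1) + ... + n.
# Using triangular numbers, the condition simplifies to x**2 == n*(n+1)/2.
for n in range(51, 500):
    total = n * (n + 1) // 2
    x = int(round(total ** 0.5))
    if x * x == total:
        print(f"n = {n}, x = {x}")   # prints: n = 288, x = 204
```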
$G$-Chaplygin systems with internal symmetries, truncation, and an (almost) symplectic view of Chaplygin's ball JGM Home Dirac cotangent bundle reduction March 2009, 1(1): 55-85. doi: 10.3934/jgm.2009.1.55 Three-dimensional discrete systems of Hirota-Kimura type and deformed Lie-Poisson algebras Andrew N. W. Hone 1, and Matteo Petrera 2, Institute of Mathematics, Statistics and Actuarial Science, University of Kent, Canterbury CT2 7NF, United Kingdom Dipartimento di Fisica, Università degli Studi Roma Tre and Sezione INFN, Roma Tre, Via della Vasca Navale 84, 00146 Roma, Italy Received October 2008 Published April 2009 Recently Hirota and Kimura presented a new discretization of the Euler top with several remarkable properties. In particular this discretization shares with the original continuous system the feature that it is an algebraically completely integrable bi-Hamiltonian system in three dimensions. The Hirota-Kimura discretization scheme turns out to be equivalent to an approach to numerical integration of quadratic vector fields that was introduced by Kahan, who applied it to the two-dimensional Lotka-Volterra system. The Euler top is naturally written in terms of the $\mathfrak{so}(3)$ Lie-Poisson algebra. Here we consider algebraically integrable systems that are associated with pairs of Lie-Poisson algebras in three dimensions, as presented by Gümral and Nutku, and construct birational maps that discretize them according to the scheme of Kahan and Hirota-Kimura. We show that the maps thus obtained are also bi-Hamiltonian, with pairs of compatible Poisson brackets that are one-parameter deformations of the original Lie-Poisson algebras, and hence they are completely integrable. For comparison, we also present analogous discretizations for three bi-Hamiltonian systems that have a transcendental invariant, and finally we analyze all of the maps obtained from the viewpoint of Halburd's Diophantine integrability criterion. Keywords: Diophantine integrability., Integrable discretizations, Lie-Poisson algebras. Mathematics Subject Classification: Primary: 37K10; Secondary: 14E0. Citation: Andrew N. W. Hone, Matteo Petrera. Three-dimensional discrete systems of Hirota-Kimura type and deformed Lie-Poisson algebras. Journal of Geometric Mechanics, 2009, 1 (1) : 55-85. doi: 10.3934/jgm.2009.1.55 Meera G. Mainkar, Cynthia E. Will. Examples of Anosov Lie algebras. Discrete & Continuous Dynamical Systems - A, 2007, 18 (1) : 39-52. doi: 10.3934/dcds.2007.18.39 Aristophanes Dimakis, Folkert Müller-Hoissen. Bidifferential graded algebras and integrable systems. Conference Publications, 2009, 2009 (Special) : 208-219. doi: 10.3934/proc.2009.2009.208 Tracy L. Payne. Anosov automorphisms of nilpotent Lie algebras. Journal of Modern Dynamics, 2009, 3 (1) : 121-158. doi: 10.3934/jmd.2009.3.121 Robert L. Griess Jr., Ching Hung Lam. Groups of Lie type, vertex algebras, and modular moonshine. Electronic Research Announcements, 2014, 21: 167-176. doi: 10.3934/era.2014.21.167 M. P. de Oliveira. On 3-graded Lie algebras, Jordan pairs and the canonical kernel function. Electronic Research Announcements, 2003, 9: 142-151. Isaac A. García, Jaume Giné, Jaume Llibre. Liénard and Riccati differential equations related via Lie Algebras. Discrete & Continuous Dynamical Systems - B, 2008, 10 (2&3, September) : 485-494. doi: 10.3934/dcdsb.2008.10.485 Özlem Orhan, Teoman Özer. New conservation forms and Lie algebras of Ermakov-Pinney equation. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 735-746. 
doi: 10.3934/dcdss.2018046 Thierry Paul, David Sauzin. Normalization in Banach scale Lie algebras via mould calculus and applications. Discrete & Continuous Dynamical Systems - A, 2017, 37 (8) : 4461-4487. doi: 10.3934/dcds.2017191 Sasho Popov, Jean-Marie Strelcyn. The Euler-Poisson equations: An elementary approach to integrability conditions. Journal of Geometric Mechanics, 2018, 10 (3) : 293-329. doi: 10.3934/jgm.2018011 Francisco Crespo, Francisco Javier Molero, Sebastián Ferrer. Poisson and integrable systems through the Nambu bracket and its Jacobi multiplier. Journal of Geometric Mechanics, 2016, 8 (2) : 169-178. doi: 10.3934/jgm.2016002 Primitivo B. Acosta-Humánez, Martha Alvarez-Ramírez, David Blázquez-Sanz, Joaquín Delgado. Non-integrability criterium for normal variational equations around an integrable subsystem and an example: The Wilberforce spring-pendulum. Discrete & Continuous Dynamical Systems - A, 2013, 33 (3) : 965-986. doi: 10.3934/dcds.2013.33.965 A. A. Kirillov. Family algebras. Electronic Research Announcements, 2000, 6: 7-20. Antonio Fernández, Pedro L. García. Regular discretizations in optimal control theory. Journal of Geometric Mechanics, 2013, 5 (4) : 415-432. doi: 10.3934/jgm.2013.5.415 Barnabas M. Garay, Keonhee Lee. Attractors under discretizations with variable stepsize. Discrete & Continuous Dynamical Systems - A, 2005, 13 (3) : 827-841. doi: 10.3934/dcds.2005.13.827 David DeLatte. Diophantine conditions for the linearization of commuting holomorphic functions. Discrete & Continuous Dynamical Systems - A, 1997, 3 (3) : 317-332. doi: 10.3934/dcds.1997.3.317 Shrikrishna G. Dani. Simultaneous diophantine approximation with quadratic and linear forms. Journal of Modern Dynamics, 2008, 2 (1) : 129-138. doi: 10.3934/jmd.2008.2.129 Dmitry Kleinbock, Barak Weiss. Dirichlet's theorem on diophantine approximation and homogeneous flows. Journal of Modern Dynamics, 2008, 2 (1) : 43-62. doi: 10.3934/jmd.2008.2.43 Hans Koch, João Lopes Dias. Renormalization of diophantine skew flows, with applications to the reducibility problem. Discrete & Continuous Dynamical Systems - A, 2008, 21 (2) : 477-500. doi: 10.3934/dcds.2008.21.477 E. Muñoz Garcia, R. Pérez-Marco. Diophantine conditions in small divisors and transcendental number theory. Discrete & Continuous Dynamical Systems - A, 2003, 9 (6) : 1401-1409. doi: 10.3934/dcds.2003.9.1401 Chao Ma, Baowei Wang, Jun Wu. Diophantine approximation of the orbits in topological dynamical systems. Discrete & Continuous Dynamical Systems - A, 2019, 39 (5) : 2455-2471. doi: 10.3934/dcds.2019104 Andrew N. W. Hone Matteo Petrera
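For readers curious how the Kahan/Hirota-Kimura discretization described in the abstract above looks in practice, here is a minimal sketch for the Euler top: every quadratic term is replaced by the symmetric bilinear average of old and new variables, so each time step reduces to solving a small linear system. The step size, coefficients and initial condition below are arbitrary illustration values, and this is not code from the paper.

```python
# Hirota-Kimura / Kahan discretization of the Euler top:
#   x1' - x1 = eps*a1*(x2'*x3 + x2*x3'),  and cyclic permutations
# (the factor 1/2 of Kahan's rule is absorbed into eps).
# Each step is implicit but linear in (x1', x2', x3'), so we solve a 3x3 system.
import numpy as np

def hk_step(x, alpha, eps):
    x1, x2, x3 = x
    a1, a2, a3 = alpha
    # Collect the new variables on the left-hand side:
    #   x1' - eps*a1*x3*x2' - eps*a1*x2*x3' = x1, etc.
    M = np.array([
        [1.0,            -eps * a1 * x3, -eps * a1 * x2],
        [-eps * a2 * x3,  1.0,           -eps * a2 * x1],
        [-eps * a3 * x2, -eps * a3 * x1,  1.0           ],
    ])
    return np.linalg.solve(M, x)

alpha = (1.0, -0.5, -0.5)          # made-up coefficients
x = np.array([1.0, 0.2, 0.3])      # made-up initial condition
for _ in range(1000):
    x = hk_step(x, alpha, 0.01)
print(x)
```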
Joint uniform convergence in distribution of random variables and constant by Lundborg, Last Updated July 12, 2019 10:20 AM
Let $(X_{n, \theta})_{n \in \mathbb{N}, \theta \in \Theta}$ be a sequence of parameter-dependent real-valued random variables, where $\Theta$ is some parameter space. Assume that $X_{n, \theta}$ converges uniformly to $X_\theta$, i.e. for any continuous and bounded $f: \mathbb{R} \to \mathbb{R}$ $$ \sup_{\theta} \left|E(f(X_{n, \theta})) - E(f(X_\theta)) \right| \to 0 $$ as $n \to \infty$. Let $(y_\theta)_{\theta \in \Theta}$ be some family of real numbers. Does $(X_{n, \theta}, y_\theta)$ then converge uniformly to $(X_\theta, y_\theta)$, i.e. for any continuous and bounded $f: \mathbb{R}^2 \to \mathbb{R}$ $$ \sup_{\theta} \left|E(f(X_{n, \theta}, y_\theta)) - E(f(X_\theta, y_\theta)) \right| \to 0 $$ as $n \to \infty$? Intuitively I find it crazy that adding a constant that does nothing would change this convergence, but perhaps I need some assumptions like boundedness of $y_\theta$ (which would be fine); I just can't figure out a way to show it. Usually arguments like this will be of the form: note that $g(x) = f(x, y_\theta)$ is continuous and then we're done, but $g$ is now $\theta$-dependent and therefore I don't think the argument works. Any ideas? Tags : probability-theory
Physics > Atomic Physics [Submitted on 30 Jun 2021 (v1), last revised 19 Oct 2021 (this version, v2)] Title:Precision Measurement of the Helium $2^{3\!}S_1- 2^{3\!}P/3^{3\!}P$ Tune-Out Frequency as a Test of QED Authors:B. M. Henson, J. A. Ross, K. F. Thomas, C. N. Kuhn, D. K. Shin, S. S. Hodgman, Yong-Hui Zhang, Li-Yan Tang, G. W. F. Drake, A. T. Bondy, A. G. Truscott, K. G. H. Baldwin Abstract: Despite quantum electrodynamics (QED) being one of the most stringently tested theories underpinning modern physics, recent precision atomic spectroscopy measurements have uncovered several small discrepancies between experiment and theory. One particularly powerful experimental observable that tests QED independently of traditional energy level measurements is the `tune-out' frequency, where the dynamic polarizability vanishes and the atom does not interact with applied laser light. In this work, we measure the `tune-out' frequency for the $2^{3\!}S_1$ state of helium between transitions to the $2^{3\!}P$ and $3^{3\!}P$ manifolds and compare it to new theoretical QED calculations. The experimentally determined value of $725\,736\,700\,$$(40_{\mathrm{stat}},260_{\mathrm{syst}})$ MHz is within ${\sim} 2.5\sigma$ of theory ($725\,736\,053(9)$ MHz), and importantly resolves both the QED contributions (${\sim} 30 \sigma$) and novel retardation (${\sim} 2 \sigma$) corrections. Subjects: Atomic Physics (physics.atom-ph); Quantum Gases (cond-mat.quant-gas) Cite as: arXiv:2107.00149 [physics.atom-ph] (or arXiv:2107.00149v2 [physics.atom-ph] for this version) From: Kieran Francis Thomas [view email] [v1] Wed, 30 Jun 2021 23:23:37 UTC (7,883 KB) [v2] Tue, 19 Oct 2021 05:52:25 UTC (8,405 KB) physics.atom-ph cond-mat.quant-gas
On the nature of the candidate T-Tauri star V501 Aurigae (1702.04512) M. Vaňko, G. Torres, L. Hambálek, T. Pribulla, L.A. Buchhave, J. Budaj, P. Dubovský, Z. Garai, C. Ginski, K. Grankin, R. Komžík, V. Krushevska, E. Kundra, C. Marka, M. Mugrauer, R. Neuhaeuser, J. Ohlert, Š. Parimucha, V. Perdelwitz, St. Raetz, S.Yu. Shugarov Feb. 15, 2017 astro-ph.SR We report new multi-colour photometry and high-resolution spectroscopic observations of the long-period variable V501 Aur, previously considered to be a weak-lined T-Tauri star belonging to the Taurus-Auriga star-forming region. The spectroscopic observations reveal that V501 Aur is a single-lined spectroscopic binary system with a 68.8-day orbital period, a slightly eccentric orbit (e ~ 0.03), and a systemic velocity discrepant from the mean of Taurus-Auriga. The photometry shows quasi-periodic variations on a different, ~55-day timescale that we attribute to rotational modulation by spots. No eclipses are seen. The visible object is a rapidly rotating (vsini ~ 25 km/s) early K star, which along with the rotation period implies it must be large (R > 26.3 Rsun), as suggested also by spectroscopic estimates indicating a low surface gravity. The parallax from the Gaia mission and other independent estimates imply a distance much greater than the Taurus-Auriga region, consistent with the giant interpretation. Taken together, this evidence together with a re-evaluation of the LiI~$\lambda$6707 and H$\alpha$ lines shows that V501 Aur is not a T-Tauri star, but is instead a field binary with a giant primary far behind the Taurus-Auriga star-forming region. The large mass function from the spectroscopic orbit and a comparison with stellar evolution models suggest the secondary may be an early-type main-sequence star. Affordable echelle spectroscopy of the eccentric HAT-P-2, WASP-14 and XO-3 planetary systems with a sub-meter-class telescope (1608.00745) Z. Garai, T. Pribulla, Ľ. Hambálek, E. Kundra, M. Vaňko, S. Raetz, M. Seeliger, C. Marka, H. Gilbert Aug. 2, 2016 astro-ph.EP, astro-ph.IM A new off-shelf low-cost echelle spectrograph was installed recently on the 0.6m telescope at the Star\'a Lesn\'a Observatory (Slovakia). In this paper we describe in details the radial velocity (RV) analysis of the first three transiting planetary systems, HAT-P-2, WASP-14 and XO-3, observed with this instrument. Furthermore, we compare our data with the RV data achieved with echelle spectrographs of other sub-meter-, meter- and two-meter-class telescopes in terms of their precision. Finally, we investigate the applicability of our RV data for modeling orbital parameters. Search for transiting exoplanets and variable stars in the open cluster NGC 7243 (1601.04562) Z. Garai, T. Pribulla, L. Hambálek, R. Errmann, Ch. Adam, S. Buder, T. Butterley, V.S. Dhillon, B. Dincel, H. Gilbert, Ch. Ginski, L.K. Hardy, A. Kellerer, M. Kitze, E. Kundra, S.P. Littlefair, M. Mugrauer, J. Nedoroščík, R. Neuhäuser, A. Pannicke, S. Raetz, J.G. Schmidt, T.O.B. Schmidt, M. Seeliger, M. Vaňko, R.W. Wilson Jan. 18, 2016 astro-ph.SR, astro-ph.EP We report results of the first five observing campaigns for the open stellar cluster NGC 7243 in the frame of project Young Exoplanet Transit Initiative (YETI). The project focuses on the monitoring of young and nearby stellar clusters, with the aim to detect young transiting exoplanets, and to study other variability phenomena on time-scales from minutes to years. 
After five observing campaigns and additional observations during 2013 and 2014, a clear and repeating transit-like signal was detected in the light curve of J221550.6+495611. Furthermore, we detected and analysed 37 new eclipsing binary stars in the studied region. The best fit parameters and light curves of all systems are given. Finally, we detected and analysed 26 new, presumably pulsating variable stars in the studied region. The follow-up investigation of these objects, including spectroscopic measurements of the exoplanet candidate, is currently planned. Investigation of a transiting planet candidate in Trumpler 37: an astrophysical false positive eclipsing spectroscopic binary star (1403.6020) R. Errmann, G. Torres, T.O.B. Schmidt, M. Seeliger, A.W. Howard, G. Maciejewski, R. Neuhäuser, S. Meibom, A. Kellerer, D.P. Dimitrov, B. Dincel, C. Marka, M. Mugrauer, Ch. Ginski, Ch. Adam, St. Raetz, J.G. Schmidt, M.M. Hohle, A. Berndt, M. Kitze, L. Trepl, M. Moualla, T. Eisenbeiß, S. Fiedler, A. Dathe, Ch. Graefe, N. Pawellek, K. Schreyer, D.P. Kjurkchieva, V.S. Radeva, V. Yotov, W.P. Chen, S.C.-L. Hu, Z.-Y. Wu, X. Zhou, T. Pribulla, J. Budaj, M. Vaňko, E. Kundra, Ľ. Hambálek, V. Krushevska, Ł. Bukowiecki, G. Nowak, L. Marschall, H. Terada, D. Tomono, M. Fernandez, A. Sota, H. Takahashi, Y. Oasa, C. Briceño, R. Chini, C.H. Broeg March 24, 2014 astro-ph.SR We report our investigation of the first transiting planet candidate from the YETI project in the young (~4 Myr old) open cluster Trumpler 37. The transit-like signal detected in the lightcurve of the F8V star 2M21385603+5711345 repeats every 1.364894+/-0.000015 days, and has a depth of 54.5+/-0.8 mmag in R. Membership to the cluster is supported by its mean radial velocity and location in the color-magnitude diagram, while the Li diagnostic and proper motion are inconclusive in this regard. Follow-up photometric monitoring and adaptive optics imaging allow us to rule out many possible blend scenarios, but our radial-velocity measurements show it to be an eclipsing single-lined spectroscopic binary with a late-type (mid-M) stellar companion, rather than one of planetary nature. The estimated mass of the companion is 0.15-0.44 solar masses. The search for planets around very young stars such as those targeted by the YETI survey remains of critical importance to understand the early stages of planet formation and evolution. The DWARF project: Eclipsing binaries - precise clocks to discover exoplanets (1206.6709) T. Pribulla, M. Vaňko, M. Ammler - von Eiff, M. Andreev, A. Aslantürk, N. Awadalla, D. Baluďanský, A. Bonanno, H. Božić, G. Catanzaro, L. Çelik, P. E. Christopoulou, E. Covino, F. Cusano, D. Dimitrov, P. Dubovský, E. M. Esmer, A. Frasca, Ľ. Hambálek, M. Hanna, A. Hanslmeier, B. Kalomeni, D. P. Kjurkchieva, V. Krushevska, I. Kudzej, E. Kundra, Yu. Kuznyetsova, J. W. Lee, M. Leitzinger, G. Maciejewski, D. Moldovan, M. H. M. Morais, M. Mugrauer, R. Neuhäuser, A. Niedzielski, P. Odert, J. Ohlert, İ. Özavcı, A. Papageorgiou, Š. Parimucha, S. Poddaný, A. Pop, M. Raetz, S. Raetz, Ya. Romanyuk, D. Ruždjak, J. Schulz, H. V. Şenavcı, T. Szalai, P. Székely, D. Sudar, C. T. Tezcan, M. E. Törün, V. Turcu, O. Vince, M. Zejda June 28, 2012 astro-ph.SR We present a new observational campaign, DWARF, aimed at detection of circumbinary extrasolar planets using the timing of the minima of low-mass eclipsing binaries. 
The observations will be performed within an extensive network of relatively small to medium-size telescopes with apertures of ~20-200 cm. The starting sample of the objects to be monitored contains (i) low-mass eclipsing binaries with M and K components, (ii) short-period binaries with sdB or sdO component, and (iii) post-common-envelope systems containing a WD, which enable to determine minima with high precision. Since the amplitude of the timing signal increases with the orbital period of an invisible third component, the timescale of project is long, at least 5-10 years. The paper gives simple formulas to estimate suitability of individual eclipsing binaries for the circumbinary planet detection. Intrinsic variability of the binaries (photospheric spots, flares, pulsation etc.) limiting the accuracy of the minima timing is also discussed. The manuscript also describes the best observing strategy and methods to detect cyclic timing variability in the minima times indicating presence of circumbinary planets. First test observation of the selected targets are presented. Young Exoplanet Transit Initiative (YETI) (1106.4244) R. Neuhäuser, R. Errmann, A. Berndt, G. Maciejewski, H. Takahashi, W.P. Chen, D.P. Dimitrov, T. Pribulla, E.H. Nikogossian, E.L.N. Jensen, L. Marschall, Z.-Y. Wu, A. Kellerer, F.M. Walter, C. Briceño, R. Chini, M. Fernandez, St. Raetz, G. Torres, D.W. Latham, S.N. Quinn, A. Niedzielski, Ł. Bukowiecki, G. Nowak, T. Tomov, K. Tachihara, S.C.-L. Hu, L.W. Hung, D.P. Kjurkchieva, V.S. Radeva, B.M. Mihov, L. Slavcheva-Mihova, I.N. Bozhinova, J. Budaj, M. Vaňko, E. Kundra, Ľ. Hambálek, V. Krushevska, T. Movsessian, H. Harutyunyan, J.J. Downes, J. Hernandez, V.H. Hoffmeister, D.H. Cohen, I. Abel, R. Ahmad, S. Chapman, S. Eckert, J. Goodman, A. Guerard, H.M. Kim, A. Koontharana, J. Sokol, J. Trinh, Y. Wang, X. Zhou, R. Redmer, U. Kramm, N. Nettelmann, M. Mugrauer, J. Schmidt, M. Moualla, C. Ginski, C. Marka, C. Adam, M. Seeliger, S. Baar, T. Roell, T.O.B. Schmidt, L. Trepl, T. Eisenbeiß, S. Fiedler, N. Tetzlaff, E. Schmidt, M.M. Hohle, M. Kitze, N. Chakrova, C. Gräfe, K. Schreyer, V.V. Hambaryan, C.H. Broeg, J. Koppenhoefer, A.K. Pandey We present the Young Exoplanet Transit Initiative (YETI), in which we use several 0.2 to 2.6m telescopes around the world to monitor continuously young (< 100 Myr), nearby (< 1 kpc) stellar clusters mainly to detect young transiting planets (and to study other variability phenomena on time-scales from minutes to years). The telescope network enables us to observe the targets continuously for several days in order not to miss any transit. The runs are typically one to two weeks long, about three runs per year per cluster in two or three subsequent years for about ten clusters. There are thousands of stars detectable in each field with several hundred known cluster members, e.g. in the first cluster observed, Tr-37, a typical cluster for the YETI survey, there are at least 469 known young stars detected in YETI data down to R=16.5 mag with sufficient precision of 50 milli-mag rms (5 mmag rms down to R=14.5 mag) to detect transits, so that we can expect at least about one young transiting object in this cluster. If we observe 10 similar clusters, we can expect to detect approximately 10 young transiting planets with radius determinations. 
The precision given above is for a typical telescope of the YETI network, namely the 60/90-cm Jena telescope (similar brightness limit, namely within +/-1 mag, for the others) so that planetary transits can be detected. For planets with mass and radius determinations, we can calculate the mean density and probe the internal structure. We aim to constrain planet formation models and their time-scales by discovering planets younger than 100 Myr and determining not only their orbital parameters, but also measuring their true masses and radii, which is possible so far only by the transit method. Here, we present an overview and first results. (Abstract shortened) Photometric Analysis of Recently Discovered Eclipsing Binary GSC 00008-00901 (0711.1966) S. Parimucha, T. Pribulla, M. Vaňko, P. Dubovsky, L. Hambalek Nov. 13, 2007 astro-ph Photometric analysis of $BVR_C$ light curves of newly discovered eclipsing binary GSC 0008-00901 is presented. The orbital period is improved to 0.28948(11) days. Photometric parameters are determined, as well. The analysis yielded to conclusion that system is an over-contact binary of W UMa type with components not in thermal contact. The light curves from 2005 show the presence of a spot on the surface of one of the components, while light curves from 2006 are not affected by maculation.
The New Big Fish Called Mean-Field Game Theory In recent years, at the interface of game theory, control theory and statistical mechanics, a new baby of applied mathematics was given birth. Now named mean-field game theory, this new model represents a new active field of research with a huge range of applications! This is mathematics in the making! February 5, 2014 ArticleComputer Science, Differential Equations, Game Theory, Mathematics, Optimization, StatisticsLê Nguyên Hoang 15504 views Many people believe that mathematics research is over. Yet, as I often retort to them, we still only know little of the immense ocean of mathematical structures, which, yet, fill the world we live in. One recent advancement is that of mean-field games around 2006, independently by Minyi Huang, Roland Malhamé and Peter Caines in Montreal, and by Jean-Michel Lasry and Fields medalist Pierre-Louis Lions in Paris. This revolutionary model has since been greatly developed by other mathematicians and largely applied to describe complex multi-agent dynamic systems, like Mexican waves, stock markets or fish schoolings. Applications of mean-field games go way beyond the realm of animal swarms! Recently, Lasry and Lions exploited mean-field game theory to design new decision making processes for financial, technological and industrial innovation, as they launched the company mfg labs in Paris! Customers include banks and energy councils. I need to thank Viswanadha Puduru Reddy and Dario Bauso for having taught me the basics of mean-field games I've presented here! I'm also glad to announce that Roland Malhamé, one of the founders of mean-field games, has read and appreciated this article. The Main Idea Sometimes large numbers are much simpler than small ones. The core idea of mean field games is precisely to exploit the smoothing effect of large numbers. Indeed, while game theory struggles with systems of more than 2 individuals, mean-field games turn the problem around by restating game theory as an interaction of each individual with the mass of others. Imagine a fish in a schooling. In classical game theory, it reacts to what other fishes nearby do. And this is very complicated, as there is then a huge number of interactions between the different fishes. This means that classical game theory corresponds to a long list of highly coupled equations. Don't worry if you don't get what I mean. In essence, my point is that the classical game theory model becomes nearly impossible to solve with as little as 3 fishes, and it gets "exponentially harder" with more fishes. I'm using very loosely the concept of "exponentially harder" here! That's not good and you should not do that! But, basically, a way of understanding that is that the number of states grows exponentially in the number of fishes. So, how are things in mean-field game theory? They are cleverly thought! In mean-field game theory, each fish does not care about each of the other fishes. Rather, it cares about how the fishes nearby, as a mass, globally move. In other words, each fish reacts only to the mass. And, amazingly, this mass can be nicely described using the powerful usual tools of statistical mechanics! But how do we know what the mass does? Granted, the motion of the mass is necessarily the result of what each fish does. This means that we actually still have coupled equations, between each fish and the mass. On one hand, the reaction of fishes to the mass is described to the Hamilton-Jacobi-Bellman equation. 
On the other hand, the aggregation of the actions of the fishes which determines the motion of the mass corresponds to the Fokker-Planck-Kolmogorov equation. Mean-field game theory is the combination of these two equations. Waw! This sounds scary… Don't worry, it sounds way more complicated than it really is! Bear with me! Hamilton-Jacobi-Bellman Before getting further, I need to have you pondering upon a simple question… What can fishes do? They can move however they want, can't they? I'm not sure Newton would have liked this answer… Isn't it rather acceleration that's in their control? Oh yeah. And there's also the water currents… Humm… This is getting complicated. So, let's assume that, as you first said, fishes can move however they want. Mathematically, this means that they control their velocity, which is an arrow which points towards their direction of motion. Plus, the longer the arrow the faster fishes swim. So, at any time, a fish controls its velocity, depending on where it is, and where the mass is. Where are you going with this? I'm going to define one of the two main objects of mean-field games: the control $u$. A control is a choice of velocity depending on position and time. Crucially, if all fishes are similar, then they all have the same optimal control. Thus, we only need one control $u$ to describe the actions of all the fishes! Formally, fishes live in a space $\mathbb R^n$ (or even, a manifold), with $n=3$ for the case of real fishes. Denoting $x \in \mathbb R^n$ the position and $t \in \mathbb R$ the time, then a control is a map $u : \mathbb R^n \times \mathbb R \rightarrow \mathbb R^n$, where $u(x, t) \in \mathbb R^n$ is the velocity chosen by a fish located at position $x$ at time $t$. What do you mean by "optimal control"? Hummm… The concept of what's a best control for fishes is quite debatable. But, for simplicity, let's say that a good control should get fishes close to the schooling where it is safer, while avoiding abusing great velocities which are fuel-consuming. In fact, the similarity of fishes I mentioned earlier means that if two fishes are at the same location at the same time, then they'll feel the same unsafeness, and that if they choose the same velocities, then they'll pay the same cost of abusing of that velocity. Crucially, a good control doesn't only care about what's best for the fish right now. It must also consider where the fish should go for it to be safe in the future. How is that modeled? Basically, at each point of time, a fish pays the unsafeness of its position and the exhaustion due to its velocity. Its total loss over all time simply consists in adding up all the losses at all times. Thus, the fish must strike a happy balance between hurrying to reach a future safer position and not running out of energy at present time. This setting is known as an optimal control problem. The unsafeness of fish is typically modeled by a cost $g(x,m)$ which depends on the position $x$ of the fish and on the "position" $m$ of the mass (which I'll explain more about later). Meanwhile, there is a "fuel-consuming" cost due to velocity. Quite often, this velocity cost is modeled by kinetic energy, which equals $^1/_2 ||u||^2$ (more or less a multiplicative factor). The cost over all times is then $\int_t (^1/_2 ||u||^2 + g(x,m)) dt$. In finite horizon, there may also be a cost $G(x, m)$ for the positions of the fish and the mass at end time $T$. 
Also, I should say that control problems can get awfully more complicated than the setting I'm defining here, and that a whole article should be written about them. If you can, please write it! So, how do you solve the optimal control problem? Optimal control is solved like chess! In essence, we should first think of where we should get to, and then work backwards to determine which steps will get us there. This is what's known as dynamic programming. Shouldn't it rather be called something like "backward programming"? I totally agree with you. But I don't name stuffs! So how does dynamic programming solve the optimal control problem? First, we start by judging how costly the possible future positions are. This gives us a map of total future costs. Now, ideally, no matter where a fish is, he'd prefer to get to the least costly future position. However, there's also a cost due to motion: Now, given a present position, we'll want to choose a velocity which minimizes the sum of future total costs and velocity cost. Humm… Can you give an example? Sure! Look at the figure below. On the left, the present position is assumed to be where arrows go out from. Staying still has a future total cost of 4, and no velocity cost. Meanwhile, moving one step to the left yields a future total cost of 2 and a velocity cost of 2, which add up to 4. What's more, moving one step below adds a future total cost of 1 and a velocity cost of 2, adding up to 3. Thus, moving below is less costly than moving to the left or staying still. In fact, it's the least costly move. This is why the optimal control consists in moving downwards. Interestingly, now that we know what the optimal control at the present position is, we can derive the present total cost, which is the sum of present unsafeness (1), future total cost (1) and velocity cost (2): 4. Now, by doing so for all present positions, we can derive the full optimal control at present time, as well as the map of present total costs! Notice that the left picture is the future, from which we deduce the present on the right. This is why I'd rather rename dynamic programming as backward programming. In fact, in mean-field games, the Hamilton-Jacobi-Bellman equation is often informally called the backward equation. Wait a minute… Does dynamic programming mean that we need to discretize space and time? Excellent remark! Well, by increasing the details of discretization, and following Newton's steps, we can in fact derive a continuous version of dynamic programming. This yields the famous Hamilton-Jacobi-Bellman equation, which, in essence, is simply the continuous extension of dynamic programming! Denoting $J(x,t)$ the unsafeness of being at position $x$ at time $t$, the choice of velocity at position $x$ and time $t$ must minimize $(\partial_t J + u \cdot \nabla_x J) + (^1/_2||u||^2) + g(x,m)$. The first term corresponds to total future cost (in future position), the second to the velocity cost, and the last one to present unsafeness. If you're unfamiliar with derivatives, check my article on differential calculus. Fokker-Planck-Kolmogorov So, if I recapitulate, the Hamilton-Jacobi-Bellman tells us how fishes react to the mass. But, as we've already discussed it, the mass is derived from what the fishes do. Wait… What do you mean by "the mass $m$"? Excellent question! The mass $m$ describes all trajectories of all fishes. Waw! That sounds like a scary object! There's a way to make it sound (slightly) less frightening: Let's imagine all the possible trajectories. 
Then, we can simply have $m(x,t)$ counting the ratio of fishes which happen to be at position $x$ at time $t$. To be more accurate, $m(\cdot,t)$ is rather a probability distribution over the space $\mathbb R^n$ fishes live in. But, to obtain differential equations, mean-field games usually assume that this distribution can be described by a probability density function $\mathbb R^n \rightarrow \mathbb R, x \mapsto m(x, t)$. Now, as opposed to the backward Hamilton-Jacobi-Bellman equation, we are now going to work forward: We'll derive the mass of the near future from the present mass and the control. We first need to notice that the velocities given by the control aren't very relevant to describe how the mass moves. Instead, as noticed by statistical mechanics, what rather matters is the "quantity of motion" of fishes, which physicists call momentum. This momentum really is what describes how fishes move. At a given point, this momentum corresponds to the velocity multiplied by the number of fishes in motion. It is thus the vector field $m(x,t) \cdot u(x,t) = (mu) (x,t)$. Now, by adding up all the quantities going in and out of a point, we obtain the Liouville equation. Sparing you the details, what we get is that everything that goes in and out of our point adds up to $div(mu) = \sum \partial_{x_i}(mu)$. This means that the variation of the mass is $\partial_t m = - div(mu)$, which is the Liouville equation. But the Liouville equation features no Brownian motion. Brownian motion? Yes, I need to explain that! The thing is that, if fishes follow the Liouville equation, they'll all end up converging to the single safest point. That's not what happens in reality. While fishes would probably like to all be in the middle of the schooling, they may not get a chance, as the crowdedness will have them knocking one another. One way of interpreting that is to say that they won't be in total control of their trajectories, kind of like a grain of pollen floating in water. In fact, this motion of the grains of pollen was discovered by botanist Robert Brown in 1827, and it is what we now call the Brownian motion. This Brownian motion has played a key role in the History of science, as it was what led Albert Einstein to prove the existence of atoms! This is what I explained in my talk More Hiking in Modern Math World: For our purpose, the important effect of Brownian motion is that there is a natural tendency for fishes to go from crowded regions to less crowded ones. Thus, while safeness makes fishes converge to a single safest point, the Brownian motion spreads them in space. The addition of the latter idea to the Liouville equation yields the famous Fokker-Planck equation, also known as the Kolmogorov forward equation, and which I'll name the Fokker-Planck-Kolmogorov equation here. The relative crowdedness of a point compared to its surroundings is measured by the Laplacian $\Delta_x m = \sum \partial_{x_i x_i}^2 m$. Thus, the Fokker-Planck-Kolmogorov equation is then $\partial_t m = - div(mu) + \;^{\sigma^2}/_2 \Delta_x m$, where $\sigma$ represents the strength of the Brownian motion. More precisely, it is the standard deviation of the Brownian motion in one unit of time (typically, in meters/second). Time-Independency One natural setting in which to study mean-field games is that where there is a perfect symmetry in time. This means that the costs $g$ don't depend on time and that there is no end time. No end time? I know, it sounds weird.
After all, fishes eventually die, so there has to be an end time. But if this end time is so far away compared to the reaction time scales of the fishes, and it's the case in practice, then it's very natural to approximate the very large end time by infinity. There are two main modelings of this time-independent infinite horizon setting. First is to consider that the total cost is the average cost, which means that $J = \lim_{T \rightarrow \infty} \; ^1/_T \int_0^T (^1/_2 ||u||^2 + g(x,m)) dt$. Second is to involve a discount rate, which says that the present counts more than the future. Denoting $\rho > 0$ this discount rate, we'd have $J = \int_0^\infty e^{-\rho t} (^1/_2 ||u||^2 + g(x,m)) dt$. Interesting, differentiating this latter formula can be done analytically. So how does that affect mean-field games? Crucially, now, the controls $u$ no longer depend on the time variable. They are just instructions that give a velocity to be taken depending on position. This means that, at each point of space, there is a velocity to take. This is what physicists call a vector field. Does the same hold for the mass? Absolutely! The mass is now merely described by an unchanging variable $m$. This means that the mass of fishes is remaining still. Or, rather, since this is up to switch of inertial system, that fishes are moving altogether at the same velocity (up to Brownian motion, and along the current to minimize kinetic energy). But what about changes of direction of the fish schooling? Hehe… That's an awesome question. Well, we'd need to get back to the time-dependent case, and include some perturbations of the map of unsafeness, which may be due, for instance, to the apparition of a predator! I've never seen simulations of that, but I trust the maths to make it awesome! Linear-Quadratic Games In this article, whenever I gave formulas so far, I assumed we were in a (nearly) linear-quadratic setting. This means that the control linearly determine the velocity (like in the formula $u=\dot x$, or more generally, if $u=a\dot x + b$`), that the cost of velocity is quadratic (as in the kinetic energy $^1/_2 ||u||^2$), and that the unsafeness of position is also quadratic (which, in fact, we don't need to state the theorems of this last section). Crucially, this makes the Hamilton-Jacobi-Bellman equation easy to transform into a partial differential equation. Namely, removing constant terms, it yields $\min\; ^1/_2 ||u||^2 + u \cdot \nabla_x J$, hence $u = -\nabla_x J$. Thus, we obtain, after including also the Brownian motion, the partial differential equation on $J$ defined by $-\partial_t J + \;^1/_2 ||\nabla_x J||^2 – \;^{\sigma^2}/_2 \Delta J = g(x,m)$. In the time-independent setting with discount rate $\rho$, we get $^1/_2 ||\nabla_x J||^2 – \;^{\sigma^2}/_2 \Delta J + \rho J = g(x,m)$. Not only do these assumptions enable us to write nice-looking partial differential equations, they also imply three holy results mathematicians are always searching for. To be precise, the three results I'll present require assumptions on the boundedness and regularity of cost functions $g$ at all time and $G$ at end time, and on the mass $m$ at time 0. Namely, $g$ and $G$ must be uniformly bounded and Lipschitz-continuous, while $m$ at time 0 must be absolutely continuous with regards to Lebesgue measure (i.e. it must correspond to a classical probability density function). What are the three holy results? The two first ones are very classical. Yes, you know what I'm talking about: existence and uniqueness. 
Why are these so important? Because, if we want our equations to have a chance to describe reality, then their solutions must share one important feature with reality. Namely, they must exist, and they must be unique. More generally, especially in the field of partial differential equations, existence and uniqueness of solutions often represent the Saint-Graal, or, at least, an important mathematical question. Really? Don't people care about solving these? Sure, people do! But since these equations usually can't be solved analytically, we usually turn towards numerical simulations. These simulations typically discretize space and time and approximate the equations. However, while these numerical simulations will run and engineers may then make decisions based on their outcomes, if the equations didn't have a solution to start with, or if they had several of them, then the results of the simulations would then be meaningless! Proving existence and uniqueness yields evidence for the relevancy of numerical simulations. I should point out that numerical simulations may still go wrong even though a partial differential equation does satisfy existence and uniqueness. Numerical simulations can be quite a tricky endeavor. If you can, please write about finite difference, finite element and finite volume methods! In fact, one of the 7 Millenium Prize problem, called the Navier-Stokes existence and smoothness problem, is precisely about determining the existence and uniqueness of solutions to a partial differential equation, as I explain it later in my talk More Hiking in Modern Math World: Each Millenium Prize problem is worth 1 million dollars. And, apparently, Kazakh mathematician Mukhtarbay Otelbayev has claimed the Prize for the Navier-Stokes existence and smoothness problem! However, it may take a while for his proof to be translated, reviewed and accepted… provided it is correct! But what about the mean-field game partial differential equations? Do they satisfy existence and uniqueness? In the linear-quadratic setting with Brownian motion, they do! How cool is that? I guess it's nice… So, what about the third result? The third result is that the solutions can be computed! While classical partial differential equations are solvable by numerical methods, mean-field game equations are a bit trickier as they form coupled partial differential equations. More precisely, given the mass $m$, we can compute the optimal control $u$ with the Hamilton-Jacobi-Bellman equation; and given the control $u$, we can derive the mass $m$ from the Fokker-Planck-Kolmogorov equation. But, at the beginning, we don't know any of the two variables! So how can we compute any of them? Through an iterative process! Let's start with an arbitrary mass $m_0$. Careful! The mass $m_0$ must not be confused with the mass $m(\cdot, 0)$ at time 0. Rather, $m_0 : \mathbb R^n \times \mathbb R \rightarrow \mathbb R_+$ is an arbitrary mass defined for all positions and all times. It is the first mass of our iteration process. Then, we'll compute the corresponding optimal control $u_1$, from which we then derive the mass $m_1$. And we repeat this process to determine $u_2$ and $m_2$, then $u_3$ and $m_3$… and so on. Crucially, this iterative process will yield the right result, as, in some sense I won't develop here, the sequence $(u_n, m_n)$ converges exponentially quickly to the unique solution $(u,m)$ of the mean-field game couple partial differential equations! 
In essence, this is a consequence of the contraction property of the map $(u_n, m_n) \mapsto (u_{n+1}, m_{n+1})$. This contraction property, by the way, is what proves the existence and uniqueness of a solution of the mean-field game coupled partial differential equation. Let's Conclude It's interesting to note that the Hamilton-Jacobi-Bellman equation comes from control theory (which is often studied in electrical and engineering departments), while the Fokker-Planck-Kolmogorov equation first appeared in statistical mechanics. And yet, it is in game theory that their combination has yielded the most relevant models! The story of the creation of mean-field game theory really is that of the power of interdisciplinarity in innovation processes! And yet, I'm confident in the fact that the full potential of mean-field games has not yet been found, and that some hidden mind-blowing applications are still to come! I guess you want to point out, once again, that mathematics is still a mysterious ocean we've only started to explore… Exactly! As weird as it sounds to many, mathematics is more than ever a living and dynamic field of research, with frequent discoveries and conceptual breakthroughs! Which leads me to advertise my own ongoing research… The first part of my PhD research is actually a generalization of mean-field games. Crucially, the key step of mean-field games occurred early in this article, as we restated classical game theory in terms of other interacting variables: The control $u$ and the mass $m$. More generally, in game theory, we can similarly decompose games into two interacting components: The strategies played by players, and the so-called return functions, which I got to name since it's my very own creation! What are these return functions? Hehe! Stay tuned (by subscribing)… as I'll reveal everything about these return functions once my (recently submitted) research paper gets published! More on Science4All Game Theory and the Nash Equilibrium Game Theory and the Nash Equilibrium By Lê Nguyên Hoang | Updated:2016-01 | Views: 12280 In the movie "A Beautiful Mind", the character is John Nash. He is one of the founders of a large and important field of applied mathematics called game theory. Game Theory is the study of human interactions. Its fallouts in economy, politics or biology are countless. This article gives you an introduction to the concepts of this amazing way of thinking. Differential Calculus and the Geometry of Derivatives Differential Calculus and the Geometry of Derivatives By Lê Nguyên Hoang | Updated:2016-02 | Views: 7504 Differential calculus is one of the most important concept of mathematics for science and engineering. This article focuses on its fundamental meaning. Advanced Game Theory Overview Advanced Game Theory Overview By Lê Nguyên Hoang | Updated:2016-02 | Prerequisites: Game Theory and the Nash Equilibrium | Views: 5805 This article gives an overview of recent developments in game theory, including evolutionary game theory, extensive form games, mechanism design, bayesian games and mean field games. More Elsewhere Mean-Field Learning: a Survey by Tembine, Tempone and Vilanova. Prabodini says: Thanks, the article is really helpful and the main idea of mean field games is very well-explained. Muhammad Saad says: Excellent explanation, i was looking for some material on Mean field games, and i found this, once again, very neat and clean way of explanation. Its hard now a days to find some literature like this, you put way hard effort in this. 
Once again, the best article on mean field games. Do share the links of your other articles in the field of game theory, or if you have a personal blog or website then please share a link. I want to explore more.
CommonCrawl
Etched multicore fiber Bragg gratings for refractive index sensing with temperature in-line compensation
Wenbin Hu,1 Chi Li,1,2 Shu Cheng,1,2 Farhan Mumtaz,1,3 Cheng Du,4 and Minghong Yang1,*
1National Engineering Laboratory for Fiber Optic Sensing Technology, Wuhan University of Technology, Luoshi Road 122#, Wuhan 430070, China
2School of Materials Science and Engineering, Wuhan University of Technology, Luoshi Road 122#, Wuhan 430070, China
3School of Information and Communication Engineering, Wuhan University of Technology, Luoshi Road 122#, Wuhan 430070, China
4FiberHome Telecommunication Technologies, Co., Ltd., No. 6, Gaoxinsilu, Wuhan 430205, China
*Corresponding author: [email protected]
https://doi.org/10.1364/OSAC.387019
Wenbin Hu, Chi Li, Shu Cheng, Farhan Mumtaz, Cheng Du, and Minghong Yang, "Etched multicore fiber Bragg gratings for refractive index sensing with temperature in-line compensation," OSA Continuum 3, 1058-1067 (2020)
Original Manuscript: December 30, 2019; Revised Manuscript: April 7, 2020; Manuscript Accepted: April 7, 2020

A novel refractive index sensor based on etched multicore fiber Bragg gratings with temperature in-line compensation is proposed and experimentally demonstrated. After the cladding of the multicore fiber is chemically etched, the six outer cores exhibit sensitive responses to changes in the surrounding refractive index, while the refractive-index-insensitive and temperature-sensitive central core remains inside the multicore fiber. By using the central Bragg wavelength in the multicore fiber as a temperature compensator, the refractive index sensing can be compensated in line. Moreover, the distribution of multiple outer cores makes it possible to average and balance the read-out data, avoiding nonhomogeneous performance. Theoretical analysis and experimental results demonstrate that this structure can easily discriminate the RI and temperature. A maximum sensitivity of 42.83 nm/RIU is obtained at around 1.435 RIU, and the temperature sensitivity is 9.89 pm/°C. The proposed structure is able to determine refractive index and temperature simultaneously, in line and in situ.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction
It is well known that fiber Bragg gratings (FBGs) have been utilized as optical sensors to measure a wide range of physical parameters including temperature, pressure, loading, bending, strain, etc. [1–4]. Conventional FBGs are not sensitive to the surrounding refractive index (SRI) because of their intrinsic coupling mode. However, it has been found that FBGs can achieve considerable sensitivity to refractive index (RI) by heating-and-tapering [5], side-polishing [6], or chemical etching [7].
Comparatively, chemical etching is appealing for producing relatively robust and stable fiber devices with reduced size in a convenient way. A lot of research work has been conducted to achieve RI-sensitive FBGs with the chemical etching technique. For example, Iadicicco et al. [8] demonstrated a thinned FBG sensor that achieves RI sensitivity based on intensity measurements. Chen et al. [9] studied the cladding mode resonances of a chemically etch-eroded fiber Bragg grating for ambient RI sensing. Zhou et al. [10] measured the concentrations of sugar solutions with etched tilted Bragg grating structures in multimode fiber. Osório et al. [11] exposed a surface core containing FBGs to the external environment and achieved a maximum RI sensitivity of ∼40 nm/RIU. However, these previous works mainly focused on RI sensing. Notably, the temperature effect should not be neglected if an accurate RI value is to be obtained, since the RI value itself depends on temperature. Simultaneous sensing of refractive index and temperature has been reported using sampled FBGs, dual long period gratings (LPGs), modified FBGs, hybrid LPG-FBG structures, F-P cavities, and Mach–Zehnder interferometers [12–17]. Recently, multicore fibers (MCFs) have shown great potential for sensing applications, including strain [18], curvature [19], temperature [20], and shape sensing [21]. Additionally, the multicore fiber can also be used for RI sensing. Zhou et al. proposed a Michelson interferometer composed of an asymmetrical twin-core fiber. By properly splicing this twin-core fiber to a single-mode fiber (SMF), a Michelson interferometer is realized with a sensitivity of 826.8 nm/RIU [22]. May-Arrioja and Guzman-Sepulveda demonstrated a highly sensitive RI sensing structure based on multicore coupled structures [23]. These sensors offer high RI sensitivity but are easily influenced by temperature or strain cross-talk. On the other hand, etched MCF can be employed to constitute a cascaded structure for temperature sensing, which exhibits single-parameter sensitivity [24].

In this work, an RI sensor based on etched fiber Bragg gratings in multicore fibers (eFBG-MCFs) with temperature in-line compensation is proposed and experimentally demonstrated. Different from the conventional etched FBG in SMF, this multicore structure contains six outer cores and one central core, located at the same longitudinal position. After HF etching partly removes the cladding of the MCF, the wavelength shifts of the FBGs in the six outer cores exhibit sensitive responses to the SRI change, while an RI-insensitive, temperature-sensitive central core remains inside the MCF. The wavelength shift of the FBGs in the outer cores is used to determine the SRI, while the wavelength shift of the FBG in the central core is employed to compensate for temperature variation. The following experiments demonstrate that this simple structure can easily discriminate the RI and temperature.

2. Principles and simulation
The principle of the etched MCF sensor can be drawn by extending the coupled mode theory to the specific grating structure. The adopted MCF (YOFC, China) contains seven Ge-doped cores, each surrounded by a trench. The MCF diameter is 150.0 µm, the pitch size is 41.5 µm, and the trench and core diameters are 24.0 µm and 8.2 µm, respectively. The six outer cores symmetrically surround the central core. Here, the cladding of the MCF is controllably removed by means of wet chemical etching.
If the cladding diameter is uniformly reduced, it can be presumed that the six outer cores are in the same condition, so the Bragg wavelengths of the different cores can be given by (1)$$\lambda_C = 2 n_{eff,C} \cdot \Lambda$$ (2)$$\lambda_O = 2 n_{eff,O} \cdot \Lambda$$ where $\lambda_C$ and $\lambda_O$ are the Bragg wavelengths of the central core and outer core, respectively, with a grating pitch of $\Lambda$. $n_{eff,C}$ and $n_{eff,O}$ are their effective refractive indices. By immersing the optical fiber without its coating layer in an etching liquid, the cladding is etched gradually, and consequently the surrounding medium comes close to the outer cores. In this case, the weakly guiding approximation is still applicable to the central core grating. As for the outer core gratings, light is coupled out of the core into the cladding, so the FBGs in the outer cores are sensitive to the SRI [25].

To further understand the interaction between the etched fiber and the surrounding environment, numerical simulations in COMSOL Multiphysics were performed for a structure with the following simulation parameters: n_cladding = 1.444, n_trench = 1.442, and n_core = 1.448. Firstly, the cladding diameter is reduced from 150 µm to 80 µm, and the SRI is fixed at 1.333 to study the relationship between the effective RI and the cladding diameter. Figure 1 shows the electrical-field (e-field) distributions for different cladding diameters. In the COMSOL simulations, the distributions are obtained by specifying the two types of modes: the center-core mode and the outer-core mode. Hence, the e-field distributions of the center-core mode (a-d) and the outer-core mode (e-h) are shown respectively. It can be found that the center-core mode remains almost the same, while the outer-core modes change as the cladding diameter decreases. At first, the e-fields are distributed homogeneously in all cores, and the effective RI of the outer cores is the same as that of the central core. Then, as the cladding diameter is reduced to 91 µm and part of the trench is etched, the evanescent e-fields of the outer cores appear clearly, and the effective RI of the outer cores begins to decrease. The e-fields in the outer cores become more sensitive to the SRI. When the cladding diameter is reduced to 88 µm, the outer-core modes evolve into cladding modes. Considering the complexity of the cladding mode, etched diameters below 88 µm should be avoided according to the simulation results. Therefore, the diameter of the proposed sensor should be limited to the range of 88-150 µm.

Fig. 1. Simulated e-field distributions of the center-core mode with diameters of (a) 150 µm, (b) 117 µm, (c) 91 µm and (d) 88 µm, and the outer-core modes with diameters of (e) 150 µm, (f) 117 µm, (g) 91 µm and (h) 88 µm.

Figure 2(a) plots the simulated effective RIs of the central core and one of the outer cores for different cladding diameters. It is obvious that the effective RI of the central core is stable as the cladding diameter changes, which suggests that the central core does not interact with the surrounding media within the specified cladding range. On the other hand, the effective RI of the outer core is at first stable within the cladding-diameter range of 117-150 µm, then decreases rapidly in the range of 88-117 µm. This means the outer core begins to interact strongly with the surrounding media when the cladding diameter is reduced to a certain value, i.e., 117 µm. When the diameter is reduced to approximately 88 µm, the effective RI of the outer core decreases very slowly.
The previous analysis of the simulated e-field distributions showed that the cladding mode is dominant in this case. Obviously, variation of the cladding diameter within the range of 80-88 µm cannot affect the cladding mode, according to the plots in Fig. 2(a).

Fig. 2. (a) Effective RI of the central core and one of the outer cores for different cladding diameters of MCFs (SRI = 1.333). (b) Effective RI of the central core and outer core for MCF diameters of 91 µm, 93 µm and 95 µm under different SRI.

To investigate the influence of the SRI on the SRI sensitivities of the central and outer cores, a simulation of the effective index trend under different SRIs was implemented. For simplicity, three cladding diameters, 95 µm, 93 µm and 91 µm, are focused on. The SRI ranges from that of water (1.333) to that of the cladding (1.444). Figure 2(b) shows the simulated effective RIs corresponding to the three cladding diameters under different SRIs. In the SRI range from 1.333 to 1.442 (the trench), it can be observed that $n_{eff,C}$ is constant regardless of the surrounding media, which means the central core has no sensitivity to the SRI. On the other hand, $n_{eff,O}$ is strongly dependent on the SRI, especially when the SRI approaches the RI of the trench (1.442). As the cladding diameter decreases, $n_{eff,O}$ changes faster, which suggests that a smaller cladding diameter gives a higher RI sensitivity. Obviously, $n_{eff,C}$ and $n_{eff,O}$ tend to reach the same value as the SRI approaches the RI of the trench (1.442). In the SRI range from 1.442 to 1.444 (the cladding), $n_{eff,C}$ and $n_{eff,O}$ increase quickly, because the e-field of the cores couples into the cladding when the SRI exceeds 1.442 RIU. As a result, the sensor has an RI-sensitive range with an upper limit of 1.442 RIU. According to the simulated plots and the above analysis, the effective RI of etched MCFs varies non-linearly with the SRI. This non-linearity implies that the RI sensitivity of FBGs in etched MCFs will also be non-linear, because the effective RI and the FBG wavelength shift are linearly related. A conventional FBG is a perfect strain and temperature sensor due to its linear response, but this is not the case for RI sensing. The simulation results illustrate that when the cladding is partly removed, the outer cores are sensitive to the SRI, while the central core is not. This feature can consequently be employed to perform RI sensing with local temperature compensation, which is the motivation of the proposed eFBG-MCFs.
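As a quick numerical illustration of the Bragg condition in Eqs. (1) and (2), the short Python sketch below computes the resonance wavelengths of the central and outer cores from a grating pitch and two effective indices. The pitch and the exact index values are assumptions chosen only to place the resonance near 1550 nm with an index close to the quoted core value of 1.448; they are not taken from the paper.

# Bragg condition lambda_B = 2 * n_eff * Lambda (Eqs. (1)-(2)); values are illustrative.
pitch_nm = 535.0           # grating pitch Lambda, assumed
n_eff_central = 1.4480     # effective index of the central-core mode, assumed
n_eff_outer = 1.4472       # outer-core mode after etching, assumed slightly lower

def bragg_wavelength(n_eff, pitch_nm):
    return 2.0 * n_eff * pitch_nm

lam_c = bragg_wavelength(n_eff_central, pitch_nm)   # about 1549.4 nm
lam_o = bragg_wavelength(n_eff_outer, pitch_nm)     # about 1548.5 nm
print(f"central core: {lam_c:.2f} nm, outer core: {lam_o:.2f} nm, shift: {lam_o - lam_c:.2f} nm")

Even a decrease of the outer-core effective index of only about 8e-4 moves the outer-core resonance by almost 1 nm under these assumptions, which is why the etched outer cores respond so visibly to the surrounding index.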
Assuming that the fiber is naturally stretched and not subjected to external strain, the wavelength shifts of the gratings of the MCF can be expressed as follows: (3)$$\frac{\Delta\lambda_C}{\lambda_C} = \left( \frac{1}{n_{eff,C}} \cdot \frac{\partial n_{eff,C}}{\partial T} + \frac{1}{\Lambda} \cdot \frac{\partial \Lambda}{\partial T} \right)\Delta T = (\zeta_C + \alpha)\cdot \Delta T$$ (4)$$\frac{\Delta\lambda_O}{\lambda_O} = \left( \frac{1}{n_{eff,O}} \cdot \frac{\partial n_{eff,O}}{\partial T} + \frac{1}{\Lambda} \cdot \frac{\partial \Lambda}{\partial T} \right)\Delta T + \left( \frac{1}{n_{eff,O}} \cdot \frac{\partial n_{eff,O}}{\partial SRI} \right)\Delta SRI = (\zeta_O + \alpha)\cdot \Delta T + \kappa \cdot \Delta SRI$$ where $\zeta_C = \frac{1}{n_{eff,C}} \cdot \frac{\partial n_{eff,C}}{\partial T}$ and $\zeta_O = \frac{1}{n_{eff,O}} \cdot \frac{\partial n_{eff,O}}{\partial T}$ are the thermo-optic coefficients of the central core and the outer cores of the MCF, respectively. $\alpha = \frac{1}{\Lambda} \cdot \frac{\partial \Lambda}{\partial T}$ is the thermal expansion coefficient of the MCF, which is the same for both kinds of grating regions. The thermo-optic coefficients of the central core and outer cores, $\zeta_C$ and $\zeta_O$, are slightly different, which causes the slight difference in temperature sensitivity between the central core and the outer cores. $\kappa = \frac{1}{n_{eff,O}} \cdot \frac{\partial n_{eff,O}}{\partial SRI}$ is the SRI coefficient of the outer cores of the etched MCF. Obviously, $\lambda_C$ is sensitive only to local temperature changes according to Eq. (3), whereas $\lambda_O$ responds to both effects according to Eq. (4). This makes it possible to develop a temperature-compensated RI sensor.

3. Experimental setup and fabrication
A 7-core optical fiber (YOFC, China) with seven trench-assisted cores is used in this work, with a cladding diameter of 150.0 µm, an MFD of 9.5 µm, and a pitch size of 41.5 µm. The Bragg gratings in the MCF were inscribed with a conventional 193 nm excimer laser and a phase mask. The reflection spectra of the FBGs in the MCF are shown in Fig. 3. The minor differences in resonance wavelength among the cores are caused by the nonhomogeneous distribution of laser energy over the different cores during FBG writing, which has a trivial effect on the wavelength interrogation and readout in the following experiments.

Fig. 3. The reflection spectra of FBGs in MCF.

The cladding of the MCF was etched by immersing the MCF with the inscribed FBGs in 20% HF at ambient temperature [26]. To smooth the surface of the etched area, the fiber was then transferred to a 3% HF solution and neutralized with a 2 mol/L NaOH solution after the previous etching step. The side views and cross-sections of MCFs before and after etching are shown in Fig. 4(a-c). The trench-assisted cores are clearly observed in the cross-section pictures via the concentric circles around each core, due to the refractive index difference of the trenches. It is noted that the more the MCF cladding is etched, the more hexagon-like the cross-sections become. This is attributed to the faster chemical etching of the trenches compared with the cladding. As shown in Fig. 4(d-f), the waists of the MCFs along the z-direction are smooth in the side view. Fig. 4.
The microscope images of the cross-sections of the unetched MCF (a) and etched MCFs with diameters of (b) 93.2 µm and (c) 89.8 µm, and their side views (d-f).

Figure 5 illustrates the experimental setup of the proposed RI sensor. As shown in the figure, an 8-channel interrogator is used to record the seven FBGs in the eFBG-MCFs simultaneously via a fan-in device. The spectral responses are acquired by the 8-channel interrogator for the different RI sensing tests, and also during the etching process. The insets of Fig. 5 show the schematic diagram of the probe after etching. The length of the etched section is approximately 10 mm.

Fig. 5. Experimental setup of the proposed RI sensor and the schematic diagram of the probe.

Figure 6 plots the evolution of the FBG resonance wavelength shifts of the different cores during the etching process. It can be found that the resonance wavelengths of all cores increase during the first 100 minutes. This effect could be attributed to the continuous exothermic reaction of HF and SiO2, similar to the behavior reported in previous work [27]. After etching for approximately 102 minutes, the resonance wavelengths of the six outer cores (O1-O6) show an abrupt reduction, while the resonance wavelength of the central core shows a slight blue shift. Hence, the stable trend of the central core ensures the possibility of temperature compensation. On the other hand, there is a small spread of approximately 0.15 nm among the wavelength shifts of the outer cores, which is attributed to the inhomogeneous etching speed of the cladding. This variation can be balanced by averaging the diverse wavelength shifts, which helps compensate the diversity and eliminate nonhomogeneous performance during the subsequent RI measurements.

Fig. 6. The evolution of the resonance wavelength shifts of the two types of cores, the central core (C1) and the outer cores (O1-O6).

After the cladding is partly removed, the interaction between the outer-core mode and the surrounding medium (HF solution) becomes stronger, and the interface approaches the boundary of the outer cores. The Bragg gratings of the outer cores show an improved RI sensitivity due to strong coupling with higher-order modes, which interact with the liquid localized in the vicinity of the outer cores. Meanwhile, the Bragg grating of the central core shows no sensitivity to RI. In the following experiments, the performance of etched MCFs with three diameters of 89.8 µm, 93.2 µm and 94.3 µm is investigated.

4. Results and discussions
The RI sensing experiments were conducted by immersing the fabricated sensors into aqueous solutions of glycerin at room temperature (25°C). The RIs of the solutions were varied from 1.333 to 1.442 by continuously tuning the glycerin concentration. The actual RIs of the glycerin solutions were calibrated with an Abbe refractometer. From the microscope images, it is noted that the degrees of cladding etching for the six outer cores of the MCF are slightly different, which causes effective RI diversity and different sensing performance for each outer core. For this sensor, the disadvantage of non-homogeneity can be overcome by using the average value of the wavelength shifts of the six outer cores (denoted by $\bar{O}$) to represent the general change of all of the outer cores. The distribution of the six outer cores of the MCF enables averaging out the nonhomogeneous performance, which presents a particular advantage over conventional etched SMF or other twin-core optical fiber sensors [22,28–30].
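Since the six outer-core read-outs are combined into the single quantity $\bar{O}$, the averaging step itself is only a few lines of code. The sketch below uses invented shift values purely to illustrate how the spread among the outer cores is balanced out while the central core is kept aside as the temperature reference.

import numpy as np

# Invented wavelength shifts (nm) for the six outer cores O1-O6 and the central core C1.
delta_outer = np.array([1.02, 1.10, 0.98, 1.07, 1.15, 1.00])
delta_central = 0.02

o_bar = delta_outer.mean()                          # averaged outer-core shift O_bar
spread = delta_outer.max() - delta_outer.min()      # core-to-core diversity being averaged out
print(f"O_bar = {o_bar:.3f} nm, spread = {spread:.3f} nm, central (temperature reference) = {delta_central:.3f} nm")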
In the following plots and discussions, $\bar{O}$ is regarded as the wavelength shift of the outer cores under different conditions, while C represents the trend of the central core. Figure 7(a) plots the wavelength shifts as non-linear functions of the SRI for the central core (C) and the outer cores ($\bar{O}$) with diameters of 89.8 µm, 93.2 µm and 94.3 µm, respectively. From Fig. 7(a), the RI sensitivities for the three diameters are obviously different. The change of SRI has no influence on C, regardless of diameter. On the other hand, an increase of RI induces a red shift of $\bar{O}$, and the slope of $\bar{O}$ increases as the cladding diameter decreases. By performing an exponential fit to the experimental data, expressions are obtained for each sensor with diameters of 89.8 µm, 93.2 µm and 94.3 µm, respectively (5)$$\Delta\lambda = 4.174\times 10^{-36}\,\mathrm{e}^{RI/0.018} + 0.061,\quad R^2 = 99.1\%$$ where Δλ represents the absolute value of the wavelength change.

Fig. 7. (a) The experimental results of wavelength shifts as functions of the SRI for the central core (C) and the outer cores ($\bar{O}$) with diameters of 89.8 µm, 93.2 µm and 94.3 µm, respectively. (b) The simulated wavelength shifts calculated from the computed effective RI of the simulation models.

It can be concluded that when the cladding diameter is decreased from 94.3 to 93.2 and 89.8 µm, the maximum RI sensitivity at around 1.435 RIU, as shown by the grey dashed line in the grey window of Fig. 7(a), is calculated to be 7.65 nm/RIU, 17.86 nm/RIU and 42.83 nm/RIU, respectively. Significantly, etching the diameter by a further 4.5 µm (from 94.3 to 89.8 µm) improves the sensitivity more than four-fold. Figure 7(b) shows the simulation results of the wavelength shifts as functions of the SRI for the outer cores with diameters of 89.8 µm, 93.2 µm and 94.3 µm, respectively. The simulated wavelength shifts are calculated at a specified wavelength according to the computed effective RI, consistent with the simulations in Fig. 2(b). The simulation results are found to be in good agreement with the experimental results shown in Fig. 7(a). Unavoidably, the RI sensitivity enhancement within the range of 1.33-1.39 RIU, inferred from the experimental results in Fig. 7(a), is not obvious due to the fundamental constraint of effective index variation in this range, as shown in Fig. 7(b). The RI sensitivity within the range of 1.39-1.42 RIU is also improved by cladding etching. The experimental and simulated results confirm that, as inferred from the previous analysis, the sensitivity can be optimized by tuning the thickness of the MCF cladding during sensor fabrication. However, it can be concluded from the comparison of the data trends in Fig. 7(a) and (b) that the fitting equations differ between the experiments and the simulations. This is mainly due to the process control of the chemical etching and the difficulty of employing precise parameters in the simulation. As for the experimental results, the fitting equations (Eqs. 5–7) are not only derived from the experimental data but can also be used for sensor calibration in future applications. The calibration function of each sensor will be slightly different due to fluctuations in the fabrication process and sensor configuration, at least at the current stage, which can hopefully be improved with more precise process control in the future.
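To show how the calibration and the in-line compensation could be combined in software, here is a hedged Python sketch. It fits the same functional form as Eq. (5), reparametrized as exp((RI − r0)/t) + c so that the fitted amplitude stays well conditioned, removes the temperature contribution from the averaged outer-core shift using the central core and the ratio of the two temperature sensitivities reported below for the 93.2 µm probe, and finally inverts the calibration to recover the RI. The calibration points and the measured shifts are invented for illustration; only the functional form and the compensation idea come from the paper.

import numpy as np
from scipy.optimize import brentq, curve_fit

# Calibration curve with the same shape as Eq. (5), written as exp((RI - r0)/t) + c
# instead of A*exp(RI/t) + c so the fitted amplitude stays numerically well behaved.
def calibration(ri, r0, t, c):
    return np.exp((ri - r0) / t) + c

# Hypothetical calibration data (RI vs. averaged outer-core shift in nm).
ri_cal = np.array([1.333, 1.360, 1.380, 1.400, 1.420, 1.435])
dlam_cal = np.array([0.063, 0.072, 0.096, 0.168, 0.389, 0.818])
popt, _ = curve_fit(calibration, ri_cal, dlam_cal, p0=(1.43, 0.02, 0.06))

# A hypothetical measurement: the raw outer-core shift contains both the RI and the
# temperature effect, while the central core responds to temperature only.
dlam_outer_raw = 0.30      # nm, averaged outer-core shift O_bar
dlam_central = 0.05        # nm, central-core shift
s_t_outer, s_t_central = 9.84e-3, 10.01e-3   # nm/degC for the 93.2 um probe

# Remove the temperature-induced part of the outer-core shift (cf. Eqs. (3)-(4)).
dlam_ri_only = dlam_outer_raw - dlam_central * (s_t_outer / s_t_central)

# Invert the calibration numerically to recover the surrounding refractive index.
ri_estimate = brentq(lambda ri: calibration(ri, *popt) - dlam_ri_only, 1.30, 1.44)
print(f"temperature-compensated RI estimate: {ri_estimate:.4f}")

In the paper's own workflow the calibration constants would of course come from the measured curves in Fig. 7(a) rather than from invented points.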
To evaluate the temperature dependence of the eFBG-MCFs, direct measurements in deionized water at different temperatures were conducted. A high-precision thermometer was placed close to the grating to calibrate the temperature. Figure 8 shows the wavelength shifts as linear functions of temperature for the outer cores ($\bar{O}$) and the central core (C) with diameters of 89.8 µm, 93.2 µm and 94.3 µm, respectively. The temperature sensitivities are measured to be 9.61 pm/°C and 9.89 pm/°C (diameter = 89.8 µm); 9.84 pm/°C and 10.01 pm/°C (diameter = 93.2 µm); and 9.41 pm/°C and 9.76 pm/°C (diameter = 94.3 µm) for $\bar{O}$ and C, respectively. The thermal sensitivity of $\bar{O}$ is slightly lower than that of C because the thermo-optic coefficient of water is weaker than that of the cladding (fused silica), which is similar to the behavior of conventional counterparts [13].

Fig. 8. The wavelength shifts as functions of temperature for the central core (C) and the outer cores ($\bar{O}$) with diameters of 89.8 µm, 93.2 µm and 94.3 µm, respectively.

To evaluate the accuracy and stability of the sensor, the 93.2 µm probe was employed to monitor six RI solutions at temperatures of 30°C, 40°C and 50°C, and the experimental results are plotted in Fig. 9. The temperature fluctuation is ±0.3°C. Figure 9(a) shows the results with the compensation of the central core; the measured RI values have good consistency with the actual RI values, and the maximum error is calculated to be 0.66% at 30°C and 1.342 RIU, which establishes a high accuracy over the full SRI range. Figure 9(b) gives the results without the compensation of the central core, which show large errors. The results demonstrate that temperature compensation is necessary and essential for eFBG-MCFs, and that the proposed sensor achieves high accuracy together with the advantage of in-line and local temperature compensation.

Fig. 9. Error of the measured RI value relative to the actual RI value (a) with compensation and (b) without compensation.

Based on the obtained results, the proposed configuration involving eFBG-MCFs demonstrates the potential to perform in-situ, simultaneous and accurate measurements of refractive index and temperature. Compared with counterparts reported in the literature, the proposed configuration has the advantages of simple fabrication and a strong capability for in-line and in-situ temperature compensation, particularly within micro-environments. Additionally, the multiple outer cores contribute to improved precision and performance by averaging and balancing the read-out data from the six outer cores. Furthermore, the eFBG-MCFs will undoubtedly inherit the major merit of FBGs, with great potential for distributed RI sensing.

5. Conclusion
A refractive index sensor with temperature in-line compensation based on eFBG-MCFs is proposed and experimentally demonstrated. Theoretical analysis and experimental results reveal that the FBGs in the distributed cores of the fabricated sensor can easily discriminate the RI and temperature thanks to the same longitudinal locations of the outer cores and the central core. The RI sensitivity of the FBGs in the outer cores increases as the cladding diameter is decreased by chemical etching, while the central core maintains its RI insensitivity. By taking advantage of this feature, the central core can serve as an in-line and in-situ temperature compensator for RI sensing. A maximum sensitivity of 42.83 nm/RIU is obtained at around 1.435 RIU, and the temperature sensitivity is 9.89 pm/°C.
This device has great potential for RI sensing, with the main feature of RI and temperature discrimination and the further merits of easy fabrication, small size and high stability.

Major Technology Innovation of Hubei Province (2018AAA016); Fundamental Research Funds for the Central Universities (2019-zy-022); National Key Research and Development Program of China (2017YFB0405501).

1. L. Jin, W. Zhang, and H. Zhang, "An embedded FBG sensor for simultaneous measurement of stress and temperature," IEEE Photonics Technol. Lett. 18(1), 154–156 (2006). [CrossRef]
2. L. Liu, Z. Hao, and Q. Zhao, "Temperature-independent FBG pressure sensor with high sensitivity," Opt. Fiber Technol. 13(1), 78–80 (2007). [CrossRef]
3. J. Leng and A. Anand, "Structural health monitoring of smart composite materials by using EFPI and FBG sensors," Sens. Actuators, A 103(3), 330–340 (2003). [CrossRef]
4. A. Rajabzadeh, R. Heusdens, and R. C. Hendriks, "Calculation of the Mean Strain of Smooth Non-Uniform Strain Fields Using Conventional FBG Sensors," J. Lightwave Technol. 36(17), 3716–3725 (2018). [CrossRef]
5. X. Liu, T. Wang, and Y. Wu, "Dual-Parameter Sensor Based on Tapered FBG Combined with Microfiber Cavity," IEEE Photonics Technol. Lett. 26(8), 817–820 (2014). [CrossRef]
6. K. Zhou, X. Chen, and L. Zhang, "High-sensitivity optical chemsensor based on etched D-fibre Bragg gratings," Electron. Lett. 40(4), 232–234 (2004). [CrossRef]
7. S. Sridevi, S. Vasu K, and N. Jayaraman, "Optical bio-sensing devices based on etched fiber Bragg gratings coated with carbon nanotubes and graphene oxide along with a specific dendrimer," Sens. Actuators, B 195, 150–155 (2014). [CrossRef]
8. A. Iadicicco, A. Cusano, and S. Campopiano, "Thinned fiber Bragg gratings as refractive index sensors," IEEE Sens. J. 5(6), 1288–1295 (2005). [CrossRef]
9. N. Chen, B. Yun, and Y. Cui, "Cladding mode resonances of etch-eroded fiber Bragg grating for ambient refractive index sensing," Appl. Phys. Lett. 88(13), 133902 (2006). [CrossRef]
10. X. Chen, K. Zhou, and L. Zhang, "Optical chemsensor based on etched tilted Bragg grating structures in multimode fiber," IEEE Photonics Technol. Lett. 17(4), 864–866 (2005). [CrossRef]
11. J. H. Osório, R. Oliveira, S. Aristilde, G. Chesini, M. A. R. Franco, R. N. Nogueira, and C. M. B. Cordeiro, "Bragg gratings in surface-core fibers: Refractive index and directional curvature sensing," Opt. Fiber Technol. 34, 86–90 (2017). [CrossRef]
12. X. Shu, B. A. Gwandu, and Y. Liu, "Sampled fiber Bragg grating for simultaneous refractive-index and temperature measurement," Opt. Lett. 26(11), 774 (2001). [CrossRef]
13. J. Yan, A. P. Zhang, and L. Y. Shao, "Simultaneous Measurement of Refractive Index and Temperature by Using Dual Long-Period Gratings with an Etching Process," IEEE Sens. J. 7(9), 1360–1361 (2007). [CrossRef]
14. A. Iadicicco, S. Campopiano, and A. Cutolo, "Nonuniform thinned fiber Bragg gratings for simultaneous refractive index and temperature measurements," IEEE Photonics Technol. Lett. 17(7), 1495–1497 (2005). [CrossRef]
15. D. A. C. Enríquez, A. R. D. Cruz, and M. T. M. R. Giraldi, "Hybrid FBG–LPG sensor for surrounding refractive index and temperature simultaneous discrimination," Opt. Laser Technol. 44(4), 981–986 (2012). [CrossRef]
16. C. Gouveia, P. A. S. Jorge, and J. M. Baptista, "Fabry–Pérot Cavity Based on a High-Birefringent Fiber Bragg Grating for Refractive Index and Temperature Measurement," IEEE Sens. J. 12(1), 17–21 (2012). [CrossRef]
17. C. R. Liao, Y. Wang, and D. N. Wang, "Fiber In-Line Mach–Zehnder Interferometer Embedded in FBG for Simultaneous Refractive Index and Temperature Measurement," IEEE Photonics Technol. Lett. 22(22), 1686–1688 (2010). [CrossRef]
Wang, "Fiber In-Line Mach–Zehnder Interferometer Embedded in FBG for Simultaneous Refractive Index and Temperature Measurement," IEEE Photonics Technol. Lett. 22(22), 1686–1688 (2010). [CrossRef] 18. R. M. Silva, M. S. Ferreira, and J. Kobelke, "Simultaneous measurement of curvature and strain using a suspended multicore fiber," Opt. Lett. 36(19), 3939 (2011). [CrossRef] 19. P. Saffari, T. Allsop, and A. Adebayo, "Long period grating in multicore optical fiber: an ultra-sensitive vector bending sensor for low curvatures," Opt. Lett. 39(12), 3508 (2014). [CrossRef] 20. J. E. Antoniolopez, Z. S. Eznaveh, and P. Likamwa, "Multicore fiber sensor for high-temperature applications up to 1000°C," Opt. Lett. 39(15), 4309 (2014). [CrossRef] 21. P. S. Westbrook, T. Kremp, and K. S. Feder, "Continuous multicore optical fiber grating arrays for distributed sensing applications," J. Lightwave Technol. 35(6), 1248–1252 (2017). [CrossRef] 22. A. Zhou, Y. Zhang, and G. Li, "Optical refractometer based on an asymmetrical twin-core fiber Michelson interferometer," Opt. Lett. 36(16), 3221 (2011). [CrossRef] 23. D. A. May-Arrioja and J. R. Guzman-Sepulveda, "Highly sensitive fiber optic refractive index sensor using multicore coupled structures," J. Lightwave Technol. 35(13), 2695–2701 (2017). [CrossRef] 24. F. Mumtaz, P. Cheng, C. Li, S. Cheng, C. Du, M. Yang, Y. Dai, and W. Hu, "A design of taper-like etched multicore fiber refractive index-insensitive a temperature highly sensitive Mach-Zehnder interferometer," IEEE Sensors Journal. 2020 Mar 5 25. A. Kersey, M. A. Davis, and H. J. Patrick, "Fiber grating sensors," J. Lightwave Technol. 15(8), 1442–1463 (1997). [CrossRef] 26. K. Zhou, X. Chen, and L. Zhang, "Implementation of optical chemsensors based on HF-etched fiber Bragg grating structures," Meas. Sci. Technol. 17(5), 1140–1145 (2006). [CrossRef] 27. Y. Yuan, L. Wang, and L. Ding, "Theory, experiment, and application of optical fiber etching," Appl. Opt. 51(24), 5845–5849 (2012). [CrossRef] 28. A. N. Chryssis, S. M. Lee, S. B. Lee, S. S. Saini, and M. Dagenais, "High sensitivity evanescent field fiber Bragg grating sensor," IEEE Photonics Technol. Lett. 17(6), 1253–1255 (2005). [CrossRef] 29. H. Zhou Y, X. Guang Q, and M. Rajibul I, "Simultaneous measurement of aliphatic alcohol concentration and temperature based on etched taper FBG," Sens. Actuators, B 202, 959–963 (2014). [CrossRef] 30. J. Li, H. Wang, L.-P. Sun, Y. Huang, L. Jin, and B.-O. Guan, "Etching Bragg gratings in Panda fibers for the temperature-independent refractive index sensing," Opt. Express 22(26), 31917–31923 (2014). [CrossRef] L. Jin, W. Zhang, and H. Zhang, "An embedded FBG sensor for simultaneous measurement of stress and temperature," IEEE Photonics Technol. Lett. 18(1), 154–156 (2006). L. Liu, Z. Hao, and Q. Zhao, "Temperature-independent FBG pressure sensor with high sensitivity," Opt. Fiber Technol. 13(1), 78–80 (2007). J. Leng and A. Anand, "Structural health monitoring of smart composite materials by using EFPI and FBG sensors," Sens. Actuators, A 103(3), 330–340 (2003). A. Rajabzadeh, R. Heusdens, and R. C. Hendriks, "Calculation of the Mean Strain of Smooth Non-Uniform Strain Fields Using Conventional FBG Sensors," J. Lightwave Technol. 36(17), 3716–3725 (2018). X. Liu, T. Wang, and Y. Wu, "Dual-Parameter Sensor Based on Tapered FBG Combined with Microfiber Cavity," IEEE Photonics Technol. Lett. 26(8), 817–820 (2014). K. Zhou, X. Chen, and L. 
Zhang, "High-sensitivity optical chemsensor based on etched D-fibre Bragg gratings," Electron. Lett. 40(4), 232–234 (2004). S. Sridevi, S. Vasu K, and N. Jayaraman, "Optical bio-sensing devices based on etched fiber Bragg gratings coated with carbon nanotubes and graphene oxide along with a specific dendrimer," Sens. Actuators, B 195, 150–155 (2014). A. Iadicicco, A. Cusano, and S. Campopiano, "Thinned fiber Bragg gratings as refractive index sensors," IEEE Sens. J. 5(6), 1288–1295 (2005). N. Chen, B. Yun, and Y. Cui, "Cladding mode resonances of etch-eroded fiber Bragg grating for ambient refractive index sensing," Appl. Phys. Lett. 88(13), 133902 (2006). X. Chen, K. Zhou, and L. Zhang, "Optical chemsensor based on etched tilted Bragg grating structures in multimode fiber," IEEE Photonics Technol. Lett. 17(4), 864–866 (2005). J. H. Osório, R. Oliveira, S. Aristilde, G. Chesini, M. A. R. Franco, R. N. Nogueira, and C. M. B. Cordeiro, "Bragg gratings in surface-core fibers: Refractive index and directional curvature sensing," Opt. Fiber Technol. 34, 86–90 (2017). X. Shu, B. A. Gwandu, and Y. Liu, "Sampled fiber Bragg grating for simultaneous refractive-index and temperature measurement," Opt. Lett. 26(11), 774 (2001). J. Yan, A. P. Zhang, and L. Y. Shao, "Simultaneous Measurement of Refractive Index and Temperature by Using Dual Long-Period Gratings with an Etching Process," IEEE Sens. J. 7(9), 1360–1361 (2007). A. Iadicicco, S. Campopiano, and A. Cutolo, "Nonuniform thinned fiber Bragg gratings for simultaneous refractive index and temperature measurements," IEEE Photonics Technol. Lett. 17(7), 1495–1497 (2005). D. A. C. Enríquez, A. R. D. Cruz, and M. T. M. R. Giraldi, "Hybrid FBG–LPG sensor for surrounding refractive index and temperature simultaneous discrimination," Opt. Laser Technol. 44(4), 981–986 (2012). C. Gouveia, P. A. S. Jorge, and J. M. Baptista, "Fabry–Pérot Cavity Based on a High-Birefringent Fiber Bragg Grating for Refractive Index and Temperature Measurement," IEEE Sens. J. 12(1), 17–21 (2012). C. R. Liao, Y. Wang, and D. N. Wang, "Fiber In-Line Mach–Zehnder Interferometer Embedded in FBG for Simultaneous Refractive Index and Temperature Measurement," IEEE Photonics Technol. Lett. 22(22), 1686–1688 (2010). R. M. Silva, M. S. Ferreira, and J. Kobelke, "Simultaneous measurement of curvature and strain using a suspended multicore fiber," Opt. Lett. 36(19), 3939 (2011). P. Saffari, T. Allsop, and A. Adebayo, "Long period grating in multicore optical fiber: an ultra-sensitive vector bending sensor for low curvatures," Opt. Lett. 39(12), 3508 (2014). J. E. Antoniolopez, Z. S. Eznaveh, and P. Likamwa, "Multicore fiber sensor for high-temperature applications up to 1000°C," Opt. Lett. 39(15), 4309 (2014). P. S. Westbrook, T. Kremp, and K. S. Feder, "Continuous multicore optical fiber grating arrays for distributed sensing applications," J. Lightwave Technol. 35(6), 1248–1252 (2017). A. Zhou, Y. Zhang, and G. Li, "Optical refractometer based on an asymmetrical twin-core fiber Michelson interferometer," Opt. Lett. 36(16), 3221 (2011). D. A. May-Arrioja and J. R. Guzman-Sepulveda, "Highly sensitive fiber optic refractive index sensor using multicore coupled structures," J. Lightwave Technol. 35(13), 2695–2701 (2017). F. Mumtaz, P. Cheng, C. Li, S. Cheng, C. Du, M. Yang, Y. Dai, and W. Hu, "A design of taper-like etched multicore fiber refractive index-insensitive a temperature highly sensitive Mach-Zehnder interferometer," IEEE Sensors Journal. 2020 Mar 5 A. Kersey, M. A. 
Davis, and H. J. Patrick, "Fiber grating sensors," J. Lightwave Technol. 15(8), 1442–1463 (1997). K. Zhou, X. Chen, and L. Zhang, "Implementation of optical chemsensors based on HF-etched fiber Bragg grating structures," Meas. Sci. Technol. 17(5), 1140–1145 (2006). Y. Yuan, L. Wang, and L. Ding, "Theory, experiment, and application of optical fiber etching," Appl. Opt. 51(24), 5845–5849 (2012). A. N. Chryssis, S. M. Lee, S. B. Lee, S. S. Saini, and M. Dagenais, "High sensitivity evanescent field fiber Bragg grating sensor," IEEE Photonics Technol. Lett. 17(6), 1253–1255 (2005). H. Zhou Y, X. Guang Q, and M. Rajibul I, "Simultaneous measurement of aliphatic alcohol concentration and temperature based on etched taper FBG," Sens. Actuators, B 202, 959–963 (2014). J. Li, H. Wang, L.-P. Sun, Y. Huang, L. Jin, and B.-O. Guan, "Etching Bragg gratings in Panda fibers for the temperature-independent refractive index sensing," Opt. Express 22(26), 31917–31923 (2014). Adebayo, A. Allsop, T. Anand, A. Antoniolopez, J. E. Aristilde, S. Baptista, J. M. Campopiano, S. Chen, N. Cheng, P. Cheng, S. Chesini, G. Chryssis, A. N. Cruz, A. R. D. Cui, Y. Cusano, A. Cutolo, A. Dagenais, M. Dai, Y. Davis, M. A. Ding, L. Du, C. Enríquez, D. A. C. Eznaveh, Z. S. Feder, K. S. Ferreira, M. S. Franco, M. A. R. Giraldi, M. T. M. R. Gouveia, C. Guan, B.-O. Guang Q, X. Guzman-Sepulveda, J. R. Gwandu, B. A. Hao, Z. Hendriks, R. C. Heusdens, R. Hu, W. Iadicicco, A. Jayaraman, N. Jin, L. Jorge, P. A. S. Kersey, A. Kobelke, J. Kremp, T. Lee, S. B. Lee, S. M. Leng, J. Li, C. Li, G. Liao, C. R. Likamwa, P. Liu, X. May-Arrioja, D. A. Mumtaz, F. Nogueira, R. N. Oliveira, R. Osório, J. H. Patrick, H. J. Rajabzadeh, A. Rajibul I, M. Saffari, P. Saini, S. S. Shao, L. Y. Shu, X. Silva, R. M. Sridevi, S. Sun, L.-P. Vasu K, S. Wang, D. N. Wang, L. Westbrook, P. S. Yan, J. Yuan, Y. Yun, B. Zhang, A. P. Zhang, W. Zhang, Y. Zhao, Q. Zhou, A. Zhou, K. Zhou Y, H. Electron. Lett. (1) IEEE Photonics Technol. Lett. (6) IEEE Sens. J. (3) J. Lightwave Technol. (4) Meas. Sci. Technol. (1) Opt. Fiber Technol. (2) Opt. Laser Technol. (1) Sens. Actuators, A (1) Sens. Actuators, B (2) (1) λ C = 2 n e f f , C ⋅ Λ (2) λ O = 2 n e f f , O ⋅ Λ (3) Δ λ C λ c = ( 1 n e f f , c ⋅ ∂ n e f f , C ∂ T + 1 Λ ⋅ ∂ Λ α T ) Δ T = ( ζ C + α ) ⋅ Δ T (4) Δ λ O λ O = ( 1 n e f f , O ⋅ ∂ n e f f , O ∂ T + 1 Λ ⋅ ∂ Λ α T ) Δ T + ( 1 n e f f , O ⋅ ∂ n e f f ⋅ O ∂ S R I ) Δ S R I = ( ζ O + α ) ⋅ Δ T + κ S R I (5) Δ λ = 4.174 E − 36 e R I 0.018 + 0.061 , R 2 = 99.1 %
CommonCrawl
Your search: "author:"Kaplinghat, M."" Physical Sciences and Mathematics (2) BY - Attribution required (3) Direct detection signatures of self-interacting dark matter with a light mediator Nobile, ED Kaplinghat, M Yu, HB Self-interacting dark matter (SIDM) is a simple and well-motivated scenario that could explain long-standing puzzles in structure formation on small scales. If the required self-interaction arises through a light mediator (with mass ∼ 10 MeV) in the dark sector, this new particle must be unstable to avoid overclosing the universe. The decay of the light mediator could happen due to a weak coupling of the hidden and visible sectors, providing new signatures for direct detection experiments. The SIDM nuclear recoil spectrum is more peaked towards low energies compared to the usual case of contact interactions, because the mediator mass is comparable to the momentum transfer of nuclear recoils. We show that the SIDM signal could be distinguished from that of DM particles with contact interactions by considering the time-average energy spectrum in experiments employing different target materials, or the average and modulated spectra in a single experiment. Using current limits from LUX and SuperCDMS, we also derive strong bounds on the mixing parameter between hidden and visible sector. Galactic center excess in γ rays from annihilation of self-interacting dark matter Linden, T © 2015 American Physical Society. Observations by the Fermi Large-Area Telescope have uncovered a significant γ-ray excess directed toward the Milky Way Galactic Center. There has been no detection of a similar signal in the stacked population of Milky Way dwarf spheroidal galaxies. Additionally, astronomical observations indicate that dwarf galaxies and other faint galaxies are less dense than predicted by the simplest cold dark matter models. We show that a self-interacting dark matter model with a particle mass of roughly 50 GeV annihilating to the mediator responsible for the strong self-interaction can simultaneously explain all three observations. The mediator is necessarily unstable, and its mass must be below about 100 MeV in order to decrease the dark matter density of faint galaxies. If the mediator decays to electron-positron pairs with a cross section on the order of the thermal relic value, then we find that these pairs can up-scatter the interstellar radiation field in the Galactic center and produce the observed γ-ray excess. First light and reionization: A conference summary Barton, EJ Bullock, JS Cooray, A The search for the first illuminated astronomical sources in the universe is at the edge of the cosmic frontier. Promising techniques for discovering the first objects and their effects span the electromagnetic spectrum and include gravitational waves. We summarize a workshop on discovering and understanding these sources which was held in May 2005 through the Center for Cosmology at the University of California, Irvine. © 2005 Elsevier B.V. All rights reserved. Proceedings of the Davis Meeting on Cosmic Inflation Kaloper, N Knox, L The Davis Meeting on Cosmic Inflation marked an exciting milestone on the road to precision cosmology. This is the index page for the proceedings of the conference. Individual proceedings contributions, when they appear on this archive, are linked from this page. Discovery of a new galactic center excess consistent with upscattered starlight Abazajian, KN Canac, N Horiuchi, S Kwa, A © 2015 IOP Publishing Ltd and Sissa Medialab srl . 
We present a new extended gamma-ray excess detected with the Fermi Satellite Large Area Telescope toward the Galactic Center that traces the morphology of infrared starlight emission. Combined with its measured spectrum, this new extended source is approximately consistent with inverse Compton emission from a high-energy electron-positron population with energies up to about 10 GeV. Previously detected emissions tracing the 20 cm radio, interpreted as bremsstrahlung radiation, and the Galactic Center Extended emission tracing a spherical distribution and peaking at 2 GeV, are also detected. We show that the inverse Compton and bremsstrahlung emissions are likely due to the same source of electrons and positrons. All three extended emissions may be explained within the framework of a model where the dark matter annihilates to leptons or a model with unresolved millisecond pulsars in the Galactic Center.

Astrophysical and Dark Matter Interpretations of Extended Gamma-Ray Emission from the Galactic Center
We construct empirical models of the diffuse gamma-ray background toward the Galactic Center. Including all known point sources and a template of emission associated with interactions of cosmic rays with molecular gas, we show that the extended emission observed previously in the Fermi Large Area Telescope data toward the Galactic Center is detected at high significance for all permutations of the diffuse model components. However, we find that the fluxes and spectra of the sources in our model change significantly depending on the background model. In particular, the spectrum of the central Sgr A* source is less steep than in previous works and the recovered spectrum of the extended emission has large systematic uncertainties, especially at lower energies. If the extended emission is interpreted to be due to dark matter annihilation, we find annihilation into pure b-quark and τ-lepton channels to give statistically equivalent goodness of fit. In the case of the pure b-quark channel, we find a dark matter mass of 39.4(+3.7−2.9 stat)(±7.9 sys) GeV, while a pure τ+τ−-channel case has an estimated dark matter mass of 9.43(+0.63−0.52 stat)(±1.2 sys) GeV. Alternatively, if the extended emission is interpreted to be astrophysical in origin, such as due to unresolved millisecond pulsars, we obtain strong bounds on dark matter annihilation, although systematic uncertainties due to the dependence on the background models are significant.

3D stellar kinematics at the Galactic center: measuring the nuclear star cluster spatial density profile, black hole mass, and distance
Do, T; Martinez, GD; Yelda, S; Ghez, AM; Bullock, J; Lu, JR; Peter, AGH; Phifer, K
We present 3D kinematic observations of stars within the central 0.5 pc of the Milky Way nuclear star cluster using adaptive optics imaging and spectroscopy from the Keck telescopes. Recent observations have shown that the cluster has a shallower surface density profile than expected for a dynamically relaxed cusp, leading to important implications for its formation and evolution. However, the true three-dimensional profile of the cluster is unknown due to the difficulty in de-projecting the stellar number counts. Here, we use spherical Jeans modeling of individual proper motions and radial velocities to constrain, for the first time, the de-projected spatial density profile, cluster velocity anisotropy, black hole mass ($M_\mathrm{BH}$), and distance to the Galactic center ($R_0$) simultaneously.
We find the inner stellar density profile of the late-type stars, $\rho(r)\propto r^{-\gamma}$, to have a power-law slope $\gamma=0.05_{-0.60}^{+0.29}$, much shallower than the frequently assumed Bahcall & Wolf slope of $\gamma=7/4$. The measured slope will significantly affect dynamical predictions involving the cluster, such as the dynamical friction time scale. The cluster core must be larger than 0.5 pc, which disfavors some scenarios for its origin. Our measurement of $M_\mathrm{BH}=5.76_{-1.26}^{+1.76}\times10^6$ $M_\odot$ and $R_0=8.92_{-0.55}^{+0.58}$ kpc is consistent with that derived from stellar orbits within 1$^{\prime\prime}$ of Sgr A*. When combined with the orbit of S0-2, the uncertainty on $R_0$ is reduced by 30% ($8.46_{-0.38}^{+0.42}$ kpc). We suggest that the MW NSC can be used in the future in combination with stellar orbits to significantly improve constraints on $R_0$.

WHEPP-X: Report of the working group on cosmology
Kaplinghat, M.; Sriramkumar, L.; Berera, A.; Chingangbam, P.; Jain, R. K.; Joy, M.; Martin, J.; Mohanty, S.; Nautiyal, L.; Rangarajan, R.; Ray, S.; Kumar, V. H. S.
This is a summary of the activities of the working group on cosmology at WHEPP-X. The three main problems that were discussed at some length by the group during the course of the workshop were (i) canceling a 'large' cosmological constant, (ii) non-Gaussianities in inflationary models and (iii) stability of interacting models of dark energy and dark matter. We have briefly outlined these problems and have indicated the progress made.

Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 4: Cosmic Frontier
Feng, JL; Ritz, S; Beatty, JJ; Buckley, J; Cowen, DF; Cushman, P; Dodelson, S; Galbiati, C; Honscheid, K; Hooper, D; Kusenko, A; Matchev, K; McKinsey, D; Nelson, AE; Olinto, A; Profumo, S; Robertson, H; Rosenberg, L; Sinnis, G; Tait, TMP
These reports present the results of the 2013 Community Summer Study of the APS Division of Particles and Fields ("Snowmass 2013") on the future program of particle physics in the U.S. Chapter 4, on the Cosmic Frontier, discusses the program of research relevant to cosmology and the early universe. This area includes the study of dark matter and the search for its particle nature, the study of dark energy and inflation, and cosmic probes of fundamental symmetries.

Working group report: Neutrino physics
Choubey, Sandhya; Indumathi, D.; Agarwalla, S.; Bandyopadhyay, A.; Bhattacharyya, G.; Chun, E. J.; Dasgupta, B.; Dighe, A.; Ghoshal, P.; Giri, A. K.; Goswami, S.; Hirsch, M.; Kajita, T.; Mani, H. S.; Mohanta, R.; Murthy, M. V. N.; Pakvasa, S.; Parida, M. K.; Rajasekaran, G.; Rodejohann, W.; Roy, P.; Uma Sankar, S.; Schwetz, T.; Sinha, N.
This is the report of the neutrino physics working group at WHEPP-X. We summarize the problems selected and discussed at the workshop and the papers which have resulted subsequently.
CommonCrawl
Todd Gurley Is In The Right System At The Right Time

Todd Gurley is off to one of the hottest starts in NFL history. After rushing for a league-leading 623 yards and nine touchdowns — plus 247 receiving yards and two more TDs through the air — Gurley has accumulated the fifth-most adjusted yards from scrimmage through six games since the 1970 AFL-NFL merger, joining former Rams greats Marshall Faulk and Eric Dickerson near the top of the list. The Rams are 6-0 on the young season, and Gurley's breakneck performance is often cited as a catalyst for the team's success. He has even been in the early discussion for league MVP. But is that really warranted? Does the Rams offense truly run through Gurley, or should we be giving head coach Sean McVay more of the credit?

One approach to answering that question is to look at how McVay's scheme affects Gurley's performance. So far this year, the Rams have run nearly every offensive play from what is called the "11" personnel: one running back, one tight end and three wide receivers. According to charting from Sports Info Solutions, the Rams have run 95 percent of their offensive plays from this package — 32 percentage points more than the league average of 63 percent. And while heavy utilization of three wide receiver looks isn't new to McVay — the Rams ran 81 percent of their plays out of "11" in 2017 — 2018 is a massive outlier. McVay appears to have concluded that the deception afforded the offense by lining up with the same personnel package each play is greater than the constraints it places on his play calling.

The Rams rarely stray from their favorite look
NFL teams by the share of their plays run in each of the three most popular personnel packages, 2018

Team | 11 (one RB, one TE, three WRs) | 12 (one RB, two TEs, two WRs) | 21 (two RBs, one TE, two WRs)
L.A. Rams | 95% | 2% | 0%
Green Bay | 77 | 14 | 1
Miami | 77 | 8 | 1
Seattle | 77 | 9 | 5
Indianapolis | 72 | 18 | 3
Cleveland | 70 | 16 | 1
Jacksonville | 70 | 10 | 6
Cincinnati | 69 | 20 | 2
Washington | 69 | 17 | 0
Oakland | 68 | 13 | 7
N.Y. Giants | 67 | 23 | 4
Tampa Bay | 67 | 14 | 7
Arizona | 66 | 19 | 4
Denver | 66 | 13 | 11
Buffalo | 64 | 20 | 10
Chicago | 64 | 20 | 10
Houston | 62 | 34 | 0
Minnesota | 62 | 23 | 9
Detroit | 61 | 10 | 5
Pittsburgh | 61 | 15 | 7
New Orleans | 60 | 13 | 12
Carolina | 59 | 14 | 8
Kansas City | 59 | 22 | 9
Dallas | 57 | 18 | 6
Atlanta | 56 | 14 | 13
L.A. Chargers | 56 | 17 | 10
Philadelphia | 54 | 36 | 0
Tennessee | 53 | 35 | 2
N.Y. Jets | 52 | 24 | 0
New England | 49 | 9 | 28
Baltimore | 48 | 26 | 1
San Francisco | 40 | 8 | 41
Average | 63 | 17 | 7
Source: Sports Info Solutions

There are other benefits from repeatedly giving the opponent the same look, however, and they affect Gurley's performance in important ways. When a team can spread a defense out laterally across the field, it opens up the middle and makes running the ball easier. Running backs with at least 20 carries averaged 4.75 yards per carry against six men in the box from 2016 to 2018. That's well over half a yard higher than the average of 4.09 yards per carry when that same group of runners faced seven defenders near the line of scrimmage. Against eight-man fronts, the average gain falls to 3.59. Facing a loaded box makes running much more difficult.

McVay is no rube. He likely realizes that if you are going to run in the NFL, you should do so against a light box. Even better, this is something he can control. An offense exerts quite a bit of influence over how many box defenders it faces by how many wide receivers it chooses to deploy. When offenses play three wideouts, NFL defensive coordinators will typically match body type with body type and send a nickel defensive back in to cover the third receiver, leaving six defenders in the box. As a consequence, Gurley has faced more six-man fronts on his carries than any other running back in football since McVay took over as head coach of the Rams. It has paid serious dividends. So far this season, Gurley is crushing it against those fronts, averaging 5.5 yards per carry. But against a neutral seven-man front, he's been below league average at just 3.7 yards per attempt.

Gurley thrives when there are fewer defenders
Number of carries and yards per carry against a standard defense of six men in the box, 2017-18

Player | No. of carries | Yards per carry
Todd Gurley | 202 | 5.12
Kareem Hunt | 113 | 4.91
Lamar Miller | 112 | 4.42
Le'Veon Bell | 103 | 4.45
Melvin Gordon | 101 | 4.73

Gurley is basically the same back he has always been since he came into the league. If you use broken and missed tackles as a proxy for talent, you can see that Gurley makes defenders miss when running against six-man fronts far less than expected. He thrives, like most running backs, when he's allowed to hit open holes and get to the second level relatively unscathed. So Gurley is the beneficiary, not the proximate cause, of the Rams' offensive resurgence under McVay. Gurley has been put in a position to succeed and has taken full advantage.

Crucially, while the Rams have benefited from being smart in their offensive schemes and decision-making, it's likely that many teams could emulate them and achieve similar success on the ground. Spreading a defense out and running against a light front is not a particularly novel idea. The commitment shown by running 95 percent of your plays out of a formation that encourages that result, however, is quite innovative. McVay pushes winning edges better than any coach in the NFL — and he, not his running back, is the principal reason that the Rams are currently the toast of the league. Check out our latest NFL predictions.

Posted on October 19, 2018 | Categories: Campaigns | Tags: Los Angeles Rams, NFL, Sean McVay, Todd Gurley

How Worried Should Real Madrid, Bayern Munich And Barcelona Be?

The big leagues in continental Europe have been dominated by their superpowers for years. Bayern Munich won the German Bundesliga title each of the past six seasons. In 13 of the past 14 seasons in Spain's La Liga, either Barcelona or Real Madrid has taken the crown.
The trio has also combined to win the past six Champions League titles. But right now, you can say something about these teams that's been largely unthinkable for nearly a decade: They look vulnerable. Bayern is sixth in the Bundesliga, 4 points behind leaders Borussia Dortmund and trailing smaller clubs like Werder Bremen and Hertha BSC as well. Sevilla currently tops La Liga, with Barcelona and Madrid trailing close behind, but the two Spanish giants have each won just four of eight matches this season. Over the past decade, both teams have typically won at least 28 of their 38 matches per season, and the lowest win total either has posted was 22. These numbers are well off their pace. How worried should the superpowers of soccer be? The Soccer Power Index suggests reason for both confidence and concern. At the start of the season, Bayern was projected as 82 percent favorites to win the title. That has fallen, but only to 70 percent. Real Madrid has seen its La Liga title chances drop from 41 percent to 37 percent, but Barcelona's have actually increased to 47 percent from opening at 43. For now, it seems likely that these teams have enough of a head start in talent that they can still win their domestic leagues. The Champions League may be another story. At the beginning of the year, the continental big three plus Manchester City were dead even with one another at the top of the projections. Now City leads, Juventus has caught up, and the gap to Liverpool and Paris Saint-Germain is narrowing. And this is particularly striking because all three clubs are still massive favorites to progress out of their groups. What's changed is that the Soccer Power Index is starting to downgrade its projections. The early season struggles of Barcelona, Bayern Munich and Real Madrid are not merely a matter of a few bad bounces. Expected goals, a measure of the quality of scoring chances created and conceded, shows that this is no fluke of hot or cold shooting — these sides' underlying production numbers are off, too. The following chart shows the goal difference and expected goal difference for Barcelona, Bayern and Real Madrid in their first 10 matches of the season between domestic and Champions League play since 2010-2011, according to data analytics firm Opta Sports. For all three clubs, these are among their slowest starts to the season ever.12 Among all these starts to the season, the only one that was significantly worse than the three this year was Bayern Munich's in 2010-11, when the club ended up in third place in the Bundesliga on 65 points. These numbers suggest three things. First, Bayern has been better than its table position suggests. Its goal difference is the second-worst of any of these clubs since 2010, but its expected goal differential is merely eighth-worst. Barcelona's good goal difference, by contrast, is covering up problems in the underlying numbers. And Real Madrid is simply in trouble. In its last three league matches, Bayern has taken only 1 point — a draw against Augsburg — and scored just one goal while conceding six. However, its expected goals difference for those matches is roughly 4.8 to 2.5. These are performances typically good enough to win in the Bundesliga, and the points should come. But even the expected goals numbers do not reflect outright dominance. Bayern has struggled to produce spectacular attacking numbers. In particular, 30-year-old striker Robert Lewandowski is having a surprisingly down season, which comes on the heels of a surprisingly down World Cup. 
After scoring 27, 29 and 37 nonpenalty goals in the past three seasons between domestic and Champions League competition, with underlying numbers to match, the Polish forward has scored just two nonpenalty goals this season. His expected goals per 90 minutes has been more than 0.8 in each of the last three seasons, and it's down to 0.28 now. Arjen Robben and James Rodriguez have carried the shooting load for Lewandowski so far, but that has meant a decline in their creative passing numbers, which has weakened the whole team. It is possible that this is just an early season slump or World Cup-related fatigue, and Lewandowski will snap out of it. If he doesn't, Bayern could be in for a disappointing year.

Striker problems also have beset Real Madrid, but for them it's even worse. Real sold Cristiano Ronaldo over the summer and shocked observers by simply not replacing him. The club eventually purchased Mariano from Lyon, but no one expected that to be a like-for-like replacement. In nearly the same number of minutes last season, against weaker competition, Mariano attempted 130 shots, exactly half of Ronaldo's 260. Mariano has yet to start a match this season for Real; Gareth Bale and Karim Benzema have now been promoted to the point men in the attack after serving as Ronaldo's support crew for years. The results have been as expected.

[Chart: Losing Cristiano Ronaldo has zapped Real's offense. Real Madrid's expected goals through the first 10 matches of its season, 2010-19. Bars in orange indicate Real Madrid seasons with Ronaldo. Source: Opta Sports]

Real has consistently produced about 2.5 or more expected goals per match over its first 10 games of the season, and that number is under 2.0 per match this year. The attack is no longer elite, and it's hard to see how Real can improve without an injection of talent. Real Madrid looks headed for a year in the wilderness as merely one of Europe's 10 to 15 best teams rather than a top Champions League contender.

For Barcelona, the problems are more complicated but perhaps no less severe. And unlike with Bayern and Real, they do not start at the top. Lionel Messi is still Lionel Messi, with 11 goals and four assists. Rather, Barcelona is struggling in the midfield, and that's leading to defensive problems. Last season, Barcelona conceded just 29 goals, second-fewest in La Liga. It is hardly unusual for the Catalan side to put up dominant defensive numbers, but last year's effort involved a change in tactics from manager Ernesto Valverde. In 2017-18, Barcelona relaxed the high press that had been a feature of its play at least since Pep Guardiola's reign ended in 2012. With midfielders content to allow opposition teams to hold possession in less dangerous areas, Barcelona broke up only about 48 percent of new open-play possessions for the other team before they completed three passes. This year, Valverde has brought the old press back, and Barca is breaking up 55 percent of new opposition possessions.

This has not worked to their advantage. The 2017-18 team conceded shots at a reasonably high rate — 444 shots, seventh fewest in La Liga. But it prevented quality chances by keeping numbers back and not allowing passes in behind the defense. Barcelona's 0.087 expected goals per shot was second-best in La Liga after only Atletico Madrid. Valverde drilled his team to defend deeper rather than dominate midfield, and it worked. This year, the new style is having the opposite effect.
Barca's expected goals per shot conceded has exploded to 0.148, the worst in La Liga. Barcelona's midfield depends on two 30-year-olds, Sergio Busquets and Ivan Rakitic, and last year Valverde's tactics already suggested he knew he needed to cover for their deficiencies in the press. The results of the new, more aggressive midfield tactics confirm he was right to pull back. So Barcelona's problems seem fixable, at least compared to those of Real Madrid and Bayern. If Valverde can accept once more the limitations of his midfield and play the more basic, defensive style he rolled out last season, there should be more than enough talent in the forward line to carry Barcelona deep in the Champions League. But if the team persists with this press, the Catalan side may end up in just as much trouble as Bayern and Real. Check out our latest soccer predictions. Posted on October 19, 2018 Categories CampaignsTags Barcelona, Bayern Munich, Bundesliga, Champions League, La Liga, Real Madrid, Soccer So Your Archipelago Is Exploding. How Doomed Is Your Island? "The Riddler" book is out now! It's chock-full of the best puzzles from this column (and, fret not, their answers) and some riddles that have never been seen before. I hope you enjoy it, and thank you for riddling with us these past three years. Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-size and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,30 and you may get a shoutout in next week's column. If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter. Riddler Express From Philip Ruo, a puzzle of pondering playing perfection: The NFL season is in full swing, and only one undefeated team remains — the 6-0 Los Angeles Rams. In theory, though, given the current NFL scheduling scheme — or at least what Wikipedia says it is — what is the largest number of teams that could finish a regular season 16-0? Submit your answer Riddler Classic From Ricky Jacobson and Ben Holtz, geological disaster looms beneath: You live on the volcanic archipelago of Riddleria. The Riddlerian Islands are a 30-minute boat ride off the shores of the nearby mainland. Your archipelago is connected via a network of bridges, forming one unified community. In an effort to conserve resources, the ancient Riddlerians who built this network opted not to build bridges between any two islands that were already connected to the community otherwise. Hence, there is exactly one path from any one island to any other island. One day, you feel the ground start to rumble — the islands' volcanoes are stirring. You're not sure whether any volcano is going to blow, but you and the rest of the Riddlerians flee the archipelago in rowboats bound for the mainland just to be safe. But as you leave, you look back and wonder what will become of your home. Each island contains exactly one volcano. You know that if a volcano erupts, the subterranean pressure change will be so great that the volcano will collapse in on itself, causing its island — and any connected bridges — to crumble into the ocean. Remarkably, other islands will be spared unless their own volcanoes erupt. But if enough bridges go down, your once-unified archipelagic community could split into several smaller, disjointed communities. 
If there were N islands in the archipelago originally and each volcano erupts independently with probability p, how many disjointed communities can you expect to find when you return? What value of p maximizes this number?

Solution to last week's Riddler Express

Congratulations to Sarry Al-Turk of Toronto, winner of last week's Riddler Express!

Last week, you learned about a girl who loves to sing "The Unbirthday Song" to people — as one does. But she can only do that, of course, if it's not the person's actual birthday. If she kept singing it to random people until it happened to be someone's birthday, how long would her singing streak go before it became more likely than not that she would encounter someone whose birthday it is?

Care for those vocal cords, child: It's 252 people.

The probability that it's not an individual person's birthday is 364/365. The probability that you sing to N people in a row without it having been anyone's birthday is \((364/365)^N\) — since the birthdays are independent events, we can multiply that fraction over and over to match the number of people. We want to find the largest N such that that probability is still larger than 0.5. We could just stick that straight into a computer solver, but it's Friday, so let's have some fun and do a little algebra. First, we can take the logarithm of both sides, to get N out of the exponent, and then we can rearrange things a little bit; note that dividing by \(\log(364/365)\), which is negative, flips the direction of the inequality:

\begin{equation*}(364/365)^N > 0.5\end{equation*}

\begin{equation*}N\log(364/365) > \log(0.5)\end{equation*}

\begin{equation*}N < \log(0.5)/\log(364/365)\end{equation*}

That numerator, log(0.5), equals about -0.3, and that denominator, log(364/365), equals about -0.001. That fraction equals about 252.7, so the largest whole number that satisfies the inequality is 252. The singing streak can therefore go 252 people before it becomes more likely than not that a birthday boy or girl would be encountered. And a very merry unbirthday to you, dear reader — unless, of course, it is the big day.

Solution to last week's Riddler Classic

Congratulations to Clemens Fiedler of Krems, Austria, winner of last week's Riddler Classic!

Last week, a farmer wanted to tether a goat — as one does. Specifically, the farmer wanted to tether the goat to the fence that surrounded his circular field such that the goat could graze on exactly half the field, by area. The field had a radius R. How long should the goat's tether be?

The tether should be a bit longer than the radius — specifically, it should have a length of about 1.159R. Solver Russell (no last name given) showed us what this looks like, pictured below. The field is green with radius R; the goat's would-be grazing area is gray, with radius r.

The picture is all fine and good, but what about that math? It looks a little nasty. And it is a little nasty, I'm afraid. But, hey, it's nearly Halloween, so let's surrender to the nastiness and fear. The area on the field available to the goat is the intersection between two circles. And there just happens to be a whole body of knowledge about such circle-circle intersections. One way to think about it is to consider the two different shapes that the goat's tether allows it to graze in. The first is the big "pizza slice", defined in the image above by the two diagonal green radii and the bottom arc of the gray circle, and the second is made up of the narrower areas between the pizza slice and the fence.
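If you'd rather let a computer do the double-checking, here is a minimal Python sketch that verifies both of last week's answers numerically. The loop over singers follows the logic above; the lens-area formula and the bisection search are my own choices for checking the goat problem, not necessarily how any particular solver did it.

```python
import math

# Riddler Express check: the probability that none of n serenaded people has a
# birthday today is (364/365)**n. Find where that first drops to 0.5 or below.
n = 1
while (364 / 365) ** n > 0.5:
    n += 1
print(f"The odds tip at person {n}, so the streak can run {n - 1} people")  # 253 and 252

# Riddler Classic check: area of the "lens" where two circles overlap,
# given radii R and r and distance d between their centers.
def lens_area(R, r, d):
    return (
        r * r * math.acos((d * d + r * r - R * R) / (2 * d * r))
        + R * R * math.acos((d * d + R * R - r * r) / (2 * d * R))
        - 0.5 * math.sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R))
    )

# The goat is tied to the fence, so the two circle centers sit exactly R apart.
# Bisect on the tether length r until the grazable lens is half the field's area.
R = 1.0
lo, hi = 0.01 * R, 2.0 * R
for _ in range(60):
    mid = (lo + hi) / 2
    if lens_area(R, mid, R) < math.pi * R * R / 2:
        lo = mid  # tether too short: the goat grazes less than half the field
    else:
        hi = mid
print(f"Tether length is about {lo:.4f} R")  # roughly 1.1587 R
```

Running this prints 253 and 252 for the unbirthday streak and roughly 1.1587R for the tether, matching the answers above.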
Hector Pefo broke this math down a little bit more for us in his diagram (this time, the goat is on the southern end of the field): And finally, Laurent Lessard showed us a way to get to this same solution with calculus. Either way, I hope you enjoy it, goat. Nom nom nom. Want more riddles? Well, aren't you lucky? There's a whole book full of the best of them and some never seen before, called "The Riddler," and it's in stores now! Want to submit a riddle? Email me at [email protected] Posted on October 19, 2018 Categories CampaignsTags The Riddler ZF takes 35 percent stake in autonomous driving specialist ASAP FRANKFURT (Reuters) – Car parts maker ZF Friedrichshafen said on Friday it acquired a 35 percent stake in ASAP, a Germany-based maker of software and testing systems for autonomous driving applications and electric vehicles. ASAP specializes in car-to-x communication, human-machine interfaces and electronic architecture and last year generated sales of 84 million euros. It employs 1,100 staff. ZF's Chief Executive Officer Wolf-Henning Scheider recently said ZF will invest about 12 billion euros in electromobility and autonomous driving over the next five years. A purchase price for the ASAP stake was not disclosed. Reporting by Arno Schuetze, editing by Riham Alkousaa Posted on October 19, 2018 Categories Uncategorized Comcast says its fastest internet reaches more U.S. homes than any other provider (Reuters) – Comcast Corp said on Thursday that its fastest-speed gigabit internet service now reaches more homes than any other provider in the United States after it completed rollout to nearly all 58 million homes and businesses it serves. The milestone widens its greater coverage over telecoms rivals. Competitor Verizon Communications Inc races to deploy its next-generation 5G wireless service on mobile phones and in the home with speeds theoretically rivaling cable company products. Verizon launched its home internet 5G service in October, the first commercial offering of its kind in the United States. Comcast's high-speed internet business is one of its biggest contributors to revenue and profit as customers for its video services decline. The segment has helped prop up the media and communications conglomerate's finances. So-called cord cutters, many of whom continue to rely on broadband providers such as Comcast for internet services, have started defecting to video-streaming services such as Netflix, Hulu and Youtube TV for television programming. That trend sparked a wave of media company mergers. Walt Disney Co struck a deal to buy 21st Century Fox for $71.3 billion and AT&T Inc bought Time Warner for $85 billion, and both have vowed to build streaming video services for consumers looking for a lower-cost alternative to traditional pay television services. Comcast lost to Disney on a bid to buy Fox this year but prevailed against Disney in an auction to buy satellite television broadcaster and media company Sky. Reporting by Kenneth Li; Editing by Cynthia Osterman Robot market growth slows as trade war hits industrial spending: robot industry chief TOKYO (Reuters) – An escalating trade war between the United States and China has dampened manufacturers' appetite for investment in equipment, causing growth in the industrial robot market to slow, the chief of the global robot industry group said. 
Many global manufacturers "are now in a wait-and-see mode, wondering whether to shift production (away from China) to, let's say, Vietnam or the United States," said Junji Tsuda, chief of the International Federation of Robotics (IFR), in an interview on Thursday. IFR, which brings together nearly 60 global robot suppliers and integrators, predicts worldwide industrial robot sales this year to grow 10 percent compared to last year's 30 percent jump. China is the world's largest robots market with a 36 percent global share, with its sales volume exceeding the total of Europe and the Americas combined. Tsuda, also the chairman of Japan's Yaskawa Electric Corp, said the manufacturers would move out of the wait-and-see mode by the end of this year. It will take a while for the direction of the trade war to be clear, Tsuda said. "But global demand for smartphones, semiconductors and autos have been solid, and the time will eventually come that they can wait no longer and will resume investment to meet the demand." Yaskawa, one of the world's top robot manufacturers, last week cut its annual operating profit forecast to 59 billion yen ($524.40 million) from 65.5 billion yen, citing a slowdown in smartphone-related demand in China and growing caution over the trade dispute. From next year onwards, however, IFR expects the robot market growth to pick up again, forecasting an average 14 percent increase per year through 2021. ($1 = 112.5100 yen) Reporting by Makiko Yamazaki; Editing by Muralikumar Anantharaman Philip Rivers Has The Supporting Cast He Deserves Again Conventional NFL wisdom says teams should do whatever it takes to snag a Franchise Quarterback — that from there, the winning just takes care of itself. But for most of Philip Rivers's career, his Los Angeles (née San Diego) Chargers have been the exception to that rule. Taken fourth overall in the 2004 draft, Rivers has been the elite passer that teams dream about building around. And yet, his team has just four total playoff wins to show for it, including only one this decade. This year, though, Los Angeles looks poised to reverse that trend and actually capitalize on having a future Hall of Fame QB in its midst, while there's still time left in Rivers's career to do it. The Chargers walloped the Browns 38-14 in Cleveland last Sunday, bringing their record to 4-2 on the season — and giving them a 61 percent probability of making their first playoff appearance since 2013. Although L.A.'s postseason bid is far from assured, right now the Chargers have set themselves up with their most promising start to a season in a long time. This Charger renaissance has been building for a few years, since the team finally began surrounding Rivers again with better playmakers on both sides of the ball. On defense, that goes back to 2012, when former general manager A.J. Smith drafted pass-rusher Melvin Ingram 18th overall. After a slow start to his career, Ingram has blossomed into a Pro Bowler and an annual double-digit sack candidate. Under Smith's successor, Tom Telesco, the Chargers have also grabbed several defensive contributors through the draft, including sack-machine DE Joey Bosa,17 solid LB Denzel Perryman, up-and-coming CB Desmond King II and rookie S Derwin James (who, in his first season, already ranks as the NFL's fifth-best safety according to ProFootballFocus's player grades). 
Toss in outside pickups such as DT Brandon Mebane and CB Casey Hayward — another Pro Bowler from last season — plus the guidance of proven coordinator Gus Bradley, and the Chargers' defensive talent base has undeniably made strides over the past handful of seasons. On offense, Telesco also made key acquisitions that helped pave the way for this year's hot start when he took WR Keenan Allen in the third round of the 2013 draft and RB Melvin Gordon 15th overall in 2015. Picking first-round running backs is always tricky business, but Gordon has been a good one so far in his career, with a couple of 1,400-yards-from-scrimmage seasons under his belt (in 2016 and 2017) and an excellent start to 2018 as well. Meanwhile, Allen has taken the lead from top San Diego-era targets Malcom Floyd and Antonio Gates and forged his own chemistry with Rivers — only four receivers leaguewide have more yards through the air since 2017 than Allen does. (It also helps that Allen has stayed healthy these past two seasons after missing 23 combined games in 2015-16.)

Allen and Gordon aren't the only teammates making Rivers's life easier: The offensive line has been much better with free-agent C Mike Pouncey anchoring the middle, while change-of-pace RB Austin Ekeler has proven himself exceptionally tough to bring down — he leads all RBs in yards after first contact per rush. More broadly, in its second year under head coach Anthony Lynn, Los Angeles now has the offensive pieces to beat teams in multiple ways. Add it all up and it's clear that Rivers, who turns 37 in December, has a much better group of talent around him to work with than in years past. Here's a look at the changes in Rivers's own production over time — as measured by his Yards Above Backup Quarterback (YABQ) — along with how his top skill-position teammates and defense have also evolved:

Philip Rivers is great again — and he has help
Los Angeles Chargers' production from quarterback Philip Rivers and his supporting cast, 2006-2018

Year   Rivers YABQ/G   Top RB         RB YdSc/G   Top receiver   Rec. YdSc/G   Team def. efficiency
2018   99.2            M. Gordon      124.2       K. Allen       80.0          54.9
2017   75.1            M. Gordon      98.8        K. Allen       87.6          63.0
2016   31.8            M. Gordon      88.5        T. Williams    66.2          51.1
2015   48.3            D. Woodhead    68.2        K. Allen       45.3          38.6
2014   45.7            B. Oliver      53.3        M. Floyd       53.5          42.8
2013   79.6            R. Mathews     90.3        K. Allen       65.4          32.2
2012   -3.9            R. Mathews     59.9        M. Floyd       50.9          55.4
2011   48.0            R. Mathews     96.6        V. Jackson     72.3          33.2
2010   77.1            M. Tolbert     59.4        A. Gates       48.9          64.3
2009   97.0            L. Tomlinson   55.3        V. Jackson     73.6          46.4
2007   26.4            L. Tomlinson   121.8       A. Gates       61.5          62.6

Per-game measures are relative to team schedule lengths, not individual games played. YABQ: Yards Above Backup Quarterback, a measure of QB performance that gives credit for passing and rushing, and adjusts for strength of schedule. YdSc: yards from scrimmage, or rushing yards plus receiving yards. Defensive efficiency: ESPN's measure of a defense's per-play effectiveness on a 0-100 scale.
Source: ESPN Stats & Information Group, Pro-Football-Reference.com

It probably isn't a coincidence that Rivers is currently enjoying his best statistical performance in years, with Gordon and Allen also contributing more than any Charger rusher and receiver since the days of LaDainian Tomlinson and Vincent Jackson. It's a little circular, in that sense: Is Rivers making them better, or are they helping Rivers rediscover his form? (Gordon's ability to run against stacked defenses, for instance, has opened up space for Rivers to throw downfield.)
Either way, the ingredients have been in place for a late-career QB rejuvenation. Right now, Rivers is on pace to tie for the ninth-most-efficient post-merger performance for a passer age 35 or older, according to Pro-Football-Reference.com's advanced passing index. As far as old-man QB seasons go, this is one of the best in history.

Of course, with the Chargers, it's about more than just improved talent. It's also about execution, something this team has often sorely lacked over the years. As Mike Tanier wrote in his L.A. chapter for Football Outsiders' 2018 Almanac, you could make a pretty convincing case that the 2017 Chargers missed the playoffs because of two very fundamental football activities: tackling and kicking. Last year, Los Angeles let opponents break tackles at an incredible rate and missed numerous field goals and extra points, helping to turn a team with a 10-and-a-half-win point differential into a sad-sack nine-game winner.

This year's place-kicking game hasn't been great (Caleb Sturgis made just 71 percent of his total field goals and extra points before he was sidelined by an injury), but it's no longer dead-last in football, which I suppose is an accomplishment. Plus, the Chargers rank among the best in the league in terms of kickoffs, a big reason for their fourth-ranked net starting field position. And as for the tackling woes, they appear to be a thing of the past. According to Football Outsiders' charting data, only 3.9 percent of plays by Charger opponents have seen a broken tackle, good for 10th best in the league this year. Relatedly, the Chargers are also allowing the league's sixth-lowest rate of yards after first contact per rush this season, another major sign of defensive progress as compared with last season.

The Chargers must have practiced their tackling
Los Angeles Chargers' defensive performance and league ranking in preventing opponents from breaking tackles or gaining yards after contact

Year   Broken tackles/play   NFL rank   Opponents' yards after 1st contact/rush   NFL rank
2018   3.9%                  10         1.56                                      6
2017   13.3%                 31         2.31                                      32

Source: Football Outsiders, ESPN Stats & Information Group

Los Angeles will put its improved talent and newfound execution on display in London on Sunday, for a game against the Tennessee Titans that ranks among the best of Week 7 in terms of both matchup quality (i.e., the harmonic mean of the two teams' Elo ratings in each game) and how much it figures to swing either team's odds of making the playoffs:

The best matchups of Week 7
Week 7 games by the highest average Elo rating (using the harmonic mean) plus the total potential swing for the two teams' playoff chances, according to FiveThirtyEight's NFL predictions

Team   Playoff %   Avg. chg*   Team   Playoff %   Avg. chg*   Total change   Game quality
CAR    43.4%       ±12.8       PHI    64.2%       ±12.2       25.0           1586
LAC    60.6        14.6        TEN    41.3        12.6        27.2           1524
WSH    38.8        16.3        DAL    40.2        16.2        32.6           1517
BAL    68.7        11.5        NO     72.2        9.7         21.1           1605
CHI    43.0        12.2        NE     78.4        9.0         21.3           1560
CIN    49.6        11.2        KC     95.8        3.6         14.8           1575
JAX    46.7        13.8        HOU    23.6        12.9        26.7           1470
MIA    42.8        12.6        DET    24.3        9.8         22.4           1496
MIN    57.0        13.4        NYJ    14.7        7.1         20.5           1513
LAR    95.8        3.3         SF     3.1         2.8         6.1            1512
ATL    27.7        6.5         NYG    1.2         1.0         7.5            1454
BUF    10.2        5.7         IND    4.1         2.3         8.0            1417
DEN    3.8         2.6         ARI    1.4         0.8         3.4            1418
TB     20.7        5.6         CLE    1.1         1.0         6.6            1394

Game quality is the harmonic mean of the Elo ratings for the two teams in a given matchup.
*Average change is weighted by the likelihood of a win or loss. (Ties are excluded.)
Source: ESPN.com
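Since the game-quality number is just the harmonic mean of the two teams' Elo ratings, a couple of lines of Python make the metric concrete. The ratings below are hypothetical placeholders, not FiveThirtyEight's actual Elo values:

```python
def game_quality(elo_a: float, elo_b: float) -> float:
    """Harmonic mean of two Elo ratings, the 'game quality' measure used above."""
    return 2 * elo_a * elo_b / (elo_a + elo_b)

# Hypothetical ratings: two evenly matched 1600 teams score a full 1600,
# while a lopsided pairing with the same arithmetic average scores lower.
print(round(game_quality(1600, 1600)))  # 1600
print(round(game_quality(1700, 1500)))  # 1594
```

That pull toward the weaker team's rating is the point of using a harmonic mean: a marquee matchup requires both sides to be good, not just one.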
For the Chargers, it's part of a long road trip that will keep them away from Southern California until Nov. 18. The StubHub Center doesn't exactly offer an intimidating advantage even when they are at home, but it does bear watching how L.A. manages all that travel. Even so, the Chargers' season will still probably hinge on the final few matchups of the season — their last five games are either against division rivals or the biggest threats to their wild-card chances. If Rivers and his improved supporting cast can continue to thrive up to and including the month of December, we'll know the Chargers have stamped their ticket back to the postseason and given their star QB at least one more chance to shine on the game's brightest stage.

FiveThirtyEight vs. the readers

Attention football fans! Be sure to check out our constantly updating NFL prediction interactive, which uses FiveThirtyEight's Elo ratings to forecast the rest of the season. And if you think you can outsmart Elo, step right up to our prediction game, which lets you pick against our model (and your fellow readers) for bragging rights and a place on our giant leaderboard. Here are the games where Elo made its best — and worst — predictions against the field of prognosticators last week:

Elo's dumbest (and smartest) picks of Week 6
Average difference between points won by readers and by Elo in Week 6 matchups in FiveThirtyEight's NFL prediction game

Elo's pick (win prob.)   Readers' pick (win prob.)   Result            Readers' net pts
BUF (52%)                HOU (60%)                   HOU 20, BUF 13    +9.4
TEN (53)                 BAL (54)                    BAL 21, TEN 0     +4.8
GB (66)                  GB (75)                     GB 33, SF 30      +3.3
LAR (69)                 LAR (75)                    LAR 23, DEN 20    +1.2
CIN (54)                 CIN (51)                    PIT 28, CIN 21    +1.2
MIN (74)                 MIN (79)                    MIN 27, ARI 17    +0.9
ATL (67)                 ATL (64)                    ATL 34, TB 29     -3.9
SEA (67)                 SEA (63)                    SEA 27, OAK 3     -4.4
CAR (55)                 CAR (58)                    WSH 23, CAR 17    -5.3
PHI (71)                 PHI (66)                    PHI 34, NYG 13    -5.4
NE (54)                  NE (50)                     NE 43, KC 40      -6.2
LAC (69)                 LAC (60)                    LAC 38, CLE 14    -9.1
NYJ (67)                 NYJ (57)                    NYJ 42, IND 34    -9.9
MIA (54)                 CHI (59)                    MIA 31, CHI 28    -15.4
DAL (53)                 JAX (60)                    DAL 40, JAX 7     -16.2

Home teams are in bold. The scoring system is nonlinear, so readers' average points don't necessarily match the number of points that would be given to the average reader prediction.

What's been a great season for Elo kept getting better in Week 6 as the algorithm beat the average reader by 55 points, its second-best showing of the entire year so far. Human predictors really only had one major feather in their cap — Houston's Nathan Peterman-fueled win over Buffalo (a very bad team whose badness Elo refuses to acknowledge) — but otherwise they saw Elo run roughshod over their picks. Elo correctly called wins for Dallas and Miami when readers picked otherwise, and it had a lot more confidence than readers in the Jets' and Chargers' victories as well. All told, the average reader is now down 233 points to Elo for the season to date.

Among the readers who weren't destroyed by Elo, congrats to John D. Harden, who led all users with 275 points in Week 6, and to Jevon Mallett, who continues to lead all users for the season with 453 points. Thanks to everyone who played last week — and if you didn't play, get in on the game already! You can make picks now and still try your luck against Elo, even if you haven't played yet.

Posted on October 17, 2018 | Categories: Campaigns | Tags: Los Angeles Chargers, NFL, NFL Elo Ratings, Philip Rivers

Will The Midterms Decide Who Runs In 2020?

Welcome to FiveThirtyEight's weekly politics chat. The transcript below has been lightly edited.
sarahf (Sarah Frostenson, politics editor): It is now 21 DAYS UNTIL THE MIDTERMS!! And while voters will mainly be deciding who controls Congress, they'll also maybe be deciding what kind of Democrat should run in 2020. For instance, if Democrats don't take back the House, does that mean a Joe Biden run in the 2020 Democratic primary is more likely? Or if there is a blue wave and Democrats gain 60+ seats, does that make the road easier for a more progressive Democrat like Sen. Kamala Harris? clare.malone (Clare Malone, senior political writer): Man, if the Democrats lose the House, I think there will be some straight-up PANIC. natesilver (Nate Silver, editor in chief): There would be, although one could ask whether it was warranted or not. clare.malone: I don't think Joe Biden needs them to lose the House to prove he's a good candidate. He could just point to Democratic Senate losses, maybe? Assuming that Democrats lose in a couple of red states, a candidate like Biden could say, "Look, I will make inroads in a place like that." But I'm interested in Nate's House take. natesilver: I mean, to a first approximation I think a lot of this stuff is silly. There's no clear relationship between midterm losses and what happens in the next presidential election Sometimes it's 2006 and there are two rough elections in a row Sometimes it's 2010 and a win comes before a loss The pattern just isn't strong https://t.co/7dzH6heerv pic.twitter.com/IVwUSew2ix — David Byler (@databyler) October 15, 2018 As David says, there isn't much of a pattern for how midterms affect the next presidential election. Certainly. it will affect Democrats' attitude, but how much that attitudinal change affects 2020, and whether that is helpful or hurtful to Democrats, is pretty up in the air, IMO. clare.malone: Right — I mean was just about to say, proof aside (proof! facts!), I think candidates and party apparatchiks always use a loss to motivate their constituents. That attitudinal thing can be pretty powerful in a primary campaign. See: Bernie Sanders. natesilver: I'm skeptical that Biden could use Senate losses to justify the need for more conservative candidates … if Democrats also win the House. We'll see, though. There are some pretty wacky scenarios that are within the realm of possibility, like Democrats winning 35 House seats but losing four Senate seats. clare.malone: I think people's minds are on the Senate right now, though. And the Republican majority there does lie in smaller states and regions that Democrats have gradually lost over the past couple of decades. It's not an absurd argument to make in 2019. perry (Perry Bacon Jr., senior writer): I think Biden has to decide if he wants to run or not. He was kind of confused about whether to run in 2016. And based on what he's been saying, he doesn't seem to know now either. I think a really strong push to draft him might encourage him to get in the running. And I think Democrats not winning the House (assuming that they lose the Senate too) will get more people to encourage him to run. Biden would be an important figure if he got in the race, in large part because others in this more "centrist" lane might not run if he is in. clare.malone: I don't think Biden is a Mario Cuomo: I think he'll get in the race. I'm not sure how much he'll toy with people up until the very end. natesilver: Are people's expectations that Democrats will win the Senate? If so, people aren't paying much attention (certainly not paying much attention to our forecast). 
clare.malone: I don't know. I don't think people expect that. I guess you hear "blue wave" bandied about and you could make assumptions. sarahf: And it wasn't always so dire in the Senate either — it wasn't until early October that Democrats' odds worsened dramatically. But OK, let's set aside what could happen in the Senate for a moment and assume that there is a huge blue wave in the House and even in some key gubernatorial races like Stacey Abrams's, in Georgia, and Andrew Gillum's, in Florida. It doesn't mean Democrats win in 2020, but doesn't it change the playing field of candidates in the Democratic primary? Or would Sens. Harris, Elizabeth Warren and Cory Booker run no matter what? clare.malone: I think Gillum or Abrams wins would be huge. It would challenge some norms about what sorts of candidates win in states where you need to win over moderates or Republican-leaning independents. natesilver: Gov. Scott Walker losing his re-election bid in Wisconsin might have some interesting narrative implications too, although not in the same way that Gillum and Abrams do. perry: I'm interested in Abrams's and Gillum's gubernatorial bids and Rep. Beto O'Rourke's Texas Senate run because they are all making the case that it is a better strategy to try to amp up the base to get greater minority and youth turnout rather than trying to win over swing voters. If they do significantly better in their states than more moderate candidates from previous years, I think that would buttress Democrats like Warren and Harris, who are more likely to run more decidedly liberal campaigns. But the Midwest is interesting, as Nate is hinting at. The Democrats are doing well in the Midwest with a bunch of candidates who are kind of bland and fairly centrist-friendly. The South and the Midwest are, of course, very different regions, too. natesilver: I guess I've just never dealt with an election before where you'd get the sort of split verdict like the one we're predicting, where Democrats win the House and do pretty darn well in gubernatorial races but fall short –– and possibly even lose seats –– in the Senate. And some of the high-profile toss-up races could also go in different directions. Maybe Gillum wins in Florida but Abrams loses in Georgia, for example. In that case, there would be a sort of battle-of-narrative-interpretations over the midterms. sarahf: As our colleague Geoffrey Skelley wrote, the last time the Senate and House moved in opposite directions during a midterm was in 1982, during under Ronald Reagan's presidency. Part of that was because Reagan had a pretty bad approval rating, in the low 40s … which isn't too far off from where President Trump's sits now. natesilver: And I guess 1982 was interpreted as being pretty bad for Reagan? I was 4 years old then, so I don't remember. sarahf: But we could get a really weird coalition of Democrats with competing priorities in 2019 if they do take back the House. And that could make finding a general-election candidate that appeals to both the more moderate and more progressive wings of the party … challenging. clare.malone: I guess this is why so many people in the post-2016 party were enamored of the Sanders economic message. It gets to the progressive heart of things while trying to avoid the touchy culture stuff. But, of course, Democrats have to figure out the Trump factor. Trump will inevitably drag culture wars stuff into a campaign. perry: Sens. 
Tammy Baldwin, Sherrod Brown, Bob Casey and Amy Klobuchar are likely to win in the Midwest, an important region of the country for Democrats electorally — and some of that group could win easily. Post-election, we will be able to see the counties in Minnesota where Trump won in 2016 but that Klobuchar carried in 2018 — and I think there may be a lot of them. Plus, that's the kind of thing she could talk about if she decides to run for president. clare.malone: Right. But none of those senators have the buzz factor in this shadow primary that we're in right now. Nor the fundraising. But that could change post-November. sarahf: Speaking of fundraising … What do we make of all the pouring into O'Rourke's campaign? Why aren't more Democratic supporters funding races where the Democratic candidate actually stands a chance of winning? natesilver: Oh no, you're going to trigger me, Sarah. The O'Rourke fundraising narrative is so fucking dumb. Democrats are raising huge amounts of money EVERYWHERE. Democrats have raised almost 2/3 of the total money for the House (not counting candidates who lost in primaries). Despite the fact that the GOP holds the incumbency advantage. Never been anything like that before in our House fundraising data, which goes back to 1998. pic.twitter.com/18iRaPGQxJ — Nate Silver (@NateSilver538) October 16, 2018 clare.malone: We spend a lot of time here on numbers, but I always think of people reacting to politicians they really like in almost pheromone-tinged ways. People are irrational actors when it comes to politics — it's why they vote by party even when the party positions do a 180 (see, ahem, the post-2016 GOP on trade, Russia, and so on.) perry: In terms of O'Rourke, I was surprised the majority of the money came from Texas, according to his campaign. So it was not just coastal elites who liked seeing a white man delivering Black Lives Matter talking points. Houston, Austin and Dallas all have plenty of Democrats, but they are not Los Angeles, Washington, D.C., or New York. That also pushes back on the idea that he is somehow taking money away from other close races around the country. clare.malone: Democrats see inspirational stuff in O'Rourke's response to the NFL kneeling issue and the incident where a black man was shot and killed in his own home. They like that he's saying this stuff in Texas. They also really dislike Ted Cruz. perry: I think this kind of small-dollar fundraising is a real talent and shows real political appeal. It is what made Howard Dean, Barack Obama and Bernie Sanders such viable candidates. clare.malone: O'Rourke's also gotten a lot of media buzz, so people know his name, unlike, say, Sen. Joe Donnelly (running for re-election in Indiana) or former Gov. Phil Bredesen (running for Senate in Tennessee). So they send O'Rourke money! natesilver: Texas is also a big state with a lot of wealth, and Democrats there haven't had a lot to donate to in a while. perry: I can't tell if O'Rourke should run for president if he loses the Senate race. But he should definitely think about it. Hard. clare.malone: I mean, the thing about O'Rourke running in 2020 is that he's proved he can fundraise and he's still young(-ish), but he's been in Congress awhile, which is an asset. People can't call him too inexperienced the way they could with, say, failed Missouri Senate candidate Jason Kander. perry: But I'm not sure how the midterms affect the political outsiders like lawyer Michael Avenatti, billionaire Tom Steyer and former New York Mayor Michael Bloomberg. 
That is the one group I'm probably the most curious about. I feel like I know who the main established candidates are — the senator and governor types like Warren, Booker or Montana Gov. Steve Bullock. I suspect the more it seems like Democrats are in crisis, the more these outsiders have a rationale to run. sarahf: Bloomberg did just re-register as a Democrat. clare.malone: Real question: Does Avenatti actually want to run or does he just like the attention? I don't think he really wants to run. natesilver: I think Avenatti's chances are overrated because people are overcompensating for their failure to see Trump last time. natesilver: He massively, massively, massively fucked up in the Kavanaugh thing. He's polling at 1 percent. I don't think his chances are zero … I just think he's one from a long list of long shot possibilities. clare.malone: POLLS! Bloomberg's flirtations feel so off for this political moment with the Democrats. perry: I think Avenatti, Bloomberg and Steyer would love to be president, so if there is demand for their candidacies, they will be more than eager to jump in. But whether there's demand for their candidacies is going to depend on whether Democrats need a savior. sarahf: I guess what I'm trying to wrap my head around is: Under what scenario does it make sense for these outsider candidates to run? perry: If Democrats lose the House and Senate. natesilver: Make sense for them or make sense for Democrats? perry: It makes sense for them. natesilver: If Avenatti thinks it will help to sell more books and put him on TV even more, he'll run. If he thinks it will damage his brand in the long term, maybe not. perry: Yes, Avenatti may just want the fame. clare.malone: Right. Steyer and Bloomberg are more interesting because they actually have $$$$. perry: Bloomberg endorsed Hillary Clinton in 2016. Maybe he feels like he should just run. He tried to be a team player, and it didn't work. sarahf: No matter the national environment for Democrats? clare.malone: E G O perry: I think a lot of these candidates are more responsive to, say, "Morning Joe" than FiveThirtyEight. I will be watching what "Morning Joe" says the day after the election. sarahf: OK, so, what happens in 2018 means nothing for 2020 … but in a world where they are related, what are you looking for on election night to give you clues about 2020? perry: I'm looking for results that will create narratives that make it easier for people to run — or not run. I assume Klobuchar wants to be president. Does she win her Senate election by so much that she convinces herself (and others) that she is the electable candidate Democrats want? Or do Democrats do poorly enough in swing states that people who are too centrist for the party's activist crowd (e.g., Biden, Bloomberg or Colorado Gov. John Hickenlooper) convince themselves and others that they are the solution? Maybe O'Rourke, Abrams and Gillum do so well that it's clear Democrats should try to grow the base and focus on swing voters less. Or they do terribly — and the message is that Democrats should be thinking about the center more. natesilver: Again, it depends a lot on what happens (obviously). If Democrats sweep both chambers, or lose both chambers, there are some pretty clear takeaways. Otherwise, I'm not sure that the midterms will affect people's behavior that much. I do think the Abrams and Gillum gubernatorial races are important, though. Plus, there's the fact that Democrats have nominated an awful lot of women. 
And if women do well, it could (perhaps quite correctly!) lead to a narrative that Democratic candidates should look more like the party they're representing, which is to say diverse and mostly female. perry: I think I might view the 2018 election results less as telling us important information about 2020 and more as data points that will be spun by self-interested people into rationales for what they already wanted to do anyway. clare.malone: I guess I'm mostly focusing on what kinds of women turn out to vote for Democrats in this election. I want to see whether there's elevated turnout in communities we don't usually see elevated turnout in, particularly with women. There are a huge number of female candidates potentially on the Democratic docket for 2020, and Warren, for example, has already made an interesting ad about running as an angry women in the age of Trump. What I'm saying here is that I'm eager to see what the zeigtgeisty take away from Nov. 6 will be, in addition to what stories "Good Morning America" is running vs. "Morning Joe" (as a proxy for what Americans who aren't microscopically interested in politics will take away from the election). sarahf: Indeed. And I'll be looking for FiveThirtyEight's takes as well. Posted on October 17, 2018 Categories 2018 Election, CampaignsTags 2018 Election, 2018 House Elections, 2018 Senate Elections, 2020 Democratic Primary, 2020 Election, Cory Booker, Elizabeth Warren, Joe Biden, Kamala Harris, Slack Chat What Happens When Humans Fall In Love With An Invasive Species On a rocky strip of Lake Superior beachfront, the rites of spring begin at dusk and involve fish. Lots and lots of fish. Every year, like clockwork, slender, silvery rainbow smelt, each no longer than your hand, return from deeper waters. They arrive just as the crust of winter ice on the water breaks apart, looking to spawn in the frigid creeks that run out of the hills north of Duluth, Minnesota. For three or four nights, maybe a week if you're lucky, thousands of smelt jostle their way out of the lake. And that's where the humans are waiting. On this night in early May, on the narrow mouth of the Lester River, there are only about a couple dozen people present. They stand around, bundled in hooded sweatshirts layered under thick rubber overalls that cover their bodies from toe to nipple. The smelt have not yet arrived and the beach is quiet. Waves lap the shore. Someone kicks a rock. But 40 years ago, smelt fishing on the Lester River was something else entirely. "There were people all over the place, bumper to bumper on London Road," said Don Schreiner, fisheries specialist with the Minnesota Sea Grant. These now-tranquil shores were once home to a circus tent that housed an all-night smelt fry and a party atmosphere so wild that Schreiner's parents wouldn't even take him and his siblings down to the beach. In addition to hangovers, the smelt also brought a tourism industry. There were professional fishermen catching and selling smelt. It was a huge cultural event. "And then," Schreiner said. "It crashed." Starting around 1979, smelt numbers in Lake Superior plummeted. In '78, commercial fishing companies took in nearly 1.5 million pounds of smelt. A decade later, the haul was 182,000 pounds. There is no commercial smelt fishing on Lake Superior today. But because the smelt in Lake Superior are an invasive species, their decline is actually a sign that the lake is becoming healthier, ecologically speaking. 
From a cultural and economic perspective, though, the North Shore isn't what it was. So is the decline of smelt something to celebrate? And if so, who should be throwing the party? Some people miss the glory days of Lester River fishing even when evidence suggests that Lake Superior and the people who rely on it are better off now. Facts, it turns out, can't always sway emotion or reshape business plans. And these issues are not unique to smelt. All over the world, you'll find invasive species that are beloved by humans — even as these foreign plants and animals alter or damage the environment. The fight against invasive species is often framed as a technological problem — how do you selectively eliminate a species once it's made itself at home in an environment? But in reality, it's also a question of human hearts and minds. And those might be the harder obstacle to clear. Smelt may not fit into the stereotype that invasive species are all bad, but the sea lamprey does. Snake-like fish that suck the blood of other animals, lamprey were devastating to the Great Lakes, all but wiping out populations of native trout. At the same time, native herring populations were also declining, and lamprey may have had a hand in that, too, Schreiner said. The lamprey's swath of destruction cleared the way for smelt, whose populations grew as they filled the gaps those native species left behind. In the world of invasive species, sea lamprey are, arguably, public enemy No. 1 — the toothy alien maw grinning from a wanted poster. But while the lamprey and smelt are connected, they affect the environment very differently. In the world of invasive species, sea lamprey are, arguably, public enemy No. 1 — the toothy alien maw grinning from a wanted poster. Nobody loves a sea lamprey. They kill native fish. They are neither beautiful nor delicious.9 They put commercial fishermen out of work. The story of the lamprey is the story of a clear villain that the good guys can (at least try to) vanquish. A poison designed to kill lamprey, and only lamprey, has helped drop the population from nearly 800,000 to around 100,000 in Lake Superior. That's the narrative about invasive species that you're most likely to hear. Whether it's kudzu engulfing Southern forests, emerald ash borers wiping out the tree canopy in whole cities, or the beaver-like nutria devouring Louisiana like a swamp buffet, the prototypical invasive species story doesn't leave a lot of room for the color gray. A half-hour of fishing nearly fills a 10-gallon bucket with rainbow smelt. On a few nights in early spring, thousands of these invasive (and delicious) fish swim en masse from the deep waters of Lake Superior and into coastal streams. But smelt are more complicated — which is to say they have more redeeming characteristics. Take, for instance, their relationship with lake trout. Smelt numbers exploded as wild lake trout declined in the 1950s and '60s. Around the time Don Schreiner's parents were refusing to take him to late-night fishing parties, commercial fisheries on Lake Superior were bringing in millions of pounds of smelt a year. For the trout that remained, those smelt became a crucial food source, as other, native food supplies were lost. In 1986, smelt accounted for 80 percent of a Lake Superior trout's diet. Decades later, smelt are still a major food source for trout, even as the smelt themselves may be partly responsible for the shrinking numbers of the trout's native food source — herring. 
Smelt also form the basis of the diet of the Lake Superior salmon, another species that came to the lake from somewhere else. The salmon are a mostly self-sustaining population now, but even though they're not native to the lake, no state government is making an effort being made to eradicate them, Schreiner said, because, well, many people enjoy fishing for salmon.10 All of this produces a rat's nest of competing interests and emotions. The people who I met fishing on the Lester River want smelt to stick around so they can share a tradition (and a meal) with their children and grandchildren. Steve Dahl, an independent commercial fisherman I interviewed, wanted the smelt gone because they interfered with his herring business. Schreiner sees smelt as a useful, if maybe not ideal, food source that now plays a role in the ecosystem of the lake. And he remembered that in 2005, as Minnesota's Department of Natural Resources was making long-term plans for managing wildlife in the lake, some people who like to fish for salmon tried to convince the department to start adding more smelt to the lake in hopes of producing more salmon. Even if some people really did want the smelt gone, pretty much everyone I spoke to agreed there's no clear means of killing the species off. Turns out, this kind of nuanced story is the norm in the world of invasive species. It's the sea lamprey — clear villains — that are the exception. In Lake Superior in 2017, for instance, the National Oceanic and Atmospheric Administration counted 82 non-native species, only about a quarter of which were harmful invasive species that provided no redeeming benefits. Globally, nobody knows exactly how many invasive species have muscled their way into spaces where nature didn't intend them to go, or what percentage of those species are smelt-like mixed bags vs. lamprey-like forces of pure destruction. Instead, invasion biologists have synthesized decades of research into a simple rule of thumb. Of all the species introduced to new environments, we can assume that about 10 percent will successfully start breeding and living on their own. Of those, about 10 percent will become truly harmful. In other words, 1 percent of all non-native species will end up seriously harming their new homes. We don't know everything, said Marc Cadotte, professor of urban forest conservation and biology at the University of Toronto, Scarborough. But "we do know it is a small minority of species that end up becoming serious problems." Technically speaking, smelt are not an invasive species. Instead, they are classified as a non-native species, a larger category of which invasive species are just a subset. According to a 1999 executive order that established the National Invasive Species Council, an invasive species is a non-native plant or animal "whose introduction does or is likely to cause … harm." But what counts as "harm"? The answer to that question is supposed to show us how to cut through the knots of competing interests and decide how to allocate scarce resources for the management of invasive species. But "harm" is also basically impossible to define objectively. An ecologist might see harm as altering the function of the natural ecosystem or reducing the populations of native species, Cadotte said. While someone else, looking at the same situation but focused on economic impacts and recreation, might not see the harms that worry the ecologist. 
It's very common to have a plant or animal seem obviously harmful to one group of people and obviously benign to another. Take cats. "Cats are introduced all over the world. They have massive impacts on native songbird populations. But nobody in their right mind would classify them as invasive and try to control them," Cadotte said. "I mean, except Australia." There are plenty of other examples. Take those salmon introduced to Lake Superior and prized by many sport fishermen. The state of Minnesota regulates the size and quantity of salmon you can catch, which helps keep their numbers stable. The Grand Portage Band of Lake Superior Chippewa, on the other hand, treats salmon as an invasive species that it wants gone. There's no limit on how many of the fish tribal members can catch. In the past, the tribe has actually killed non-native sport fish in its streams in order to more effectively stock those streams with native trout, said Seth Moore, director of biology and environment for the Grand Portage band. Another example: The earthworms that live in the soil along the shores of Lake Superior are invaders from Europe, and while they're great for gardens, they alter soil quality in forests and make those ecosystems less hospitable to native plants, said Stuart Reitz, professor of entomology at Oregon State University. In other parts of the country, beekeepers and ranchers have fought bitterly over whether an invasive flower, called yellow starthistle, should be considered generally beneficial (because it is to bees) or generally harmful (because it is to livestock), said Mark Hoddle, director of the Center for Invasive Species Research at the University of California, Riverside. This isn't just trivia. Invasive species control is always expensive, and you only get the resources to launch a full-court press against a plant or animal — like the hundreds of millions of dollars spent in the last six decades to get sea lamprey populations under control — on the rare and shining occasion when everyone in power agrees on what "harm" is. And so the definition of invasive species has also created fights within the biological sciences. In 2011, Mark Davis, a biology professor at Minnesota's Macalester College, published an essay in Nature in which he and 18 co-authors argued that the field of invasion biology had become too weighted toward viewing all non-native species as bad and worthy of eradication. "Harm," he argued, had come to mean "change." "And, boy, this world is a bad place to be if any change is viewed as bad," Davis told me. But other biologists have pushed back against Davis. Some, like Daniel Simberloff, professor of environmental science at the University of Tennessee, Knoxville, really are suspicious of the idea that an ecosystem changed by non-native species is something that could be neutral or balanced, let alone good. "A parking lot [still counts as] an ecosystem," he pointed out. But even if this new ecosystem is healthy in its own way, that doesn't mean it's a good replacement for the forest that once occupied that land. Most of the scientists I spoke to, however, had not drawn such hard lines in their views. Non-native incursions could be neutral — or, at least, not bad enough that they needed to be prioritized for eradication. 
The basic idea was that we should try to stop new invasive species, and when something is really damaging, we should invest in serious eradication efforts, but some non-native species just aren't worth spending the energy and cash required to get rid of them. Smelt, in this conception, are a non-native species, not an invasive one. They're here now, and we have to deal with them. "You have to dance with the one what brung ya," said Marc Gaden, communications director for the Great Lakes Fishery Commission. "Manage with the reality that's out there."

But that could be changing. In the future, it might be easier to wage a lamprey-esque war against a smelt-esque species. Invasive species management is a small-government sort of problem. The federal government can contribute in some ways, particularly by seizing unwanted flora and fauna during customs checks, but most choices about which species to accept and which to fight are being made at the level of the states … and counties … and cities … and even on the level of individual private parks and reserves. Eradication campaigns are complex and expensive, so usually a lot of people have to agree that a species is harmful before it gets marked for death. But technology is changing both the costs and the stakes. In a few years, it could be a lot easier and cheaper to stage a successful eradication campaign, Simberloff said. "We're getting to the point with the technology where we need to have these conversations for the first time. Do we want a world of gardens, or something wild and dangerous?"

There are a variety of biological controls that scientists may be able to use in the future to produce animals that can't effectively maintain their own population, reducing a species' numbers not by killing off existing members but by blocking the next generation from breeding. For example, researchers are working on mosquitoes that are genetically engineered to die off before maturity. And rats whose genome has been altered so that all their offspring are infertile. And fish bred so that all their offspring are male. Simberloff sees that as mostly a good thing — a way to empower governments and communities to protect native ecosystems at a lower cost. A way, more to the point, of pushing back against the Mark Davises of the world who argue that the expense of eradication is reason enough to give up and let an ecosystem change.

But if and when these techniques are perfected, their use — and the disagreements about them — will be highly decentralized. Right now, when there's a dispute about whether something causes harm, there's not a clear framework for how to decide who wins. Peer pressure is a big lever, Moore said. He told me that the Grand Portage tribe and other management agencies responsible for Lake Superior waters have all pushed on each other at various times. If there's enough peer pressure, he said, it can alter the policies of other agencies, as when the state of Michigan was considering allowing fish farms to operate in the Great Lakes. The Grand Portage Band opposed that decision, he said, and so did a lot of other agencies that manage fish on the lakes. Ultimately, the idea was shot down. Some disputes, though, seem to "persist and persist without really being resolved," Simberloff said. And Reitz told me that the decisions about which species get tackled and which don't depend mostly on politics — who can make a case that a problem is big enough to deserve government money.
Even then, a different kind of politics can enter the picture. In the summer of 2016, Florida Keys officials sought, and won, the federal government's approval to release genetically modified mosquitoes with the goal of eliminating an invasive mosquito population … and with it, the risk of the Zika virus. But local residents didn't want their island to be a testing ground. The political battle has dragged on in the years since residents voted against the trial — even as a different kind of altered mosquito, this one carrying a bacterium that kills other mosquitoes, was released in another part of the Keys and elsewhere. The future, experts told me, is likely to be full of scenarios like this, where one group is able to move to eradicate a species, even if other, nearby communities disagree. The shape of the natural world would come down to who has the political power, money, will — and vision. "It all depends on what we want," Reitz said. "Is it what people wish and desire [to serve human tastes], or do we want some continuous legacy — some long-term persistence with what nature was in the past? We're getting to the point with the technology where we need to have these conversations for the first time. Do we want a world of gardens, or something wild and dangerous?"

People gather at the mouth of Minnesota's Lester River, waiting for schools of rainbow smelt to swim in from Lake Superior. Nobody knows exactly when the fish will arrive, so smelting is a social affair where people share bonfires and beer to pass the time.

For now, the smelt will continue to alter the ecosystem of Lake Superior, and their presence will create something different than what existed in the past. And humans' reactions will be different too. While many of the folks who gathered to fish on the rocky beach of the Lester River were white men, like the crowds Schreiner remembered from childhood, I did meet one woman, Sam Bo. She was Hmong, a member of an indigenous group from Southeast Asia, many of whom have immigrated to Minnesota. Bo herself lives in the town of Coon Rapids, a two-and-a-half-hour drive from the Lester River. It was her first time making the excursion, but the smelt, she'd been told, were worth it. Lots of Hmong were smelt fishing now, she said, pointing out several other groups on the beach. The fish are similar to a species native to Southeast Asia, and the Hmong there catch them in much the same way.

Finally, as darkness fell, a man waded chest-deep into the water. Holding a net on a long pole, like a porous frying pan, he swished it back and forth along the bottom of the river and came up with a net half full of wriggling, bouncing silver fish. In minutes, all the people on the shore had joined him, waddling into the flow as fast as hip waders and uneven ground would allow. Under the moonlight, with small waves gently nudging both her and the smelt toward shore, Sam Bo pulled up a bounty of fish. And she smiled.

Kangaroo attacks couple in northeastern Australia, injures woman

SYDNEY (Reuters) – Australian wildlife carers Jim and Linda Smith are lucky to be alive, an ambulance official said, after they were attacked by a kangaroo in northeastern Queensland state. The Smiths were feeding wild kangaroos on their property in the Darling Downs when a grey kangaroo buck struck out at Jim Smith, knocking him to the ground.
The kangaroo attacked his wife, Linda, when she ran to help him, leaving her with a collapsed lung, broken ribs, cuts and scratches. "It's scary, it knocked me over once or twice and once they grab you, you can see what they do", Jim said, showing his injuries. It was only when their son came out and hit the kangaroo with a piece of wood that the marsupial stopped the attack and returned to nearby bushland, Australian media reported. Linda Smith was taken to Toowoomba Hospital, where she underwent surgery, media reports said. "If the kangaroo was able to continue to inflict further injury, her life was, yes, in danger," Queensland Ambulance Service's senior operations supervisor Stephen Johns said. Australia has roughly 45 million kangaroos and it is not unusual for them to come into conflict with people as housing has expanded to areas where the marsupials live. They are even more likely to be driven into populated areas in search of food and water in drought-stricken areas. Reporting by Stefica Nicol Bikes; Writing by Karishma Singh
Risk of the hydrogen economy for atmospheric methane Matteo B. Bertagni ORCID: orcid.org/0000-0001-5912-47941, Stephen W. Pacala2, Fabien Paulot ORCID: orcid.org/0000-0001-7534-49223 & Amilcare Porporato ORCID: orcid.org/0000-0001-9378-207X1,4 Nature Communications volume 13, Article number: 7706 (2022) Cite this article Hydrogen (H2) is expected to play a crucial role in reducing greenhouse gas emissions. However, hydrogen losses to the atmosphere impact atmospheric chemistry, including positive feedback on methane (CH4), the second most important greenhouse gas. Here we investigate through a minimalist model the response of atmospheric methane to fossil fuel displacement by hydrogen. We find that CH4 concentration may increase or decrease depending on the amount of hydrogen lost to the atmosphere and the methane emissions associated with hydrogen production. Green H2 can mitigate atmospheric methane if hydrogen losses throughout the value chain are below 9 ± 3%. Blue H2 can reduce methane emissions only if methane losses are below 1%. We address and discuss the main uncertainties in our results and the implications for the decarbonization of the energy sector. Commitments to reach net-zero carbon emissions have drawn renewed attention to hydrogen (H2) as a low-carbon energy carrier1,2. Currently, H2 is mostly used as an industrial feedstock, and its global production has a high carbon footprint because it relies almost entirely (≈95%) on fossil fuels1. However, many technologies to produce H2 with a lower carbon footprint are available1. Among these, low-carbon H2 can be produced from water electrolysis powered by renewable energy (green H2) or from methane reforming coupled with carbon capture and storage (blue H2). H2 fuel may be especially important to decarbonize energy and transport sectors where direct electrification is complicated, like heavy industry, heavy-duty road transport, shipping, and aviation1. H2 is also being considered for storing renewable energy1. As a result of this potential, countries accounting for more than a third of the world's population have developed national strategies for large-scale H2 production1,2. Even if a more hydrogen-based economy would reduce CO2 emissions and improve air quality3, it would also increase the H2 emissions into the atmosphere. The H2 molecule is very small and difficult to contain, so it is still largely unknown how much H2 will leak in future value chains. H2 emissions will also occur due to venting, purging, and incomplete combustion4,5,6. This potential increase in H2 emissions has received relatively little attention to date because H2 is neither a pollutant nor a greenhouse gas (GHG). However, it has been long known7,8,9,10 that H2 emissions may exert a significant indirect radiative forcing by perturbing the concentration of other GHG gases in the atmosphere. This indirect GHG effect of H2 calls for a detailed scrutiny of the global H2 budget and the environmental consequences of its perturbation11,12. H2 is the second most abundant reactive trace gas in the atmosphere, after methane, with an average concentration of around 530 ppbv13. H2 sources include both direct emissions (≈45% of total sources) and production in the troposphere from the oxidation of volatile organic compounds (≈25%) and methane (≈30%)11,14. The main H2 sinks are the uptake by soil bacteria (70–80% of total tropospheric removal) and the atmospheric reaction with the radical OH (20–30%), which is responsible for the indirect GHG effect of H2. 
H2's reaction with the OH radical tends to increase tropospheric methane (CH4) and ozone (O3), which are two potent greenhouse gases. It also increases stratospheric water vapor, which is associated with stratospheric cooling and tropospheric warming8,15. Recent global climate models have estimated that hydrogen has an indirect radiative forcing of around 1.3–1.8 × 10−4 W m−2 ppbv−1 (refs. 14,16), and a global warming potential (GWP) that lies in the range 11 ± 5 for a 100-year time horizon (ref. 16). Hence, H2 emissions are far from being climate neutral, and their largest impact is related to the perturbation of atmospheric CH4 (refs. 14,16), the second most important anthropogenic GHG.

The tropospheric budgets of H2 and CH4 are deeply interconnected (Fig. 1). First, the removal of both gases from the atmosphere is controlled by their reaction with OH, which is the dominant sink (≈90%) for atmospheric methane17,18. An increase in the concentration of tropospheric H2 may reduce the availability of OH, consequently weakening CH4's removal and increasing CH4's lifetime and abundance14,19. Second, methane is a primary precursor of hydrogen. Namely, CH4 oxidation results in the production of formaldehyde, whose photolysis produces H2. Firn-air records suggest that the increase in H2 over the 20th century can be largely explained by the increase in CH4 concentration20.

Fig. 1: Tangled hydrogen (H2) and methane (CH4) budgets. Sketch of H2 and CH4 tropospheric budgets and their interconnections: (1) the competition for OH; (2) the production of H2 from CH4 oxidation; (3) the potential emissions [minimum-maximum] due to a more hydrogen-based energy system. Flux estimates (Tg/year) are from refs. 11,18. Arrows are scaled with mass flux intensity, the CH4 scale being 10 times narrower than the H2 scale. On a per-mole basis, H2 consumes only around 3 times less OH than CH4. ppq = part per quadrillion (10−15). a top-down estimate including also minor atmospheric sinks (<10%). b range obtained as a difference between total and fossil fuel emissions18.

Additionally, H2 and CH4 are linked at the industrial level. Around 60% of global H2 is currently produced from steam methane reforming (gray H2), which is responsible for 6% of global natural gas use1. In the next decade, steam methane reforming coupled with carbon capture and storage will likely remain the dominant technology for large-scale H2 production (blue H2), since facilities for H2 production from renewable sources (green H2) will require time to become operational and economically favorable2. Since CH4 is the second-largest contributor to atmospheric warming since the beginning of the industrial era and there are global efforts to mitigate its atmospheric levels21, it is crucial to quantify the response of atmospheric CH4 to increasing H2 production. We analyze this problem through a simple atmospheric model that captures the interaction between H2 and CH4 ("Methods"). The investigation of the transient dynamics ("Methods") shows that any H2 emission pulse to the atmosphere leads to a small transient growth of atmospheric CH4 whose effects last for several decades. In the next sections, we focus on how the equilibrium concentrations of tropospheric H2 and CH4 would respond to scenarios of continuous emissions from an energy system where part of the fossil fuel energy share is replaced by green or blue H2.
The analysis emphasizes how atmospheric CH4 could either decrease or increase, mainly depending on the H2 production pathway and the amount of H2 lost to the atmosphere. The latter is defined through the hydrogen emission intensity (HEI), namely the percentage of H2 produced that is lost to the atmosphere. Specifically, we find a critical HEI above which the CH4 atmospheric burden rises despite the lower fossil fuel use. We assess the critical factors and the main uncertainties in the quantification of this critical HEI. We finally discuss how our results can help better inform policymakers regarding the trade-off associated with different scenarios of hydrogen production and use.

Emission scenarios

Here we investigate how the tropospheric burdens of methane and hydrogen would be affected by the transition to a more hydrogen-based energy system, wherein hydrogen replaces part of the current fossil fuel energy (≈490 ExJ in 2019; ref. 22). To achieve this goal, we estimate the CH4 and H2 source changes, \(\Delta S_{\mathrm{CH_4}}\) and \(\Delta S_{\mathrm{H_2}}\), where Δ indicates the difference relative to the current tropospheric conditions ("Methods"). This fossil fuel displacement reduces both CH4 and H2 sources (Fig. 1). The rise in H2 production causes additional H2 emissions due to intentional (e.g., venting) and unintended (e.g., fugitive) losses, and possibly CH4 emissions associated with blue H2 production.

The change in H2 emissions can be estimated from the amount of hydrogen produced to substitute fossil fuels and the HEI, namely the percentage of H2 produced that is lost to the atmosphere. Losses can occur due to venting, purging, incomplete combustion and leaks across the hydrogen value chain. The HEI of the future global H2 value chain is very uncertain. Literature values range from 1 to 12% (refs. 4,9,23), but the upper bound is unlikely to occur at large scales because it would be both unsafe and too expensive. Recent empirical estimates for specific H2 infrastructures suggest HEIs ranging from 0.1 to 6.9%, critically depending on the pathway of hydrogen production and transport6. To account for these uncertainties and to explore a broad spectrum of possible scenarios, here we vary HEI from 0 to 10% of the total hydrogen produced (Fig. 2a). The lower and upper bounds of this range represent a perfectly sealed and a highly leaking global H2 value chain, respectively. With a perfectly sealed hydrogen value chain, H2 emissions would only decrease due to the lower fossil fuel use. On the contrary, a highly leaking H2 value chain, coupled with an envisioned penetration of H2 in the energy market, could increase hydrogen emissions up to several times the total current sources, which are around 80 Tg H2 yr−1.

Fig. 2: Hydrogen replacement of fossil fuels. a Changes in H2 sources (\(\Delta S_{\mathrm{H_2}}\)) as a function of fossil fuel replacement for different hydrogen emission intensities (HEI). b Changes in CH4 sources (\(\Delta S_{\mathrm{CH_4}}\)) as a function of fossil fuel replacement for different H2 production pathways. Methane leak rates associated with blue H2 production are 0.2, 1, and 2%. Bands for \(\Delta S_{\mathrm{CH_4}}\) account for different amounts of blue H2 produced and lost. c Response of the tropospheric concentrations of H2 and CH4 for the emission scenarios of the previous panels. Symbols mark the different percentages of fossil fuel displacement.
Only symbols for 100% fossil fuel replacement are reported for blue H2 with 1% CH4 leakage. Also reported is the difference in CO2 concentration (Δ[CO2e]) that would produce equivalent radiative forcing to the change in equilibrium CH4 (upper axis).

The variation in CH4 emissions depends not only on the percentage of fossil fuel energy that is displaced by hydrogen, but also on the hydrogen production pathway. For green H2, i.e., hydrogen obtained from renewable sources, we scale CH4 emissions based on the reduced consumption of fossil fuels resulting from hydrogen usage (Fig. 2b). Estimates of current methane emissions associated with fossil fuel extraction and distribution are in the range 80–160 Tg CH4 yr−1 (refs. 18,24,25) and relatively equally distributed among coal, oil, and gas sectors26. Here we use the top-down estimate of 111 Tg/year (ref. 18). For blue H2, which is derived from steam methane reforming (SMR), the variation in CH4 sources not only accounts for the reduced consumption of fossil fuels but also for the methane emissions (venting, incomplete combustion, fugitive) associated with blue hydrogen production. These emissions depend on the amount of CH4 needed to produce H2, i.e., feedstock and energy requirements of the SMR process ("Methods"), and the CH4 leak rate.

The precise average leak rate of the global natural gas supply chain remains uncertain. One of the reasons is that national inventories generally underestimate real emissions27,28,29,30. More detailed studies relying on field measurements in the United States and Canada estimate average leak rates around 2% (refs. 28,29,30), with large spatial heterogeneity between different operators31. Although national inventories suggest that some countries, like Venezuela and Turkmenistan, have higher leak rates26, here we adopt 2% as the maximum global CH4 leak rate for our scenarios, because methane-mitigation efforts are likely to decrease future global leak rates21 and, more importantly, because not all hydrogen produced will be blue H2. In this regard, the scenario of blue H2 with a 2% CH4 leak rate can also be interpreted as a combination of equal production of green H2 and blue H2 with a 4% CH4 leak rate. We use 0.2% as a lower bound for the CH4 leak rate, since this has been declared as the target of several energy companies for 2025 (ref. 32). 1% represents an intermediate scenario of blue H2 production.

Figure 2b shows the resulting CH4 emissions associated with green and blue H2 production with methane leak rates of 0.2, 1, and 2%. The different leak rates have a great impact on the methane emissions. Compared to the fossil fuel energy system, CH4 emissions are reduced in the blue H2 scenario with 0.2% methane losses, but largely increased in the blue H2 scenario with 2% methane losses. The fossil fuel displacement by blue H2 with 1% methane losses shows basically no net effect on the CH4 emissions. As a specific case, we also investigate the H2 and CH4 emission changes associated with estimates of future hydrogen production in a set of net-zero scenarios. H2 production is expected to increase from the current 90 Tg/year to 530–660 Tg/year in 2050 (refs. 2,33,34). We thus consider a 500 Tg/year rise in the global H2 production, which is energetically equivalent to about 15% of current fossil fuel energy. Figure 3a shows how, depending on the H2 production pathway and the different hydrogen and methane leak rates, the emission changes of these two gases can vary substantially.
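As a minimal sketch of the emission bookkeeping just described (not the authors' code), the source changes can be reproduced with a few lines of Python. The parameter values are the ones listed in the Methods (η_H2 = 0.143 ExJ/Tg, ff_H2 = 0.0225 Tg/ExJ, ff_CH4 = 0.225 Tg/ExJ, r = 3.2 kg CH4 per kg of blue H2); the function and variable names are illustrative choices, not from the paper.

```python
# Sketch (not the authors' code): how CH4 and H2 sources change when hydrogen
# displaces part of the fossil-fuel energy supply. Parameters are from the Methods.

ETA_H2 = 0.143        # ExJ per Tg of H2 (higher heating value)
FF_H2 = 0.0225        # Tg of H2 emitted per ExJ of fossil-fuel energy
FF_CH4 = 0.225        # Tg of CH4 emitted per ExJ of fossil-fuel energy
R_CH4_PER_H2 = 3.2    # kg of CH4 (feedstock + energy) per kg of blue H2

def source_changes(displaced_exj, hei, mei=0.0):
    """Return (dS_H2, dS_CH4) in Tg/yr for a given fossil-fuel displacement.

    displaced_exj : fossil-fuel energy replaced by hydrogen (ExJ/yr)
    hei           : hydrogen emission intensity (fraction of H2 produced that is lost)
    mei           : CH4 leak rate of the gas supply chain (0 for green H2)
    """
    # H2 that must be produced so that the energy actually delivered equals displaced_exj
    h2_produced = displaced_exj / (ETA_H2 * (1.0 - hei))
    d_s_h2 = -FF_H2 * displaced_exj + hei * h2_produced
    d_s_ch4 = -FF_CH4 * displaced_exj + mei * R_CH4_PER_H2 * h2_produced
    return d_s_h2, d_s_ch4

# Full displacement (~490 ExJ/yr): a leaky blue-H2 chain vs. a sealed green-H2 chain
print(source_changes(490, hei=0.10, mei=0.02))   # roughly (+370, +130) Tg/yr
print(source_changes(490, hei=0.00, mei=0.00))   # roughly (-11, -110) Tg/yr
```

Running the two example cases gives numbers consistent with the ranges discussed above: roughly +370 Tg H2 yr−1 for a 10% HEI at full displacement, and a ~110 Tg CH4 yr−1 reduction for a perfectly sealed green H2 chain.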
Fig. 3: Methane response to increasing H2 production. a Changes in H2 and CH4 sources (ΔS) due to green and blue H2 production (≈500 Tg yr−1). HEI is the H2 emission intensity. Gray lines mark the case for HEI = 0%. Blue bars for \(\Delta S_{\mathrm{CH_4}}\) are obtained with HEI = 10%. b Response of CH4 atmospheric concentration. The right axis shows the Δ[CO2e] that would produce equivalent radiative forcing to the change in equilibrium CH4.

Tropospheric response

For the previous emission scenarios, we evaluate the changes in the equilibrium concentrations of tropospheric hydrogen and methane, namely Δ[H2] and Δ[CH4]. The timescales to equilibrium are dictated by the gas average lifetimes ("Methods"). The corresponding variations in the steady-state concentration of OH are reported in Supplementary Figs. 1 and 2. The H2 economy causes a rise in tropospheric H2 as a result of the additional emissions (Fig. 2c). The intensity of this increase varies considerably as a function of the emissions of the hydrogen value chain. The concentration variation could go from less than 100 ppbv to more than 2000 ppbv in envisioned scenarios of the H2 economy, namely up to +300% relative to the current H2 tropospheric level.

The response of atmospheric CH4 results from the combination of the methane emission change and the methane sink weakening due to the higher hydrogen emissions. To discriminate between the two mechanisms, it is useful to focus on the scenarios of fossil fuel displacement by green H2. In the case of a perfectly sealed green H2 value chain (HEI = 0%), [CH4] and [H2] both decrease due to the reduction in fossil fuel emissions. As H2 emissions increase (HEI > 0), Δ[CH4] increases too, up to the point that, when HEI exceeds a critical threshold, atmospheric methane rises (Δ[CH4] > 0) even though methane emissions are lower. This critical HEI lies in the range 8–10% for green H2, as it depends only weakly and nonlinearly on the percentage of fossil fuel energy that is replaced by H2 (see also Supplementary Fig. 3). The scenarios of blue H2 with 0.2% CH4 leak rates are not very different from the green H2 scenarios, with the critical HEI being in the range 7–8%. Regarding the scenarios of blue H2 with 1% CH4 leak rates, since there is basically no change in the methane emissions (Fig. 2b), the methane response is only associated with the reduction in OH availability due to the higher H2 concentration. The critical HEI is not defined for this blue H2, as the methane burden increases in all cases. The worst scenarios of blue H2 with 2% CH4 leak rates show drastic differences in the tropospheric concentrations of the two gases, which increase considerably, with a weakly nonlinear effect due to the drop in atmospheric OH.

The atmospheric methane response to future H2 production (refs. 2,33,34) shows qualitatively similar results as a function of the H2 production pathway and the percentage of H2 lost to the atmosphere (Fig. 3b). Positive effects in terms of methane mitigation are observed only for green and blue H2 with low methane losses, if the H2 emission intensity is well below 10%. Otherwise, the tropospheric methane burden is enhanced. We also evaluated the change in CO2 concentration (Δ[CO2e]) that would produce equivalent radiative forcing to the change in the equilibrium concentration of CH4 (Figs. 2c and 3b). We used the radiative efficiency of CH4 that includes indirect effects on O3 and stratospheric H2O (ref. 35).
Under the worst scenario of blue H2 production with 2% CH4 losses and 10% H2 losses, the rise in equilibrium CH4 due to future H2 production would be like adding 9 ppm of CO2 to the atmosphere (Fig. 3b). For the same blue H2, the rise in CH4 following the entire displacement of fossil fuels would be like adding around 70 ppm of CO2 (Fig. 2c). This is equivalent to around 50% of the CO2 increase from preindustrial times (278 ppm) to current days (417 ppm). Since the goal of keeping the global average temperature rise below 1.5 °C requires a mid-century maximum of CO2 close to 450 ppm, these results support previous concerns about the sustainability of blue H2 (ref. 36) unless fugitive emissions can be kept sufficiently low.

Critical HEI for methane mitigation

The quantification of the critical hydrogen emission intensity (HEIcr) for methane mitigation is key to assessing whether displacing fossil fuels with hydrogen would mitigate or enhance the tropospheric burden of CH4. Here we investigate how the HEIcr is affected by the hydrogen production pathway and by two of the most uncertain terms in the CH4–H2–OH balance: (i) the partitioning of the OH sink among the tropospheric gases; (ii) the rate of H2 uptake by soil bacteria. The derivation of an analytical solution for the HEIcr is reported in the "Methods".

The very short lifetime of OH makes the quantification of its atmospheric dynamics extremely challenging. Indirect methods are typically used to estimate OH concentrations, sources, and sink partitioning37,38,39. Using a range of OH partition estimates38,40, we investigate the dependence of the HEIcr on different values of OH excess (EOH), EOH being the excess of OH that is consumed by other tropospheric gases besides hydrogen, methane, and carbon monoxide. Figure 4 shows the quasi-linear response of the HEIcr to EOH. We stress that a variation in EOH is equivalent to a variation in the OH sources since we preserve the current average OH concentration, which is relatively well constrained by inverse modeling37,41.

Fig. 4: Critical hydrogen emission intensity (HEI) for methane mitigation. Critical HEI as a function of OH excess (EOH) and hydrogen production method (green and blue H2 with 0.2, 0.5, 1% CH4 leak rates, respectively). Dashed (dotted) lines are obtained for a 20% increase (decrease) in the H2 uptake rate by soil bacteria (kd). Triangles mark the critical HEI for the best estimate of EOH.

The HEIcr is much lower for blue H2 than for green H2 because of the methane emissions associated with blue H2 production. For the current tropospheric conditions, we find that HEIcr is around 9% for green H2, around 7% for blue H2 with 0.2% methane leak rates, and 4.5% for blue H2 with 0.5% methane leak rates. Blue H2 with a 1% methane leak rate has a HEIcr that is close to zero, as displacement of fossil fuel with this hydrogen does not reduce methane emissions (Fig. 3b). For even higher methane leak rates, the methane burden would increase regardless of the H2 emissions, so that the HEIcr is negative. The H2 uptake by soil bacteria is another crucial process in the evaluation of HEIcr and in the overall CH4–H2–OH dynamics, since it accounts for 70–80% of H2 tropospheric removal11. Despite recent research on uptake modeling42,43 and the microbial characterization of the H2-oxidizing bacteria44, the spatial heterogeneity of the uptake as driven by local hydro-climatic and biotic conditions hinders bottom-up estimates of the global average uptake rate.
In atmospheric studies, the average uptake rate is usually adjusted in order to obtain a reasonable simulation of observed surface hydrogen concentrations14,45. To account for these potential sources of uncertainty, we show how a ±20% variation in the uptake rate influences the critical HEI (bands in Fig. 4). A stronger biotic sink (dashed lines) reduces the consumption of OH by H2 and, consequently, increases the HEIcr. A weaker biotic sink (dotted line) has the opposite effect. Regarding the impact of climate change on the H2 soil sink, recent studies indicate that increasing temperatures are expected to slightly favor the uptake on a global scale14, while shifts in rainfall regimes will be the significant drivers of H2 uptake changes at the local scale43. From a biotic perspective, the adaptability of H2-oxidizing bacteria to extreme environments46 suggests that their presence will remain widespread in the future, but their spatial heterogeneity may change as a result of climate and anthropogenic pressures. Another source of uncertainty in the evaluation of HEIcr is related to the estimate of CH4 emissions associated with fossil fuel use. Since there is a quasi-linear relationship between these emissions and the HEIcr (Eq. (16) in "Methods"), the same relative uncertainty of fossil fuel methane emissions (Fig. 1) applies to the HEIcr.

The success of the global net-zero transition hinges on hydrogen as a scalable low-carbon energy carrier that can replace fossil fuels in several hard-to-electrify energy and transport sectors. More than 20 governments and many companies have already announced strategies for hydrogen production, and the numbers are likely to increase as policy frameworks that facilitate hydrogen adoption are promoted1,2. Considerable investments are still needed to achieve such a transition, as the current hydrogen momentum falls short compared to net-zero goals. The Hydrogen Council2 estimates that there is a USD 540 billion gap between the investments of announced projects (USD 160 billion) on hydrogen production and the investments required by 2030 to be on a net-zero pathway (USD 700 billion).
Furthermore, the superimposition of CH4 and H2 emissions may have undesired consequences for the tropospheric burden of CH4. This may be a problem in the near term, given that steam methane reforming will be used to bridge the gap between increasing H2 demand and limited green H2 production capacities2. Our results suggest that including hydrogen emissions would aggravate the greenhouse gas footprint of blue H2 (ref. 36).

In addition to the CH4 feedback, H2 emissions are also expected to impact ozone (O3) and stratospheric water vapor (H2O), with negative consequences for both air quality and radiative forcing. Accounting for these effects, we can provide a comparison between the radiative forcing of hydrogen-based and fossil fuel-based energy systems. Because both H2 and CH4 are short-lived gases compared to CO2, the time horizon for this comparison is crucial48. Here we consider 20-year and 100-year time horizons. The GWP of H2 is estimated at 11 ± 5 (100-year) and \(33^{+11}_{-13}\) (20-year) (ref. 16). The GWP of CH4 is estimated at 28 (100-year) and 80 (20-year) (ref. 35). In an envisioned hydrogen economy that replaces the current fossil fuel industry, the H2 emissions could be in the range 23 to 370 Tg H2 yr−1, for an H2 emission intensity going from 1 to 10% (Fig. 2a). These emissions would have a radiative forcing impact of 0.7–12% (100-year) and 2–35% (20-year) of the current CO2 emissions from fossil fuels (≈35 Pg CO2 yr−1). If the global H2 economy relied on blue H2 with a 2% methane leakage rate, methane emissions would cause an additional radiative forcing impact that is around 10% (100-year) and 27% (20-year) of the current CO2 emissions from fossil fuels. Hence, in the worst scenario, up to 22% of the climate benefits of the hydrogen economy could be offset by gas losses over a 100-year horizon. The percentage could be as large as 65% over a 20-year horizon. These values could be higher on a regional scale if the leak rate of the natural gas supply chain is above 2%.

To maximize the climate benefit of hydrogen adoption, minimizing both H2 and CH4 losses across the supply chain of hydrogen production will need to be a priority. On the methane side, some governments and companies have already committed to reducing the leaks from the oil and gas sector, because this could be the most cost-effective and impactful action for near-term climate mitigation21. The International Energy Agency (IEA) estimates that, with the recent rise in natural gas prices, the abatement of methane emissions from the global gas and oil sector could be implemented at no net cost49. Hence, the accomplishment of this mitigation is only a matter of political will for the limited number of companies involved. On the hydrogen side, the global value chain still has to be built. This offers the advantage of tackling the hydrogen emission problem ahead of time. On the one hand, energy companies will have a great interest in minimizing economic loss and safety risks due to hydrogen leaks. On the other hand, however, many technological challenges still need to be addressed. First, H2 containment may remain an issue even as technologies progress. The high diffusivity of the small H2 molecule has already challenged the scientific community's ability to measure the H2 concentration in the atmosphere50 and in the firn air of ice sheets51. Second, while more field-based estimates of H2 losses are needed, there is currently no commercially available sensing technology able to detect small H2 leaks at the ppb level48.
Third, global-space monitoring, which is bringing a much-needed transparency to the quantification of real methane emissions27,31, will also require new technology since H2, unlike CH4 or CO2, does not absorb infrared radiation. For all these reasons, the uncertainty about future emissions from the H2 value chain remains large.

Our versatile atmospheric model allowed a broad exploration of scenarios in a hydrogen-based energy system. Simulations with high-resolution three-dimensional atmospheric chemistry models, which are more comprehensive but more computationally demanding, could refine our results for specific scenarios. In particular, a more detailed model could improve the assessment of H2 displacement of fossil fuels by accounting for the emission changes of other chemical species, like CO and NOx, which impact the CH4–H2–OH dynamics. Further analyses could also refine the potential changes in emission inventories due to H2 displacement of different fossil fuels.

Methods

With the increasing anthropogenic alteration of atmospheric chemistry, detailed three-dimensional atmospheric chemistry models have become critical to evaluate the atmospheric interactions with the climate forcing52,53. Nonetheless, thanks to their versatility, simplified models of atmospheric chemistry have also proven very useful to investigate the fundamental processes governing the coupling between atmospheric gases and the consequences of their possible perturbations (e.g., refs. 54,55,56,57,58,59). The insights obtained with the CH4–CO–OH model by Prather et al.54, in particular, led to a +40% revision of the IPCC's GWP for CH4 (ref. 60). Here we extend Prather's seminal model by adding the mass balance equation for atmospheric H2. The purpose is to identify the key components that control the H2 feedback on the tropospheric dynamics of CH4 (Fig. 1). The chemical reactions considered are

$$\mathrm{CH_4}+\mathrm{OH}\ \xrightarrow{k_1}\ \ldots \longrightarrow \alpha\,\mathrm{H_2}+\mathrm{CO}+\ldots,\quad R_{\mathrm{CH_4}}=k_1[\mathrm{OH}][\mathrm{CH_4}], \qquad (1)$$

$$\mathrm{H_2}+\mathrm{OH}\ \xrightarrow{k_2}\ \ldots,\quad R_{\mathrm{H_2}}=k_2[\mathrm{OH}][\mathrm{H_2}], \qquad (2)$$

$$\mathrm{CO}+\mathrm{OH}\ \xrightarrow{k_3}\ \ldots,\quad R_{\mathrm{CO}}=k_3[\mathrm{OH}][\mathrm{CO}], \qquad (3)$$

$$\mathrm{X}+\mathrm{OH}\ \xrightarrow{k_4}\ \ldots,\quad R_{\mathrm{X}}=k_4[\mathrm{OH}][\mathrm{X}], \qquad (4)$$

with R representing the rates of reactions, [ ⋅ ] the concentrations, and ki the rate coefficients. We indicated only the products with which we are concerned, namely the CO and H2 produced by the oxidation of CH4 in reaction (1). H2 production through CH4 oxidation has a yield α ≈ 0.37 (ref. 13). X encompasses all the other species, besides CH4, CO, and H2, that consume OH.
Based on the above reactions, the balance equations for the CH4–H2–CO–OH system are

$$\frac{d[\mathrm{CH_4}]}{dt}=S_{\mathrm{CH_4}}-R_{\mathrm{CH_4}}-R_s, \qquad (5)$$

$$\frac{d[\mathrm{H_2}]}{dt}=S_{\mathrm{H_2}}+\alpha R_{\mathrm{CH_4}}-R_{\mathrm{H_2}}-R_d, \qquad (6)$$

$$\frac{d[\mathrm{CO}]}{dt}=S_{\mathrm{CO}}+R_{\mathrm{CH_4}}-R_{\mathrm{CO}}, \qquad (7)$$

$$\frac{d[\mathrm{OH}]}{dt}=S_{\mathrm{OH}}-R_{\mathrm{CH_4}}-R_{\mathrm{H_2}}-R_{\mathrm{CO}}-R_{\mathrm{X}}, \qquad (8)$$

where Rd = kd[H2] is the H2 uptake by soil bacteria, which plays a crucial role in the global balance of H2 since it accounts for around 70–80% of tropospheric removal11,43,61; Rs = ks[CH4] accounts for the smaller sinks of CH4, namely soil uptake, stratospheric loss and reactions with chlorine radicals62. For simplicity, we neglect the smaller sinks of H2, i.e., stratospheric loss (≈1% of removal63), and CO, i.e., soil uptake and stratospheric loss (<10% of removal64).

The solution at quasi steady state (i.e., d[ ⋅ ]/dt = 0) provides the sources for fixed tropospheric concentrations. Positive solutions for OH occur if \(S_{\mathrm{OH}} > (2+\alpha)(S_{\mathrm{CH_4}}-R_s)+S_{\mathrm{CO}}+S_{\mathrm{H_2}}-R_d\), i.e., when there is enough OH to oxidize all CO sources, the part of CH4 sources that is not balanced by smaller sinks, and the part of H2 sources that is not balanced by the soil uptake. The excess of OH consumed by other gases, besides CH4, CO, and H2, can be defined as \(E_{\mathrm{OH}}=R_{\mathrm{X}}/(R_{\mathrm{CH_4}}+R_{\mathrm{CO}}+R_{\mathrm{H_2}})\). The values representing average tropospheric conditions are summarized in Table 1. The values of SOH and SCO are kept constant in all scenarios.

Table 1 Tropospheric budgets of key species and definition of linear stability modes

Linear stability and transient dynamics

We investigate the effects of an emission pulse of H2 on the tropospheric system (5)–(8). The timescales and modes of the atmospheric response to chemical perturbations are defined by the eigenvalues and eigenvectors of the system54,55. Indicating with c(t) the solution vector of the system (5)–(8), the temporal dynamics of a small perturbation \(\hat{\mathbf{c}}\) around c evolves as

$$\frac{d\hat{\mathbf{c}}}{dt}=\mathbf{J}\,\hat{\mathbf{c}}, \qquad (9)$$

where J is the Jacobian of the system evaluated in c. For the equilibrium solution c0 representing the current tropospheric concentrations, the eigenvalues and eigenvectors, or modes, of the linearized system (9) are reported in Table 1. Since all eigenvalues are real and negative (λi < 0), the equilibrium solution c0 is a stable node. As a result, any small perturbation asymptotically decays in time with a timescale defined by the negative reciprocal of the eigenvalue. Because the system equations are coupled, the decay timescale (\(-\lambda_i^{-1}\)) of a gas perturbation does not necessarily correspond to the gas steady-state average lifetime (τi).
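To make the structure of Eqs. (5)–(8) concrete, a minimal numerical sketch is given below. The rate coefficients, the lumped species X, and the background concentrations are placeholders standing in for Table 1 (which is not reproduced in this text), so the sketch illustrates the model's structure rather than the paper's numbers; the sources are backed out so that the chosen state is an exact steady state, mirroring the quasi-steady-state step described above.

```python
# Sketch of the CH4-H2-CO-OH box model, Eqs. (5)-(8). All numerical values are
# PLACEHOLDERS for the paper's Table 1 (not reproduced here): illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = 0.37                            # H2 yield from CH4 oxidation (quoted in the text)
K1, K2, K3, K4 = 0.10, 0.14, 6.0, 1.0   # OH reaction frequencies (placeholder units)
KD, KS = 0.42, 0.011                    # H2 soil uptake and minor CH4 sinks (placeholders)
X = 1.0                                 # lumped OH-consuming species "X" (held constant)
c0 = np.array([1.90, 0.53, 0.09, 1.0])  # [CH4, H2, CO, OH], arbitrary mixing-ratio units

def rates(c):
    ch4, h2, co, oh = c
    return K1*oh*ch4, K2*oh*h2, K3*oh*co, K4*oh*X   # R_CH4, R_H2, R_CO, R_X

# Back out sources so that c0 is an exact steady state of Eqs. (5)-(8)
R1, R2, R3, R4 = rates(c0)
S = np.array([R1 + KS*c0[0],              # S_CH4
              R2 + KD*c0[1] - ALPHA*R1,   # S_H2
              R3 - R1,                    # S_CO
              R1 + R2 + R3 + R4])         # S_OH

def rhs(t, c):
    r1, r2, r3, r4 = rates(c)
    return [S[0] - r1 - KS*c[0],
            S[1] + ALPHA*r1 - r2 - KD*c[1],
            S[2] + r1 - r3,
            S[3] - r1 - r2 - r3 - r4]

# Example: relax a 10% H2 pulse back toward equilibrium (cf. Fig. 5)
c_pulse = c0.copy()
c_pulse[1] *= 1.10
sol = solve_ivp(rhs, (0.0, 50.0), c_pulse, method="LSODA")
print("remaining perturbation:", sol.y[:, -1] - c0)
```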
The CH4 perturbation, in particular, decays with a timescale that is much larger than predicted by its steady-state lifetime, i.e., \(R=-\lambda_{\mathrm{CH_4}}^{-1}/\tau_{\mathrm{CH_4}} > 1\). This mechanism, known as the CH4 feedback effect55,65, has a crucial role in increasing the GWP and the environmental impact of CH4 emissions. Detailed models of atmospheric chemistry usually provide R around 1.3–1.4 (ref. 65). We find a marginally higher feedback factor, namely R ≈ 1.5, in agreement with previous findings using Prather's box model54,55,57. The decay timescale of the H2 perturbation instead corresponds to the H2 average lifetime, namely \(-\lambda_{\mathrm{H_2}}^{-1}\approx \tau_{\mathrm{H_2}}\), in agreement with results from detailed atmospheric chemistry models19.

While the modal eigenvalue analysis correctly captures the asymptotic stability of the solution c0, it does not describe the perturbation dynamics at finite times, i.e., before the asymptotic decay. Still within the domain of the linearized system (9), a more complete picture can be obtained by analyzing the temporal evolution of the solutions with specific attention to the emergence of transient growth phenomena, which are known to occur in systems where the modes are non-orthogonal, as in the present case. When large enough, a transient growth can even trigger nonlinearities that destabilize the equilibrium solution66. Figure 5 shows the transient growth phase of tropospheric CH4 and CO that follows a 10% perturbation of H2 concentration. Specifically, the pulse of H2 causes a drop in OH and a build-up of CH4 that lasts a few years, while the H2 perturbation decays with the timescale \(\tau_{\mathrm{H_2}}\). The CH4 build-up then decays in the same manner as would a direct pulse of CH4, with a timescale defined by the CH4 feedback effect. In analytical terms, the perturbation of tropospheric CH4 mainly due to the excitation of H2 and CH4 modes is given by \(\delta[\mathrm{CH_4}]\approx 2.76\,e^{\lambda_{\mathrm{CH_4}}t}-2.82\,e^{\lambda_{\mathrm{H_2}}t}+0.06\,e^{\lambda_{\mathrm{CO}}t}\).

Fig. 5: Transient dynamics. Tropospheric response to a pulse of H2 (10% increase of its concentration). Temporal dynamics of H2 (a), CH4 (b), OH (c), and CO (d). Colors highlight the contributions of the different modes. When different modes superimpose, the faster-decaying mode is shown on top of the others.

Using this result in traditional GWP formulas35 yields a GWP for H2 due to direct CH4 perturbation around 7.8 with the 100-year time horizon and 22 with the 20-year time horizon. It is estimated that around half of the H2 indirect radiative forcing is due to the direct CH4 perturbation, and the other half to the O3 and stratospheric H2O impacts caused by both H2 and H2-induced CH4 perturbations14. Taking this into account yields a total GWP for H2 of 15.6 with the 100-year time horizon and 44 with the 20-year time horizon. These values are in the upper range of the recent estimates of 11 ± 5 for GWP100 and \(33^{+11}_{-13}\) for GWP20 obtained with a detailed model of atmospheric chemistry16. Notably, the consequences of the H2 pulse on CH4 are relatively small in magnitude because most of the additional H2 is oxidized by soil bacteria and not by OH.
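Continuing the sketch above (it reuses rhs, rates, c0, KS, and KD, and therefore also their placeholder values), the modal quantities used in this section — the Jacobian of Eq. (9), its eigenvalues, and the CH4 feedback factor R — can be estimated numerically. With placeholder inputs the printed numbers will not match Table 1 or the R ≈ 1.5 quoted in the text; the snippet only shows how such quantities are obtained from the box model.

```python
# Numerical Jacobian of the box model at the steady state c0, its eigen-decomposition,
# and the CH4 feedback factor R = -lambda_CH4^{-1} / tau_CH4.
# Reuses rhs, rates, c0, KS, KD from the previous sketch (placeholder inputs only).
import numpy as np

def jacobian(f, c, eps=1e-7):
    c = np.asarray(c, dtype=float)
    f0 = np.asarray(f(0.0, c))
    J = np.zeros((c.size, c.size))
    for j in range(c.size):
        dc = c.copy()
        dc[j] += eps * max(abs(c[j]), 1.0)
        J[:, j] = (np.asarray(f(0.0, dc)) - f0) / (dc[j] - c[j])
    return J

J = jacobian(rhs, c0)
eigvals, modes = np.linalg.eig(J)        # modes of the linearized system, Eq. (9)

# Steady-state lifetimes: burden divided by total sink
R1, R2, R3, R4 = rates(c0)
tau_ch4 = c0[0] / (R1 + KS * c0[0])
tau_h2 = c0[1] / (R2 + KD * c0[1])

lam_slow = eigvals.real[np.argmin(np.abs(eigvals.real))]   # slowest (CH4-like) mode
print("eigenvalues:", np.sort(eigvals.real))
print("CH4 feedback factor R ~", -1.0 / (lam_slow * tau_ch4))
print("tau_H2 ~", tau_h2)
```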
The stability of this biotic sink as affected by climate change and anthropic pressure is hence a crucial aspect for the impact of future H2 emissions, as further discussed in the main text.

Critical hydrogen emission intensity

We here derive an explicit expression for the critical H2 emission intensity (HEIcr) for methane mitigation, defined as the emission rate that offsets the H2 replacement of fossil fuels. The expression is derived for an infinitesimal replacement of fossil fuel energy with H2 (dE in ExJ/yr), but well approximates the critical HEI for finite replacement of fossil fuel energy (see Supplementary Fig. 3). As a first step, we differentiate the system (5)–(8) at equilibrium (d[ ⋅ ]/dt = 0) with respect to E. This yields

$$S_{\mathrm{CH_4},E}-k_1[\mathrm{CH_4}]\,[\mathrm{OH}]_E=0, \qquad (10)$$

$$S_{\mathrm{H_2},E}+\alpha k_1[\mathrm{CH_4}]\,[\mathrm{OH}]_E-k_2\left([\mathrm{H_2}][\mathrm{OH}]\right)_E-k_d[\mathrm{H_2}]_E=0, \qquad (11)$$

$$k_1[\mathrm{CH_4}]\,[\mathrm{OH}]_E-k_3\left([\mathrm{CO}][\mathrm{OH}]\right)_E=0, \qquad (12)$$

$$k_1[\mathrm{CH_4}]\,[\mathrm{OH}]_E+k_2\left([\mathrm{H_2}][\mathrm{OH}]\right)_E+k_3\left([\mathrm{CO}][\mathrm{OH}]\right)_E+k_4[\mathrm{X}]\,[\mathrm{OH}]_E=0, \qquad (13)$$

where the subscript E indicates d( ⋅ )/dE. \([\mathrm{CH_4}]_E=0\) because of the definition of the critical H2 emission intensity, which leaves the methane concentration unaltered. We consider that only H2 and CH4 sources vary with E, while SOH,E = SCO,E = 0. These variations can be estimated as

$$S_{\mathrm{H_2},E}=a_{\mathrm{H_2}}\left(-\mathrm{ff}_{\mathrm{H_2}}+\frac{\mathrm{HEI}}{\eta_{\mathrm{H_2}}(1-\mathrm{HEI})}\right), \qquad (14)$$

$$S_{\mathrm{CH_4},E}=a_{\mathrm{CH_4}}\left(-\mathrm{ff}_{\mathrm{CH_4}}+\frac{r\,\mathrm{MEI}}{\eta_{\mathrm{H_2}}(1-\mathrm{HEI})}\right), \qquad (15)$$

where HEI and MEI are the hydrogen and methane emission intensities, respectively (MEI = 0 for green H2); \(\eta_{\mathrm{H_2}}\) is the H2 higher heating value; r is the amount of CH4 needed to produce a unit of blue H2; \(\mathrm{ff}_{\mathrm{CH_4}}\) and \(\mathrm{ff}_{\mathrm{H_2}}\) are the average amounts of CH4 and H2 emitted per ExJ of fossil fuel energy; \(a_{\mathrm{H_2}}\) and \(a_{\mathrm{CH_4}}\) are conversion factors.
Substituting Eqs. (14), (15) into the system (10)–(13) and, after some algebra, one obtains the critical H2 emission intensity

$$\mathrm{HEI}_{\mathrm{cr}}=\frac{A\left(\mathrm{ff}_{\mathrm{CH_4}}\,\eta_{\mathrm{H_2}}-r\,\mathrm{MEI}\right)+B\,\mathrm{ff}_{\mathrm{H_2}}\,\eta_{\mathrm{H_2}}}{A\,\mathrm{ff}_{\mathrm{CH_4}}\,\eta_{\mathrm{H_2}}+B\left(\mathrm{ff}_{\mathrm{H_2}}\,\eta_{\mathrm{H_2}}+1\right)}, \qquad (16)$$

where the dependence on the atmospheric composition is embedded in \(A=k_d\left(k_4[\mathrm{X}]+k_2[\mathrm{H_2}]+2k_1[\mathrm{CH_4}]\right)+k_2[\mathrm{OH}]\left((\alpha+2)\,k_1[\mathrm{CH_4}]+k_4[\mathrm{X}]\right)\) and \(B=8k_1k_2[\mathrm{CH_4}][\mathrm{OH}]\). Parameters have been defined as follows: \(\eta_{\mathrm{H_2}}=0.143\) ExJ per Tg of H2, r = 3.2 kg of CH4 per kg of H2, \(\mathrm{ff}_{\mathrm{CH_4}}=0.225\) Tg of CH4 per ExJ, \(\mathrm{ff}_{\mathrm{H_2}}=0.0225\) Tg of H2 per ExJ, \(a_{\mathrm{CH_4}}=0.43\) ppb/Tg, \(a_{\mathrm{H_2}}=8\,a_{\mathrm{CH_4}}\). To obtain the value of r, we used the estimate of 3.7 kg of natural gas per kg of H2 (ref. 67), which includes feedstock and energy requirements, and we assumed that 85% of natural gas by weight is composed of methane. \(\mathrm{ff}_{\mathrm{CH_4}}\) and \(\mathrm{ff}_{\mathrm{H_2}}\) are obtained as the ratio between the global CH4 and H2 emissions due to fossil fuel use and the global fossil fuel energy.
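For completeness, Eq. (16) is straightforward to evaluate once a chemical state is supplied. In the sketch below the energy-chain parameters are the Methods values just listed, while the rate coefficients and concentrations are the same illustrative placeholders used in the earlier sketches (stand-ins for Table 1), so the printed critical intensities are indicative only and will not exactly reproduce the 9 ± 3% quoted in the main text.

```python
# Critical hydrogen emission intensity, Eq. (16). Energy-chain parameters follow the
# Methods; the chemical state (rate coefficients and concentrations) is a PLACEHOLDER
# stand-in for Table 1, so the output is illustrative rather than the paper's values.

ETA_H2, R_CH4_PER_H2, FF_CH4, FF_H2, ALPHA = 0.143, 3.2, 0.225, 0.0225, 0.37

def hei_critical(mei, k1, k2, k4, kd, ch4, h2, oh, x):
    A = kd * (k4 * x + k2 * h2 + 2 * k1 * ch4) \
        + k2 * oh * ((ALPHA + 2) * k1 * ch4 + k4 * x)
    B = 8 * k1 * k2 * ch4 * oh
    num = A * (FF_CH4 * ETA_H2 - R_CH4_PER_H2 * mei) + B * FF_H2 * ETA_H2
    den = A * FF_CH4 * ETA_H2 + B * (FF_H2 * ETA_H2 + 1.0)
    return num / den

state = dict(k1=0.10, k2=0.14, k4=1.0, kd=0.42, ch4=1.90, h2=0.53, oh=1.0, x=1.0)
for mei in (0.0, 0.002, 0.01, 0.02):   # green H2, then blue H2 with 0.2 / 1 / 2 % leaks
    print(f"MEI = {mei:5.1%}  ->  HEI_cr = {hei_critical(mei, **state):6.1%}")
```

With these placeholder inputs the qualitative behavior matches the main text: the critical intensity is around 10% for green H2, drops toward zero for blue H2 with a ~1% methane leak rate, and becomes negative for larger leak rates.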
Data availability

All data generated during this study are provided in the supplementary dataset file.

Code availability

The code used to generate the results is provided in the supplementary dataset file.

References

IEA. Global Hydrogen Review. Technical Report (International Energy Agency, 2021). Hydrogen Council. Hydrogen for Net-Zero: A Critical Cost-Competitive Energy Vector. Technical Report (European Union, 2021). Wang, D. et al. Impact of a future H2-based road transportation sector on the composition and chemistry of the atmosphere–part 1: tropospheric composition and air quality. Atmos. Chem. Phys. 13, 6117–6137 (2013). van Ruijven, B., Lamarque, J.-F., van Vuuren, D. P., Kram, T. & Eerens, H. Emission scenarios for a global hydrogen economy and the consequences for global air pollution. Glob. Environ. Change 21, 983–994 (2011). Frazer-Nash Consultancy. Fugitive Hydrogen Emissions in a Future Hydrogen Economy. Technical Report (Frazer-Nash Consultancy, 2022). Cooper, J., Dubey, L., Bakkaloglu, S. & Hawkes, A. Hydrogen emissions from the hydrogen value chain-emissions profile and impact to global warming. Sci. Total Environ. 830, 154624 (2022). Derwent, R. G., Collins, W. J., Johnson, C. & Stevenson, D. Transient behaviour of tropospheric ozone precursors in a global 3-D CTM and their indirect greenhouse effects. Climatic Change 49, 463–487 (2001). Tromp, T. K., Shia, R.-L., Allen, M., Eiler, J. M. & Yung, Y. L. Potential environmental impact of a hydrogen economy on the stratosphere. Science 300, 1740–1742 (2003). Schultz, M. G., Diehl, T., Brasseur, G. P. & Zittel, W. Air pollution and climate-forcing impacts of a global hydrogen economy. Science 302, 624–627 (2003). Warwick, N., Bekki, S., Nisbet, E. & Pyle, J. Impact of a hydrogen economy on the stratosphere and troposphere studied in a 2-D model. Geophys. Res. Lett. 31, L05107 (2004). Ehhalt, D. H. & Rohrer, F. The tropospheric cycle of H2: a critical review. Tellus B: Chem. Phys. Meteorol. 61, 500–535 (2009). Zgonnik, V. The occurrence and geoscience of natural hydrogen: a comprehensive review. Earth-Sci. Rev. 203, 103140 (2020). Novelli, P. C. et al. Molecular hydrogen in the troposphere: global distribution and budget. J. Geophys. Res.: Atmospheres 104, 30427–30444 (1999). Paulot, F. et al. Global modeling of hydrogen using GFDL-AM4.1: sensitivity of soil removal and radiative forcing. Int. J. Hydrog. Energy 46, 13446–13460 (2021). Vogel, B., Feck, T., Grooß, J.-U. & Riese, M. Impact of a possible future global hydrogen economy on Arctic stratospheric ozone loss. Energy Environ. Sci. 5, 6445 (2012). Warwick, N. et al. Atmospheric Implications of Increased Hydrogen Use. Technical Report (Department for Business, Energy & Industrial Strategy Policy Paper, 2022). Kirschke, S. et al. Three decades of global methane sources and sinks. Nat. Geosci. 6, 813–823 (2013). Saunois, M. et al. The global methane budget 2000–2017. Earth Syst. Sci. Data 12, 1561–1623 (2020). Derwent, R. G. et al. Global modelling studies of hydrogen and its isotopomers using STOCHEM-CRI: Likely radiative forcing consequences of a future hydrogen economy. Int. J. Hydrog. Energy 45, 9211–9221 (2020). Patterson, J. D. et al. Atmospheric history of H2 over the past century reconstructed from south pole firn air. Geophys. Res. Lett. 47, e2020GL087787 (2020). European Commission. Launch by the United States, the European Union, and Partners of the Global Methane Pledge to Keep 1.5C Within Reach. https://ec.europa.eu/commission/presscorner/detail/en/statement_21_5766. Accessed 2021-11-30 (2021). BP. Statistical Review of World Energy. Technical Report (British Petroleum, 2021). Bond, S., Gül, T., Reimann, S., Buchmann, B. & Wokaun, A. Emissions of anthropogenic hydrogen to the atmosphere during the potential transition to an increasingly H2-intensive economy. Int. J. Hydrog. Energy 36, 1122–1135 (2011). Schwietzke, S. et al. Upward revision of global fossil fuel methane emissions based on isotope database. Nature 538, 88–91 (2016). Jackson, R. B. et al. Increasing anthropogenic methane emissions arise equally from agricultural and fossil fuel sources. Environ. Res. Lett. 15, 071002 (2020). IEA. Global Methane Tracker 2022. Technical Report (International Energy Agency, 2022). Zhang, Y. et al. Quantifying methane emissions from the largest oil-producing basin in the United States from space. Sci. Adv. 6, 5120 (2020). Alvarez, R. A. et al. Assessment of methane emissions from the U.S. oil and gas supply chain. Science 361, 186–188 (2018). Shen, L. et al. Satellite quantification of oil and natural gas methane emissions in the U.S. and Canada including contributions from individual basins. Atmos. Chem. Phys. Discuss. 22, 11203–11215 (2022). MacKay, K. et al. Methane emissions from upstream oil and gas production in Canada are underestimated. Sci. Rep. 11, 1–8 (2021). Lauvaux, T. et al. Global assessment of oil and gas methane ultra-emitters. Science 375, 557–561 (2022). UNEP. An Eye on Methane: International Methane Emissions Observatory. Technical Report (United Nations Environment Program, 2021). IEA.
Net Zero by 2050 - A Roadmap for the Global Energy Sector. Technical Report (International Energy Agency, 2021). IRENA. World Energy Transitions Outlook: 1.5∘ C Pathway. Technical Report (International Renewable Energy Agency, 2021). Forster, P. et al. Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change Chap. 7 (Cambridge University Press, 2021). Howarth, R. W. & Jacobson, M. Z. How green is blue hydrogen? Energy Sci. Eng. 9, 1676–1687 (2021). Naik, V. et al. Preindustrial to present-day changes in tropospheric hydroxyl radical and methane lifetime from the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). Atmos. Chem. Phys. 13, 5277–5298 (2013). Lelieveld, J., Gromov, S., Pozzer, A. & Taraborrelli, D. Global tropospheric hydroxyl distribution, budget, and reactivity. Atmos. Chem. Phys. 16, 12477–12493 (2016). Murray, L. T., Fiore, A. M., Shindell, D. T., Naik, V. & Horowitz, L. W. Large uncertainties in global hydroxyl projections tied to fate of reactive nitrogen and carbon. Proc. Natl Acad. Sci. USA 118, e2115204118 (2021). Warneck, P. Chemistry of the Natural Atmosphere (Elsevier, 1999). Montzka, S. A. et al. Small interannual variability of global atmospheric hydroxyl. Science 331, 67–69 (2011). Ehhalt, D. & Rohrer, F. Deposition velocity of H2: a new algorithm for its dependence on soil moisture and temperature. Tellus B: Chem. Phys. Meteorol. 65, 19904 (2013). Bertagni, M. B., Paulot, F. & Porporato, A. Moisture fluctuations modulate abiotic and biotic limitations of soil H2 uptake. Global Biogeochem. Cycles 35, e2021GB006987 (2021) Bay, S. K. et al. Trace gas oxidizers are widespread and active members of soil microbial communities. Nat. Microbiol. 6, 246–256 (2021). Yashiro, H., Sudo, K., Yonemura, S. & Takigawa, M. The impact of soil uptake on the global distribution of molecular hydrogen: chemical transport model simulation. Atmos. Chem. Phys. 11, 6701–6719 (2011). Ji, M. et al. Atmospheric trace gases support primary production in Antarctic desert surface soil. Nature 552, 400–403 (2017). Osselin, F. et al. Orange hydrogen is the new green. Nat. Geosci. 15, 1–5 (2022). Ocko, I. B. & Hamburg, S. P. Climate consequences of hydrogen emissions. Atmos. Chem. Phys. 22, 9349–9368 (2022). IEA. Driving Down Methane Leaks from the Oil and Gas Industry. Technical Report (International Energy Agency, 2021). Jordan, A. & Steinberg, B. Calibration of atmospheric hydrogen measurements. Atmos. Meas. Tech. 4, 509–521 (2011). Patterson, J. D. & Saltzman, E. S. Diffusivity and solubility of H2 in ice ih: implications for the behavior of H2 in polar ice. J. Geophys. Res.: Atmos. 126, 2020–033840 (2021). Lamarque, J.-F. et al. The atmospheric chemistry and climate model intercomparison project (ACCMIP): overview and description of models, simulations, and climate diagnostics. Geosci. Model Dev. 6, 179–206 (2013). Shindell, D. T. et al. Radiative forcing in the ACCMIP historical and future climate simulations. Atmos. Chem. Phys. 13, 2939–2974 (2013). Prather, M. J. Lifetimes and eigenstates in atmospheric chemistry. Geophys. Res. Lett. 21, 801–804 (1994). Prather, M. J. Time scales in atmospheric chemistry: theory, GWPs for CH4 and CO, and runaway growth. Geophys. Res. Lett. 23, 2597–2600 (1996). Manning, M. R. Characteristic modes of isotopic variations in atmospheric chemistry. Geophys. Res. Lett. 26, 1263–1266 (1999). Prather, M. J. 
Lifetimes and time scales in atmospheric chemistry. Philos. Trans. R. Soc. A: Math., Phys. Eng. Sci. 365, 1705–1726 (2007). Gaubert, B. et al. Chemical feedback from decreasing carbon monoxide emissions. Geophys. Res. Lett. 44, 9985–9995 (2017). Heimann, I. et al. Methane emissions in a chemistry-climate model: Feedbacks and climate response. J. Adv. Modeling Earth Syst. 12, 2019–002019 (2020). Houghton, J. T. et al. Climate Change 2001: The Scientific Basis (The Press Syndicate of the University of Cambridge, 2001). Rhee, T. S., Brenninkmeijer, C. A. M. & Rockmann, T. The overwhelming role of soils in the global atmospheric hydrogen cycle. Atmos. Chem. Phys. 6, 1611–1625 (2006). Prather, M. J., Holmes, C. D. & Hsu, J. Reactive greenhouse gas scenarios: Systematic exploration of uncertainties and the role of atmospheric chemistry. Geophys. Res. Lett. 39, L09803 (2012). Xiao, X. et al. Optimal estimation of the soil uptake rate of molecular hydrogen from the Advanced Global Atmospheric Gases Experiment and other measurements. J. Geophys. Res.: Atmos. https://doi.org/10.1029/2006JD007241 (2007). Zheng, B. et al. Global atmospheric carbon monoxide budget 2000–2017 inferred from multi-species atmospheric inversions. Earth Syst. Sci. Data 11, 1411–1436 (2019). Holmes, C. D. Methane feedback on atmospheric chemistry: methods, models, and mechanisms. J. Adv. Modeling Earth Syst. 10, 1087–1099 (2018). Schmid, P. J. Nonmodal stability theory. Annu. Rev. Fluid Mech. 39, 129–162 (2007). Article ADS MathSciNet MATH Google Scholar Collodi, G., Azzaro, G., Ferrari, N. & Santos, S. Techno-economic evaluation of deploying ccs in SMR based merchant H2 production with NG as feedstock and fuel. Energy Proc. 114, 2690–2712 (2017). Trenberth, K. E. & Smith, L. The mass of the atmosphere: a constraint on global analyses. J. Clim. 18, 864–875 (2005). We acknowledge support from the US National Science Foundation (NSF) grant nos. EAR1331846 and EAR-1338694, the BP through the Carbon Mitigation Initiative (CMI) at Princeton University, and the Moore Foundation. We thank Larry Horowitz for critical reading of the manuscript. The High Meadows Environmental Institute, Princeton University, Guyot Hall, Princeton, 08544, NJ, USA Matteo B. Bertagni & Amilcare Porporato Department of Ecology and Evolutionary Biology, Princeton University, Guyot Hall, Princeton, 08544, NJ, USA Stephen W. Pacala Geophysical Fluid Dynamics Laboratory, National Oceanic and Atmospheric Administration, 201 Forrestal Rd, Princeton, 08540, NJ, USA Fabien Paulot Department of Civil and Environmental Engineering, Princeton University, Guyot Hall, Princeton, 08544, NJ, USA Amilcare Porporato Matteo B. Bertagni M.B.B., S.W.P., and A.P. conceptualized the work. M.B.B. developed the analytical model with contributions from F.P., analyzed the results and prepared the manuscript. A.P., F.P., and S.W.P. supervised the work and edited the manuscript. Correspondence to Matteo B. Bertagni. Nature Communications thanks Jasmin Cooper and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
Description of Additional Supplementary Files Supplementary Dataset 1 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Bertagni, M.B., Pacala, S.W., Paulot, F. et al. Risk of the hydrogen economy for atmospheric methane. Nat Commun 13, 7706 (2022). https://doi.org/10.1038/s41467-022-35419-7 By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate. Editors' Highlights Nature Communications (Nat Commun) ISSN 2041-1723 (online)
CommonCrawl
Tsallis Statistics, Statistical Mechanics for Non-extensive Systems and Long-Range Interactions

A standard assumption of statistical mechanics is that quantities like energy are "extensive" variables, meaning that the total energy of the system is proportional to the system size; similarly, the entropy is also supposed to be extensive. Generally, at least for the energy, this is justified by appealing to the short-range nature of the interactions which hold matter together, form chemical bonds, etc. But suppose one deals with long-range interactions, most prominently gravity; one can then find that energy is not extensive. This makes the life of the statistical mechanic much harder.

Constantino Tsallis is a physicist who came up with a supposed solution, based on the idea of maximum entropy. One popular way to derive the (canonical) equilibrium probability distribution is the following. One purports to know the average values of some quantities, such as the energy of the system, the number of molecules, the volume it occupies, etc. One then searches for the probability distribution which maximizes the entropy, subject to the constraint that it give the right average values for your supposed givens. Through the magic of Lagrange multipliers, the entropy-maximizing distribution can be shown to have the right, exponential, form, and the Lagrange multipliers which go along with your average-value constraints turn out to be the "intensive" variables paired with (or "conjugate to") the extensive ones whose means are constrained (energy <=> temperature, volume <=> pressure, molecular number <=> chemical potential, etc.). But, as I said, the entropy is an extensive quantity.

What Tsallis proposed is to replace the usual (Gibbs) entropy with a new, non-extensive quantity, now commonly called the Tsallis entropy, and maximize that, subject to constraints. There is actually a whole infinite family of Tsallis entropies, indexed by a real-valued parameter q, which supposedly quantifies the degree of departure from extensivity (you get the usual entropy back again when q = 1). One can then grind through and show that many of the classical results of statistical mechanics can be translated into the new setting. What has really caused this framework to take off, however, is that while normal entropy-maximization gives you exponential, Boltzmann distributions, Tsallis statistics give you power-law, Pareto distributions, and everyone loves a power-law. (Strictly speaking, Tsallis distributions are type II generalized Pareto distributions, with power-law tails.) Today you'll find physicists applying Tsallis statistics to nearly anything with a heavy right tail.

I have to say I don't buy this at all. Leaving to one side my skepticism about the normal maximum entropy story, at least as it's usually told (e.g. by E. T. Jaynes), there are a number of features which make me deeply suspicious of Tsallis statistics.

It's simply not true that one maximizes the Tsallis entropy subject to constraints on the mean energy \( \langle E \rangle =\sum_{i}{p_i E_i} \). Rather, to get things to work out, you have to fix the value of a "generalized mean" energy, \( { \langle E \rangle }_{q} = \sum_{i}{p_i^q E_i} / \sum_{i}{p^q_i} \). (This can be interpreted as replacing the usual average, an expectation taken with respect to the actual probability distribution, by an expectation taken with respect to a new, "escort" probability distribution.)
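For reference, the quantity at issue is usually written (a standard definition, stated here for completeness rather than quoted from this note) as
\[ S_q[p] \;=\; \frac{1}{q-1}\Bigl(1 - \sum_i p_i^q\Bigr), \qquad \lim_{q \to 1} S_q[p] \;=\; -\sum_i p_i \log p_i , \]
so the ordinary Gibbs-Shannon entropy is recovered in the \( q \to 1 \) limit, while the escort average above supplies the constraint under which \( S_q \) gets maximized.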
I have yet to encounter anyone who can explain why such generalized averages should be either physically or probabilistically natural; the usual answer I get is "OK, yes, it's weird, but it works, doesn't it?"

There is no information-theoretic justification for the Tsallis entropy, unlike the usual Gibbs entropy. The Tsallis form is, however, a kind of low-order truncation of the Rényi entropy, which does have information-theoretic interest. (The Tsallis form has been independently rediscovered many times in the literature, going back to the 1960s, usually starting from the Rényi entropy. A brief review of the "labyrinthic history of the entropies" can be found in one of Tsallis's papers, cond-mat/0010150.) Maximizing the Rényi entropy under mean-value constraints leads to different distributions than maximizing the Tsallis entropy.

I have pretty severe doubts about the backing story here, about long-range interactions leading to a non-extensive form for the entropy, particularly when, in derivations which begin with such a story, I often see people blithely factoring the probability that a system is in some global state into the product of the probabilities that its components are in various states, i.e., assuming independent sub-systems.

There are alternative, non-max-ent derivations of the usual statistical-mechanical distributions; such derivations do not seem forthcoming for Tsallis statistics. In particular, large deviations arguments, which essentially show how to get such distributions as emergent, probabilistic consequences of individual-level interactions, do not seem to ever lead to Tsallis statistics, even when one has the kind of long-range interactions which, supposedly, Tsallis statistics ought to handle.

There is no empirical evidence that Tsallis statistics correctly gives the microscopic energy distribution for any known system.

Zanette and Montemurro have shown that you can get any distribution you like out of the Tsallis recipe, simply by changing the function whose generalized average you take as your given. The usual power-law prescription only holds if you constrain either \( x \) or \( x^2 \), but one of the more "successful" applications requires constraining the generalized mean of \( x^{2\alpha}/2 - c\mathrm{sgn}{x}({|x|}^{\alpha} - {|x|}^{3\alpha}/3) \), with c and \( \alpha \) as adjustable parameters! (In fairness, I should point out that if you're willing to impose sufficiently weird constraints, you can generate arbitrary distributions from the usual max. ent. procedure, too; this is one of the reasons why I don't put much faith in that procedure.)

I think the extraordinary success of what is, in the end, a slightly dodgy recipe for generating power-laws illustrates some important aspects, indeed unfortunate weaknesses, in the social and intellectual organization of "the sciences of complexity". But that rant will have to wait for my book on The Genealogy of Complexity, which, prudently, means waiting until I'm safely tenured.

I should also discuss the "superstatistics" approach here, which tries to generate non-Boltzmann statistics as mixtures of Boltzmann distributions, physically justified by appealing to fluctuating intensive variables, such as temperature. I will only remark that the superstatistics approach severs all connections between the use of these distributions and non-extensivity and long-range interactions; and that results in the statistical literature on getting generalized Pareto distributions from mixtures of exponentials go back to 1952 at least.
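To make that last remark concrete (a standard calculation, not something specific to this note): if \( x \) is exponentially distributed with rate \( \beta \), and \( \beta \) itself fluctuates with a Gamma\((a,b)\) density, the marginal is already a power law,
\[ p(x) \;=\; \int_0^\infty \beta e^{-\beta x}\,\frac{b^a}{\Gamma(a)}\,\beta^{a-1}e^{-b\beta}\,d\beta \;=\; \frac{a}{b}\Bigl(1+\frac{x}{b}\Bigr)^{-(a+1)}, \]
i.e. a type II generalized Pareto (Lomax) distribution, with no appeal to non-extensivity anywhere.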
Finally, it has come to my attention that some people are citing this notebook as though it had some claim to authority. Fond though I am of my own opinions, this seems to me to be deeply wrong. The validity of Tsallis statistics, as a scientific theory, ought to be settled in the usual way, by means of the peer-reviewed scientific literature, subject to all its usual conventions and controls. It's obvious from the foregoing that I have pretty strong beliefs in how that debate ought to go, and (this may not be so clear) enough faith in the scientific community that I think, in the long run, it will go that way, but no one should confuse my opinion with a scientific finding. For myself, this page is a way to organize my own thoughts; for everyone else, it's either entertainment, or at best an opinionated collection of pointers to the genuine discussion.
Recommended, big picture: Tsallis & co. maintain a pretty comprehensive and ever-growing bibliography on Tsallis statistics. This includes replies to many of the papers I list here. Julien Barré, Freddy Bouchet, Thierry Dauxois and Stefano Ruffo, "Large deviation techniques applied to systems with long-range interactions", cond-mat/0406358 = Journal of Statistical Physics 119 (2005): 677--713 [What large deviation results for long-range interactions look like] Christian Beck, "Superstatistics: Recent developments and applications", cond-mat/0502306 Freddy Bouchet and Thierry Dauxois, "Prediction of anomalous diffusion and algebraic relaxations for long-range interacting systems, using classical statistical mechanics", Physical Review E 72 (2005): 045103 = cond-mat/0407703 Thierry Dauxois, "Non-Gaussian distributions under scrutiny", Journal of Statistical Mechanics (2007) N08001 Peter Grassberger, "Temporal scaling at Feigenbaum points and non-extensive thermodynamics", cond-mat/0508110 = Physical Review Letters 95 (2005): 140601 [I can't resist quoting the abstract in full, if only because I enjoy Prof. Grassberger's no-quarter-asked-or-given tone: "We show that recent claims for the non-stationary behaviour of the logistic map at the Feigenbaum point based on non-extensive thermodynamics are either wrong or can be easily deduced from well-known properties of the Feigenbaum attractor. In particular, there is no generalized Pesin identity for this system, the existing 'proofs' being based on misconceptions about basic notions of ergodic theory. In deriving several new scaling laws of the Feigenbaum attractor, thorough use is made of its detailed structure, but there is no obvious connection to non-extensive thermodynamics." One point made here (but passed over in the abstract) is that there are nearly as many estimates of the "right" value of the non-extensivity parameter q at the period-doubling accumulation point as there are papers on the system. This tends to reduce one's confidence that any of them is a physically meaningful parameter.] Brian R. La Cour and William C. Schieve, "A Comment on the Tsallis Maximum Entropy Principle", Physical Review E 62 (2000): 7494--7496, cond-mat/0009216 Michael Nauenberg, "Critique of q-entropy for thermal statistics", Physical Review E 67 (2003): 036114 [From the abstract: "[I]t is shown here that the joint entropy for systems having different values of q is not defined in this formalism, and consequently fundamental thermodynamic concepts such as temperature and heat exchange cannot be considered for such systems.
Moreover, for q ≠ 1 the probability distribution for weakly interacting systems does not factor into the product of the probability distribution for the separate systems, leading to spurious correlations and other unphysical consequences, e.g., nonextensive energy, that have been ignored in various applications given in the literature." That the probabilities for sub-systems do not factor is, I think, especially devastating, because almost all of the work on the subject assumes that it does. See also comment by Tsallis, cond-mat/0305091, and reply by Nauenberg, cond-mat/0305365, which I believe to be correct.] Hugo Touchette, "Comment on 'Towards a large deviation theory for strongly correlated systems' ", arxiv:1209.2611 Damién H. Zanette and Marcelo A. Montemurro "A note on non-thermodynamical applications of non-extensive statistics", cond-mat/0305070 = Physics Letters A 324 (2004): 383--387 [An amusing and quite conclusive assault, culminating in a demonstration that you can use the non-extensive formalism to "derive" any probability distribution whatsoever.] "Thermal measurement of stationary nonequilibrium systems: A test for generalized thermostatistics", Physics Letters A 316 (2003): 184--189 = cond-mat/0212327 [And it doesn't even work for thermodynamic systems.]
Recommended, close-ups: Freddy Bouchet, Thierry Dauxois, Stefano Ruffo, "Controversy about the applicability of Tsallis statistics to the HMF model", cond-mat/0605445 = Europhysics News 37 (2006): 9--10 G. Baris Bagci, Thomas Oikonomou, "Do Tsallis distributions really originate from the finite baths?", arxiv:1305.2493 A. G. Bashkirov, "Comment on 'Stability of Tsallis entropy and instabilities of Rényi and normalized Tsallis entropies: A basis for q-exponential distributions'," Physical Review E 72 (2005): 028101 [There is also a reply by S. Abe, the author of the original article, which, predictably, I find unconvincing: Physical Review E 72 (2005): 028102.] Alice M. Crawford, Nicolas Mordant, Andy M. Reynolds, Eberhard Bodenschatz, "Comment on 'Dynamical Foundations of Nonextensive Statistical Mechanics'", physics/0212080 H. J. Hilhorst "Central limit theorems for correlated variables: some critical remarks", Brazilian Journal of Physics 39 (2000): 371--379, arxiv:0901.1249 "Note on a q-modified central limit theorem", Journal of Statistical Mechanics (2010): P10023, arxiv:1008.4259 H. J. Hilhorst and G. Schehr, "A note on q-Gaussians and non-Gaussians in statistical mechanics", Journal of Statistical Mechanics (2007) P06003 [Analytical results on the limiting distributions of certain sums of correlated random variables, supposed to follow "q-Gaussians", but not actually doing so. It strikes me as extraordinary that no one in this literature, on either side, pays any attention to actual results in probability theory about generalizations of the central limit theorem; one searches these bibliographies in vain for names like Lévy and Rosenblatt.] B. H. Lavenda and J. Dunning-Davies, "Additive Entropies of degree-q and the Tsallis Entropy", physics/0310117
Modesty forbids me to recommend: CRS, "Maximum Likelihood Estimation for q-Exponential (Tsallis) Distributions", math.ST/0701854 [If you have to use these things, you really should estimate their parameters this way, and not try to fit curves to the sample distribution.]
To read: Andrea Antoniazzi, Francesco Califano, Duccio Fanelli, and Stefano Ruffo, "Exploring the Thermodynamic Limit of Hamiltonian Models: Convergence to the Vlasov Equation", Physical Review Letters 98 (2007): 150602 R. Bachelard, C. Chandre, D. Fanelli, X. Leoncini and S. Ruffo, "Abundance of Regular Orbits and Nonequilibrium Phase Transitions in the Thermodynamic Limit for Long-Range Systems", Physical Review Letters 101 (2008): 260603 Fulvio Baldovin, Pierre-Henri Chavanis and Enzo Orlandini, "Microcanonical quasistationarity of long-range interacting systems in contact with a heat bath", Physical Review E 79 (2009): 011102 Fulvio Baldovin and Enzo Orlandini "Hamiltonian Dynamics Reveals the Existence of Quasistationary States for Long-Range Systems in Contact with a Reservoir", Physical Review Letters 96 (2006): 240602 = cond-mat/0603383 ["We introduce a Hamiltonian dynamics for the description of long-range interacting systems in contact with a thermal bath (i.e., in the canonical ensemble). The dynamics confirms statistical mechanics equilibrium predictions for the Hamiltonian mean field model and the equilibrium ensemble equivalence. We find that long-lasting quasistationary states persist in the presence of the interaction with the environment. Our results indicate that quasistationary states are indeed reproducible in real physical experiments."] "Quasi-stationary states in long-range interacting systems are incomplete equilibrium states", cond-mat/0603659 = Physical Review Letters 97 (2006): 100601 ["Despite the presence of an anomalous single-particle velocity distribution, we find that ordinary Central Limit Theorem leads to the Boltzmann factor in Gibbs' $\Gamma$-space. We identify the non-equilibrium sub-manifold of $\Gamma$-space responsible for the anomalous behavior and show that by restricting the Boltzmann-Gibbs approach to such sub-manifold we obtain the statistical mechanics of the quasi-stationary states."] Christian Beck, "Generalized information and entropy measures in physics", arxiv:0902.1235 Christian Beck, Ezechiel G. D. Cohen and Harry L. Swinney, "From time series to superstatistics", Physical Review E 72 (2005): 056133 Freddy Bouchet, "Stochastic process of equilibrium fluctuations of a system with long-range interactions", Physical Review E 70 (2004): 036113 F. Bouchet and J. Barré, "Classification of Phase Transitions and Ensemble Inequivalence, in Systems with Long Range Interactions", Journal of Statistical Physics 118 (2005): 1073--1105 Pierre-Henri Chavanis "Dynamics and thermodynamics of systems with long-range interactions: interpretation of the different functionals", arxiv:0904.2729 "Statistical mechanics of geophysical turbulence: application to jovian flows and Jupiter's great red spot", Physica D 200 (2005): 257--272 [Listed here because this is (judging by the abstract) an instance of Chavanis's more general non-Tsallisite (non-Tsallisian?) approach to statistical mechanics with long-range interactions] "Generalized Fokker-Planck equations and effective thermodynamics", cond-mat/0504716 = Physica A 340 (2004): 57 "Quasi-stationary states and incomplete violent relaxation in systems with long-range interactions", cond-mat/0509726 "Lynden-Bell and Tsallis distributions for the HMF model", cond-mat/0604234 Pierre-Henri Chavanis, C. Rosier and C.
Sire, "Thermodynamics of self-gravitating systems," cond-mat/0107345 Thierry Dauxois, Stefano Ruffo, Ennio Arimondo and Martin Wilkens (eds.), Dynamics and Thermodynamics of Systems With Long Range Interactions [Blurb] Davide Ferrari and Yuhong Yang, "Maximum Lq-likelihood estimation", Annals of Statistics 38 (2010): 753--783 V. Garcia-Morales, J. Pellicer, "Statistical mechanics and thermodynamics of complex systems", math-ph/0304013 Toshiyuki Gotoh, Robert H. Kraichnan, "Turbulence and Tsallis Statistics", nlin.CD/0305040 D. H. E. Gross, "Non-extensive Hamiltonian systems follow Boltzmann's principle not Tsallis statistics," cond-mat/0106496 Shamik Gupta, David Mukamel, "Slow relaxation in long-range interacting systems with stochastic dynamics", arxiv:1006.0233 Rudolf Hanel and Stefan Thurner, "On the Derivation of power-law distributions within standard statistical mechanics", cond-mat/0412016 = Physica A 351 (2005): 260--268 Petr Jizba and Toshihico Arimitsu, "The world according to Rényi: Thermodynamics of multifractal systems," cond-mat/0207707 Ramandeep S. Johal, Antoni Planes, and Eduard Vives, "Equivalence of nonadditive entropies and nonadditive energies in long range interacting systems under macroscopic equilibrium", cond-mat/0503329 T. Kodama, H.-T. Elze, C. E. Aguiar, T. Koide, "Dynamical Correlations as Origin of Nonextensive Entropy", cond-mat/0406732 Hiroko Koyama, Tetsuro Konishi, and Stefano Ruffo, "Clusters die hard: Time-correlated excitation in the Hamiltonian Mean Field model", nlin.CD/0606041 ["The Hamiltonian Mean Field (HMF) model has a low-energy phase where $N$ particles are trapped inside a cluster. ... each particle can be identified as a high-energy particle (HEP) or a low-energy particle (LEP), depending on whether its energy is above or below the separatrix energy. We then define the trapping ratio as the ratio of the number of LEP to the total number of particles and the ``fully-clustered'' and ``excited'' dynamical states as having either no HEP or at least one HEP. We analytically compute the phase-space average of the trapping ratio by using the Boltzmann-Gibbs stable stationary solution of the Vlasov equation associated with the $N \to \infty$ limit of the HMF model. The same quantity, obtained numerically as a time average, is shown to be in very good agreement with the analytical calculation. ... the distribution of the lifetime of the ``fully-clustered'' state obeys a power law. This means that clusters die hard, and that the excitation of a particle from the cluster is not a Poisson process and might be controlled by some type of collective motion with long memory. Such behavior should not be specific of the HMF model and appear also in systems where {\it itinerancy} among different ``quasi-stationary'' states has been observed. ... "] Bernard H. Lavenda "Fundamental inconsistencies of 'superstatistics'", cond-mat/0408485 "Information and coding discrimination of pseudo-additive entropies (PAE)", cond-mat/0403591 Massimo Marino, "Power-law distributions and equilibrium thermodynamics", cond-mat/0605644 [Makes the interesting claims that if you want a consistent thermodynamics with power-law distributions, then the entropy is uniquely determined to be the Rényi entropy, not the Tsallis entropy] David Mukamel, "Statistical Mechanics of systems with long range interactions", arxiv:0811.3120 D. Mukamel, S. Ruffo and N. 
Schreiber, "Breaking of ergodicity and long relaxation times in systems with long-range interactions", cond-mat/0508604 Jan Naudts, "Parameter estimation in nonextensive thermostatistics", cond-mat/0509796 A. S. Parvana and T.S. Biró, "Extensive Rényi statistics from non-extensive entropy", Physics Letters A 340 (2005): 375--387 Aurelio Patelli, Shamik Gupta, Cesare Nardini, and Stefano Ruffo, "Linear response theory for long-range interacting systems in quasistationary states", Physical Review E 85 (2012): 021133 Daniel Pfenniger, "Virial statistical description of non-extensive hierarchical systems", cond-mat/0605665 Alessandro Pluchino, Vito Latora and Andrea Rapisarda, "Dynamics and Thermodynamics of a model with long-range interactions", cond-mat/0410213 S. M. Duarte Queirós and C. Tsallis, "Bridging a paradigmatic financial model and nonextensive entropy", Europhysics Letters 69 (2005): 893--899 [Approximation of ARCH model using Tsallis entropies. Thanks to Nick Watkins for bringing this to my attention.] M. S. Reis, V. S. Amaral, R. S. Sarthour and I. S. Oliveira, "Experimental determination of the non-extensive entropic parameter $q$", cond-mat/0512208 T. M. Rocha Filho, A. Figueiredo, and M. A. Amato, "Entropy of Classical Systems with Long-Range Interactions", Physical Review Letters 95 (2005): 190601 [From the abstract: "We discuss the form of the entropy for classical Hamiltonian systems with long-range interaction using the Vlasov equation which describes the dynamics of a N particle [as N goes to infinity]. ... We show that the stationary states correspond to [extrema] of the Boltzmann-Gibbs entropy, and their stability is obtained from the condition that this extremum is a maximum. As a consequence, the entropy is a function of an infinite set of Lagrange multipliers that depend on the initial condition."] Stefano Ruffo, "Equilibrium and nonequilibrium properties of systems with long-range interactions", European Physical Journal B 64 (2008): 355--363, arxiv:0711/1173 Previous versions: 2007-01-29 23:22; first version several years older (2003? earlier?) permanent link for this note RSS feed for this note Notebooks :
13. A horizontal line through y = 3. 14. For standard form, students might say it is easier to find the intercepts to graph the line. However, answers could vary. Some might like the consistency of doing the problems in the same manner, regardless of what form it is in. 15. Because this is a vertical line, it cannot be changed into slope ...

Linear equations considered together in this fashion are said to form a system of equations. As in the above example, the solution of a system of linear equations can be a single ordered pair. Notice that when a system is inconsistent, the slopes of the lines are the same but the y-intercepts are different.

For a multi_class problem, if multi_class is set to be "multinomial" the softmax function is used to find the predicted probability of each class. The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

When it comes to line equations, one very common and important property is the slope of the line. The slope of a line, generally represented by the letter m, is the gradient or incline of the line, and it is an important concept to understand in algebra and geometry. The higher the slope of a line, the steeper the line is.

Jan 18, 2020 · Find an equation of a line perpendicular to the given line that contains the given point. Write the equation in slope-intercept form. a. line 3x+y=5, point (-1,2) b. line y=3x+1, point (6,1)

The idea is that we have a linear combination, say a1 f1(x) + ... + aK fK(x). One then determines the a1, ..., aK that minimize the variance (the sum of squares of the errors) by calculus and linear algebra. Find the matrix equation that the best-fit coefficients (a1, ..., aK) must satisfy (the standard answer is sketched at the end of this excerpt). Exercise 3.6.

These are two lines with slope -a/b and -d/e, respectively. Let's define the determinant of a 2x2 system of linear equations to be the determinant of the matrix of coefficients A of the system. But as we have seen, the slopes of these lines are equal when the determinant of the coefficient matrix is zero.

To find the point-slope form manually, follow the steps below: write down the coordinates for x1, y1 and the slope; write down the point-slope formula; substitute the values in the formula and calculate the equation of the line. Examples: Find the equation of the line that passes through (-3, 1) with slope of 2. Solution: y = x - 3 y = 6x + 3 y = -3x + 3 y = -3 Find the slope of a line perpendicular to each given line.

To derive the equation of a function from a table of values (or a curve), there are several mathematical methods. Method 1: detect remarkable solutions, like remarkable identities; it is sometimes easy to find the equation by analyzing the values (by comparing two successive values or by identifying certain...

Gradient of a line perpendicular to the line 3x - 2y = 5 is ... Find the slope of any line perpendicular to line L.
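For the least-squares exercise quoted above (a standard result, added here for completeness): writing A for the design matrix with entries A_ij = f_j(x_i) and y for the vector of observations, the best-fit coefficients a = (a1, ..., aK) satisfy the normal equations
$$ A^{T} A \, a = A^{T} y . $$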
3.3B Slope and Graphs of Linear Equations, August 08, 2011. EXAMPLE: Given the line 2x - 3y = 6, a) Write an equation of a line parallel to that line through the origin, (0,0). (The form will be y = mx + 0) b) Write an equation of a line perpendicular to that line through the origin, (0,0). (The form will be y = mx + 0) c) Graph the three lines.

Free perpendicular line calculator - find the equation of a perpendicular line step-by-step.

State if the lines are Parallel, Perpendicular, or Oblique. 8) 6x - 12y = 24 and 4x + 2y = 8; 9) 4x + y = 5 and 3x + 12y = -6; 10) -2x + 7y = 14 and 4x = 14y. 11) Write the equation of a vertical line through (-3, 0). 12) Write the equation of a horizontal line through (0, 8). Solve the systems of equations.

Line and surface integrals: Solutions. Example 5.1: Find the work done by the force F(x, y) = x^2 i − xy j in moving a particle along the curve which runs from ... Example 5.7: Find the area of the ellipse cut on the plane 2x + 3y + 6z = 60 by the circular cylinder x^2 + y^2 = 2x. Solution: The surface S lies in the plane...

Using this relationship, find the moment of inertia of a thin uniform round disc of radius R and mass m relative to the axis coinciding with one of its diameters. Find the angular velocity of the rod as a function of its rotation angle φ counted relative to the initial position.

Dec 28, 2018 · Now we construct another line parallel to PQ passing through the origin. This line will have slope `B/A`, because it is perpendicular to DE. Let's call it line RS. We extend it to the origin `(0, 0)`. We will find the distance RS, which I hope you agree is equal to the distance PQ that we wanted at the start.

Purplemath. There is one other consideration for straight-line equations: finding parallel and perpendicular lines. Here is a common format for exercises on this topic: Given the line 2x – 3y = 9 and the point (4, –1), find lines, in slope-intercept form, through the given point such that the two lines are, respectively, parallel and perpendicular to the given line. (The slope-intercept form of the equation of a line states that y = mx + b. To find the slope-intercept form of the equation 6x − 2y − 4 = 0, you must isolate y on the left side of the equation, as follows: 6x − 2y − 4 = 0, −2y − 4 = −6x, −2y = −6x + 4, y = 3x − 2.)

Example 2 of the Slope of a Line. The slope of a line through the points (3, 4) and (5, 1) is $$- \frac{3}{2}$$ because every time that the line goes down by 3 (the change in y, or the rise) the line moves to the right (the run) by 2.

The slope perpendicular to that would be the negative reciprocal of -1, which is 1. I know that slope is rise over run, but how do I find the slope of a line with data points with a positive correlation that vary on the line? Because there are points that the line doesn't go through. The data points represent...
A line parallel to Vector (p,q,r) through Point (a,b,c) is expressed with ... To find a step-by-step solution for the distance between two lines.

Finding Equation of a Line Given the Slope and a Point on It [08/24/2006]: How do I find the equation of the line that goes through the point (2,-1) and has a slope of 1/4? Finding the Equation of a Line [02/17/2005]: Find the equation of the line with an x intercept of 6 and perpendicular to the graph of x - 3y = 6.

Find the equation of a line passing through the point of intersection of the lines `5x+2y-11=0` and `3x-y+11=0` and perpendicular to `4x-3y+2=0`. Solution: The point of intersection of the lines can be obtained by solving the given equations.

• Find, from their equations, lines that are parallel and perpendicular. • Identify and use intercepts. • Why is looking for slopes of pairs of parallel and perpendicular lines relevant? Demonstrates limited understanding of the link between the slope and the form of the equation of a straight line.

Parallel lines have the same slope. Perpendicular lines have slopes that are the negated reciprocal of one another (for example, the negated reciprocal of 3 is -(1/3); also, the negated reciprocal of -(22/7) is 7/22). 3x + y = 8 and 2y + 6x = 1. I first turn each into y = mx + b (slope-intercept) form: 3x + y = 8 gives y = -3x + 8, and 2y + 6x = 1 gives 2y = -6x + 1, so y = -3x + 0.5.

Example 2. Find the equation for the tangent line to f at a. There are no numbers, but don't panic. This will work the same way as all the other problems. f'(2) is the slope of the tangent line to f at 2. Since we have two points on the tangent line, we can find its slope.

The slope of the line perpendicular to 2x + 3y = 9: the equation of a line in slope-intercept form is y = mx + b.

Bell Work. Using the point-slope form, find the equation of the line with the given conditions. Express your answer in y = mx + b. Having a slope of 3 and passing through the point (4, 5).

Find the slope of a line perpendicular to the line y = 2x – 6. The given line is written in y = mx + b form, with m = 2 and b = -6. The slope is 2. Identify the slope of the given line. Answer: the slope of the perpendicular line is ... To find the slope of a perpendicular line, find the reciprocal, and then find the opposite of this ...

Apr 25, 2017 · To construct a vector that is perpendicular to another given vector, you can use techniques based on the dot-product and cross-product of vectors. The dot-product of the vectors A = (a1, a2, a3) and B = (b1, b2, b3) is equal to the sum of the products of the corresponding components: A∙B = a1*b1 + a2*b2 + a3*b3.

Shows how to find the perpendicular distance from a point to a line, and a proof of the formula.
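A minimal sketch of that point-to-line distance formula (standard result; the helper function below is my own illustration, not taken from any of the quoted sources):

```python
import math

def point_line_distance(a, b, c, x0, y0):
    """Perpendicular (i.e. shortest) distance from the point (x0, y0)
    to the line a*x + b*y + c = 0."""
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)

# Example: distance from (4, -1) to the line 2x - 3y - 9 = 0
print(point_line_distance(2, -3, -9, 4, -1))  # about 0.555
```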
(BTW - we don't really need to say 'perpendicular' because the distance from a point to a line always means the shortest distance.) This line will have slope `B/A`, because it is perpendicular to DE.

1. Find the equation of a sphere if one of its diameters has end points (1, 0, 5) and (5, −4, 7). Solution: ... (b) the line passing through the origin and perpendicular to the plane 2x − 4y = 9. Solution: Perpendicular to the plane ⇒ parallel to the normal vector n = (2, −4, 0). Hence ...

3-4 study guide and intervention, slope-intercept form, continued: 3. y = 3x, 6x - 2y = 7, 3y = 9x - 1 all are parallel. Write an equation in slope-intercept form for the line that passes through the given point and is perpendicular to the graph of each equation. 1. For one line to be perpendicular to another, the relationship between their slopes has to be negative reciprocal, so if the slope of one line is m ...

Learn and revise how to plot coordinates and create straight line graphs to show the relationship between two variables with GCSE Bitesize Edexcel Maths. To find the perpendicular gradient, find the number which will multiply by 2 to give -1. This is the negative reciprocal of the gradient.

To find: The equation of the line for the given conditions. Explanation of Solution. Given: The required line is passing through the point (−1,−2) and parallel to 2x+5y+8=0. Property used: Two lines with slopes m1 and m2 are perpendicular if and only if m1m2 = −1, i.e. their slopes are negative...

Volume of a pyramid. Look carefully at the pyramid shown below. The volume of a pyramid can be computed as shown. We will start with a pyramid that has a square as the base. where m is the slope and b is the y-intercept.

Deciding Whether Lines are Perpendicular. Example 2: Decide whether the lines are perpendicular. line s: 3x − 2y = 1, line t: 6x + 9y = 3. Solution: Rewrite each equation in slope-intercept form to find the slope. line s: 3x − 2y = 1, slope ...; line t: 6x + 9y = 3, slope ...

12. Find the slope and y intercept of the line whose equation is 4y + 8x = 7. 13. Find the equation of the line that is 3/2 units away from the origin, if the perpendicular from the line to the origin forms an angle of 210° from the positive side of the X axis. 14. Find the equation of the line through (2,3) and perpendicular to 3x - 2y = 7. 15. Find ...

Lettered lines also run parallel with one another, and are perpendicular to (at a right-angle to) the ... The plan below shows part of the floor of an office building. The perpendicular gridlines intersect at ... 6.2 Find words and expressions in B opposite with the following meanings. One question has two...

The point-slope form of a line is y − y1 = m(x − x1), where m is the slope and (x1, y1) is a point on the line. Here, m = ... and (x1, y1) = (0, −3). Graph (0, −3). Use the slope to find another point 5 units up and 11 units right. Then draw a line through the two points. ANSWER: ... Write an equation in point-slope form of the line having the given ...

Latitude lines run east-west and are parallel to each other. If you go north, latitude values increase. Finally, latitude values (Y-values) range between -90 and +90 degrees. But longitude lines run north-south. They converge at the poles. And its X-coordinates are between -180 and +180 degrees.
To find slope from a graph, draw the triangle (you need to choose two points on the line first). To find slope from an equation, solve for y first; the slope is the coefficient of x. Parallel lines: they have the same slope. Perpendicular lines: slopes are opposites and reciprocals of each other. 26 Sketching Lines. To sketch a line you need ...

The given equation is 2y + 6x = 24. The slope of this line is m = -(coefficient of x)/(coefficient of y) = -6/2 = -3. The slope of a line perpendicular to this line = 1/3.

Solution for slope of a line parallel to each given line. 1) y = 2x + 4, 2) y = 5, 3) y = 4x - 5, 4) y = -5, 5) x - y = 4, 6) 6x - 5y = 20, 7) 7x + y = -2, 8) 3 + 4y = -8. Find the slope…

Find the coordinates of the foot of the perpendicular and the perpendicular distance of the point P(3,2,1) from the plane `2x-y+z+1=0.` Find also, the image ...

Theorem: Every first degree equation in x, y represents a straight line. Find the slope of a line when two points having coordinates are given? Mathematically, a line can be represented by a linear equation, that is, an equation of degree one. The tangent of the angle that a line makes with the positive x-axis in the counter-clockwise direction is defined to be the slope of the line. Form of an equation of a line parallel or perpendicular to a given line.
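Pulling together the rule these excerpts keep using (negative reciprocal slopes; the helper functions are my own sketch, assuming the line is given as a*x + b*y = c with b ≠ 0):

```python
from fractions import Fraction

def slope_from_standard_form(a, b):
    """Slope of the line a*x + b*y = c (requires b != 0): m = -a/b."""
    return Fraction(-a, b)

def perpendicular_slope(m):
    """Slope of any perpendicular line: the negative reciprocal of m."""
    return -1 / m

m = slope_from_standard_form(6, 2)   # the line 2y + 6x = 24 from the worked answer above
print(m, perpendicular_slope(m))     # -3 and 1/3, matching that answer
```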
Uniqueness for Keller-Segel-type chemotaxis models
José Antonio Carrillo 1, Stefano Lisini 2 and Edoardo Mainini 3
1. Department of Mathematics, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
2. Università degli Studi di Pavia, Dipartimento di Matematica "F. Casorati", via Ferrata 1, 27100 Pavia
3. Dipartimento di Ingegneria meccanica, energetica, gestionale e dei trasporti (DIME), Università degli Studi di Genova, P.le Kennedy 1, 16129 Genova, Italy
Discrete & Continuous Dynamical Systems - A, April 2014, 34(4): 1319-1338. doi: 10.3934/dcds.2014.34.1319. Received December 2012, revised April 2013, published October 2013.
We prove uniqueness in the class of integrable and bounded nonnegative solutions in the energy sense to the Keller-Segel (KS) chemotaxis system. Our proof works for the fully parabolic KS model, it includes the classical parabolic-elliptic KS equation as a particular case, and it can be generalized to nonlinear diffusions in the particle density equation as long as the diffusion satisfies the classical McCann displacement convexity condition. The strategy uses Quasi-Lipschitz estimates for the chemoattractant equation and the above-the-tangent characterizations of displacement convexity. As a consequence, the displacement convexity of the free energy functional associated to the KS system is obtained from its evolution for bounded integrable initial data.
Keywords: Chemotaxis, displacement convexity, Wasserstein distance, Gradient flows, Keller-Segel model. Mathematics Subject Classification: 35A02, 35K45, 35Q9.
Citation: José Antonio Carrillo, Stefano Lisini, Edoardo Mainini. Uniqueness for Keller-Segel-type chemotaxis models. Discrete & Continuous Dynamical Systems - A, 2014, 34 (4) : 1319-1338. doi: 10.3934/dcds.2014.34.1319
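For context, the Keller-Segel model referred to in the abstract is commonly written (a standard textbook form with unit parameters, added here for reference rather than quoted from the paper) as
\[ \partial_t \rho = \Delta \rho - \nabla \cdot (\rho \nabla c), \qquad \tau\, \partial_t c = \Delta c - c + \rho , \]
where \(\rho\) is the cell density and \(c\) the chemoattractant concentration; the parabolic-elliptic case replaces the second (fully parabolic) equation by an elliptic one such as \(-\Delta c = \rho\).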
Learning important features from multi-view data to predict drug side effects
Xujun Liang (ORCID: orcid.org/0000-0003-1698-5193) 1, Pengfei Zhang 1, Jun Li 1, Ying Fu 1, Lingzhi Qu 1, Yongheng Chen 1 & Zhuchu Chen 1
Journal of Cheminformatics volume 11, Article number: 79 (2019)
The problem of drug side effects is one of the most crucial issues in pharmacological development. As there are many limitations in current experimental and clinical methods for detecting side effects, many computational algorithms have been developed to predict side effects with different types of drug information. However, there is still a lack of methods that can integrate heterogeneous data to predict side effects and select important features at the same time. Here, we propose a novel computational framework based on multi-view and multi-label learning for side effect prediction. Four different types of drug features are collected, and a graph model is constructed from each feature profile. After that, all the single-view graphs are combined to regularize the linear regression functions which describe the relationships between drug features and side effect labels. L1 penalties are imposed on the regression coefficient matrices in order to select features relevant to side effects. Additionally, the correlations between side effect labels are also incorporated into the model by graph Laplacian regularization. The experimental results show that the proposed method could not only provide more accurate prediction for side effects but also select drug features related to side effects from heterogeneous data. Some case studies are also supplied to illustrate the utility of our method for prediction of drug side effects.
The safety assessment of candidate chemical compounds is essential for drug development. Detection of serious adverse effects of drugs in preclinical tests or clinical trials is one of the major reasons for the failure of drug development [1]. Furthermore, some side effects are only reported in postmarket surveillance, in which situation serious consequences such as hospitalizations and deaths may be caused by adverse drug reactions [2]. As the performance of traditional methods for side effect detection is limited and their cost is high, there is a great need for developing new approaches that can effectively reveal drug side effects.
Computational approaches have been developed to study various pharmacological problems such as drug repositioning [3,4,5]. It has also been demonstrated that in silico methods could be regarded as complementary or alternative ways to test drug toxicity and predict side effects [6, 7]. Recently, several computational methods utilizing diverse drug-related information have been proposed for side effect prediction. Chemical structures of compounds have been conventionally used to predict side effects [8]. For example, Xu et al. employed a deep learning method to encode the chemical structures of drugs and predicted drug-induced liver injury [9]. Atias et al. systematically predicted multiple side effects with chemical structure features by canonical correlation analysis (CCA) [10]. Besides chemical information, biological knowledge of drugs is also useful for predicting side effects. Using drug-protein interactions as input, Mizutani et al. proposed a side effect prediction method based on sparse CCA (SCCA) [11]. Fukuzaki et al. predicted side effects by mapping the targets of drugs to biological pathways [12].
In another work, drug-induced gene expression changes were summarized as biological process terms to predict side effects [13]. Moreover, it is reasonable to presume that integrating chemical and biological information will help boost the accuracy of side effect prediction. For example, Yamanishi et al. predicted side effects by integrating chemical structures and target protein data of drugs [14]. Wang et al. prioritized drug side effects by combining chemical structures and gene expression changes [15]. There are also some studies that include other information, such as phenotypic data of drugs, to predict side effects [16, 17]. Computational methods have also been developed to discover the drug features closely related to side effects. SCCA-based methods were proposed to reveal chemical fragments and target proteins related to side effects [11, 18]. Xiao et al. suggested a latent Dirichlet allocation model to learn the relations between drug structures and side effects [19]. Kuhn et al. related known drug target proteins to side effects by enrichment analysis [20]. Iwata et al. associated protein domains with side effects by sparse classifiers [21]. Chen et al. inferred the associations between proteins and side effects by random walk on a heterogeneous network [22]. Previous computational approaches have demonstrated their ability to predict drug side effects and reveal relevant features. However, existing methods still have some limitations. Firstly, although various types of drug features have been utilized for side effect prediction, how to effectively retrieve the complementary information from multiple sources of data is still an open problem. Secondly, some side effects relate to the same group of drugs, and these side effects may therefore share a similar molecular basis. How to utilize the correlations between side effect labels to improve the prediction performance is still not fully explored. Thirdly, the dimensions of drug features are usually high. Selecting informative drug features could alleviate the negative impact of high-dimensional features on the predictive model and may hint at the molecular basis of side effects. Diverse types of drug characteristics could relate to the occurrence of side effects [21], but few methods are capable of selecting useful features from multiple data sources collectively. The influence of side effect label correlations on feature selection has also not been well considered. This impairs the performance of data fusion models. Multiple types of drug features could be integrated by multi-view learning. Multi-view learning aims at incorporating heterogeneous data in a unified model to retrieve complementary information and improve predictive performance [23]. Multi-view learning has also been applied in previous pharmacological studies. Zhang et al. predicted drug-target interactions by integrating multi-view network data [24]. In our previous work, a multi-view learning method was proposed for the prediction of drug-disease associations [3]. Yamanishi et al. used multiple kernel learning to predict drug side effects [14]. On the other hand, the correlations between side effect labels could be exploited by multi-label learning. Multi-label learning deals with classification problems in which samples are associated with multiple labels. How to use the correlations between labels to improve classification accuracy is a major task of multi-label learning [25].
Multi-label learning has been applied to various problems, such as protein function and subcellular localization prediction [26, 27]. Besides, for multi-label classification, each class label may be discriminated by some special characteristics of its own. These discriminative characteristics are denoted as label specific features. Selection of label specific features could benefit multi-label classification [28]. Multi-view learning and multi-label learning handle different aspects of the side effect prediction problem, so combining these two methods could explore feature heterogeneity and label correlations simultaneously. Graph Laplacian regularization is one of the manifold learning algorithms and has many applications in machine learning [29]. It looks for a sufficiently smooth distribution of data in a low-dimensional manifold and encourages locality-preserving properties of the learning model [30]. For multi-view data, graph Laplacian based algorithms employ a neighbourhood graph to capture the local geometry of each view, and then all the graphs are aligned to extract the complementary information [31]. Shi et al. utilized graph Laplacian regularization to integrate multi-view data and proposed a semi-supervised sparse feature selection method [32]. Our previous work on drug-disease association prediction also utilized multi-view Laplacian regularization [3]. For multi-label classification, the correlations among labels can also be encoded as a graph. For example, Mojoo et al. introduced graph Laplacian regularization to represent the co-occurrence dependency between image tags [33]. Motivated by the limitations of existing methods, we investigate the problem of side effect prediction by fusing the ideas of multi-view and multi-label learning. For this purpose, graph Laplacian regularization is employed to model both the relationships between heterogeneous features and the correlations between labels. Similar to the previous works [3, 32], the complementary information from multiple drug feature profiles is explored by combining all view-dependent graph models. The correlations between side effect labels are introduced as an additional graph Laplacian regularization term. Furthermore, linear regression with an L1-norm penalty is incorporated into the model to obtain label specific features from different feature profiles, which is similar to the graph-constrained Lasso [34]. An iterative algorithm is proposed to solve the optimization problem of the model. Then, four different feature profiles, including chemical substructures of drugs, protein domains and gene ontology terms of drug targets, and drug-induced gene expression changes, are collected. These heterogeneous features are integrated by our method to predict side effects of drugs. The performance of our method is compared with several existing methods. We also illustrate the predictive capability of the proposed method with some case studies and examine the selected features. The results show that our method outperforms the compared methods. The side effects of drugs were retrieved from SIDER [35]. The chemical structures of drugs were derived from PubChem [36]. The protein targets of drugs were obtained from DrugBank [37]; only target proteins related to humans were kept. The protein domains of the targets were collected from InterPro [38] and gene ontology information (only biological process terms) was extracted from Uniprot [39].
The gene expression data of drugs in the LINCS L1000 project were downloaded [40]. Finally, 501 drugs with all of the above information were kept for the following analysis and model construction. The DrugBank identities of these drugs and the count of side effect labels for each drug are provided in Additional file 2: Table S1. The identifiers and names of the features are supplied in Additional file 2: Table S2. We also obtained the off-label side effects recorded in the FDA Adverse Event Reporting System (FAERS) from previous work [15]. There are 106 drugs with all types of features and off-label side effects in FAERS but without any records in SIDER. 294 drugs are present in both the SIDER data and the FAERS data, and these drugs have an additional 65873 drug-side effect associations in the FAERS data. The information on these drugs is available in Additional file 2: Tables S3 and S4. Data preprocessing and drug feature matrix building For the chemical substructures of drugs, the fingerprints defined by PubChem were calculated using PyBioMed [41]. The chemical substructure matrix is built from the binary fingerprints (881 bits in total). With the target information of drugs, two binary feature matrices are constructed. The target protein domain matrix is a binary matrix in which the elements are 1 if the targets of a drug have the corresponding protein domains and 0 otherwise. There are 1307 unique protein domain features. In the target gene ontology matrix, the elements are set to 1 if the targets of a drug are annotated with the corresponding gene ontology terms. There are 3336 unique gene ontology features. For drug-induced gene expression changes from the LINCS L1000 project, only the values of the 978 landmark genes were kept. If the absolute value of the moderated z-score for a gene in a signature is greater than 2, the signature value is set to 1 or − 1 depending on the original sign of the moderated z-score; otherwise the signature value is set to 0. After that, the elements of the gene expression matrix are evaluated as the averages of the signature values of each gene for each drug. Through the above data processing, four feature matrices \(X_p\in \mathbb {R}^{n\times d_p}\) of drugs are obtained, where n is the number of drugs and \(d_p\) is the number of features in the pth feature matrix. The relationships between drugs and side effects were extracted from the 'meddra_all_se' file of SIDER and only preferred terms for side effects were kept. There are 3260 side effect labels in total. If there is a record of the relationship between drug i and side effect j, we set the element in the ith row and jth column of the label matrix \(Y\in \mathbb {R}^{n\times l}\) to 1, and otherwise set the element to 0, where l is the number of side effects. Problem formalization In this work, we aim to construct a computational model which can predict side effects of drugs and select label specific features by integrating multiple types of drug data \(\{X_p\}\). Firstly, the formulation of our method, which is named multi-view Laplacian regularized sparse learning (Multi-LRSL), is introduced. Then an optimization algorithm for solving Multi-LRSL is presented. Multi-LRSL model Predicting side effects with special drug features We assume that different types of drug features are complementary to each other and can be exploited to predict side effects. Moreover, each side effect should only be associated with a subset of features from different feature profiles. That is, the drug features relevant to side effects are sparse.
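As a concrete illustration of the preprocessing described above, the gene expression feature matrix could be assembled roughly as sketched below. This is a minimal NumPy sketch under the assumption that the moderated z-scores of the 978 landmark genes have already been loaded into an array together with the drug each signature belongs to; the variable names are illustrative and not taken from the authors' code.

```python
import numpy as np

def expression_feature_matrix(z_scores, signature_drugs):
    """Discretize moderated z-scores to {-1, 0, 1} at |z| > 2 and
    average the discretized signatures per drug, as described above.

    z_scores: array of shape (n_signatures, 978)
    signature_drugs: array of shape (n_signatures,) giving the drug of each signature
    """
    signs = np.where(np.abs(z_scores) > 2, np.sign(z_scores), 0.0)
    drugs = np.unique(signature_drugs)
    X_expr = np.vstack([signs[signature_drugs == d].mean(axis=0) for d in drugs])
    return drugs, X_expr  # drug order and the n x 978 gene expression feature matrix
```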
Based on these assumptions, we model the relationships between drug features and side effects with a least squares loss, and use the \(L_1\)-norm to regularize the coefficient matrices: $$\begin{aligned} \min _{G_p, F}\frac{\mu }{2}\sum _{p=1}^{m}{\Vert X_pG_p-F\Vert _F^2} + \beta \sum _{p=1}^{m}{\Vert G_p\Vert _1} \end{aligned}$$ where \(\Vert \cdot \Vert _F\) is the Frobenius norm, and \(\mu\) and \(\beta\) are model parameters. \(G_p\) represents the regression coefficient matrix for the pth feature profile, m is the number of feature types, and F is the predicted side effect label matrix, which contains continuous values. In the label matrix Y, the elements are set to 1 for positive labels and 0 for negative or unobserved labels. F should be similar but not identical to Y because Y may contain some missing and noisy values. The elements of F can be ranked: larger values suggest possible positive labels and smaller values suggest possible negative labels. In the second term, the \(L_1\)-norm with the parameter \(\beta\) controls the sparsity of side effect related features. The non-zero elements in the jth column of \(G_p\) are the relevant features for the jth side effect. Preserving the local structure of different feature spaces in the side effect label space We assume that drugs with similar features should have similar side effect labels. This is known as the smoothness assumption [42]. For each type of drug features, a pairwise drug similarity matrix is calculated, and then the k-nearest neighbour (knn) graph \(S_p\) is constructed: $$\begin{aligned} S_p(i,j)=\left\{ \begin{array}{ll} sim(X_p(i,:),X_p(j,:)) &\quad \text {if } X_p(j,:) \text { is a } k\text {-nearest neighbour of } X_p(i,:)\\ 0 &\quad \text {otherwise,} \end{array}\right. \end{aligned}$$ where \(X_p(i,:)\) and \(X_p(j,:)\) are the row vectors of the pth feature matrix. In this work, we use cosine similarity for all feature profiles and set \(k = \lfloor 0.01n\rfloor\). The rows of F are the predicted side effect labels for drugs. As a result of the smoothness assumption, we obtain the following formula: $$\begin{aligned} \min _{F}\sum _{i,j}^{n}\Vert F(i,:)-F(j,:)\Vert _2^{2}S_{p}(i,j) \end{aligned}$$ This means that drugs with similar features in the pth feature profile should have similar predicted labels. The local geometry of the feature space is preserved in the predicted label space. The above formula can be rewritten as: $$\begin{aligned} \min _{F}Tr(F^{T}L_{p}F) \end{aligned}$$ where the Laplacian matrix \(L_p\) is defined as \(L_p = D_p - S_p\), and \(D_p(i,i) = \sum _j^nS_p(i,j)\). To explore the complementary information in different types of drug features, multi-view Laplacian regularization is adopted as in the previous studies [3, 32]. The graph Laplacian matrices of the different feature profiles are combined using a weight vector \(\theta \in \mathbb {R}^{m\times 1}\). In addition, the predicted label matrix F should not only be smooth on the feature space but also be consistent with the original label matrix Y. These considerations give the following formula: $$\begin{aligned}&\min _{F,\theta}\frac{1}{2}Tr\big (\sum _{p=1}^m\theta _p^\gamma F^TL_pF\big ) + \frac{1}{2}\Vert F - Y\Vert _F^2\nonumber \\ & \quad s.t.\ \theta > 0,\ \sum _{p=1}^m{\theta _p}=1 \end{aligned}$$ In the above formula, the weights of the graph Laplacian matrices reflect the fact that different types of features make different contributions to side effect prediction. The weight vector \(\theta\) can also be learned by optimization. A sketch of this graph construction is given below.
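A minimal NumPy sketch of the graph construction in Eqs. (2)-(5) is shown below, assuming dense feature matrices. The kNN graph is symmetrized here, which the text does not state explicitly; that step and all variable names are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def knn_graph(X, k):
    """k-nearest-neighbour graph S_p built from cosine similarity (Eq. 2)."""
    sim = cosine_similarity(X)
    np.fill_diagonal(sim, 0.0)               # a drug is not its own neighbour
    S = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nn = np.argsort(sim[i])[-k:]         # indices of the k most similar drugs
        S[i, nn] = sim[i, nn]
    return np.maximum(S, S.T)                # symmetrize (an assumption)

def graph_laplacian(S):
    """Graph Laplacian L_p = D_p - S_p, with D_p the diagonal degree matrix."""
    return np.diag(S.sum(axis=1)) - S

def combined_laplacian(X_list, theta, gamma, k):
    """Weighted combination L = sum_p theta_p^gamma * L_p used in Eq. (5)."""
    return sum(t**gamma * graph_laplacian(knn_graph(X, k))
               for t, X in zip(theta, X_list))
```

In the paper's setting, k would be set to \(\lfloor 0.01n\rfloor\) and X_list would hold the four drug feature matrices.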
The parameter \(\gamma >1\) is introduced to keep the elements of \(\theta\) from being exactly zero. This prevents the most predictive feature profile from taking all of the weight [43]. In this way, the correlated and complementary information from multiple data sources can be combined and transferred to the predicted label space. Incorporating side effect label correlations Next, under the assumption that strongly correlated side effect labels will share more drug features, it is desirable to incorporate label correlations into our model. According to Eq. (1), the columns of the coefficient matrix \(G_p\) represent the drug features associated with side effects. For highly correlated side effect labels, the corresponding column vectors in \(G_p\) should be highly similar. Similar to the consideration of the relationship between drug similarity and side effect similarity, we use a graph Laplacian to represent the relationships between label correlations and feature sharing. Cosine similarity is employed to describe the correlations between side effect labels, and a knn graph \(R_0\) is constructed based on these label correlations. As mentioned above, the known side effect labels are usually incomplete and noisy, so we refine the correlation graph while learning the feature coefficients. The graph regularization for label correlations is then formulated as: $$\begin{aligned}&\min _{G_p,R}\frac{\lambda }{2}\sum _{p=1}^{m}{Tr(G_p(D_R-R)G_p^T)} + \frac{\alpha }{2}\Vert R - R_0\Vert _F^2\nonumber \\ & \quad s.t.\ R_{ij}=R_{ji}\ge 0 \end{aligned}$$ where R is the refined correlation graph, \(D_R\) is the degree matrix of R, and \(D_R-R\) is the Laplacian matrix. \({Tr(G_p(D_R-R)G_p^T)}\) is equal to \(\sum _{i,j}^l\Vert G_p(:,i)-G_p(:,j)\Vert _2^2R(i,j)\). As a result, this term encourages a pair of highly correlated side effect labels to be associated with similar drug feature coefficients. \(\alpha\) is a positive parameter which controls the extent of consistency between the refined correlation graph and the original correlation graph. The parameter \(\lambda\) controls the impact of label correlations on the similarities of the feature coefficients. The final objective function After integrating the above formulae, the final objective function takes the following form: $$\begin{aligned}&\mathop {min}\limits _{F, G_p, R, \theta} \frac{1}{2}\Vert F-Y\Vert _F^2 + \frac{1}{2}Tr(F^TLF) \nonumber \\&\qquad + \frac{\mu }{2} \sum _{p=1}^{m}{\Vert X_pG_p-F\Vert _F^2 } \nonumber \\&\qquad + \frac{\lambda }{2}\sum _{p=1}^{m}{Tr(G_p(D_R -R)G_p^T)} + \frac{\alpha }{2}\Vert R-R_0\Vert _F^2 \nonumber \\&\qquad +\beta \sum _{p=1}^{m}{\Vert G_p\Vert _1} \nonumber \\&\quad s.t.\ L = \sum _{p=1}^m{\theta _p^\gamma L_p}, \ \sum _{p=1}^m{\theta _p} = 1,\ 0< \theta _p < 1, R_{ij}=R_{ji}\ge 0 \end{aligned}$$ The first three terms in the above formula give a flexible manifold learning framework [44]. The fourth and fifth terms account for label correlations. Together with the last \(L_1\) penalties, these terms form a graph-constrained Lasso [34]. Here, an alternating approach is proposed for optimizing the objective function (7). Update F First, \(G_p\), R and \(\theta\) are fixed, and the derivative of the objective function with respect to F is set to 0.
The closed-form solution for F is: $$\begin{aligned} F=PQ \end{aligned}$$ $$\begin{aligned} P = \bigl ((1+m\mu )I + L\bigr )^{-1} \end{aligned}$$ $$\begin{aligned} Q = Y + \mu \sum _{p=1}^m{X_pG_p} \end{aligned}$$ Update R Then F, \(\theta\) and \(G_p\) are fixed, and R is optimized by multiplicative updates [45]. The Lagrangian function of R is: $$\begin{aligned} \mathcal {L}(R)=\frac{\lambda }{2}\sum _{p=1}^{m}{Tr(G_p(D_R -R)G_p^T)} + \frac{\alpha }{2}\Vert R-R_0\Vert _F^2 - Tr(\Gamma R) \end{aligned}$$ where \(\Gamma\) is the Lagrangian multiplier. Differentiating the above formula with respect to R gives: $$\begin{aligned} \frac{\partial \mathcal {L}}{\partial R} = \frac{\lambda }{2}\sum _{p=1}^{m}{A_p}-\frac{\lambda }{2}\sum _{p=1}^{m}{B_p} + \alpha (R - R_0) - \Gamma \end{aligned}$$ $$\begin{aligned} A_p & = {} diag(G_p^T G_p)J+Jdiag(G_p^TG_p) - diag(G_p^TG_p) \end{aligned}$$ $$\begin{aligned} B_p & = {} 2G_p^TG_p - diag(G_p^TG_p) \end{aligned}$$ where J is an \(l\times l\) matrix of all ones. Using the KKT condition \(\Gamma (i,j)R(i,j)=0\), we obtain: $$\begin{aligned} R(i,j) \leftarrow R(i,j) \sqrt{\frac{\frac{\lambda }{2}\sum _{p=1}^{m}{(B_p)^+} + \alpha R_0}{\alpha R + \frac{\lambda }{2}\sum _{p=1}^{m}{(B_p)^-} + \frac{\lambda }{2}\sum _{p=1}^{m}{A_p}}}(i,j) \end{aligned}$$ where \((B_p)^+=(|B_p|+B_p)/2\) and \((B_p)^-=(|B_p|-B_p)/2\). Update \(G_p\) Next, fixing all variables except \(G_p\), substituting F with P and Q, and ignoring unrelated terms in the objective function (7), we get: $$\begin{aligned}&\mathop {min}\limits _{G_p} F(G_p) =\frac{1}{2}Tr\bigl (\sum _{p=1}^m{G_p^TX_p^T(\mu I-\mu ^2P^T)X_pG_p} \bigr )\nonumber \\&\quad - \mu Tr\bigl ( Y^TP^T\sum _{p=1}^m{X_pG_p}\bigr )\nonumber \\&\quad - \frac{\mu ^2}{2}Tr\bigl (\sum _{p=1}^m{\sum _{q\ne p}^m{G_p^TX_p^TP^TX_qG_q}}\bigr ) \nonumber \\&\quad + \frac{\lambda }{2}\sum _{p=1}^{m}{Tr(G_p(D_R -R)G_p^T)} + \beta \sum _{p=1}^{m}{\Vert G_p\Vert _1} \end{aligned}$$ Due to the \(L_1\)-norm regularization terms, Eq. (16) is convex but not smooth, so the accelerated proximal gradient method is utilized to solve it. For a specific p, let $$\begin{aligned} f(G_{p}) =&\frac{1}{2}Tr\bigl (G_{p}^TX_{p}^T(\mu I-\mu ^2P^T)X_{p}G_{p} \bigr )- \mu Tr\bigl ( Y^TP^TX_{p}G_{p}\bigr )\nonumber \\&- \mu ^2Tr\bigl (\sum _{q\ne p}^m{G_{p}^TX_{p}^TP^TX_{q}G_{q}}\bigr ) + \frac{\lambda }{2}Tr\left(G_{p}(D_R-R)G_{p}^T\right) \end{aligned}$$ $$\begin{aligned} g(G_{p}) = \beta \Vert G_{p}\Vert _1 \end{aligned}$$ Next, the proximal gradient algorithm is employed to minimize a sequence of quadratic approximations of \(F(G_{p})\): $$\begin{aligned} \mathop {min}\limits _{G_{p}}f(\tilde{G}_{p}^{(k)}) + \langle \nabla f(\tilde{G}_{p}^{(k)}), G_{p}-\tilde{G}_{p}^{(k)}\rangle + \frac{L_f}{2}\Vert G_{p} -\tilde{G}_{p}^{(k)}\Vert _F^2 + g(G_{p}) \end{aligned}$$ In the above formula, \(L_f\) is the Lipschitz constant, \(\tilde{G}_{p}^{(k)}=G_{p}^{(k)} + \frac{t_{k-1} - 1}{t_{k}}\bigl (G_{p}^{(k)} - G_{p}^{(k-1)}\bigr )\), and \(t^2_{k}-t_{k}\le t_{k-1}^2\). According to [46], this choice improves the convergence rate.
Let \(Z_{p}^{(k)} = \tilde{G}_{p}^{(k)} - \frac{1}{L_f}\nabla f(\tilde{G}_{p}^{(k)})\); then (19) can be rewritten as: $$\begin{aligned} \mathop {min}\limits _{G_{p}}\frac{L_f}{2}\Vert G_{p} - Z_{p}^{(k)}\Vert _F^2 + g(G_{p}) \end{aligned}$$ and \(G_{p}\) can be updated by $$\begin{aligned} G_{p} & = {} \mathop {argmin}\limits _{G_{p}}\frac{1}{2}\Vert G_{p} - Z_{p}^{(k)}\Vert _F^2 + \frac{\beta }{L_f}\Vert G_{p}\Vert _1 \nonumber \\ & = {} S_{\frac{\beta }{L_f}}(Z^{(k)}_{p}) \end{aligned}$$ where \(S_{\frac{\beta }{L_f}}\) is the soft-thresholding operator. Update \(\theta _p\) Finally, F, R and \(G_p\) are fixed and \(\theta _p\) is updated as $$\begin{aligned} \theta _p = \frac{\Bigl (\frac{1}{Tr(F^TL_pF)}\Bigr )^{\frac{1}{\gamma -1}}}{\sum _{p=1}^m{\Bigl (\frac{1}{Tr(F^TL_pF)}\Bigr )^\frac{1}{\gamma -1}}} \end{aligned}$$ Through the above updating steps, the objective function (7) is solved. The outline of multi-LRSL is shown in Algorithm 1. After training the model, we obtain the coefficient matrices \(G_p\)s and the predicted label matrix F. For new drugs, the features \(X_p^{new}\) are collected, and the side effects are predicted by \(\sum _{p=1}^{m}\theta _{p}X_{p}^{new}G_{p}\). The elements of F can be considered as the confidence of the predicted associations between drugs and side effects. The missing side effect labels in the training data can be inferred from F. For that, the values corresponding to the known drug-side effect associations in F are excluded, and then the remaining values are ranked in descending order. The top-ranked values suggest the most likely drug-side effect associations that are missing in the original data. Performance evaluation and comparison methods To evaluate the performance of our algorithm for side effect prediction, fivefold cross-validation was carried out in this work. We used six performance metrics implemented by scikit-learn [47]. First, the area under the receiver operating characteristic curve (AUC) score was employed. The receiver operating characteristic (ROC) curve plots the true positive rate (TPR) against the false positive rate (FPR). $$\begin{aligned} TPR= & {} \frac{TP}{TP+FN} \end{aligned}$$ $$\begin{aligned} FPR= & {} \frac{FP}{FP+TN} \end{aligned}$$ where TP is true positive, FN is false negative, FP is false positive, and TN is true negative. The AUC score is the area under the ROC curve. Depending on which type of averaging is performed, three kinds of AUC scores were calculated: sample-AUC averages the AUC score over drugs; macro-AUC computes an AUC score for each side effect label and averages them; micro-AUC treats all known drug-side effect pairs as positive labels. Second, three metrics specific to multi-label classification were estimated. Coverage error represents the average number of side effect labels that have to be included in order to cover all true labels. Given the true labels of the test data \(Y_{test}\in \mathbb {R}^{n_{test}\times l}\) and the score matrix \(\hat{Y}\) from a prediction method, the coverage error is: $$\begin{aligned} Coverage\ error = \frac{1}{n_{test}}\sum _{i=1}^{n_{test}}\max _{j:Y_{test}(i,j)=1}{rank(i,j)}-1 \end{aligned}$$ where \(rank(i,j) = |\{k:\hat{Y}(i,k)\ge \hat{Y}(i,j)\}|\), \(|\cdot |\) is the number of elements in a set, and \(n_{test}\) is the number of test drugs.
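Since the paper states that these metrics are computed with scikit-learn, a minimal sketch of the prediction scoring and the metrics defined so far is given below (ranking loss and LRAP, defined next, have the analogous functions label_ranking_loss and label_ranking_average_precision_score). The variable names are illustrative, not taken from the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, coverage_error

# Y_test: binary indicator matrix of known side effect labels (n_test x l)
# X_test_list, G_list, theta: test feature matrices, learned coefficient
# matrices and view weights (assumed to be available)

def predict_scores(X_test_list, G_list, theta):
    """Score matrix Y_hat = sum_p theta_p * X_p^new G_p, as described above."""
    return sum(t * (X @ G) for t, X, G in zip(theta, X_test_list, G_list))

Y_hat = predict_scores(X_test_list, G_list, theta)

# In practice, side effect labels with no positive (or no negative) drugs in the
# test fold may need to be filtered out before computing macro-AUC.
sample_auc = roc_auc_score(Y_test, Y_hat, average="samples")
macro_auc = roc_auc_score(Y_test, Y_hat, average="macro")
micro_auc = roc_auc_score(Y_test, Y_hat, average="micro")

# Note: scikit-learn's coverage_error is one greater than the formula above,
# since it does not subtract 1.
cov_err = coverage_error(Y_test, Y_hat)
```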
Ranking loss is the average fraction of label pairs that are incorrectly ordered: $$\begin{aligned}&ranking\ loss\nonumber \\&\quad =\frac{1}{n_{test}}\sum _{i=1}^{n_{test}}\frac{1}{\Vert Y(i,:)\Vert _0(l-\Vert Y(i,:)\Vert _0)}\Big |\Big \{(k,j):\hat{Y}(i,k)\le \hat{Y}(i,j),\nonumber \\&\qquad Y(i,k)=1,Y(i,j)=0\Big \}\Big | \end{aligned}$$ where \(\Vert \cdot \Vert _0\) is the number of non-zero elements. Label ranking average precision (LRAP) finds the average fraction of true labels among the highly ranked labels produced by a predictive method: $$\begin{aligned} LRAP & = {} \frac{1}{n_{test}}\sum _{i=1}^{n_{test}}\frac{1}{\Vert Y(i,:)\Vert _0}\nonumber \\&\sum _{j:Y(i,j)=1}\frac{|\{k:Y(i,k)=1,\hat{Y}(i,k)\ge \hat{Y}(i,j)\}|}{rank(i,j)} \end{aligned}$$ For our method, the score matrix is \(\hat{Y}=\sum _{p=1}^{m}\theta _{p}X_{p}^{new}G_{p}\). The true label matrix \(Y_{test}\) was used to calculate the performance metrics in each fold of cross-validation. The experiment was repeated 10 times with different divisions of the data, and the averages of the metrics were calculated. We compared our algorithm with five other computational methods. L1-regularized logistic regression (L1LOG) and L1-regularized support vector machine (L1SVM) are widely used for feature selection and classification. These methods were also applied to inferring the relationships between side effects and target domains [21]. L1LOG and L1SVM were implemented with Liblinear [48]. Principal component regression (PCR) is based on principal component analysis (PCA). It first computes the principal components of the feature matrix and then uses some of the components as predictors to build a model for prediction. In this work, PCR was implemented to represent another way of performing dimensionality reduction. SCCA has also been demonstrated to be effective for discovering drug features related to side effects [11, 18]. As in the previous work, SCCA was implemented with the PMA package [49]. Kernel regression was used to integrate chemical structures and target proteins to predict side effects in [14]. Here we implemented it with scikit-learn. The parameters of all methods were determined by cross-validation. More details about the implementation of the comparison methods can be found in the additional material. The algorithm complexity analysis and the parameter sensitivity analysis of the proposed method can also be found in the additional material. Drugs with similar features have similar side effect labels In this work, there are 501 drugs, 3260 side effects and 62620 associations between them. The distribution of the associations is shown in Additional file 1: Figure S1. In this section, we validated the basic assumption that the drug features collected from different sources are associated with the side effect labels of drugs. First, we examined the features of the drugs with at least one common side effect. We noticed that the average similarity of drugs with common side effects is significantly higher than that of the same number of randomly selected drugs without any common side effects, across all feature profiles (rank sum test, \(\hbox {p-value}<0.001\), Additional file 1: Figure S2). This suggests that drugs which cause the same side effects share more common features. Next, we calculated the side effect similarities between drugs and divided the drugs into two groups at the median value of similarity.
It is shown that drugs with more common side effects also display significantly stronger similarity in all types of features (rank sum test, \(\hbox {p-value}<0.001\), Additional file 1: Figure S3). These results imply that similar drugs possess similar side effect labels. To further verify this assumption, we calculated the inner products between the columns of the feature matrix \(X_p\) and the columns of the side effect matrix Y and utilized these products to represent the relationships between drug features and side effect labels. Then we computed the cosine similarity between side effects using the different types of drug features related to them. After that, we built ordinary least squares models which took the different feature similarities between side effects as explanatory variables and the side effect label correlations as response variables. It is observed that the slopes of these linear models are positive, which means that the feature similarities positively correlate with the correlations of the side effect labels. It is also noted that as the feature similarity values become larger, the slopes become steeper (Additional file 1: Figure S4). This suggests that the correlations between the drug features and the side effect labels are more obvious in the local feature space. All of the above results demonstrate that the feature profiles employed in this work are associated with the side effect labels, so it is possible to predict the side effects of drugs with these features. There are four different types of drug features in this work. It is expected that these feature profiles will provide consistent as well as complementary information for side effect prediction. To explore the consistency and complementarity of these feature profiles, we performed hierarchical cluster analysis according to the drug similarity matrices calculated from the feature profiles and the side effect labels. As shown in Fig. 1, there are many blocks along the diagonals of the similarity matrices. These blocks are groups of drugs with strong similarity. It is observed that the drugs in some blocks of the feature similarity matrices significantly overlap with the drugs in the blocks of the side effect similarity matrix (Fisher's exact test, \(\hbox {p-value}<0.05\)). The overlapping blocks are marked by coloured rectangles in Fig. 1. Furthermore, for some blocks in the side effect similarity matrix, there are overlapping blocks across more than one feature similarity matrix. It is also found that some blocks in the side effect similarity matrix only overlap with blocks from just one of the feature similarity matrices. These results indicate that there is both consistent and complementary information in the different drug feature profiles, and that combining these feature profiles could be beneficial for side effect prediction. There are also some blocks in the feature similarity matrices which do not overlap with any blocks in the side effect similarity matrix. This implies that there are irrelevant drug features to be excluded or missing associations between drugs and side effects. There is both consistent and complementary information related to side effects in different drug feature profiles. Drugs cluster together according to the similarities calculated with different features or side effect labels. The blocks of drugs along the diagonals are identified by the R package 'dynamicTreeCut' [67].
The overlaps between the blocks in each feature similarity matrix of drugs and the blocks in the side effect similarity matrix of drugs are determined by Fisher's exact test (\(\hbox {p-value}<0.05\)). The significantly overlapping blocks are marked by coloured rectangles in the heat-maps. The purple rectangles indicate that the blocks in the side effect similarity matrix of drugs overlap with blocks in one of the feature similarity matrices (for example, block e1 overlaps with block a1 in the chemical similarity matrix). The green rectangles indicate that the blocks in the side effect similarity matrix of drugs overlap with blocks in two or three feature similarity matrices (for example, block e2 overlaps with block a2 in the chemical similarity matrix and block d2 in the gene expression similarity matrix). The red rectangles indicate that the blocks in the side effect similarity matrix of drugs overlap with blocks in all feature matrices (for example, block e3 overlaps with blocks a3, b3, c3 and d3). The legend indicates the value of similarity, from 0 (blue) to 1 (red). Performance comparison for prediction of side effects To evaluate the performance of the proposed method, we tested whether the algorithm could correctly recover the known side effects of drugs. Fivefold cross-validation was performed. Drugs with known side effects from SIDER were divided into five subsets of roughly equal size. Each time, one subset of drugs was used as the test set while the other four subsets were combined as the training set. The experiment was repeated 10 times with different divisions of the drugs. In this work, the side effect prediction performance of the proposed method was compared with that of the other algorithms using six different metrics. L1SVM and L1LOG are two widely used sparse models that can be applied to both classification and feature selection. PCR is a regression technique which can reduce dimensionality and mitigate overfitting. For the task of side effect prediction, the four drug feature matrices were concatenated into a single matrix, and this long matrix was used as the input to L1SVM, L1LOG and PCR. For SCCA, the side effect label matrix, together with each feature matrix, was used as input. The drug target matrix was also used as the input to SCCA following the previous work [11]. For kernel regression, the kernel similarity matrix for each feature profile was calculated, and then the kernel functions were summed to integrate the different feature profiles. For the performance metrics used here, larger values of sample-AUC, macro-AUC, micro-AUC and LRAP mean better performance, while smaller values of coverage error and ranking loss denote better performance. Table 1 shows the results of the cross-validation experiment for the compared algorithms. Overall, the proposed integrative method significantly outperforms all the comparison methods. Furthermore, when taking only a single type of feature as input, our method still performs better on most of the metrics compared to SCCA. It is also observed that the target domain and target gene ontology features generally show better performance than the chemical substructure and gene expression features when only one feature matrix is used as the input to our method. As the other information integration method, the kernel regression model also shows some advantages over SCCA. The performance of L1LOG and L1SVM is inferior to that of the other methods except PCR.
This may be partially due to the lack of consideration of the correlations between side effect labels. The performance of PCR is comparable to that of L1LOG and L1SVM. These results imply that L1-regularization has a similar effect to principal component analysis on the performance of side effect prediction. Table 1 Performance comparison of different algorithms for side effect prediction To further illustrate the predictive ability of the proposed method, we used the drugs and their side effect labels from SIDER to train our model and then used the records of off-label side effects from FAERS as independent test data. The proposed method still has better predictive performance for the drugs which are only present in FAERS compared to SCCA and kernel regression (Additional file 2: Table S5). Besides these new drugs, FAERS records some novel associations between drugs and side effects which are not present in SIDER. It is also found that the proposed method could better predict these novel drug-side effect associations (\(\hbox {micro-AUC}=0.6815\)) compared to SCCA and the kernel regression model (\(\hbox {micro-AUC}= 0.6736\) and 0.6768, respectively). Moreover, because we only used the drugs with all four types of features in the above cross-validation experiment, there are still extra drugs which have side effect labels in SIDER but were not used for model training due to a lack of target or gene expression information. The identities of these drugs are available in Additional file 2: Table S6. We predicted the side effects of these extra drugs with the chemical substructure features and calculated the performance metrics. It is noticed that the prediction performance of our model on these drugs is comparable with the cross-validation result (Additional file 2: Table S6). This indicates that the overfitting risk of the proposed model is under control and that the model generalizes well to unseen data. Selection of side effect related features In order to obtain a predictive model for the side effect prediction problem with high-dimensional features, the L1 penalties are added to the model. As a result, our method can not only predict the side effects of drugs but also select the relevant drug features for each side effect. In this section, the feature selection capability of the proposed method was examined and compared with three other L1-regularization methods: L1LOG, L1SVM and SCCA. The drug features from all feature profiles were selected by the different methods. The positively weighted features were kept and considered to be closely related to the corresponding side effect labels. The number of features selected from each feature profile by each method is shown in Fig. 2. The median numbers of selected features for each side effect by the proposed method are 161 (chemical substructures), 43 (protein domains), 193 (gene ontology) and 43 (gene expression). All of these L1-regularization methods can extract subsets of features from all possible drug features. L1SVM selected the smallest number of features from all feature profiles, while SCCA extracted the largest number of features in total. Multi-LRSL selected more features than L1LOG and L1SVM but fewer features than SCCA, except for the chemical substructures. From the Venn diagrams in Fig. 3, it is observed that most of the features selected by L1LOG and L1SVM overlap with the features selected by multi-LRSL and SCCA. There are many features shared by SCCA and multi-LRSL, as well as many features specifically selected by these two methods.
Furthermore, the numbers of features selected by L1LOG and L1SVM from the different feature profiles are quite uneven compared to the other two methods. Far fewer features are selected by L1LOG and L1SVM from the gene expression profile than from the other feature types (as shown in Fig. 3 and Additional file 1: Figure S5). The number of features selected by different methods from different feature profiles The Venn diagrams show the overlaps of features selected by different methods from different feature profiles To further illustrate the feature selection properties of our algorithm, we calculated the correlations between the drug features from both the same and different feature profiles using the original feature matrices, and then we computed the correlations between the feature coefficients obtained by multi-LRSL. It is observed that the correlations between the drug feature coefficients tend to increase as the correlations between the original features become larger (Additional file 1: Figures S6 and S7). This implies that multi-LRSL can select groups of highly correlated features both within and between multiple feature profiles. This property is similar to the elastic net [50]. It should be an advantage for the problem of side effect prediction, as there are many important but highly correlated features. In addition, strongly correlated side effect labels should be associated with similar drug features. In Additional file 1: Figure S8, the affinity matrices of side effect labels calculated from the feature coefficients of multi-LRSL are visualized. All of these matrices are quite consistent with the affinity matrix of side effect labels calculated from the drug-side effect relation matrix. Thus, the proposed method can capture both the associations between drug features and the correlations between side effect labels. Next, we tested the stability of our algorithm for feature selection by training the model with random divisions of the drugs. It is shown that the feature coefficients are stable when different subsets of drugs are used for training (Additional file 1: Figure S9). Furthermore, we examined whether the selected features are relevant to the side effects. However, there are very few systematic records of the relationships between drug features and side effects. Thus, we tried to verify the predicted associations by examining whether they are compatible with information from an independent data source. For this purpose, the disease terms in CTD [51] which overlapped with the side effect terms in this work were gathered. In CTD, the disease terms are associated with various chemicals and genes. The chemicals and genes labelled with 'marker/mechanism' correlate with the disease or participate in the etiology of the disease. We collected the chemicals and genes related to the disease terms which overlapped with side effect terms. Then, the substructures and gene expression changes of these chemicals, and the protein domains and gene ontology terms of these genes, were extracted. The occurrence frequencies of chemical substructures, protein domains and gene ontology terms were calculated for each disease term according to its related chemicals and genes to form the corresponding features from CTD. The gene expression changes were averaged across the chemicals related to a disease term to form the gene expression features from CTD. The data from CTD can be considered as additional evidence to support the relationships between features and side effects.
In our model, we assume that a feature with a coefficient that is large in magnitude is predictive of the presence of a side effect. We assume that the corresponding features from CTD will also have large magnitude and match the sign of the coefficients. Thus, the correlations between the coefficients and the CTD features can be utilized to assess the consistency between the predicted feature-side effect associations and the information in CTD. We calculated the Spearman correlations between the coefficients learned by multi-LRSL and the features obtained from CTD for each side effect. It is found that the average correlation is significantly larger than the correlations of randomly paired coefficients and features (Fig. 4, \(\hbox {p-value}<0.05\)). Together with the previous results, this suggests that the proposed method could help select drug features related to side effects. The average Spearman's correlation between the feature coefficients learned by multi-LRSL and the feature data extracted from CTD for the same side effect is significantly larger than that of random samples. The blue lines represent the density estimates for the averages of the correlation coefficients of 1000 random samples. For each random sample, the average correlation is calculated with the same number of pairs of randomly selected feature coefficients and CTD feature data. The red arrows indicate the positions of the average correlation coefficients between paired feature coefficients and feature data (the frequency of features for chemical substructures, protein domains and gene ontology terms, and the averages of gene expression changes). The p-values are estimated by a Monte Carlo test. Predicting new drug-side effect associations To illustrate the utility of the proposed method for predicting side effects, we collected 320 drugs from DrugBank which were not included in the 501 drugs used for model construction. The DrugBank identities and names of these drugs are available in Additional file 2: Table S7. We chose these drugs because all types of features could be obtained for them but they do not have any side effect records in SIDER. In order to predict the side effect labels of these drugs, their features were retrieved as previously described. Then, the feature coefficient matrices \(G_{p}\hbox {s}\) were learned by our model with all 501 drugs as the training data. We predicted the side effects of these new drugs by \(\sum _{p=1}^m{\theta _pX^{new}_pG_p}\), where \(X^{new}_p\) is the pth feature matrix of the new drugs. Moreover, we inferred the missing labels of the training drugs with the predicted label matrix F (see the methods section for more details). To give insight into the prediction results of our method, some examples are provided here. Hepatotoxicity is an important clinical adverse event that can cause hospitalizations and withdrawal of drugs. In Fig. 5a, 10 predicted drugs for hepatotoxicity (the 5 top-ranked new drugs and the 5 top-ranked drugs in the training data without a record of hepatotoxicity) are shown. The top-ranked features for hepatotoxicity (the 10 features with the highest coefficients of each feature type) are selected. The values of the top-ranked features in the original feature vectors of the 10 drugs are shown as a heat-map. Among the predicted drugs, dasatinib (DB01254), a selective tyrosine kinase receptor inhibitor for the treatment of chronic myelogenous leukemia, was reported to induce liver dysfunction [52]. Nintedanib (DB09079) also showed hepatotoxicity in a clinical trial [53].
It is observed that among the 10 top-ranked chemical substructures, 9 substructures (all except sub0: \(\ge 4\,\hbox {H}\)) are enriched in the drugs with hepatotoxicity (Fisher's exact test, p-value 1e−4). Among the selected protein domain features, there are 8 domains related to protein kinases, and 6 of them are present in the targets of all these drugs. This is in accordance with the previous finding that many tyrosine kinase inhibitors are hepatotoxic [54]. The selected gene ontology features are mainly involved in apoptosis and cell proliferation. These biological processes have also been related to hepatotoxicity by previous studies [55, 56]. It is observed that all drugs show a similar pattern of disturbance of the expression level of MEF2C. MEF2C was reported to regulate the activation of hepatic stellate cells and to play a key role in hepatic fibrosis, a pathological response to liver injury [57]. In Fig. 5b, the side effects that most frequently co-occur with hepatotoxicity show similar patterns of top-ranked feature coefficients. Some of these side effect terms are semantically related to hepatotoxicity, such as hepatobiliary disease and hepatic failure, while the other side effects may be similar to hepatotoxicity in their underlying mechanisms. For example, drug-related neutropenia can be caused by a cytotoxic effect on cell replication [58]. Mucosal inflammation can be regulated by tyrosine kinase-related signalling pathways [59]. In Additional file 2: Table S8, the prediction results for two other side effects, renal impairment and acute myocardial infarction, are provided as additional examples. Alvimopan (DB06274) is ranked 3rd for acute myocardial infarction. This toxic effect has been confirmed by clinical observation [60]. The selected features may also give some hints about these side effects. DAXX is ranked 1st for acute myocardial infarction among the gene expression features, and a study showed that DAXX may participate in myocardial ischemia/reperfusion-induced cell death [61]. The relationship between SRC (ranked 2nd) and renal damage was also implied by a previous study [62]. The above instances suggest that the proposed method can predict novel drug-side effect associations and select important drug features. The prediction results for hepatotoxicity. a The X axis represents the features with the largest coefficients for hepatotoxicity (10 features from each feature profile). The Y axis represents the top-ranked predicted drugs (5 test drugs without any known side effects, and 5 drugs with known side effects but without a record of hepatotoxicity. The DrugBank IDs of the drugs with known side effects are underlined). Here, the colours on the heat-map represent the values of the selected features in the feature vectors of these drugs. b The X axis represents the features with the largest coefficients for hepatotoxicity (10 features from each feature profile), and the Y axis represents the side effects most frequently co-occurring with hepatotoxicity. The colours on the heat-map represent the values of the coefficients learned by multi-LRSL for each side effect. In both (a) and (b), different types of features are separated by grey dashed lines. Side effects are unintended impacts of drugs on the human body. It is important to develop efficient methods for identifying potential side effects. In this work, we propose a novel multi-view and multi-label learning algorithm to predict the side effects of small-molecule drugs and select relevant features.
The advantage of the proposed method is demonstrated by systematic comparison with other computational methods and by some examples of application. The proposed method can integrate multiple types of features for side effect prediction. The rationale behind the integration of multi-view data is that side effects are the result of complex interactions between drugs and biological systems. Drugs can interact with both intended therapeutic targets and unintended off-targets. While both types of targets could be associated with side effects, the full list of drug targets is still not available. Furthermore, targets can affect the activities of various biological processes and pathways after binding with drugs. On the other hand, chemical structures determine how drugs interact with targets. Gene expression changes induced by drugs reflect the overall biological effects of drug-target interactions. Thus, the combination of chemical structures, gene expression and target information could provide a relatively complete description of drug bioactivity for side effect prediction. In this study, we show that there is consistent and complementary information related to side effects in these heterogeneous data. Our results also show that the integration of multi-view data improves the prediction performance. Therefore, data fusion for side effect prediction is reasonable and necessary. In this study, the graph Laplacian regularization of the predicted label matrix encourages the preservation of the local geometric structures of the feature space and lets drugs with similar features have similar side effect labels. The graphs constructed from the four types of features are combined to explore the complementary information from different sources. This strategy for heterogeneous data integration is similar to the previous works [3, 32]. However, there are also some specific improvements for the side effect prediction problem. In the work of Shi et al. [32], different types of features were concatenated into a long vector in the least squares loss term. This increases the computational complexity of updating the coefficient matrix, as the total dimension of the features is high in the side effect prediction problem (as noted in the algorithm complexity analysis in the additional material). They also selected the features associated with the larger rows of the coefficient matrix by an \(l_{2,1/2}\)-matrix norm [32]. This keeps one group of features from all feature profiles to predict all labels. In this work, the feature matrices and the coefficient matrices are kept separate in both the least squares loss terms and the graph Laplacian regularization terms. The L1 penalties induce element-wise sparsity of the coefficient matrices [63]. Thus, our method can select different features from each feature profile simultaneously for different labels and reduce the computational cost. Moreover, in this work, the information from the known labels is used differently from our previous work on drug-disease association prediction [3]. Instead of transforming the label similarity to drug similarity, the correlations between side effect labels are explicitly encoded by an additional graph Laplacian matrix that regularizes the feature coefficient matrices. This regularization makes strongly correlated labels share more relevant features. Altogether, the proposed model can not only fuse multi-view data but also select label specific features while taking label correlations into consideration.
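The element-wise sparsity mentioned above comes from the soft-thresholding step in Eq. (21). A minimal sketch of this proximal operator is given below; the example matrix and the threshold value are made up for illustration.

```python
import numpy as np

def soft_threshold(Z, tau):
    """Element-wise soft-thresholding S_tau (Eq. 21): each entry is shrunk
    towards zero by tau, and entries with magnitude below tau become exactly zero."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

# Entries whose magnitude is below tau = beta / L_f are set exactly to zero,
# so each side effect ends up with its own small set of non-zero feature weights.
G_p = soft_threshold(np.array([[0.8, -0.1], [0.05, -1.2]]), tau=0.2)
# -> [[ 0.6,  0. ],
#     [ 0. , -1. ]]
```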
The dimensions of the chemical and biological features of drugs are usually high. Feature selection is beneficial for side effect prediction because it can reduce the computational cost and prevent overfitting by excluding irrelevant features [64]. L1SVM, L1LOG, SCCA and our model introduce sparsity by L1-regularization. There is a trade-off between the feature sparsity and the prediction performance (Additional file 1: Figure S10). Although L1SVM and L1LOG selected fewer features, the correlations between features from multiple feature profiles could be missed by these two methods [50]. The correlations between side effect labels were also not taken into consideration. For SCCA, features and side effects appeared many times in multiple canonical components, which made the relationships between features and side effects less obvious. The proposed method fuses heterogeneous data and considers the correlations between side effect labels. Thus, it can select features from multi-view data and associate every side effect with specific drug features. As a result, our method could also be utilized purely for feature selection, and a separate classifier could be constructed with the selected features as input to further improve the prediction performance. In that situation, a selection stability analysis should be conducted. Furthermore, the linear regression with L1-regularization in the model can increase the transparency of the model; that is, users can recognize which features are important for the model to make its predictions. However, it should be noted that the model cannot reveal causal relationships between drug features and side effects. Users should be cautious about the meaning of the selected features. Feature selection can sometimes help researchers generate new hypotheses about the relationships between the selected features and the class labels. For this purpose, it is important to use interpretable features as input variables. The protein domain, gene ontology and gene expression features in this work may be more interpretable than chemical fingerprints. It is desirable to explore new forms of input in order to make the selected chemical features more meaningful in future work. Although the proposed method shows the advantage of information fusion for side effect prediction, there are also some limitations in the current work. First, the number of samples used for training is crucial for prediction and feature selection. However, collecting multiple types of data is difficult, and some features may not be available for some drugs. For example, not all the drugs in SIDER have records of protein targets or gene expression data. This leads to a smaller data set for training the algorithm. However, our method is scalable; it can take either a single feature profile or multiple feature profiles as input. Additionally, like the previous study [32], our model is based on graph Laplacian regularization. The model could be extended to a semi-supervised method [29]. Semi-supervised learning could utilize unlabelled data to improve the prediction performance. It may alleviate the problem of the limited number of labelled samples in side effect prediction. Second, there are some discrepancies between the multiple data types. For example, some targets of drugs may be missing, and the gene expression data may contain a lot of noise. These discrepancies could impair the prediction performance. Thus, methods that can handle such disparity and noise are needed in future work.
Thirdly, the side effect labels of drugs may also be missing and noisy. For example, FAERS contains many drug-side effect associations which are absent from SIDER. There may also be some false positive labels in SIDER [20]. Some side effects are non-specific and do not have causal relationships with drug features [65]. The missing and noisy labels could have a negative impact on side effect prediction. Because it is not known which side effect labels are missing, all unobserved values are set to 0 in the label matrix, which could aggravate the bias of the model. The missing labels also aggravate the class imbalance and make the estimation of label correlations inaccurate. In the proposed model, we used the predicted label matrix F to approximate the true label matrix and refined the label correlation matrix during optimization. This could alleviate the influence of missing labels, but more sophisticated algorithms are needed to solve this problem in future studies. For example, positive-unlabelled learning may alleviate the influence of noisy negative labels [66]. In this work, we develop a novel computational method for predicting drug side effects. The proposed method can fuse multi-view data and explore the correlations between side effects. It can not only improve the prediction performance but also select multiple types of features related to side effects. As a result, our method could potentially be used as an effective computational tool for recognizing patterns of side effect related features from various sources of data. In this way, the method could provide instructive information for drug development by mining heterogeneous data. All data are available together with the code for the proposed method at https://github.com/LiangXujun/multi-LRSL. Hornberg JJ, Laursen M, Brenden N, Persson M, Thougaard AV, Toft DB, Mow T (2014) Exploratory toxicology as an integrated part of drug discovery. Part I: why and how. Drug Discov Today 19(8):1131–1136 Giacomini KM, Krauss RM, Roden DM, Eichelbaum M, Hayden MR, Nakamura Y (2007) When good drugs go bad. Nature 446(7139):975–977 Liang X, Zhang P, Yan L, Fu Y, Peng F, Qu L, Shao M, Chen Y, Chen Z (2017) LRSSL: predict and interpret drug-disease associations based on data integration using sparse subspace learning. Bioinformatics (Oxford, England) 33:1187–1196. https://doi.org/10.1093/bioinformatics/btw770 Luo H, Wang J, Li M, Luo J, Peng X, Wu F-X, Pan Y (2016) Drug repositioning based on comprehensive similarity measures and bi-random walk algorithm. Bioinformatics (Oxford, England) 32:2664–2671. https://doi.org/10.1093/bioinformatics/btw228 Luo H, Li M, Wang S, Liu Q, Li Y, Wang J (2018) Computational drug repositioning using low-rank matrix approximation and randomized algorithms. Bioinformatics (Oxford, England) 34:1904–1912. https://doi.org/10.1093/bioinformatics/bty013 Modi S, Hughes M, Garrow A, White A (2012) The value of in silico chemistry in the safety assessment of chemicals in the consumer goods and pharmaceutical industries. Drug Discov Today 17(3–4):135–142 Ivanov SM, Lagunin AA, Poroikov VV (2016) In silico assessment of adverse drug reactions and associated mechanisms. Drug Discov Today 21(1):58 Yang H, Sun L, Li W, Liu G, Tang Y (2018) Prediction of chemical toxicity for drug design using machine learning methods and structural alerts. Front Chem 6:30. https://doi.org/10.3389/fchem.2018.00030 Xu Y, Dai Z, Chen F, Gao S, Pei J, Lai L (2015) Deep learning for drug-induced liver injury. J Chem Inf Model 55:2085–2093.
https://doi.org/10.1021/acs.jcim.5b00238 Atias N, Sharan R (2011) An algorithmic framework for predicting side effects of drugs. J Comput Biol 18:207–218. https://doi.org/10.1089/cmb.2010.0255 Mizutani S, Pauwels E, Stoven V, Goto S, Yamanishi Y (2012) Relating drug-protein interaction network with drug side effects. Bioinformatics (Oxford, England) 28:522–528. https://doi.org/10.1093/bioinformatics/bts383 Fukuzaki M, Seki M, Kashima H, Sese, J (2009) Side effect prediction using cooperative pathways. In: Proceedings of IEEE international conference on bioinformatics and biomedicine, pp. 142–147. https://doi.org/10.1109/BIBM.2009.26 Lee S, Lee KH, Song M, Lee D (2011) Building the process-drug-side effect network to discover the relationship between biological processes and side effects. BMC Bioinf 12(2):2. https://doi.org/10.1186/1471-2105-12-S2-S2 Yamanishi Y, Pauwels E, Kotera M (2012) Drug side-effect prediction based on the integration of chemical and biological spaces. J Chem Inf Model 52:3284–3292. https://doi.org/10.1021/ci2005548 Wang Z, Clark NR, Ma'ayan A (2016) Drug-induced adverse events prediction with the lincs 1000 data. Bioinformatics (Oxford, England) 32:2338–2345. https://doi.org/10.1093/bioinformatics/btw168 Liu M, Wu Y, Chen Y, Sun J, Zhao Z, Chen X-W, Matheny ME, Xu H (2012) Large-scale prediction of adverse drug reactions using chemical, biological, and phenotypic properties of drugs. JAMIA 19:28–35. https://doi.org/10.1136/amiajnl-2011-000699 Cao D-S, Xiao N, Li Y-J, Zeng W-B, Liang Y-Z, Lu A-P, Xu Q-S, Chen AF (2015) Integrating multiple evidence sources to predict adverse drug reactions based on a systems pharmacology model. CPT Pharm Syst Pharmacol 4:498–506. https://doi.org/10.1002/psp4.12002 Pauwels E, Stoven V, Yamanishi Y (2011) Predicting drug side-effect profiles: a chemical fragment-based approach. BMC Bioinf 12:169. https://doi.org/10.1186/1471-2105-12-169 Xiao C, Zhang P, Chaowalitwongse WA, Hu J, Wang F (2017) Adverse drug reaction prediction with symbolic latent dirichlet allocation. In: Proceedings of the thirty-first AAAI conference on artificial intelligence Kuhn M, Al Banchaabouchi M, Campillos M, Jensen LJ, Gross C, Gavin A-C, Bork P (2013) Systematic identification of proteins that elicit drug side effects. Mol Syst Biol 9:663. https://doi.org/10.1038/msb.2013.10 Iwata H, Mizutani S, Tabei Y, Kotera M, Goto S, Yamanishi Y (2013) Inferring protein domains associated with drug side effects based on drug-target interaction network. BMC Syst Biol 7(6):18. https://doi.org/10.1186/1752-0509-7-S6-S18 Chen X, Shi H, Yang F, Yang L, Lv Y, Wang S, Dai E, Sun D, Jiang W (2016) Large-scale identification of adverse drug reaction-related proteins through a random walk model. Sci Rep 6:36325. https://doi.org/10.1038/srep36325 Xu C, Tao D, Xu C (2013) A survey on multi-view learning. arXiv:1304.5634v1 Zhang X, Li L, Ng MK, Zhang S (2017) Drug-target interaction prediction by integrating multiview network data. Comput Biol Chem 69:185–193. https://doi.org/10.1016/j.compbiolchem.2017.03.011 Zhang M, Zhou Z (2014) A review on multi-label learning algorithms. IEEE Transac Knowl Data Eng 26(8):1819–1837. https://doi.org/10.1109/TKDE.2013.39 Cerri R, Barros RC, de Carvalho AC, Jin Y (2016) Reduction strategies for hierarchical multi-label classification in protein function prediction. BMC Bioinf 17:373. 
https://doi.org/10.1186/s12859-016-1232-1 Wan S, Mak M-W, Kung S-Y (2016) Sparse regressions for predicting and interpreting subcellular localization of multi-label proteins. BMC Bioinf 17:97. https://doi.org/10.1186/s12859-016-0940-x Zhang M-L, Wu L (2015) Lift: Multi-label learning with label-specific features. IEEE Trans Pattern Anal Mach Intell 37:107–120. https://doi.org/10.1109/TPAMI.2014.2339815 Belkin M, Niyogi P, Sindhwani V (2006) Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res. 7:2399–2434 Belkin M, Niyogi P (2001) Laplacian eigenmaps and spectral techniques for embedding and clustering. Adv Neu Inf Process Syst 14(6):585–591 Xia T, Tao D, Mei T, Zhang Y (2010) Multiview spectral embedding. IEEE transactions on systems, man, and cybernetics. Part B, cybernetics. IEEE Syst Man Cyber Soc 40:1438–1446. https://doi.org/10.1109/TSMCB.2009.2039566 Shi C, Ruan Q, An G, Ge C (2015) Semi-supervised sparse feature selection based on multi-view laplacian regularization. Image Vision Comput 41:1–10. https://doi.org/10.1016/j.imavis.2015.06.006 Mojoo J, Kurosawa K, Kurita T (2017) Deep CNN with graph laplacian regularization for multi-label image annotation. In: Karray F, Campilho A, Cheriet F (eds) Image analysis and recognition. Springer, Cham. https://doi.org/10.1007/978-3-319-59876-5_3 Li C, Li H (2008) Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics (Oxford, England) 24:1175–1182. https://doi.org/10.1093/bioinformatics/btn081 Kuhn M, Letunic I, Jensen LJ, Bork P (2016) The sider database of drugs and side effects. Nucleic Acids Res 44:1075–1079. https://doi.org/10.1093/nar/gkv1075 Wang Y, Xiao J, Suzek TO, Zhang J, Wang J, Bryant SH (2009) Pubchem: a public information system for analyzing bioactivities of small molecules. Nucleic Acids Res 37:623–633. https://doi.org/10.1093/nar/gkp456 Law V, Knox C, Djoumbou Y, Jewison T, Guo AC, Liu Y, Maciejewski A, Arndt D, Wilson M, Neveu V, Tang A, Gabriel G, Ly C, Adamjee S, Dame ZT, Han B, Zhou Y, Wishart DS (2014) Drugbank 4.0: shedding new light on drug metabolism. Nucleic Acids Res 42:1091–1097. https://doi.org/10.1093/nar/gkt1068 Mitchell A, Chang H-Y, Daugherty L, Fraser M, Hunter S, Lopez R, McAnulla C, McMenamin C, Nuka G, Pesseat S, Sangrador-Vegas A, Scheremetjew M, Rato C, Yong S-Y, Bateman A, Punta M, Attwood TK, Sigrist CJA, Redaschi N, Rivoire C, Xenarios I, Kahn D, Guyot D, Bork P, Letunic I, Gough J, Oates M, Haft D, Huang H, Natale DA, Wu CH, Orengo C, Sillitoe I, Mi H, Thomas PD, Finn RD (2015) The interpro protein families database: the classification resource after 15 years. Nucleic Acids Res 43:213–221. https://doi.org/10.1093/nar/gku1243 Consortium U (2010) The universal protein resource (uniprot) in 2010. Nucleic Acids Res 38:142–148. https://doi.org/10.1093/nar/gkp846 Koleti A, Terryn R, Stathias V, Chung C, Cooper DJ, Turner JP, Vidovic D, Forlin M, Kelley TT, D'Urso A, Allen BK, Torre D, Jagodnik KM, Wang L, Jenkins SL, Mader C, Niu W, Fazel M, Mahi N, Pilarczyk M, Clark N, Shamsaei B, Meller J, Vasiliauskas J, Reichard J, Medvedovic M, Ma'ayan A, Pillai A, Schürer SC (2018) Data portal for the library of integrated network-based cellular signatures (lincs) program: integrated access to diverse large-scale cellular perturbation response data. Nucleic Acids Res 46:558–566. 
https://doi.org/10.1093/nar/gkx1063 Dong J, Yao Z-J, Zhang L, Luo F, Lin Q, Lu A-P, Chen AF, Cao D-S (2018) Pybiomed: a python library for various molecular representations of chemicals, proteins and dnas and their interactions. J Cheminf 10:16. https://doi.org/10.1186/s13321-018-0270-2 Zhu X, Lafferty J, Rosenfeld R (2005) Semi-supervised learning with graphs. Ph.D. thesis, Carnegie Mellon University, language technologies institute, school of computer science Pittsburgh Yu J, Wang M, Tao D (2012) Semisupervised multiview distance metric learning for cartoon synthesis. IEEE Transac Image Process 21(11):4636–4648. https://doi.org/10.1109/TIP.2012.2207395 Nie F, Xu D, Tsang IW, Zhang C (2010) Flexible manifold embedding: a framework for semi-supervised and unsupervised dimension reduction. IEEE Transac Image Process 19(7):1921–1932. https://doi.org/10.1109/TIP.2010.2044958 Ding C, Li T, Jordan MI (2010) Convex and semi-nonnegative matrix factorizations. IEEE Transac Pattern Anal Mach Intell 32(1):45–55 Nesterov Y (1983) A method of solving a convex programming problem with convergence rate \(o(1/k^2)\). Soviet Math Doklady 27(2):372–376 Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E (2011) Scikit-learn: machine learning in python. J Mach Learn Res 12:2825–2830 Fan R-E, Chang K-W, Hsieh C-J, Wang X-R, Lin C-J (2008) Liblinear: a library for large linear classification. J Mach Learn Res 9:1871–1874 Witten D, Tibshirani R, Gross S, Narasimhan B (2018) PMA: Penalized multivariate analysis. R package version 1.0.11. https://CRAN.R-project.org/package=PMA Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. J R Stat Soc 67:301–320. https://doi.org/10.1111/j.1467-9868.2005.00503.x Davis AP, Grondin CJ, Johnson RJ, Sciaky D, King BL, McMorran R, Wiegers J, Wiegers TC, Mattingly CJ (2017) The comparative toxicogenomics database: update 2017. Nucleic Acids Res 45:972–978. https://doi.org/10.1093/nar/gkw838 Bonvin A, Mesnil A, Nicolini FE, Cotte L, Michallet M, Descotes J, Vial T (2008) Dasatinib-induced acute hepatitis. Leuk Lymph 49:1630–1632. https://doi.org/10.1080/10428190802136384 Takeda M, Okamoto I, Nakagawa K (2015) Clinical development of nintedanib for advanced non-small-cell lung cancer. Therap Clin Risk Manag 11:1701–1706. https://doi.org/10.2147/TCRM.S76646 Shah RR, Morganroth J, Shah DR (2013) Hepatotoxicity of tyrosine kinase inhibitors: clinical and regulatory perspectives. Drug Saf 36:491–503. https://doi.org/10.1007/s40264-013-0048-4 Jaeschke H, Duan L, Akakpo JY, Farhood A, Ramachandran A (2018) The role of apoptosis in acetaminophen hepatotoxicity. Food Chem Toxicol 118:709–718. https://doi.org/10.1016/j.fct.2018.06.025 Wang Q, Wei L-W, Xiao H-Q, Xue Y, Du S-H, Liu Y-G, Xie X-L (2017) Methamphetamine induces hepatotoxicity via inhibiting cell division, arresting cell cycle and activating apoptosis: in vivo and in vitro studies. Food Chem Toxicol 105:61–72. https://doi.org/10.1016/j.fct.2017.03.030 Wang X, Tang X, Gong X, Albanis E, Friedman SL, Mao Z (2004) Regulation of hepatic stellate cell activation and growth by transcription factor myocyte enhancer factor 2. Gastroenterology 127:1174–1188 McArthur K, D'Cruz AA, Segal D, Lackovic K, Wilks AF, O'Donnell JA, Nowell CJ, Gerlic M, Huang DCS, Burns CJ, Croker BA (2017) Defining a therapeutic window for kinase inhibitors in leukemia to avoid neutropenia. 
Oncotarget 8:57948–57963. https://doi.org/10.18632/oncotarget.19678 Coulthard MG, Morgan M, Woodruff TM, Arumugam TV, Taylor SM, Carpenter TC, Lackmann M, Boyd AW (2012) Eph/ephrin signaling in injury and inflammation. Am J Pathol 181:1493–1503. https://doi.org/10.1016/j.ajpath.2012.06.043 Becker G, Blum HE (2009) Novel opioid antagonists for opioid-induced bowel dysfunction and postoperative ileus. Lancet (London, England) 373:1198–1206. https://doi.org/10.1016/S0140-6736(09)60139-2 Roubille F, Combes S, Leal-Sanchez J, Barrère C, Cransac F, Sportouch-Dukhan C, Gahide G, Serre I, Kupfer E, Richard S, Hueber A-O, Nargeot J, Piot C, Barrère-Lemaire S (2007) Myocardial expression of a dominant-negative form of daxx decreases infarct size and attenuates apoptosis in an in vivo mouse model of ischemia/reperfusion injury. Circulation 116:2709–2717. https://doi.org/10.1161/CIRCULATIONAHA.107.694844 Xiong C, Zang X, Zhou X, Liu L, Masucci MV, Tang J, Li X, Liu N, Bayliss G, Zhao TC, Zhuang S (2017) Pharmacological inhibition of src kinase protects against acute kidney injury in a murine model of renal ischemia/reperfusion. Oncotarget 8:31238–31253. https://doi.org/10.18632/oncotarget.16114 Efron B, Hastie T, Johnstone I, Tibshirani R (2004) Least angle regression. Ann Stat 32:407–499 Reid S, Grudic G (2009) Regularized linear models in stacked generalization. Multiple classifier systems. Springer, Berlin, pp 112–121 Barsky AJ, Saintfort R, Rogers MP, Borus JF (2002) Nonspecific medication side effects and the nocebo phenomenon. JAMA 287:622–627. https://doi.org/10.1001/jama.287.5.622 Li X-l, Yu PS, Liu B, Ng SK (2009) Positive unlabeled learning for data stream classification. SDM, SIAM, San Diego Langfelder P, Zhang B, Horvath S (2007) Defining clusters from a hierarchical cluster tree: the dynamic tree cut package for R. Bioinformatics 24(5):719–720. https://doi.org/10.1093/bioinformatics/btm563 Thanks to Dr. Chao Ma for helpful discussion. This work is supported by the National Natural Science Foundation of Hunan Province (No. 2018JJ3807). NHC Key Laboratory of Cancer Proteomics, Xiangya Hospital, Central South University, XiangYa Road, Changsha, China Xujun Liang, Pengfei Zhang, Jun Li, Ying Fu, Lingzhi Qu, Yongheng Chen & Zhuchu Chen XJL conceived, designed the experiments, and drafted the manuscript. XJL collected the data and performed the experiments. YF, LZQ helped perform the experiments. PFZ and JL helped analyse the results. YHC and ZCC helped draft the manuscript. All authors read and approved the final manuscript. Correspondence to Xujun Liang. Additional file 1. Analysis of the proposed method and additional figures. Additional file 2. Additional tables. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Liang, X., Zhang, P., Li, J. et al. Learning important features from multi-view data to predict drug side effects. J Cheminform 11, 79 (2019). https://doi.org/10.1186/s13321-019-0402-3 Keywords: Side effect prediction; Heterogeneous data integration
What would it take to make a scientific second equal exactly one traditional second? A scientist is resetting the clock on his microwave one day as he considers the hopelessness of keeping the time exactly right. Not for the reasons we worry about, like power outages and daylight saving time, but because in the back of his head, he knows that Earth's movements through space are not properly standardized for a good timekeeping system. This makes him unreasonably mad, so he decides that the best way to correct this aberration is to simply alter the spin and orbital period of the Earth so that a day is exactly 86400 seconds and a year is exactly 365 days based on caesium frequencies, so that he never has to worry about converting units again. Although the historical definition of the unit (seconds) was based on this division of the Earth's rotation cycle, the formal definition in the International System of Units (SI) is a much steadier timekeeper: it is defined by taking the fixed numerical value of the caesium frequency ∆νCs, the unperturbed ground-state hyperfine transition frequency of the caesium 133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s^−1. Because the Earth's rotation varies and is also slowing ever so slightly, a leap second is periodically added to clock time to keep clocks in sync with Earth's rotation. ~ https://en.wikipedia.org/wiki/Second So, our mad scientist devises a two-step plan to unify metric and traditional time once and for all! The first stage is to use a series of powerful explosions to speed up or slow down the Earth's movements to make days and seconds the right lengths; the second is to install propulsion systems on the Earth to keep it moving at these speeds indefinitely. The Question: How much force (and in what directions) does the scientist need to exert on the Earth to achieve his goals? Bonus points if new seconds actually equal scientific seconds when rounded to the level of a double floating-point number, but I would be surprised if anyone could actually find measurements on the Earth accurate enough to do this, so no pressure. Caveats based on comments: How long is indefinitely? If additional thrust needs to be added over time, the scientist trusts future generations (assuming he hasn't killed everyone) to continue his work. His initial thruster just needs to be strong enough to make sure that he doesn't see desync start creeping back in before his own end of life. If the thrust needed decreases over time, assume his thruster can throttle down to compensate. The explosions and "propulsion" would probably devastate the biosphere; at worst they might even generate enough energy to melt the crust and boil the oceans to a significant extent. Devastation does not necessarily need to be addressed for purposes of this question unless it involves there not being an Earth left to have a day/night cycle. planets hard-science time time-keeping Nosajimiki - Reinstate Monica I see you've asked for precision rather than accuracy. I'm pretty certain I can manage that ;-) – Starfish Prime Nov 21 '19 at 18:21 It won't be pretty; at best the explosions and "propulsion" would probably devastate the biosphere, at worst it might even generate enough energy to melt the crust and boil the oceans to a significant extent. I won't bother with the calculation as Starfish Prime has it in hand, we shall see.
– Slarty Nov 21 '19 at 18:32 @Slarty oh gosh, don't hold out on my account; I'm doing a bunch of other things at the same time, so it'll take me a while to assemble my answer. – Starfish Prime Nov 21 '19 at 18:33 World devastation is an acceptable consequence when it comes to solving important issues such as this. – Nosajimiki - Reinstate Monica Nov 21 '19 at 18:38 The real problem isn't fixing the Earth's rotation now, it's how to keep the rotation constant after you've fixed it. Otherwise you're going to have to keep making adjustments... – Matthew Nov 21 '19 at 18:44 Let's first think about how much energy this needs. You've asked for spurious precision, but I'll save that til the end because no-one wants to see all the tedious decimal places in the workings (and if they do, they can repeat the process themselves). You want an orbit with a period of precisely 365 days, each 24 hours long. Via Kepler's third law, we can see that this will need to reduce the semimajor axis of Earth's orbit by about 71950 km. At the perihelion of its current orbit, Earth has a velocity of about 30.2868 km/s. In its new orbit, with the same perihelion (and hence a reduced aphelion) it will have a velocity of more like 30.2797 km/s. Given Earth's mass, that will require its kinetic energy be changed by a bit over 9.02 x 10^31 joules. I'm not really certain where you'd get this much energy from... it is nearly two orders of magnitude more energy than the kinetic energy of Mars if you crashed it into Earth at 4 km/s (about the speed of the hypothesized Theia impact), and more energy than you'd get from all the solar radiation falling onto Earth in about 16 million (old-style) years. If anyone has any suggestions on where to source 500 billion tonnes of antimatter, that'd be great. It is also about 2/5ths of Earth's gravitational binding energy, meaning that if it were not released carefully over an extended period of time you'd reduce the planet to a ring of gravel orbiting the sun. Releasing the energy slowly and carefully enough will probably take entirely too long for anyone's attention span. (energy change would be ~9.015096928089181 x 10^31 joules) So much for the year. What about the day? The angular kinetic energy of the Earth (using the Lambeck 1980 figure for the Earth's moment of inertia about its polar axis from here) is about 2.136 x 10^29 J. Speeding up the Earth to give it a nice round 24 hour day requires an angular KE of more like 2.124 x 10^29 J, giving a required oomph of about 1.165 x 10^27 J, a much more manageable figure as I'm sure you'll agree. Please use caution releasing this much energy in the atmosphere all at once, because whilst it isn't quite enough to vapourise our oceans, it is more than enough to boil them, and the clouds of hot steam will spoil the view. I did have a look at imparting this energy using a train of carefully aimed asteroids, with trajectories in the Earth's equatorial plane, hitting the equator at an optimal 15 degree angle. Unfortunately the plan started to resemble a re-run of the Hadean era, and the inefficiencies of using explosions or rocks to change the Earth's rate of rotation resulted in a lot of waste heat and seemed lamentably inefficient. There may or may not be any oceans or atmosphere afterwards, but the clouds of dust and debris and subsequent re-entry probably preclude any appreciation of a day-night cycle for some time (possibly thousands of years).
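(A quick numerical check of the figures above, assuming rounded values for the semi-major axis, year length and the Lambeck-style moment of inertia — so expect agreement with the quoted numbers only to within a few percent.)

```python
import math

# Orbital side: Kepler's third law, a proportional to T^(2/3)
a_old = 1.49598e11             # m, current semi-major axis (approx.)
T_old = 365.25636 * 86400.0    # s, current orbital period (sidereal year)
T_new = 365.0 * 86400.0        # s, target: exactly 365 * 86400 s
a_new = a_old * (T_new / T_old) ** (2.0 / 3.0)
print("semi-major axis change: ~%.0f km" % ((a_old - a_new) / 1e3))

# Rotational side: E = 1/2 * I * w^2
I = 8.034e37                   # kg m^2, Earth's polar moment of inertia (Lambeck 1980)

def rot_energy(period_s):
    return 0.5 * I * (2.0 * math.pi / period_s) ** 2

E_now = rot_energy(86164.1)    # current sidereal rotation period
E_24h = rot_energy(86400.0)    # hypothetical 24 h rotation
print("rotational KE now:    %.3e J" % E_now)
print("rotational KE @ 24 h: %.3e J" % E_24h)
print("difference:           %.3e J" % abs(E_now - E_24h))
```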
(energy change would be ~1.1648246454801083 x 10^27 joules) So much for the day. Can we just deal with the changing day length, if nothing else? Turns out that no-one can seem to say anything useful about exactly how much deviation you'd need to correct for... the state of ΔT is woefully confusing. The day length only changes by milliseconds per century, but the leap seconds keep on coming. Let's just look at a system that can manage to change Earth's day length by a second (because I'm despairing of getting anything to work). This requires adding ~4.96 x 10^24 joules of angular kinetic energy. By a happy coincidence, this is a little under the total amount of solar energy that strikes the Earth every year (more like 5.5 x 10^24 J). Using a rocket to do this needs 1.57 x 10^17 W of useful thrust. Given efficiency issues, it will not alas be practical to resurface the Earth with solar panels and use the Earth's own oceans as reaction mass, but it is soooo close. I have a final alternative plan for you. A lot of our problems are caused by the moon. It takes ~7.62×10^28 J to throw that rock into deep space, where it will never offend your eyes or your day length again. Just say the word, and we'll draw up a plan for you... (rocket thrust power would be ~1.571089676036397 x 10^17 watts) Starfish Prime Bravo @Starfish Prime, gorgeous answer. – Gustavo Nov 22 '19 at 1:22 I love the well thought out logic of this answer, but there seems to be a math error in step one that makes things look more bleak than they are. 30286.8 - 30279.7 = 7.1 m/s. Earth's mass is ~5.972 × 10^24 kg, so the energy change should "only" be 1.20524*10^26 J. I have not checked all your math, but you may want to double check it all just to make sure. – Nosajimiki - Reinstate Monica Nov 22 '19 at 15:20 @Nosajimiki So, I think I see where the issue arises. The energy I quoted was the difference between the kinetic energy at the perihelion in the old orbit, and perihelion at the new orbit, and I'm reasonably certain that this change in energy is the correct figure to think about. The energy cost of a given $\Delta v$ depends on the object's current velocity... have a look at this physics.SE question covering the same subject. – Starfish Prime Nov 22 '19 at 17:55 Ah, I see where that is coming from now. Thanks. – Nosajimiki - Reinstate Monica Nov 22 '19 at 22:07 Explosives on the Earth's surface, no matter their power level (short of ejecting significant chunks of the crust), will never change the Earth's rotation rate or orbit. Nor will a reaction drive of any kind -- with the exception that if its exhaust, after exiting the atmosphere, is still above Earth's escape speed, some tiny fraction of its thrust will act to change the Earth's velocity. Modern-day changes in Earth's rotation rate have been attributed to changes in the amount of water captured behind dams (hence further from the Earth's centroid than its natural height average), melting of glaciers or ice caps, and (very rarely) land movement due to tectonic events (major eruptions and earthquakes). Your mad scientist needs "merely" to alter the proportion of water trapped in the polar ice caps relative to the oceans in order to take and maintain very fine control (on the order of microseconds of alteration in the day length) of the Earth's rotation. Now, to change the orbit will require going off Earth.
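(An aside before the orbit discussion continues: the "microseconds of day length" figure for the ice-cap lever can be sanity-checked with round numbers. The ocean area, moment of inertia and pole-to-ocean redistribution model below are assumptions of this sketch, not values taken from the answer.)

```python
# How much does the day change if polar ice equivalent to 1 mm of global sea
# level is moved from the spin axis into the oceans? (Round-number sketch.)
R = 6.371e6            # m, Earth radius
I_earth = 8.0e37       # kg m^2, Earth's moment of inertia (approx.)
ocean_area = 3.6e14    # m^2, global ocean area (approx.)

mass = ocean_area * 1e-3 * 1000.0    # kg of water in 1 mm of global sea level
dI = (2.0 / 3.0) * mass * R**2       # thin spherical shell vs. mass sitting on the axis
dT = 86400.0 * dI / I_earth          # angular momentum conservation: dT/T = dI/I
print("1 mm of sea-level change alters the day by ~%.0f microseconds" % (dT * 1e6))
```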
The most likely way to accomplish this (to cut around a quarter day off the period -- ideally without changing the eccentricity or ecliptic plane) would be to attach large mass drivers to a biggish asteroid (Vesta, perhaps), use them to drive it around the Solar System, and then use the asteroid as a gravity tug to subtly change the Earth's orbit. Whether the mad scientist can gain the required precision in altering either the Earth's rotation or its orbital period is up to him/her -- but with good enough computers and software, and a willingness to spend multiple decades on the project, he can quit having to deal with messy numbers of seconds in a day or year -- at the expense of making every astronomer alive an enemy. If he's careful, he could probably accomplish the whole project without a single (attributable) casualty. If not, he might kill a few million with the mass driver exhaust, and a hard-to-count number due to climate changes produced by or required to manage the ice cap project. Zeiss Ikon The Earth is rotating too slowly for our scientist's liking, and it's also getting slower all the time due to gravitational tidal drag and other factors. This is currently happening at a rate of about $\mathrm{7.3×10^{−13} day/day}$, which is also the fraction by which the Earth's angular momentum needs to be topped up. Let's specify the propulsion system to be able to compensate for drift up to $\mathrm{10^{−12} day/day}$ for proper redundancy and future-proofing. The Earth's total rotational angular momentum is $L = I \omega = \frac{2}{5}MR^2 \times \frac{2\pi}{86400} \approx \mathrm{7.2\times 10^{33}\ kg \ m^2 \ s^{-1}}$ and we need to be able to change this by one part in a trillion. In order to get on to torque we need to decide how long the scientist is willing to wait to apply this correction. Let's say he's moderately impatient and wants it to apply over 1000 seconds, or $\mathrm{10^{−15} day/day/s}$ (yes, those units are getting a bit crazy now). We must therefore be able to apply a torque to the Earth of approximately $\mathrm{10^{19} N \ m}$. As noted, the best way to do this is actually to move large masses of water closer to or further from the Earth's rotation axis, but you've specified explosions, so let's go with that. We do our explosion at the correct place on the equator and somehow manage to focus it so that all the debris is ejected directly backwards. It's really important that all the debris reaches escape velocity or it doesn't actually contribute a net change to the angular momentum, just sloshes it around in time, so let's say all our debris ends up moving at $\sim \mathrm{10^4\ m \ s^{-1}}$, so from our launch site at $R \approx \mathrm{6.4 \times 10^{6}\ m}$ each kilo of debris contributes $\sim \mathrm{10^{10}\ N \ m}$, meaning we only need to launch $\sim \mathrm{10^{9} kg}$ of material at each correction. Simple, that's just a lump. Handling the collateral damage from that is left as an engineering exercise. Don't forget the year adjustment -- he wants to get rid of the .2404 (IIRC) extra day in the year, too, so as to eliminate pesky leap years (never mind having to remember which century years are, and which aren't, leap years). – Zeiss Ikon Nov 21 '19 at 20:10 Simply put, sane or not, if he were any scientist worth his salt, he'd understand that he cannot make a day any closer to the 86400 seconds that it currently is defined as. How precise can we be? The length of the year is ~365.2422ish days.
This is the oft-cited duration of a tropical year, the "mean time between vernal equinoxes", but is in fact arguably wrong (see https://www.hermetic.ch/cal_stud/cassidy/err_trop.htm for the drama inherent in this issue). The time to orbit the sun is ~365.2564 days, and now you're already deeply into the weeds and your scientist is going mad asking himself "so what IS a year? What do I want to align TO?" And you can't get more precise than those four significant figures (well, five if you're optimistic; some use 365.24219, to specify it to the nearest second, but then it depends where you measure from). Any more digits simply aren't meaningful, because it varies by a few fractions of a second each year, due to chaotic atmospheric effects, geological effects (mantle convection, glacial rebound, etc), the rotation of the Earth slowing (due to tidal friction, etc), and more. This is why we occasionally get leap-seconds. How much energy do we need? The rotational kinetic energy of the planet is around 2.138 * 10^29 Joules. To convert 365.2422ish days to 365.0000 precisely would require about a 0.066% change to that energy. We need to find 1.4186948 * 10^26 J, and apply it to slow down the rotation of the planet. Can we do it with a gravity tug? A gravity tug seems to have the same problem as moving the Earth using mass-drivers to eject mass: it requires more energy than we have either on the Earth or on the tug. Can we do that with a solar sail? Sunlight hits every square meter of our planet at ~1000 Joules/second, so if mirrors were placed around the equator such that there was always a 1 m square mirror reflecting the sun on the side seeing the sunrise, you'd get a retardation force of that much. Over a year, that'd be 3.1536 * 10^10 J per year, which means you'd have it to the right speed in 10^14 years. If you made the mirrors 10 km square, or 100,000,000 square m, it would still take a million years. OK, solar sails aren't the answer. What about orbital bombardment? The problem there is that, sure, it's easy to drop rocks onto the planet. Well, OK, actually it's quite hard: they have this bad habit of falling down but continuously missing, which terrible habit we give the less embarrassing name "orbiting". But we need to get it to only just fail to miss: to collide with the most glancing blow possible, imparting as much of its energy in the direction of rotation and as little as possible downwards towards the crust. It's impossible to do that at 100% efficiency, but I'll assume you can get close. But the problem is that Chicxulub, the dinosaur-killer, imparted only 4.20 * 10^23 joules. That means you'd need a thousand dinosaur-killers hitting at just the right angle to change the world's spin enough. Everyone dies! A thousand times over. Dewi Morgan
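(A cross-check of the arithmetic in this last answer, assuming a moment of inertia of about 8.0×10^37 kg m². Note that rotational energy scales with the square of the spin rate, so the required change comes out nearer 2.8×10^26 J than the linearly scaled 0.066% figure; either way, the solar-sail and gravity-tug conclusions are unchanged.)

```python
import math

I = 8.034e37                          # kg m^2, Earth's moment of inertia (approx.)
day_now = 86400.0                     # s, treat the current day as the baseline
day_new = 86400.0 * 365.2422 / 365.0  # s, each day stretched so exactly 365 fill a year

def rot_energy(period_s):
    return 0.5 * I * (2.0 * math.pi / period_s) ** 2

dE = rot_energy(day_now) - rot_energy(day_new)
print("current rotational KE: %.3e J" % rot_energy(day_now))
print("energy to remove:      %.3e J" % dE)
print("fractional change:     %.3f%%" % (100.0 * dE / rot_energy(day_now)))

# Solar-sail comparison, taking the answer's ~1 kJ per m^2 per second at face value
joules_per_m2_per_year = 1000.0 * 365.2422 * 86400.0
print("years with a 1 m^2 mirror: %.1e" % (dE / joules_per_m2_per_year))
```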
Two increasingly popular options are amphetamines and methylphenidate, which are prescription drugs sold under the brand names Adderall and Ritalin. In the United States, both are approved as treatments for people with ADHD, a behavioural disorder which makes it hard to sit still or concentrate. Now they're also widely abused by people in highly competitive environments, looking for a way to remain focused on specific tasks. The main concern with pharmaceutical drugs is adverse effects, which also apply to nootropics with undefined effects. Long-term safety evidence is typically unavailable for nootropics.[13] Racetams — piracetam and other compounds that are structurally related to piracetam — have few serious adverse effects and low toxicity, but there is little evidence that they enhance cognition in people having no cognitive impairments.[19] There are some other promising prescription drugs that may have performance-related effects on the brain. But at this point, all of them seem to involve a roll of the dice. You may experience a short-term brain boost, but you could also end up harming your brain (or some other aspect of your health) in the long run. "To date, there is no safe drug that may increase cognition in healthy adults," Fond says of ADHD drugs, modafinil and other prescription nootropics. Some nootropics are more commonly used than others. These include nutrients like Alpha GPC, huperzine A, L-Theanine, bacopa monnieri, and vinpocetine. Other types of nootropics ware still gaining traction. With all that in mind, to claim there is a "best" nootropic for everyone would be the wrong approach since every person is unique and looking for different benefits. Some people aren't satisfied with a single supplement—the most devoted self-improvers buy a variety of different compounds online and create their own custom regimens, which they call "stacks." According to Kaleigh Rogers, writing in Vice last year, companies will now take their customers' genetic data from 23andMe or another source and use it to recommend the right combinations of smart drugs to optimize each individual's abilities. The problem with this practice is that there's no evidence the practice works. (And remember, the FDA doesn't regulate supplements.) Find out the 9 best foods to boost your brain health. 70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of: You may have come across this age-old adage, "Work smarter, not harder." So, why not extend the same philosophy in other aspects of your life? Are you in a situation wherein no matter how much you exercise, eat healthy, and sleep well, you still struggle to focus and motivate yourself? If yes, you need a smart solution minus the adverse health effects. Try 'Smart Drugs,' that could help you out of your situation by enhancing your thought process, boosting your memory, and making you more creative and productive. In nootropic stacks, it's almost always used as a counterbalance to activating ingredients like caffeine. 
L-Theanine, in combination with caffeine, increases alertness, reaction time, and general attention [40, 41]. At the same time, it reduces possible headaches and removes the jitteriness caused by caffeine [42]. It takes the edge of other nootropic compounds. Texas-based entrepreneur and podcaster Mansal Denton takes phenylpiracetam, a close relative of piracetam originally developed by the Soviet Union as a medication for cosmonauts, to help them endure the stresses of life in space. "I have a much easier time articulating certain things when I take it, so I typically do a lot of recording [of podcasts] on those days," he says. I started with the 10g of Vitality Enhanced Blend, a sort of tan dust. Used 2 little-spoonfuls (dust tastes a fair bit like green/oolong tea dust) into the tea mug and then some boiling water. A minute of steeping and… bleh. Tastes sort of musty and sour. (I see why people recommended sweetening it with honey.) The effects? While I might've been more motivated - I hadn't had caffeine that day and was a tad under the weather, a feeling which seemed to go away perhaps half an hour after starting - I can't say I experienced any nausea or very noticeable effects. (At least the flavor is no longer quite so offensive.) Ginsenoside Rg1, a molecule found in the plant genus panax (ginseng), is being increasingly researched as an effect nootropic. Its cognitive benefits including increasing learning ability and memory acquisition, and accelerating neural development. It targets mainly the NMDA receptors and nitric oxide synthase, which both play important roles in personal and emotional intelligence. The authors of the study cited above, say that their research findings thus far have boosted their confidence in a "bright future of cognitive drug development." Rogers RD, Blackshaw AJ, Middleton HC, Matthews K, Hawtin K, Crowley C, Robbins TW. Tryptophan depletion impairs stimulus-reward learning while methylphenidate disrupts attentional control in healthy young adults: Implications for the monoaminergic basis of impulsive behaviour. Psychopharmacology. 1999;146:482–491. doi: 10.1007/PL00005494. [PubMed] [CrossRef] These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics. The evidence? Ritalin is FDA-approved to treat ADHD. It has also been shown to help patients with traumatic brain injury concentrate for longer periods, but does not improve memory in those patients, according to a 2016 meta-analysis of several trials. A study published in 2012 found that low doses of methylphenidate improved cognitive performance, including working memory, in healthy adult volunteers, but high doses impaired cognitive performance and a person's ability to focus. (Since the brains of teens have been found to be more sensitive to the drug's effect, it's possible that methylphenidate in lower doses could have adverse effects on working memory and cognitive functions.) There is no official data on their usage, but nootropics as well as other smart drugs appear popular in the Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. 
It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, a LA based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says. The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU35, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140nmol/l and reading anecdotes online suggest that 5k IU daily doses tend to put people well below that (around 70-100nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia. This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and 38 \times 7.25 = 275.5. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so \frac{365.25}{120} \times 9 \times 5 = 137. "In the hospital and ICU struggles, this book and Cavin's experience are golden, and if we'd have had this book's special attention to feeding tube nutrition, my son would be alive today sitting right here along with me saying it was the cod liver oil, the fish oil, and other nutrients able to be fed to him instead of the junk in the pharmacy tubes, that got him past the liver-test results, past the internal bleeding, past the brain difficulties controlling so many response-obstacles back then. Back then, the 'experts' in rural hospitals were unwilling to listen, ignored my son's unexpected turnaround when we used codliver oil transdermally on his sore skin, threatened instead to throw me out, but Cavin has his own proof and his accumulated experience in others' journeys. Cavin's boxed areas of notes throughout the book on applying the brain nutrient concepts in feeding tubes are powerful stuff, details to grab onto and run with… hammer them! The surveys just reviewed indicate that many healthy, normal students use prescription stimulants to enhance their cognitive performance, based in part on the belief that stimulants enhance cognitive abilities such as attention and memorization. Of course, it is possible that these users are mistaken. One possibility is that the perceived cognitive benefits are placebo effects. 
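Returning to the self-experiment cost arithmetic quoted a few paragraphs above, the figures can be reproduced in a few lines; this sketch uses only the numbers stated in the text (38 hours at a $7.25/hour wage, $12.80 per thousand pills, and 120-pill bottles of potassium iodide at about $9).

```python
# Cost arithmetic for the hypothetical 5-year blinded experiment, using only
# figures stated in the text.
hours, wage = 38, 7.25
print("time cost:            $%.2f" % (hours * wage))                        # ~$275.50

pills, price_per_thousand = 12_000, 12.80
print("pill cost:            $%.2f" % (pills / 1000 * price_per_thousand))   # ~$154

bottles_per_year = 365.25 / 120   # one 120-pill bottle lasts 120 days
print("iodide cost, 5 years: $%.0f" % (bottles_per_year * 9 * 5))            # ~$137
```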
Another is that the drugs alter students' perceptions of the amount or quality of work accomplished, rather than affecting the work itself (Hurst, Weidner, & Radlow, 1967). A third possibility is that stimulants enhance energy, wakefulness, or motivation, which improves the quality and quantity of work that students can produce with a given, unchanged, level of cognitive ability. To determine whether these drugs enhance cognition in normal individuals, their effects on cognitive task performance must be assessed in relation to placebo in a masked study design. Another factor to consider is whether the nootropic is natural or synthetic. Natural nootropics generally have effects which are a bit more subtle, while synthetic nootropics can have more pronounced effects. It's also important to note that there are natural and synthetic nootropics. Some natural nootropics include Ginkgo biloba and ginseng. One benefit to using natural nootropics is they boost brain function and support brain health. They do this by increasing blood flow and oxygen delivery to the arteries and veins in the brain. Moreover, some nootropics contain Rhodiola rosea, panxax ginseng, and more. And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy. The truth is that, almost 20 years ago when my brain was failing and I was fat and tired, I did not know to follow this advice. I bought $1000 worth of smart drugs from Europe, took them all at once out of desperation, and got enough cognitive function to save my career and tackle my metabolic problems. With the information we have now, you don't need to do that. Please learn from my mistakes! Unfortunately, cognitive enhancement falls between the stools of research funding, which makes it unlikely that such research programs will be carried out. Disease-oriented funders will, by definition, not support research on normal healthy individuals. The topic intersects with drug abuse research only in the assessment of risk, leaving out the study of potential benefits, as well as the comparative benefits of other enhancement methods. As a fundamentally applied research question, it will not qualify for support by funders of basic science. The pharmaceutical industry would be expected to support such research only if cognitive enhancement were to be considered a legitimate indication by the FDA, which we hope would happen only after considerably more research has illuminated its risks, benefits, and societal impact. Even then, industry would have little incentive to delve into all of the issues raised here, including the comparison of drug effects to nonpharmaceutical means of enhancing cognition. Aniracetam is known as one of the smart pills with the widest array of uses. From benefits for dementia patients and memory boost in adults with healthy brains, to the promotion of brain damage recovery. It also improves the quality of sleep, what affects the overall increase in focus during the day. Because it supports the production of dopamine and serotonin, it elevates our mood and helps fight depression and anxiety. Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. 
But at 5mg elemental lithium x 200 pills, I'd have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I'm not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn't notice any large change in emotional affect or energy levels. And it may've helped my motivation (though I am also trying out the tyrosine). If smart drugs are the synthetic cognitive enhancers, sleep, nutrition and exercise are the "natural" ones. But the appeal of drugs like Ritalin and modafinil lies in their purported ability to enhance brain function beyond the norm. Indeed, at school or in the workplace, a pill that enhanced the ability to acquire and retain information would be particularly useful when it came to revising and learning lecture material. But despite their increasing popularity, do prescription stimulants actually enhance cognition in healthy users? Some smart drugs can be found in health food stores; others are imported or are drugs that are intended for other disorders such as Alzheimer's disease and Parkinson's disease. There are many Internet web sites, books, magazines and newspaper articles detailing the supposed effects of smart drugs. There are also plenty of advertisements and mail-order businesses that try to sell "smart drugs" to the public. However, rarely do these businesses or the popular press report results that show the failure of smart drugs to improve memory or learning. Rather, they try to show that their products have miraculous effects on the brain and can improve mental functioning. Wouldn't it be easy to learn something by "popping a pill" or drinking a soda laced with a smart drug? This would be much easier than taking the time to study. Feeling dull? Take your brain in for a mental tune up by popping a pill! But while some studies have found short-term benefits, Doraiswamy says there is no evidence that what are commonly known as smart drugs — of any type — improve thinking or productivity over the long run. "There's a sizable demand, but the hype around efficacy far exceeds available evidence," notes Doraiswamy, adding that, for healthy young people such as Silicon Valley go-getters, "it's a zero-sum game. That's because when you up one circuit in the brain, you're probably impairing another system." (We already saw that too much iodine could poison both adults and children, and of course too little does not help much - iodine would seem to follow a U-curve like most supplements.) The listed doses at iherb.com often are ridiculously large: 10-50mg! These are doses that seems to actually be dangerous for long-term consumption, and I believe these are doses that are designed to completely suffocate the thyroid gland and prevent it from absorbing any more iodine - which is useful as a short-term radioactive fallout prophylactic, but quite useless from a supplementation standpoint. Fortunately, there are available doses at Fitzgerald 2012's exact dose, which is roughly the daily RDA: 0.15mg. Even the contrarian materials seem to focus on a modest doubling or tripling of the existing RDA, so the range seems relatively narrow. I'm fairly confident I won't overshoot if I go with 0.15-1mg, so let's call this 90%. Most people I talk to about modafinil seem to use it for daytime usage; for me that has not ever worked out well, but I had nothing in particular to show against it. 
So, as I was capping the last of my piracetam-caffeine mix and clearing off my desk, I put the 4 remaining Modalerts pills into capsules with the last of my creatine powder and then mixed them with 4 of the theanine-creatine pills. Like the previous Adderall trial, I will pick one pill blindly each day and guess at the end which it was. If it was active (modafinil-creatine), take a break the next day; if placebo (theanine-creatine), replace the placebo and try again the next day. We'll see if I notice anything on DNB or possibly gwern.net edits. Does little alone, but absolutely necessary in conjunction with piracetam. (Bought from Smart Powders.) When turning my 3kg of piracetam into pills, I decided to avoid the fishy-smelling choline and go with 500g of DMAE (Examine.com); it seemed to work well when I used it before with oxiracetam & piracetam, since I had no piracetam headaches, and be considerably less bulky. There is much to be appreciated in a brain supplement like BrainPill (never mind the confusion that may stem from the generic-sounding name) that combines tried-and-tested ingredients in a single one-a-day formulation. The consistency in claims and what users see in real life is an exemplary one, which convinces us to rate this powerhouse as the second on this review list. Feeding one's brain with nootropics and related supplements entails due diligence in research and seeking the highest quality, and we think BrainPill is up to task. Learn More... These pills don't work. The reality is that MOST of these products don't work effectively. Maybe we're cynical, but if you simply review the published studies on memory pills, you can quickly eliminate many of the products that don't have "the right stuff." The active ingredients in brain and memory health pills are expensive and most companies sell a watered down version that is not effective for memory and focus. The more brands we reviewed, the more we realized that many of these marketers are slapping slick labels on low-grade ingredients. As opposed to what it might lead you to believe, Ginkgo Smart is not simply a Ginkgo Biloba supplement. In all actuality, it's much more than that – a nootropic (Well duh, we wouldn't be reviewing it otherwise). Ginkgo Smart has actually been seeing quite some popularity lately, possibly riding on the popularity of Ginkgo Biloba as a supplement, which has been storming through the US lately, and becoming one of the highest selling supplement in the US. We were pleasantly pleased at the fact that it wasn't too hard to find Ginkgo Smart's ingredients… Learn More... Most diehard nootropic users have considered using racetams for enhancing brain function. Racetams are synthetic nootropic substances first developed in Russia. These smart drugs vary in potency, but they are not stimulants. They are unlike traditional ADHD medications (Adderall, Ritalin, Vyvanse, etc.). Instead, racetams boost cognition by enhancing the cholinergic system. But perhaps the biggest difference between Modafinil and other nootropics like Piracetam, according to Patel, is that Modafinil studies show more efficacy in young, healthy people, not just the elderly or those with cognitive deficits. That's why it's great for (and often prescribed to) military members who are on an intense tour, or for those who can't get enough sleep for physiological reasons. 
One study, by researchers at Imperial College London, and published in Annals of Surgery, even showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions. P.S. Even though Thrive Natural's Super Brain Renew is the best brain and memory supplement we have found, we would still love to hear about other Brain and Memory Supplements that you have tried! If you have had a great experience with a memory supplement that we did not cover in this article, let us know! E-mail me at : [email protected] We'll check it out for you and if it looks good, we'll post it on our site! COGNITUNE is for informational purposes only, and should not be considered medical advice, diagnosis or treatment recommendations. Always consult with your doctor or primary care physician before using any nutraceuticals, dietary supplements, or prescription medications. Seeking a proper diagnosis from a certified medical professional is vital for your health. Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement. You'll find several supplements that can enhance focus, energy, creativity, and mood. These brain enhancers can work very well, and their benefits often increase over time. Again, nootropics won't dress you in a suit and carry you to Wall Street. That is a decision you'll have to make on your own. But, smart drugs can provide the motivation boost you need to make positive life changes. Iluminal is an example of an over-the-counter serotonergic drug used by people looking for performance enhancement, memory improvements, and mood-brightening. Also noteworthy, a wide class of prescription anti-depression drugs are based on serotonin reuptake inhibitors that slow the absorption of serotonin by the presynaptic cell, increasing the effect of the neurotransmitter on the receptor neuron – essentially facilitating the free flow of serotonin throughout the brain. At dose #9, I've decided to give up on kratom. It is possible that it is helping me in some way that careful testing (eg. dual n-back over weeks) would reveal, but I don't have a strong belief that kratom would help me (I seem to benefit more from stimulants, and I'm not clear on how an opiate-bearer like kratom could stimulate me). So I have no reason to do careful testing. Oh well. On the plus side: - I noticed the less-fatigue thing to a greater extent, getting out of my classes much less tired than usual. (Caveat: my sleep schedule recently changed for the saner, so it's possible that's responsible. I think it's more the piracetam+choline, though.) - One thing I wasn't expecting was a decrease in my appetite - nobody had mentioned that in their reports.I don't like being bothered by my appetite (I know how to eat fine without it reminding me), so I count this as a plus. - Fidgeting was reduced further Many laboratory tasks have been developed to study working memory, each of which taxes to varying degrees aspects such as the overall capacity of working memory, its persistence over time, and its resistance to interference either from task-irrelevant stimuli or among the items to be retained in working memory (i.e., cross-talk). 
Tasks also vary in the types of information to be retained in working memory, for example, verbal or spatial information. The question of which of these task differences correspond to differences between distinct working memory systems and which correspond to different ways of using a single underlying system is a matter of debate (e.g., D'Esposito, Postle, & Rypma, 2000; Owen, 2000). For the present purpose, we ignore this question and simply ask, Do MPH and d-AMP affect performance in the wide array of tasks that have been taken to operationalize working memory? If the literature does not yield a unanimous answer to this question, then what factors might be critical in determining whether stimulant effects are manifest? My first impression of ~1g around 12:30PM was that while I do not feel like running around, within an hour I did feel like the brain fog was lighter than before. The effect wasn't dramatic, so I can't be very confident. Operationalizing brain fog for an experiment might be hard: it doesn't necessarily feel like I would do better on dual n-back. I took 2 smaller doses 3 and 6 hours later, to no further effect. Over the following weeks and months, I continued to randomly alternate between potassium & non-potassium days. I noticed no effects other than sleep problems. In the largest nationwide study, McCabe et al. (2005) sampled 10,904 students at 119 public and private colleges and universities across the United States, providing the best estimate of prevalence among American college students in 2001, when the data were collected. This survey found 6.9% lifetime, 4.1% past-year, and 2.1% past-month nonmedical use of a prescription stimulant. It also found that prevalence depended strongly on student and school characteristics, consistent with the variability noted among the results of single-school studies. The strongest predictors of past-year nonmedical stimulant use by college students were admissions criteria (competitive and most competitive more likely than less competitive), fraternity/sorority membership (members more likely than nonmembers), and gender (males more likely than females). Smart drugs, formally known as nootropics, are medications, supplements, and other substances that improve some aspect of mental function. In the broadest sense, smart drugs can include common stimulants such as caffeine, herbal supplements like ginseng, and prescription medications for conditions such as ADHD, Alzheimer's disease, and narcolepsy. These substances can enhance concentration, memory, and learning. Omega-3 fatty acids: DHA and EPA – two Cochrane Collaboration reviews on the use of supplemental omega-3 fatty acids for ADHD and learning disorders conclude that there is limited evidence of treatment benefits for either disorder.[42][43] Two other systematic reviews noted no cognition-enhancing effects in the general population or middle-aged and older adults.[44][45]
CommonCrawl
Short communication | Open | Published: 13 January 2017 Jim O' Doherty1 & Paul Schleyer2 Simultaneous cardiac perfusion studies are an increasing trend in PET-MR imaging. During dynamic PET imaging, the introduction of gadolinium-based MR contrast agents (GBCA) at high concentrations during a dual injection of GBCA and PET radiotracer may cause increased attenuation effects of the PET signal, and thus errors in quantification of PET images. We thus aimed to calculate the change in linear attenuation coefficient (LAC) of a mixture of PET radiotracer and increasing concentrations of GBCA in solution and furthermore, to investigate if this change in LAC produced a measurable effect on the image-based PET activity concentration when attenuation corrected by three different AC strategies. We performed simultaneous PET-MR imaging of a phantom in a static scenario using a fixed activity of 40 MBq [18 F]-NaF, water, and an increasing GBCA concentration from 0 to 66 mM (based on an assumed maximum possible concentration of GBCA in the left ventricle in a clinical study). This simulated a range of clinical concentrations of GBCA. We investigated two methods to calculate the LAC of the solution mixture at 511 keV: (1) a mathematical mixture rule and (2) CT imaging of each concentration step and subsequent conversion to LAC at 511 keV. This comparison showed that the ranges of LAC produced by both methods are equivalent with an increase in LAC of the mixed solution of approximately 2% over the range of 0–66 mM. We then employed three different attenuation correction methods to the PET data: (1) each PET scan at a specific millimolar concentration of GBCA corrected by its corresponding CT scan, (2) each PET scan corrected by a CT scan with no GBCA present (i.e., at 0 mM GBCA), and (3) a manually generated attenuation map, whereby all CT voxels in the phantom at 0 mM were replaced by LAC = 0.1 cm−1. All attenuation correction methods (1–3) were accurate to the true measured activity concentration within 5%, and there were no trends in image-based activity concentrations upon increasing the GBCA concentration of the solution. The presence of high GBCA concentration (representing a worst-case scenario in dynamic cardiac studies) in solution with PET radiotracer produces a minimal effect on attenuation-corrected PET quantification. Gadolinium-based contrast agents (GBCA) represent the most common types of magnetic resonance contrast agents, used primarily as a T1 contrast agent. GBCA consist of transitional (i.e., heavy) metal Gd ions bound by chelating agents to form a stable complex of relatively low toxicity [1]. Many GBCA have different molecular structures yet similar pharmacokinetic properties, and therefore, few differences can be discerned in clinical practice [2]. Paramagnetic ions such as Gd3+ in GBCA dissolved in an aqueous solution act as microscopic magnets in the local environment causing water protons to "feel" a large magnetic moment and thus a local change in the average relaxation time. They are most commonly employed due to a predominant shortening of T1 relaxation time, which results in an increased signal intensity on a T1-weighted image (known as positive enhancement). The use of simultaneous PET-MR (positron emission tomography-magnetic resonance) in cardiology opens up the potential for the simultaneous injection of a PET perfusion tracer (such as [15O]-H2O, [13N]-NH3, or [82Rb]-Cl) with GBCA for parallel myocardial perfusion quantification using both methodologies. 
Also, as cardiac MR imaging is prone to scanner-dependent calibration curves and saturation effects [3, 4], this quantification methodology could also allow direct comparison between calculated PET and MR perfusion variables and quantification techniques [5]. Previous investigations of the effect of GBCA in clinical PET-MR imaging have shown that MR-based attenuation maps acquired via a two-point VIBE-based DIXON sequence (whereby an automated segmentation algorithm provides four different tissue classes: fat, soft tissue, lung, and air) may be affected only by orally administered iron-oxide-based contrast agent and that neither intravenous injections nor orally administered GBCA significantly affect the attenuation of the PET emission data [6]. This group's work looked at clinically relevant concentrations of GBCA for static whole-body imaging, determining a worst-case scenario for the concentration in the blood. However, not yet investigated are the technical considerations for a dynamic simultaneous PET-MR acquisition, such as those required for calculation of an image-derived input function and myocardial uptake curves in PET cardiology studies. In this work, we aimed to assess the effects of high concentrations of GBCA firstly on the change in linear attenuation coefficient (LAC—the fraction of photons attenuated per unit thickness of the material) of a solution of increasing GBCA concentration and PET radiotracer and secondly on PET image-based activity concentration. We employed CT imaging and a mathematical model to provide measurements of the LAC at 511 keV, and investigated the effects of any change in LAC on the quantification of PET image-based activity concentration using three different attenuation correction methods. Solution preparation In order to simulate a "worst-case scenario" of the maximum possible GBCA concentration in the left ventricle of the heart during clinical imaging, an assumption was made that an entire bolus of GBCA can be present in the left ventricle. Thus, we assumed a maximum bolus volume of 20 ml being diluted in an average end diastolic left ventricle volume (EDV) of 150 ml (a reported EDV is 142 ± 21 ml [7]). Assuming 20 ml of 0.5 mmol/ml solution GBCA in the left ventricle, the molar concentration of GBCA (from Table 1) can reach a potential maximum of approximately 70 mM. After ejection of the GBCA from the heart, the concentration in the left ventricle then quickly reduces (over approximately 30–50 s) as it distributes into a larger blood volume. Thus, our static experiments aimed to cover the minimum to potential maximum range of GBCA concentrations in the left ventricle during the times that both MR and PET arterial input functions are measured on resulting reconstructed images. Table 1 The composition of common MR contrast agents in terms of the amount of Gadolinium present in the solution from the summary of product characteristics datasheets A thin plastic bottle (max volume = 160 ml, d = 5 cm, h = 8.5 cm) was filled with 120 ml of distilled water together with 40 MBq of [18F]-NaF in 0.2 ml (as measured in a standard dose calibrator with ±5% accuracy) in order to provide measurements of PET activity concentration. We then added DOTAREM 0.5 mmol/ml [8]—a GBCA utilized throughout our hospital—in incremental 3-mM steps until a 30-mM solution was reached (ten concentration steps). After reaching 30 mM, 4-mM steps (ten steps in total) were added to make a solution with a final concentration of 66 mM. (A short numerical sketch of these dilution calculations is given below.)
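To make the dilution arithmetic concrete, the following Python sketch reproduces the worst-case left-ventricle concentration estimate and computes how much 0.5 mmol/ml stock would need to be added at each target concentration of the phantom series. Only the quoted volumes, stock concentration and target concentrations come from the text; the incremental-volume algebra is our own illustration and is not described in the paper.

```python
# Minimal sketch (not from the paper): worst-case left-ventricle GBCA
# concentration, and the stock volume needed at each step of the phantom
# dilution series. Only the quoted volumes/concentrations come from the text;
# the incremental-volume algebra is our own illustration.

bolus_ml, stock_mmol_per_ml, edv_ml = 20.0, 0.5, 150.0
worst_case_mM = bolus_ml * stock_mmol_per_ml / (edv_ml / 1000.0)   # mmol per litre
print(f"worst-case LV concentration ~ {worst_case_mM:.0f} mM")     # ~67 mM (text rounds to ~70 mM)

# Phantom series: 120 ml water + 0.2 ml tracer; stock DOTAREM = 500 mmol/l.
targets_mM = list(range(3, 31, 3)) + list(range(34, 67, 4))  # 3-mM steps to 30, then 4-mM steps to 66
stock_mM, volume_ml, gbca_mmol = 1000.0 * stock_mmol_per_ml, 120.2, 0.0
for c in targets_mM:
    # Solve 1000*(gbca_mmol + stock_mmol_per_ml*x) / (volume_ml + x) = c for x (ml of stock to add)
    x = (c * volume_ml - 1000.0 * gbca_mmol) / (stock_mM - c)
    gbca_mmol += stock_mmol_per_ml * x
    volume_ml += x
    print(f"{c:2d} mM: add {x:.2f} ml of stock (total volume {volume_ml:.1f} ml)")
```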
At each concentration step, the solution was scanned on a CT scanner followed by a PET-MRI scanner. CT images were acquired only for calculation of the LAC of the solution on a GE Discovery 710 PET-CT scanner (140 kV, 20 mA, 0.5-s rotation, 40-mm collimation). No PET scanning was performed on the PET-CT scanner. PET-MR scans were performed on a simultaneous whole-body PET-MR scanner (Siemens Biograph mMR, Siemens Healthcare, Erlangen, Germany) located next door to the PET-CT scanner. Each PET-MR scan lasted 3 min, and all PET data were decay corrected to a common time point. By default, during PET scanning, an MR-based attenuation correction (MRAC) sequence was performed with each PET-MR scan at each GBCA concentration step. This was generated using the standard dual-point VIBE T1-weighted Dixon sequence provided by the manufacturer on the scanner. Mixture rule for calculation of LAC In order to understand how the introduction of GBCA can affect the image-based PET activity concentration during simultaneous PET-MR, it is important to understand the attenuation properties of the different components at 511 keV. Data for the mass attenuation coefficient (MAC—the photon attenuation per unit areal density, characterizing how strongly the material attenuates gamma radiation) of Gd and water are shown in Fig. 1 [9]. At 150 keV (close to the CT tube potential of 140 kVp), markedly different MACs of 1.1 and 0.1505 cm2/g for Gd and water, respectively, can be observed. However, at 500 keV (close to the PET photon energy of 511 keV), these MACs are more similar, 0.1139 and 0.0969 cm2/g for Gd and water, respectively. The measured LAC of other tissues of the body at 511 keV are also similar at this energy [9] (skeletal muscle = 0.1010 cm−1 [10], adipose tissue = 0.09 cm−1 [11], and whole blood = 0.0905 cm−1 [11]). Mass attenuation spectra of water and gadolinium, with a line drawn at 511 keV showing the similar mass attenuation coefficients. Inset shows a close-up of the values at 511 keV. At 500 keV (the last measured point), this difference in μM is 14.95%. Data has been replotted from tabulated data originally published by Hubbell [9]. The MAC of a homogeneously mixed solution can be approximated by Hubbell's weighted average mixture rule for homogeneous solutions with photon energies >10 keV [9]: $$ {\mu}_{\mathrm{M}\left(\mathrm{soln}\right)}={\displaystyle {\sum}_i{\mu_{\mathrm{M}}}_{(i)}}{w}_{(i)} $$ where μM(soln) represents the MAC of the total solution and μM(i) and w(i) represent the MAC (cm2/g) and fractional weight of the ith component of the mixture. Given that the solution of GBCA can be approximated as a mixture of gadolinium (Gd) and water (wa), this can be written as: $$ {\mu}_{\mathrm{M}\left(\mathrm{soln}\right)}={\mu}_{\mathrm{M}(wa)}{w}_{(wa)}+{\mu}_{\mathrm{M}\left(\mathrm{G}\mathrm{d}\right)}{w}_{\left(\mathrm{G}\mathrm{d}\right)} $$ Thus, assuming that the measured values of MAC at 500 keV are representative of those at 511 keV, in order to determine the MAC, and hence the LAC (LAC = MAC × solution density), the total mass of solution and the fractional weights of water and gadolinium are required. Given that μM(wa) and μM(Gd) at 511 keV are 0.0969 and 0.1139 cm2/g, respectively (from Fig. 1), the final mixture will have μM(soln) confined to μM(Gd) > μM(soln) > μM(wa). (A minimal numerical sketch of this mixture-rule calculation is given below.) Image reconstruction and analysis Investigation of LAC All CT images were reconstructed on the PET-CT scanner using a filtered back-projection (FBP) algorithm as standard on the scanner software.
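As noted above, the following is a minimal Python sketch (not the authors' code) of the two-component mixture rule (Eq. (2)) at 511 keV, using the MAC values quoted in the text. Treating the solution as gadolinium plus water at a fixed volume, and approximating the density from the component masses, are simplifying assumptions of this illustration, so it is expected to reproduce only the order of magnitude of the LAC change (a percent-level increase over 0–66 mM) rather than the exact measured values.

```python
# Minimal sketch (not the authors' code) of the Eq. (2) mixture rule at 511 keV.
# Assumptions: the dissolved contrast agent is represented by its gadolinium
# mass only, the remainder of the solution is pure water, and the volume is
# held fixed at the initial 120 ml.

GD_MOLAR_MASS = 157.25                 # g/mol
MAC_WATER, MAC_GD = 0.0969, 0.1139     # cm^2/g at 511 keV (values quoted in the text)

def mixture_lac(conc_mM, volume_ml=120.0):
    """Return the LAC (cm^-1) of a Gd + water mixture at the given Gd concentration."""
    m_gd = conc_mM * 1e-3 * (volume_ml * 1e-3) * GD_MOLAR_MASS  # grams of gadolinium
    m_water = volume_ml * 1.0                                    # grams, assuming 1 g/ml
    w_gd = m_gd / (m_gd + m_water)                               # fractional weight of Gd
    mac_soln = (1.0 - w_gd) * MAC_WATER + w_gd * MAC_GD          # Eq. (2)
    density = (m_gd + m_water) / volume_ml                       # g/ml
    return mac_soln * density

lac_0, lac_66 = mixture_lac(0.0), mixture_lac(66.0)
print(f"LAC at 0 mM:  {lac_0:.4f} cm^-1")
print(f"LAC at 66 mM: {lac_66:.4f} cm^-1 ({100.0 * (lac_66 / lac_0 - 1.0):+.1f}%)")
```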
Transformation from Hounsfield units (HU) to LAC at 511 keV was performed offline using a bi-linear calibration curve (140 kVp) as implemented on the PET-CT scanner. LAC values applied to the images by the MRAC segmentation procedure (each voxel in the image represents LAC × 10,000) were obtained from the MR attenuation map by viewing the images on the scanner software and noting down the common LAC value applied to each voxel of the solution in the phantom. PET quantification In order to quantify any effect, a change in GBCA concentration (and hence a change in LAC) of the solution may have on final PET image data, attenuation correction of the PET data is required. All PET data were reconstructed on the PET-MR scanner using standard clinical reconstruction parameters (OSEM, 3 iterations, 21 subsets, 344 image matrix). PET data was not reconstructed using the default MRAC algorithm provided on the scanner of each GBCA step, instead PET emission sinograms were attenuation corrected with three different methods: AC1—Each PET image corrected by its corresponding CT-derived attenuation map. Each CT-derived attenuation map was registered to the MR-derived attenuation map using a rigid registration through Niftyreg software [12] and subsequently uploaded to the PET-MR scanner for attenuation correction of PET data. AC2—A CT-derived attenuation map with LAC values resulting from a CT scan of the phantom at 0 mM (i.e., no GBCA present). The dataset was registered and uploaded to the scanner as described for method AC1. AC3—A manually generated attenuation map whereby all CT voxels in the phantom were manually set to 0.1 cm−1. Method AC1 provides a standard method for attenuation correction, given that the LAC calculated from the bi-linear scaling of CT data from each GBCA step is being used to correct its corresponding PET scan. Method AC2 is employed as is common in a clinical scenario, where a single MR attenuation map acquired before the injection of PET radiotracer and GBCA is used to attenuation correct all dynamic PET frames. Method AC3 represents a scenario of using a single "soft tissue" LAC value as would be assigned by the MRAC segmentation algorithm on clinical scanning. All PET and CT image analyses were performed in OsiriX [Pixmeo SARL, Geneva, Switzerland]. A rectangular volume of interest (VOI) corresponding to a central portion of the solution was drawn on the phantom (volume = 75 cm3) at each concentration step. The average HU, image-based PET activity concentration (kBq/ml), and VOI standard deviations were obtained from the relevant slices (28 CT slices, 50 PET slices). Resulting PET data were decay corrected to a common time point and were also corrected for the increasing volume of the solution in order to visualize differences from the true activity concentration and from the LAC of the solution at 0-mM concentration. Figure 2 shows a comparison of LAC with increasing GBCA concentration for the mixture rule (Eq.2) and resulting LAC from CT scanning (bi-linear conversion from HU to LAC at 511 keV). LACs as generated by the MRAC segmentation are also shown for comparison only. LACs of the solution generated from CT imaging show a maximum increase of approximately 2% over the range of 0 and 66 mM, which correlates well with the increase predicted from the mixture model as described above. The LAC as determined by bi-linear CT calculation and the theoretical mixture model. MRAC-derived LAC values are shown for comparison only and were not used to correct PET data. 
CT and mixture model are closely correlated, showing an increase of approximately 2% up to 66 mM. The MRAC segmentation routine fails to determine accurate LAC at higher mM concentrations due to T1-shortening effects caused by the presence of high concentrations of GBCA Effect on PET quantification Figure 3 shows the effect of the different attenuation correction strategies (AC1, AC2, and AC3) on the quantification of PET data. The true activity concentration in the phantom was calculated at each time point from the knowledge of the original activity placed in the phantom, compensated for decay, and also the increasing volume at each concentration step. The image-based activity concentration is comparable across all three attenuation correction methods, and no trends are visible with increasing GBCA. Error bars in the activity concentration represent the mean kBq/ml ± one standard deviation of the mean, in order to indicate the level of noise present in the resulting images. It should be noted that in dynamic imaging a higher level of noise is likely to be obtained due to short frame times (potentially as short as 5–10 s depending on the imaging protocol), and low noise here indicates good count statistics only. Comparison of decay-corrected and volume-corrected image-based PET activity concentrations. PET data acquired on the PET-MR system were attenuation corrected by three different methods: (top) AC1 using the CT scan from each increasing step in GBCA, (middle) AC2 by using the first CT scan with no GBCA present, and (bottom) AC3 from a manually generated attenuation map where all voxels have LAC = 0.1 cm−1. Error bars represent one standard deviation of the VOI used to calculate the mean of PET data. No trends in mean image-based activity concentration can be observed while increasing GBCA concentration for any of the attenuation correction methods Our primary goal in these experiments was to evaluate the change in LAC of a mixture of PET radiotracer and increasing concentrations of GBCA, and also to investigate if this change produces a measurable effect on the image-based PET activity concentration. As proposed by Fig. 1, the effect of increasing concentrations of GBCA on quantification of image-based PET activity concentration should be limited to a very small range between the MAC of water and gadolinium at a photon energy of 511 keV. As detailed in Fig. 2, LAC comparisons via bi-linear CT closely match the LAC values resulting from the mixture rule calculations (Eq.2) with an increase of approximately 2% in LAC over the increasing GBCA range of 0 to 66 mM. This details that, in general, the mixture model can be utilized to predict the LAC of a solution of water and GBCA for phantom studies. Erroneous values of LAC derived from the MRAC segmentation procedure are shown in Fig. 2 for comparison to the data derived from the mixture model and from CT imaging only. Studies have shown in vivo the T1- and T2-shortening effects due to the use of GBCA in contrast enhancement studies, with a range from 30 to 68% shortening of T1 post administration of 0.1 mmol/kg body weight [13]. The effect of GBCA on clinically derived MR attenuation maps has recently been demonstrated [14], showing an overestimation of image-based activity concentration due to an assignment of part of the lung tissue to the soft tissue by the MRAC due to the presence of GBCA. This produced a measurable effect due to the large difference in LAC between the lung and soft tissue. 
In a simultaneous PET-MRI clinical cardiac acquisition, the AC procedure would be free from the influences of GBCA if the MRAC scan were performed before the administration of GBCA. However, if additional MRAC scans are performed after GBCA administration, effects of GBCA on the segmentation algorithm have to be taken into account. Effects on PET quantification Figure 3 details the accuracy of the correction strategies (AC1, AC2, and AC3) relative to the true activity concentration of each solution. We did not employ attenuation correction via the default MRAC procedure due to the inaccuracies of the MRAC in defining the LAC of the solution as detailed above. All three AC methodologies were within 5% of the ground truth activity concentration, with AC3 giving the mean image-based activity concentration closest to the true value over all solutions. Values were consistently lower than the true activity concentration by a maximum of 2.5% and are likely to originate from the calibration factor between the dose calibrator and PET scanner, although this value is well within the locally set tolerance of 5%. A mean difference of 1.5% was observed between PET data corrected by methods AC1 and AC2. Method AC3 represents the closest approximation to the clinical situation, as this is the determined LAC of the "soft tissue" class from MRAC segmentation, and would be applied to the heart and its contents in a clinical cardiac PET-MR acquisition. Although an LAC of 0.1 cm−1 was manually applied to the phantom data to simulate the value applied to the heart by the MRAC segmentation, method AC3 is valid only for this phantom setup as the effects of segmentation and LAC determination of structures external to the phantom (such as non-cardiac tissues in a clinical scenario) were not investigated in this work. It is important to note also that the solution in our study was water mixed with tracer and GBCA, rather than blood (MACBLOOD = 0.0959 cm2/g at 511 keV), which may produce a different effect on LAC determination from the automatic segmentation routine. Our assumption of all of the GBCA pooling in the left ventricle together with the radiotracer is likely to be an overestimation of the true scenario. In practical circumstances, the GBCA in cardiac MR studies is injected at a rate of 3 ml/s. With a standard heart rate of 60 bpm, GBCA would be cleared rapidly from the ventricle, indicating that the true GBCA concentration during a dynamic acquisition is likely to be much lower than 66 mM. However, we have addressed a broad concentration range of GBCA up to this maximum point. Static imaging was used to investigate the effects of GBCA on image-based activity concentration because it allowed all parameters except the GBCA concentration to be controlled, in contrast to a dynamic phantom in which the concentrations of both GBCA and PET radiotracer change rapidly. The use of a dynamic cardiac perfusion phantom for investigations into quantification of MR cardiac perfusion studies [15] would allow the investigation of the attenuation effects on dynamically acquired PET and MR input functions. Use of such a high concentration of GBCA (66 mM) may lead to effects of signal saturation (itself potentially corrected for by adjustment of the magnetization flip angle in gradient echo sequences [16]) in the derivation of an MR input function, the effects of which could also be investigated with a phantom.
Furthermore, the use of an anthropomorphic torso phantom with cardiac insert could provide a more realistic comparison to a clinical scenario (i.e., such as scattering of gamma ray photons). This would have required regular access to the cardiac chamber of the phantom which was impractical with the amount of steps of increasing concentration used in this study. Also, the study was concerned mainly with a carefully controlled study of the quantitative accuracy of PET when mixed in solution with GBCA, and thus, a true patient representation was not required. In order to avoid the potential confounding effects of dead time on the PET scanner when all of the radiotracer is placed in the field of view of the scanner, we utilized a PET activity of [18 F]-NaF of 40 MBq. This represents an activity far lower than that usually received by patients at our center undergoing [13 N]-NH3 cardiac imaging. The effect of dead time has been quantified on previous cardiac studies on PET-CT systems, for example, the effect on myocardial perfusion quantification [17], and also the limit of dead time losses by weight-based activity administration protocols [18]. Dead time effects have yet to be investigated in cardiac PET-MR imaging. As this work investigated the effect of GBCA on image-based measurements of PET activity concentration, the total activity in the phantom is not an important factor, as any GBCA effect would have the same contribution regardless of the total activity. We also aimed to reduce the radiation dose to the operator as much as possible due to multiple handling, filling, and transport steps performed. Our work employed a static simulation of a bolus of gadolinium-based contrast agent (GBCA) in solution with water and PET radiotracer in a simulated left ventricle. Our results have shown that when considering high concentrations of up to 66 mM of GBCA, the linear attenuation coefficient (LAC) of the mixed solution increases by approximately 2% over the 0–66 mM range. The quantitative accuracy of the resulting reconstructed PET images when attenuation corrected by CT data, and also a manually applied attenuation map is minimally affected by the presence of the GBCA. Hao D, Ai T, Goerner F, Hu X, Runge VM, Tweedle M. MRI contrast agents: basic chemistry and safety. J Magn Reson Imaging. 2012;36(5):1060–71. Pintaske J, Martirosian P, Graf H, et al. Relaxivity of gadopentetate dimeglumine (Magnevist), gadobutrol (Gadovist), and gadobenate dimeglumine (MultiHance) in human blood plasma at 0.2, 1.5, and 3 tesla. Invest Radiol. 2006;41(3):213–21. Jerosch-Herold M. Quantification of myocardial perfusion by cardiovascular magnetic resonance. J Cardiovasc Magn Reson. 2010;12:57. Keenan K, Stupic KF, Boss MA, Russek SE. Standardized phantoms for quantitative cardiac MRI. J Cardiovasc Magn Reson. 2015;17 Suppl 1:W36. Morton G, Chiribiri A, Ishida M, et al. Quantification of absolute myocardial perfusion in patients with coronary artery disease: comparison between cardiovascular magnetic resonance and positron emission tomography. J Am Coll Cardiol. 2012;60(16):1546–55. Lois C, Bezrukov I, Schmidt H, et al. Effect of MR contrast agents on quantitative accuracy of PET in combined whole-body PET/MR imaging. Eur J Nucl Med Mol Imaging. 2012;39(11):1756–66. Maciera AM, Prasad SK, Pennell DJ. Normalized left ventricular systolic and diastolic function by steady state free precession cardiovascular magnetic resonance. J Cardiovasc Magn Reson. 2006;8(3):417–26. DOTAREM ® summary of product characteristics. 
http://www.guerbet-us.com/products/dotarem.html. Available at: http://braccoimaging.com/sites/braccoimaging.com/files/technica_sheet_pdf/MultiHance.pdf Prescribing Information.pdf. Accessed 10 Feb 2016. Hubbell JH, Seltzer SM. Tables of X-ray mass attenuation coefficients and mass energy-absorption coefficients from 1 keV to 20 MeV for elements Z = 1 to 92 and 48 additional substances of dosimetric interest (version 1.4) [Online]. NIST Physical Reference Data; 2004. https://www.nist.gov/pml/x-ray-mass-attenuation-coefficients. Accessed 13 Feb 2016. Hubbell JH. Photon cross sections, attenuation coefficients, and energy absorption coefficients from 10 keV to 100 GeV. Washington, DC: National Bureau of Standards; 1969. International Committee on Radiation Units & Measurements. Tissue substitutes in radiation dosimetry and measurement. Bethesda: ICRU; 1989. Ourselin S, Roche A, Subsol G, Pennec X, Ayache N. Reconstructing a 3D structure from serial histological sections. Image Vis Comput. 2001;19(1–2):25–31. Yamada S, Kubota R, Yamada K, et al. T1 and T2 relaxation times on gadolinium-diethylenetriaminepentaacetic acid enhanced magnetic resonance images of brain tumors. Tohoku J Exp Med. 1990;160:145–8. Rischpler C, Nekolla SG, Kunze KP, Schwaiger M. PET/MRI of the heart. Semin Nucl Med. 2015;45(3):234–47. Chiribiri A, Schuster A, Ishida M, et al. Perfusion phantom: an efficient and reproducible method to simulate myocardial first-pass perfusion measurements with cardiovascular magnetic resonance. Magn Reson Med. 2013;69(3):698–707. De Naeyer D, Verhulst J, Ceelen W, Segers P, De Deene Y, Verdonck P. Flip angle optimization for dynamic contrast-enhanced MRI-studies with spoiled gradient echo pulse sequences. Phys Med Biol. 2011;56(16):5373–95. O' Doherty J, Schleyer P, Pike L, Marsden P. Effect of scanner dead time on kinetic parameters determined from image derived input functions in 13N cardiac PET. J Nucl Med. 2014;55(supplement 1):605. Renaud JM, Yip K, Guimond J et al. J Nucl Med. 2017;58:103–109. JOD acknowledges financial support from the Department of Health through the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London and King's College Hospital NHS Foundation Trust and The Centre of Excellence in Medical Engineering funded by the Wellcome Trust and EPSRC under grant number WT 088641/Z/09/Z. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, the DoH, EPSRC, or the Wellcome Trust. PS is an employee of Siemens Healthcare UK. The authors are grateful to the two anonymous reviewers for their constructive comments on the drafts of this manuscript. JOD and PS both designed the study. JOD carried out phantom preparations, acquisitions, and reconstructions on both PET-CT and PET-MR scanning. JOD and PS analyzed and interpreted the resulting data, and drafted and revised the manuscript. Both authors approved the final manuscript. PET Imaging Centre, Division of Imaging Sciences and Biomedical Engineering, King's College London, King's Health Partners, St. Thomas' Hospital, 1st Floor, Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK Jim O' Doherty Siemens Healthcare Limited, Frimley, Camberley, UK Paul Schleyer Correspondence to Jim O' Doherty.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. PET-MR Cardiac PET
CommonCrawl
Fake news propagates differently from real news even at early stages of spreading Zilong Zhao1,2, Jichang Zhao3, Yukie Sano4, Orr Levy5, Hideki Takayasu6,7, Misako Takayasu7, Daqing Li1,2, Junjie Wu3,8 & Shlomo Havlin5,7 EPJ Data Science volume 9, Article number: 7 (2020) Social media can be a double-edged sword for society, either as a convenient channel for exchanging ideas or as an unexpected conduit circulating fake news through a large population. While existing studies of fake news focus on theoretical modeling of propagation or identification methods based on machine learning, it is important to understand the realistic propagation mechanisms between theoretical models and black-box methods. Here we track large databases of fake news and real news in both Weibo in China and Twitter in Japan, two platforms from different cultures, which include their traces of re-postings. We find in both online social networks that fake news spreads distinctively from real news even at early stages of propagation, e.g., five hours after the first re-postings. Our findings demonstrate collective structural signals that help to understand the different propagation evolution of fake news and real news. Different from earlier studies, identifying the topological properties of the information propagation at early stages may offer novel features for early detection of fake news in social media. Main text Social networks such as Twitter or Weibo, involving billions of users around the world, have tremendously accelerated the exchange of information and have thereby led to fast polarization of public opinion [1]. For example, there is a large amount of fake news about the 3.11 earthquake in Japan, where about 80 thousand people have been involved in both diffusion and correction [2]. Such fake news items, which can be fabricated stories or statements without confirmation, circulate online pervasively through the conduit offered by on-line social networks. Without proper debunking and verification, the fast circulation of fake news can largely reshape public opinion and undermine modern society [3]. Even worse, fake news can be intentionally fabricated, leading to diverse threats to modern society, including turmoil or riots. The later fake news is identified and corrected, the greater the damage it can cause, due to its fast propagation. Thus, detecting fake news at an early stage, in order to effectively avoid further risks and damages, is crucial. Different from the age of word of mouth, identification of fake news in online social networks by experts is generally labor-intensive and inefficient [4], which has attracted much research attention to provide alternative solutions. One intuitive idea for understanding fake news spreading is inspired by epidemic models. In the 1960s, Daley and Kendall proposed the so-called DK model [5], in which agents are divided into ignorants, spreaders and stiflers. Its later extensions are based on the known epidemic spreading models such as the SIS model [6, 7], the SIR model [8, 9], the SI model [10, 11] and the SIRS model [12]. While these studies focus on theoretical modeling of fake news propagation, the availability of real data in online social platforms, as we show here, can provide an opportunity to deepen our understanding of realistic information cascades. Different kinds of observations have been made in empirical studies of fake news, including linguistic features [13], temporal features of re-postings [14–16] and user profiles [17–19].
Actually, information cascades in online social networks are collective propagation networks of which critical topological features remain yet unknown. This motivates our present study to analyze and compare empirically the propagation networks between fake and real news, especially in their early stage, so as to identify the propagation differences and mechanisms behind. These topological features could help to design machine learning approaches to essentially boost the accuracy of fake news targeting [20–22]. Very recently, based on empirical datasets, it has been found that the propagation network of fake news is different from that of real news [23]. They have found that falsehood propagates significantly farther, faster, deeper, and broader than truth news in many categories of information. While this study provides the possibility to differentiate fake news from real news based on the propagation network, it remains unclear how this difference between fake news and real news emerges and how soon one can separate these two types. Thus, a systematic study for the dynamic evolution of propagation topology is still missing. This motivated us to explore deeper in this direction of how the propagation evolves topologically in different scenarios. With collected real data, we identified early signals for identifying fake news, at five hours from the first re-posting, without other information on contents or users. Note that different from considering all the cascade components [23], our finding is valid for even only following the largest cascade component. Based on realistic traces of real and fake news propagation in both Weibo (from China) and Twitter (from Japan), we use the re-posting relationships between different users to establish propagation networks (see Methods for details). Given similar popularity scales, we find that fake news shows significant different topological features from real news. These novel topological features will enable us to design an efficient algorithm to distinguish between fake news and real news even shortly after their birth. To construct the propagation network of fake and real news, we utilize the re-posting relation between different users participating in circulating the same message (see Methods and Table 1). A schematic description of such propagation networks is shown in Fig. 1A. Typical propagation networks of fake news and real news in Weibo and Twitter are demonstrated in Fig. 1B–E. The topology of the propagation network of fake news and real news can be seen to be different. For example, the number of layers in fake news (Fig. 1B and 1D) is typically larger than that of real news (Fig. 1C and 1E). Additionally, from looking at various examples of fake news propagation networks, it is somewhat surprising that for widely distributed fake news, the creator does not usually have the largest degree in the propagation network (Figs. S1 and S2). In the following, our analysis considers also real news created by non-official sources, to avoid the artificial differences due to different types of information creators (official or non-official accounts). Typical examples of fake and real news networks. (A) Schematic diagram of the propagation of a post and its re-posting. The nodes represent the users and the edges are the re-postings. The directionality determines which user is the re-poster among the two users: the origin is the former re-poster and the target is the later re-poster. 
A layer consists of re-postings whose re-posters have the same distance from the creator. We color the edges according to their layer from light to dark blue. (B) A typical real-data Weibo fake news network with 1123 nodes. The edge's arrow stands for its direction. This fake news is about health problems due to a milk tea shop. (C) A typical Weibo real news network with 215 nodes. This is about a tip for preventing sunstroke. (D) A typical Twitter fake news network with 199 nodes. This tweet is about an electronics store that raised the price of a battery unreasonably. (E) A typical real news network on Twitter with 578 nodes. This tweet is a correction tweet against fake news about Cosmo Oil by the Asahi newspaper. We applied the Fruchterman–Reingold layout using the Pajek software. Table 1 Number of users and networks for different propagation networks Layer ratio. The layer number is defined as the number of hops from the creator to a given node for a given propagation network. The cumulative numbers of nodes at different layers as a function of time are demonstrated for four typical networks of fake news (Fig. 2A for Weibo and 2C for Twitter) and real news (Fig. 2B for Weibo and 2D for Twitter). The fraction of re-postings in the first layer of fake news networks is found to be significantly smaller than that of real news, while the fraction in other layers for fake news is significantly larger than that of real news. In comparison, early adopters who re-post the message shortly after the creator play a dominant role in circulating real news. These different roles lead to distinctive landscapes of propagation networks. Different layer sizes as a function of time in typical networks. The y axis is the cumulative number of re-postings at different layers of typical networks in Weibo and Twitter. The x axis is the time (in hours) from the time of news creation, and the different colors stand for different layers. Shown examples are (A) fake news and (B) real news in Weibo, as well as (C) fake news and (D) real news on Twitter. These four typical networks are the same networks shown in Fig. 1. In Fig. 2A, the fraction of nodes located in the first layer is around 45% of all the nodes at the end of propagation. However, in Fig. 2B, the first layer contains about 78% of all the nodes at the end of propagation. If the total number of nodes does not change much after 20 hours, we ignore the re-postings after 20 hours in order to show the layers in the figure clearly. It is seen that the layer sizes of real news and fake news are significantly different in both Weibo and Twitter. Real news networks tend to have a relatively larger first layer, while fake news networks are relatively uniformly distributed across the different layers.
3D, it is seen that, for the whole lifespan, the separation of the fake and the real is also significant. In the circulation of fake news, the success of the propagation depends highly on the branching process creating different layers, which show different evolution paths between fake and real news. We further investigate the probability difference between fake and real news based on distributions of layer ratio from the time of first re-posting (Figs. S3 and S4). Note that the layer size distribution has a peak around layer four on Twitter in Fig. 3B, probably due to secondary outbreaks. Ratio of layer sizes differentiates fake news from real news. The distribution of the ratio of layer sizes and its development after a period of time can differentiate fake news from real news. These differences appear already after a few hours. (A) The PDF of all re-postings in the first five layers averaged over all of the Weibo propagation networks. The p-value of Mann–Whitney is below 0.01. (B) The PDF of all re-postings in the first five layers in all networks of Twitter. The p-value of Mann–Whitney is below 0.01. (C) Distribution of the ratio of layer sizes at five hours from the first re-posting. The ratio of layer sizes is the size of the second layer divided by the size of the first layer. The p-value of Mann–Whitney between fake and real news is below 0.01. This figure considers all 1701 fake news, all 492 real news and 51 real news with non-official creators at five hours from the first re-posting. (D) Distribution of the ratio of layer sizes of all re-postings for the whole lifespan. The p-value between fake and real news is below 0.01. Here we consider all available Weibo propagation networks (all 1701 fake news, all 492 real news and 51 real news with non-official creators) It should be noted that real news is more likely to be created by official accounts such as government agencies or mass media agencies. In order to eliminate the possible effects of official creators, we also investigate the distribution of the ratio of layer sizes in real news from only non-official creators. While official news and non-official news have different sample sizes here, we found they both have different propagation patterns from fake news. For example, in Fig. 3C and 3D, the non-official real news and the fake news are found to have different distribution of layer size ratio. To verify our results, we also analyze data of 2000 more real news from non-official accounts in a more recent dataset from 2016 to 2018 shown in Figs. S5 and S6. The distributions of this real news dataset are also distinct from that of fake news. Characteristic distance. While the ratio of layer sizes can be regarded as a local feature of the network structure, we further inspect a global feature in terms of characteristic distance in a propagation network. As seen in Fig. 4A, distances between pairs of nodes in fake news are longer than those of real news, implying that later adopters foster the penetration of fake news in social networks. In order to quantify this finding for all the networks, we propose a second measure called characteristic distance (a) shown in Fig. 4B (see Methods). Considering the distance of all the networks as in Fig. 4B, fake news possesses a significantly longer characteristic distance (4.26) than that of real news (2.59). Similar results can also be observed in Twitter propagations (Fig. 4C). The distributions of characteristic distances for all networks are shown in Fig. 
4D, where the two curves of fake and real news are well separated. Different from the results in [23], we show that the size distributions of fake and real news are similar (Fig. S7). This suggests that with similar levels of popularity, the characteristic distance is significantly different in fake news compared to real news. We also verified that the propagation size is only weakly correlated with the characteristic distance (Fig. S8). To verify our results, we also analyze data of 2000 more real news from another dataset shown in Fig. S5. Characteristic distance differentiates fake news from real news. (A) The PDF of distances for three typical examples of networks for both fake and real news in Weibo. (B) The PDF of distances for all real and fake news networks in Weibo. (C) The PDF of distances of all real and fake news networks in Twitter. (D) The PDF of the characteristic distances (details in Methods) for fake news and real news. The p-value between fake and real news is below 0.01. Structural heterogeneity. Network topology describes the geometry of connections, with more information embedded than the scale statistics in [23]. Here we measure the heterogeneity (see Methods) of the propagation networks of fake and real news. The parameter h reflects the difference between a given propagation network and a star network of the same size. A network with a smaller h is more similar to a star network. Although the out-degree distribution demonstrates only a minor difference between fake news and real news (Fig. S9), interestingly, the topological heterogeneity is found here to be clearly distinguishable. Note that the relationship between heterogeneity and N for star networks is a power law, as seen in Fig. 5A. The parameter h is the difference between the logarithm of the heterogeneity value \(H_{s}\) of the same-size star network and the logarithm of the heterogeneity value \(H_{r}\) of the real network. The parameter h of fake news is significantly larger compared to that of real news. Consistent findings can also be observed on Twitter (Fig. 5B). In order to quantify the heterogeneity systematically, two distributions of h considering different time intervals are calculated. Figure 5C shows a significant difference at five hours from the first re-posting. For the whole propagation lifespan in Fig. 5D, h of fake news is also significantly larger than that of real news. Fake news networks typically have lower heterogeneity (larger h) since their propagation involves few dominant broadcasters. On the contrary, real news demonstrates higher heterogeneity (smaller h) and a more star-like layout. The ability to distinguish fake news from real ones is also valid for real news posted by non-official users (Fig. S10). This implies that the indicator based on structural heterogeneity is independent of the creator type. Additionally, another measure named the Herfindahl–Hirschman Index (HHI [24]) also shows a distinction between fake news and real news (Fig. S11). Heterogeneity measure for fake and real news in Weibo and Twitter. (A) The x axis is the size of the propagation network, and the y axis is the heterogeneity measure of the networks. The black line is the value for the star layout. The parameter h is the difference (on a logarithmic scale) between the heterogeneity value of the same-size star layout and that of the real network. (B) The same scatter plot as in (A) for Twitter. (C) Distribution of h at five hours from the first re-posting of the Weibo propagation networks. The p-value here is below 0.01.
(D) Distribution of h of all re-postings in Weibo for the whole lifespan. The p-value is below 0.01. The distinction between fake and real news provided by the heterogeneity measure is the strongest among the above three indicators, as seen in Fig. 6 and Table 2. For a given Weibo network, measuring its h provides a clear difference between fake news and real news, even when considering only re-postings up to five hours from the first re-posting (Fig. 6A). This identification becomes even sharper in Fig. 6B, when we consider all re-postings. We show in Fig. 6C the difference significance (see Methods) between fake news and real news for different h. The differences are about 76% and 79%, respectively, for re-postings within a relatively short time (five hours) and for all re-postings. Note that the probability of being fake news at five hours is already very similar to that for the whole propagation lifespan. The verification analysis (shown in Figs. S5 and S6) also demonstrates the difference significance between fake news and real news from another dataset, which consists entirely of news published by non-official accounts. Our results suggest that even without sophisticated features like texts or user profiles, direct and understandable topological features can offer high significance for developing early detection methods. The heterogeneity measure shows a large difference between fake news and real news of Weibo in its early stage of propagation. (A) Probability of being fake news at five hours. The three vertical lines divide the figure into four parts with an equal number of networks. For example, the area to the left of the first line contains 25% of all the Weibo propagation networks. (B) Probability of being fake news for the whole lifespan. (C) The difference significance between fake news and real news. Table 2 Comparison between three methods Classifier. The three features mentioned above, namely the ratio of layer sizes, the characteristic distance, and the heterogeneity parameter, could be used to create a Support Vector Machine (SVM) classifier. Here we divide the dataset into a training set (60%) and a test set (40%) ten times randomly. We find that the average accuracy of this classifier is 79.5% when applying the RBF kernel (a minimal sketch of such a classifier and of the feature computation is given below). Being the most vital and popular form of new media, online social networks fundamentally enhance the creation and dissemination of fake news [25, 26]. Though existing solutions, especially machine-learning-based approaches, perform impressively in targeting fake news, their black-box nature essentially prevents a solid understanding of the propagation and the corresponding development of methods for debunking or blocking false information. On the other hand, the labor-intensive manual approach is time-consuming and expensive. For example, it usually takes at least three days [4] for verification and therefore misses the optimal prevention window before massive spreading. In this sense, novel approaches that could help to identify fake news at early stages are urgently needed to prevent the negative impact of false information propagation on modern society. We show here that fake news spreads with a very different network topology from authentic messages, even at early stages. We focus in this manuscript on the differences in the evolution of the propagation topology of the two types of information at early stages, rather than on providing a comprehensive prediction approach [22]. Even taking only one feature, the difference between fake news and real news is significant.
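To make the three topological features and the classifier concrete, the following is a minimal Python sketch under stated assumptions: each propagation network is represented as a networkx DiGraph rooted at a known creator, the logarithm base for h is a convention choice, and the feature matrix fed to the SVM is a synthetic placeholder rather than the Weibo or Twitter data, so the printed accuracy is only a usage illustration and not the 79.5% reported above.

```python
# Minimal sketch (not the authors' code) of the three topological features and
# the RBF-kernel SVM classifier described in the text. A propagation network is
# assumed to be a networkx DiGraph whose edges point from earlier to later
# re-posters, with the creator node known. The feature matrix and labels at the
# bottom are synthetic placeholders, not the study's data.
import numpy as np
import networkx as nx
from scipy.optimize import curve_fit
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def layer_ratio(G, creator):
    """Size of the second layer divided by the size of the first layer."""
    dist = nx.single_source_shortest_path_length(G.to_undirected(), creator)
    n1 = sum(1 for d in dist.values() if d == 1)
    n2 = sum(1 for d in dist.values() if d == 2)
    return n2 / n1 if n1 else 0.0

def characteristic_distance(G):
    """Fit the x > 1 part of the pairwise-distance PDF to exp(-x/a + b)."""
    U = G.to_undirected()
    d = [v for _, lengths in nx.all_pairs_shortest_path_length(U)
         for v in lengths.values() if v > 1]
    x, counts = np.unique(d, return_counts=True)
    (a, _b), _ = curve_fit(lambda x, a, b: np.exp(-x / a + b),
                           x.astype(float), counts / counts.sum(), p0=(2.0, 0.0))
    return a

def heterogeneity_gap(G):
    """h = log(H_star) - log(H_network) for a star network of the same size."""
    k = np.array([deg for _, deg in G.to_undirected().degree()], dtype=float)
    h_net = np.sqrt(np.mean(k ** 2)) / np.mean(k)
    k_star = np.array([len(k) - 1] + [1] * (len(k) - 1), dtype=float)
    h_star = np.sqrt(np.mean(k_star ** 2)) / np.mean(k_star)
    return np.log10(h_star) - np.log10(h_net)    # base 10 chosen here as a convention

# Placeholder feature matrix (layer ratio, characteristic distance, h) and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # one row per propagation network (synthetic)
y = rng.integers(0, 2, size=200)         # 1 = fake news (synthetic labels)

accuracies = []
for seed in range(10):                   # ten random 60/40 splits, as in the text
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=seed)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
    accuracies.append(clf.score(X_te, y_te))
print(f"mean test accuracy: {np.mean(accuracies):.3f}")
```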
The propagation mechanism, which essentially couples information dynamics and collective cognition in social networks, results in a distinctive landscape of circulation between fake and real news. In this way, several early signals can be derived, including the layer ratio, the characteristic distance and the heterogeneity. Varol et al. study early detection of promoted campaigns by using supervised machine learning with features describing diffusion patterns, content, sentiment, temporal signals, and user data [27]. Moreover, Del Vicario et al. study fake news by identifying polarizing content, using structural features, semantic features, user-based features and sentiment-based features [28]. In contrast, our suggested measures focus on structural features, which are simple, require no text analysis, and are time efficient. For example, the weak heterogeneity of fake news might be the result of opinion competition through weak ties between social communities. As it has been stated that "bad" is usually more influential than "good" [29], an unconscious negativity bias might result in a late burst of fake news, which essentially differs from the spread of real news. Disclosing the underlying factors that generate the specific topological features we found here can be a promising research direction in the future. Moreover, once we identify fake news, it is possible to study the nodes that participated in many networks. These nodes are much more active in the spread of fake news, and as a result, they are more likely to be bots. The study of these vital nodes in fake news propagation will play an important role in identifying and analyzing bots. Note that our study has several major differences from Vosoughi et al. [23]. We focus more on the topological features (the shape of a network), rather than on scale measures of propagation networks (depth or width). Furthermore, we focus on the largest cascade component of the propagation network, while all the cascade components are considered in [23]. As both manuscripts confirm the difference between fake news and real news in different aspects, we find, surprisingly, that this difference can be very significant even at the early stages of propagation. Weibo data preprocessing. We analyze 1701 fake news propagation networks on Weibo (with 973,391 users) and 492 real news propagation networks (with 347,401 users) that spread on Weibo from 2011 to 2016. We choose here large networks with more than 200 re-postings. More details are given in Table 1. The topics of these Weibo propagation networks include political fake news, economic fake news, fraudulent fake news, tidbit fake news and pseudoscience fake news (Fig. S12). The fake news items were officially investigated and confirmed by the platform of Weibo [30]. Regarding real news, we collect it directly from reliable Weibo accounts. Creators of the real news can be official accounts, for example, government accounts and on-line newspaper accounts. All these real news accounts have been officially verified by the platform of Weibo. On the other hand, we also manually select 51 out of the 492 real news networks whose creators are not official accounts. To verify our results, we also analyze another dataset (2000 more recent real news items) from Weibo in Figs. S5 and S6. These 2000 real news networks are from more recent records that have been collected in the same way as above, and all originate from non-official accounts.
In order to create the network, in which nodes are users of Weibo and links are re-postings, we first mine the following data for both fake and real news: Users: the unique serial number of each user who participates in the same network. We also mark the node of the network creator. Re-postings: the unique serial number of each directed re-posting activity, and the serial numbers of the source user and the re-posting user of this re-posting. Twitter data preprocessing. Twitter data was collected from Japanese tweets posted during the period between March 11th and March 17th in 2011, which covers the Great East Japan earthquake period. During this period, a large amount of fake news propagated on Japanese Twitter. After gathering fake and real news tweets on a keyword basis, we focused on those with more than 200 tweets to create a retweet network. Here we define the screen names that appear in the tweet text as nodes, and the links as mention signs "@" between the author of the tweet and the screen names after the sign. This is because many users who retweeted fake news have already deleted their tweets or their accounts, and therefore do not appear in the database. Deleting the tweet or account makes the network more fragmented and makes it more challenging to capture the real structure of the networks. To avoid this fragmentation, we use the above-mentioned context-based method to create retweet networks. Furthermore, as of March 2011, many Japanese Twitter users did not clearly distinguish between the mention symbol "@" and the explicit retweet symbol "RT @". Note that if there are multiple "@" in one tweet, according to the above rules, we extracted multiple screen names as nodes and linked them in order from the beginning of the sentence to create the networks. We compared the two types of networks defined by the mention symbol and the retweet symbol in Fig. S13, and found that our major results still hold. After creating the networks, we extract the largest connected component (LCC) without consideration of link directions and analyze only those with an LCC size above roughly 200 nodes. The node with the oldest tweet time in the LCC was treated as the creator. All the fake and real news that we determined are shown in Additional file 1. Our method of creating a retweet network is different from that of the previous literature [20, 23], which used follower graphs and tweet data simultaneously to create a retweet network. Because we do not have a follower graph as of 2011, we applied this approximate method of extracting as much information as possible from the tweet context. In principle, because retweet information remains in the tweet context, the topology of the network should be equivalent to that of the previous literature, but time information at a resolution of seconds is not accurate in our case. Therefore, we only use time information in hours in the Twitter analysis. Definition of fake news and real news. In a recent paper by Lazer et al. [31], "fake news" is defined as fabricated information that mimics news media content in form, without the news media's editorial norms and processes for ensuring the accuracy and credibility of the information. In our manuscript, for the Weibo data, the fake news is false information fact-checked by the platform and verified as having been fabricated. Regarding real news, we collect it directly from reliable Weibo accounts, all of which have been officially verified by the Weibo platform. For the Twitter data, the fake news is also false information that has been fact-checked against reliable evidence [32–34].
This is similar to the true/false news defined in the paper by Vosoughi et al. [23], where rumor cascades were checked independently by six fact-checking organizations. However, since there was no official anti-rumor website in Japan as of 2011, we first gathered 57 topics listed on websites [32, 33] and in a book [34]. These contents include tweets with no supporting evidence and malicious tweets, such as claims that babies and elderly people were starving, that someone trapped under a server rack needed help, and that the Japanese prime minister was having a luxurious supper during the disaster. When collecting tweets, we combine a few keywords related to the content of each fake news item. These keywords were proper nouns, such as place names and personal names. After that, we excluded correction tweets whose contents argue against the fake news and include keywords such as "false" and "mistake". Our typical procedure to gather fake news tweets is explained in a previous work [2]. To validate the fake news tweets, three graduate students at the University of Tsukuba checked independently whether these topics are fake and whether the gathered tweets are properly classified as fake news. For real news on Twitter, we gathered 71 topics by combining keywords (proper nouns, such as place names and personal names), as with the fake news. Most of the collected tweets originated from official accounts with verified Twitter badges, such as government agencies, major newspapers and famous people. The contents included tweets about earthquake information, traffic information, donation information and so on. In addition, we also collected five topics originating from civilians without badges, which were widely retweeted. These tweets were related to small but accurate tips during the disaster. Establishing a network model. Based on the information described above, we establish a directed network as demonstrated in Fig. 1A. The users are the nodes in the network, and the re-postings are the edges in the network. We mark the network creator in green. Each edge is directed either from the creator to a re-poster or from a former re-poster to a later re-poster. We plot figures of typical networks for both fake and real news of Weibo and Twitter, as shown in Fig. 1B to 1E. Ratio of layer sizes. The layer number is defined as the number of hops from the creator to a given re-poster. The ratio of layer sizes is a measure for each network defined as: $$ \text{ratio of layer sizes} = \frac{n_{2}}{n_{1}}, $$ where \(n_{1}\) and \(n_{2}\) are the sizes (numbers of nodes) of the first and second layers of a given network, respectively. Characteristic distances. In order to measure the distances, for each network we first calculate the distances between all pairs of nodes in the network and plot the distribution on a logarithmic scale (y axis). It can be seen from Fig. 4 that the distribution can be approximated by an exponential function. We consider the linear part of the curves where the x value (distance) is above one. We calculate the characteristic distance (a) accordingly: $$ y\sim e^{ - \frac{x}{a} + b}. $$ Heterogeneity measure. The heterogeneity [35] is defined as: $$ \mathrm{Heterogeneity} = \frac{\sqrt{ \langle k^{2}\rangle }}{ \langle k\rangle } = \frac{\sqrt{\frac{1}{N}\sum_{i = 1}^{N} k_{i}^{2}}}{\frac{1}{N}\sum_{i = 1}^{N} k_{i}}, $$ where N is the number of nodes in the network and \(k_{i}\) is the degree of node i. We show a scatter plot (Fig. 5A) for both fake and real news of Weibo.
Heterogeneity measure. The heterogeneity [35] is defined as: $$ \mathrm{Heterogeneity} = \frac{\sqrt{ \langle k^{2}\rangle }}{ \langle k\rangle } = \frac{\sqrt{\frac{1}{N}\sum_{i = 1}^{N} k_{i}^{2}}}{\frac{1}{N}\sum_{i = 1}^{N} k_{i}}, $$ where N is the number of nodes in the network and \(k_{i}\) is the degree of node i. We show a scatter plot (Fig. 5A) for both fake and real news of Weibo. The black line is the theoretical line for a star network: $$ \mathrm{Heterogeneity} \sim \sqrt{N}. $$ The measure h is the difference between the logarithm of the heterogeneity value of the same-size star network, \(H_{s}\), and the logarithm of the heterogeneity value of the real network, \(H_{r}\), as shown below: $$ h = \log (H_{s}) - \log (H_{r}). $$ Probability of being fake news. Here we use the ratio of layer sizes as an example. We divide the range of the ratio of layer sizes into n portions. In the ith portion, the probability of being fake news is: $$ p = \frac{p_{i}^{f}}{p_{i}^{f} + p_{i}^{r}}, $$ where \(p_{i}^{f}\) is the probability of fake news in the ith portion (the number of fake news items in this portion divided by the total number of fake news items) and \(p_{i}^{r}\) is the probability of real news in the ith portion. Significance of difference. When we distinguish fake news from real news using topological measures such as the ratio of layer sizes or the characteristic distance, it is important to know the significance of the difference. Here we use the ratio of layer sizes as an example. First, we rank the Weibo propagation networks by their ratio of layer sizes, ignoring their types (fake or real). Second, we split these ranked propagation networks into n portions, each containing the same number of networks. Finally, we calculate the significance of the difference using the following formula: $$ Q = \frac{1}{n}\sum_{i=1}^{n} \frac{\max (p_{i}^{r},p_{i}^{f})}{p_{i}^{r} + p_{i}^{f}}, $$ where n is the number of portions.
Abbreviations
SVM: support vector machine; RBF: radial basis function (Gaussian) kernel; LCC: largest connected component
References
Schmidt AL, Zollo F, Del VM et al. (2017) Anatomy of news consumption on Facebook. Proc Natl Acad Sci USA 114(12):3035
Takayasu M, Sato K, Sano Y, Yamada K, Miura W, Takayasu H (2015) Rumor diffusion and convergence during the 3.11 earthquake: a Twitter case study. PLoS ONE 10(4):e0121443
BuzzFeed News: hyperpartisan Facebook pages are publishing false and misleading information at an alarming rate. https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis?utm_term=.glr1n5VYr#.kaJBYd4a8
Fact-checking fake news on Facebook works—just too slowly. https://phys.org/news/2017-10-fact-checking-fake-news-facebook-.html#jCp (2018.1.23 accessed)
Daley DJ, Kendall DG (1964) Epidemics and rumours. Nature 204(4963):1118
Pastor-Satorras R, Vespignani A (2000) Epidemic spreading in scale-free networks. Phys Rev Lett 86(14):3200
Eguíluz VM, Klemm K (2002) Epidemic threshold in structured scale-free networks. Phys Rev Lett 89(10):108701
Newman MEJ (2002) Spread of epidemic disease on networks. Phys Rev E, Stat Nonlinear Soft Matter Phys 66(1 Pt 2):016128
Moreno Y, Pastor-Satorras R, Vespignani A (2002) Epidemic outbreaks in complex heterogeneous networks. Eur Phys J B 26(4):521–529
Barthélemy M, Barrat A, Pastor-Satorras R et al. (2004) Velocity and hierarchical spread of epidemic outbreaks in scale-free networks. Phys Rev Lett 92(17):178701
Zhou T, Liu JG, Bai WJ et al. (2006) Behaviors of susceptible-infected epidemics on scale-free networks with identical infectivity. Phys Rev E, Stat Nonlinear Soft Matter Phys 74(5 Pt 2):056109
Kuperman M, Abramson G (2000) Small world effect in an epidemiological model. Phys Rev Lett 86(13):2909–2912
Yang F, Liu Y, Yu X et al. (2012) Automatic detection of rumor on Sina Weibo. In: ACM, pp 1–7
Mendoza M, Poblete B, Castillo C (2010) Twitter under crisis: can we trust what we RT? In: Social media analytics, SOMA, KDD workshop, pp 71–79
Ma J, Gao W, Wei Z et al. (2015) Detect rumors using time series of social context information on microblogging websites. In: ACM international conference on information and knowledge management. ACM, New York, pp 1751–1754
Zheng H, Xue M, Lu H et al. (2017) Smoke screener or straight shooter: detecting elite sybil attacks in user-review social networks. arXiv preprint. arXiv:1709.06916
Castillo C, Mendoza M, Poblete B (2011) Information credibility on Twitter. In: International conference on World Wide Web, WWW 2011, Hyderabad, India, March 28–April, DBLP, pp 675–684
Qazvinian V, Rosengren E, Radev DR et al. (2011) Rumor has it: identifying misinformation in microblogs. In: Proceedings of the conference on empirical methods in natural language processing, Association for Computational Linguistics, pp 1589–1599
Zollo F, Bessi A, Del VM et al. (2017) Debunking in a world of tribes. PLoS ONE 12(7):e0181821
Kwon S, Cha M, Jung K et al. (2014) Prominent features of rumor propagation in online social media. In: IEEE international conference on data mining. IEEE, pp 1103–1108
Wu K, Yang S, Zhu KQ (2015) False rumors detection on Sina Weibo by propagation structures. In: IEEE international conference on data engineering. IEEE, pp 651–662
Vosoughi S (2015) Automatic detection and verification of rumors on Twitter. Ph.D. thesis, Massachusetts Institute of Technology
Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380):1146–1151
Rhoades SA (1993) The Herfindahl–Hirschman index. Fed Reserve Bull 79:188
Ma R (2008) Spread of SARS and war-related rumors through new media in China. Commun Q 56(4):376–391
Chua AYK, Banerjee S (2017) A study of tweet veracity to separate rumors from counter-rumors. In: Proceedings of the 8th international conference on social media & society, pp 1–8
Varol O, Ferrara E, Menczer F et al. (2017) Early detection of promoted campaigns on social media. EPJ Data Sci 6(1):13
Del Vicario M, Quattrociocchi W, Scala A et al. (2018) Polarization and fake news: early warning of potential misinformation targets
Baumeister RF, Bratslavsky E, Finkenauer C et al. (2001) Bad is stronger than good. Rev Gen Psychol 5(4):477–509
Weibo official web page for fake news reporting. http://service.account.weibo.com (2018.1.23 accessed)
Lazer D, Baum MA, Benkler Y et al. (2018) The science of fake news. Science 359(6380):1094
Matsunaga H. Social psychology at the time of panic, which classified and organized 80 hoaxes after the earthquake. http://blogos.com/article/2530/ (in Japanese) April 8th 2011 (2018.1.20 accessed)
Ogiue C (2011) Validation: rumor and hoax during the Great East Japan Earthquake. Kobunsha, Japan (in Japanese)
Ishizawa Y, Akamine T. Time series analysis of "hoax information" diffused on Twitter during the earthquake. https://sites.google.com/site/prj311/event/presentation-session/presentation-session4#TOC-Twitter-2 (in Japanese) (2018.11.27 accessed)
Dong J, Horvath S (2007) Understanding network concepts in modules. BMC Syst Biol 1(1):24
Acknowledgements
SH thanks the Israel Science Foundation, ONR, the Israel Ministry of Science and Technology (MOST) with the Italy Ministry of Foreign Affairs, BSF-NSF, MOST with the Japan Science and Technology Agency, the BIU Center for Research in Applied Cryptography and Cyber Security, and DTRA (Grant no. HDTRA-1-10-1-0014) for financial support. JZ was supported by NSFC (No. 71871006) and the National Key Research and Development Program of China (No. 2016QY01W0205). YS was supported by JSPS KAKENHI Grant Number 17K12783.
HT and MT are supported by the JST Strategic International Collaborative Research Program (SICORP) on the topic of "ICT for a Resilient Society" by Japan and Israel. JW was partially supported by the National Key R&D Program of China (2019YFB2101804), the National Special Program on Innovation Methodologies (SQ2019IM4910001), and the National Natural Science Foundation of China (71531001, 71725002, U1636210). We also thank Jiali Gao for providing a new dataset of real news.
Availability of data: our data is provided by author Jichang Zhao and will be available from him upon reasonable request.
Author affiliations:
School of Reliability and Systems Engineering, Beihang University, Beijing, China: Zilong Zhao & Daqing Li
National Key Laboratory of Science and Technology on Reliability and Environmental Engineering, Beijing, China
School of Economics and Management, Beihang University, Beijing, China: Jichang Zhao & Junjie Wu
Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan: Yukie Sano
Department of Physics, Bar-Ilan University, Ramat Gan, Israel: Orr Levy & Shlomo Havlin
Sony Computer Science Laboratories, Tokyo, Japan: Hideki Takayasu
Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Japan: Hideki Takayasu, Misako Takayasu & Shlomo Havlin
Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China: Junjie Wu
Author contributions: DL designed the research. ZZ, JZ, YS and OL contributed equally to this paper. ZZ and YS performed the data calculations. OL, ZZ, SH and DL wrote the paper. The other authors analyzed the results and revised the paper. All authors read and approved the final manuscript. Correspondence to Daqing Li or Junjie Wu. Zilong Zhao, Jichang Zhao, Yukie Sano and Orr Levy contributed equally to this work.
Supplementary Information (DOCX 1.3 MB)
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Citation: Zhao, Z., Zhao, J., Sano, Y. et al. Fake news propagates differently from real news even at early stages of spreading. EPJ Data Sci. 9, 7 (2020). https://doi.org/10.1140/epjds/s13688-020-00224-z. Accepted: 25 February 2020.
Proving that two sets of strings are equal
I am stuck on this problem: Let $A=(\Sigma, Q, q_1, F, \delta)$ be a deterministic finite automaton (i.e. $\delta:Q\times\Sigma\to Q$) such that $Q=\{q_1,...,q_m\}$. For each $i,j\in\{1,...,m\}$ and $k\in\{0,1,...,m\}$, define (note: the $\delta$ below is the extension of $\delta$ to $\Sigma^*$) $L_{i,j}^k=\{w\in\Sigma^*\mid\delta(q_i,w)=q_j \land \forall u\in PREFIX(w)-\{\epsilon,w\}, \delta(q_i,u)=q_x\to x\leq k\}$. Now define the following sets recursively. For all $i,j\in\{1,...,m\}$: $M_{i,j}^0=\{\sigma\in\Sigma\mid\delta(q_i,\sigma)=q_j\}\cup\begin{cases} \emptyset, & \text{if $i\neq j$} \\ \{\epsilon\}, & \text{if $i=j$} \end{cases}$ and for all $i,j,k\in\{1,...,m\}$: $M_{i,j}^k=M_{i,k}^{k-1}\cdot (M_{k,k}^{k-1})^* \cdot M_{k,j}^{k-1}\cup M_{i,j}^{k-1}$. Prove that for all $i,j\in\{1,...,m\}$ and $k\in\{0,1,...,m\}$ we have $L_{i,j}^k=M_{i,j}^k$. I have tried to prove it by induction on $k$ but failed. (Note: I encountered these sets in the proof of the theorem that says that every regular language has a regular expression.)
What is the meaning of these sets? – Raphael♦ Dec 14 '15 at 12:31
@Raphael I've encountered these sets in the proof of the theorem that says: every regular language has a corresponding regular expression. My university book claims that the sets are the same and doesn't bother to prove why they are the same. – MathNerd Dec 14 '15 at 12:32
In the definition of $L^k_{i,j}$ the typing for $\delta$ seems incorrect. Since the question is sort of a flurry of notation, you probably want $\delta : Q\times \Sigma^* \to Q$. – Louis Dec 14 '15 at 12:35
@Louis The $\delta$ in $L_{i,j}^k$ is the extension of $\delta$ to $\Sigma^*$. – MathNerd Dec 14 '15 at 12:38
@MathNerd: I'm aware, but really the obligation is on you, since you didn't bother to write things out. – Louis Dec 14 '15 at 12:39
Answer (Hendrik Jan): It is not totally clear what your main obstacles are in getting the proof done. Note that $L_{i,j}^k$ is the set of all strings $w$ on a path $\pi$ from $q_i$ to $q_j$ such that for each intermediate state $q_x$ we have $x\le k$; i.e., all intermediate states have index at most $k$. Consider state $q_k$. Now either the path $\pi$ does not enter $q_k$ at all: then all indices are at most $k-1$, meaning that $w$ belongs to $L_{i,j}^{k-1}$. Or it may enter $q_k$ one or more times. Thus the path is of the form $\pi: q_i \leadsto q_k \leadsto \dots \leadsto q_k \leadsto q_j$, where the subpaths may start and end in $q_k$ but do not pass through that state. So the string $w$ can be partitioned into strings that belong to $L_{i,k}^{k-1}$, $L_{k,k}^{k-1}$, ..., $L_{k,k}^{k-1}$, $L_{k,j}^{k-1}$. That should prove that $L_{i,j}^{k} = L_{i,k}^{k-1} ( L_{k,k}^{k-1} )^*L_{k,j}^{k-1} \cup L_{i,j}^{k-1}$ (or at least the inclusion from left to right; the other inclusion follows from a similar argument). With this knowledge the induction is on the subscript $k$, and the induction step is: if $L_{i,j}^{k-1} = M_{i,j}^{k-1}$ for all $i,j$ then $L_{i,j}^{k} = M_{i,j}^{k}$ for all $i,j$. This should be obvious; just plug in the equations we have.
Thanks. Intuitively I understand that it says what you wrote, but I cannot prove it rigorously. How can I show that $L_{i,j}^k=M_{i,j}^k$ by showing $L_{i,j}^k\subseteq M_{i,j}^k$ and $M_{i,j}^k\subseteq L_{i,j}^k$? I've tried induction but I failed to show it. – MathNerd Dec 14 '15 at 12:51
I see. Only the inclusions for the $L_{ij}^k = \dots L_{ij}^{k-1} \dots$ side need to be proved. Stepping between $M_{ij}^k$ and $L_{ij}^k$ is then easy. – Hendrik Jan Dec 14 '15 at 15:37
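For reference, the induction suggested in the answer can be organised as follows (a sketch only; the base case $k=0$ is immediate from the definitions of $L_{i,j}^0$ and $M_{i,j}^0$). The key identity is

$$L_{i,j}^{k} = L_{i,k}^{k-1}\,(L_{k,k}^{k-1})^{*}\,L_{k,j}^{k-1} \cup L_{i,j}^{k-1},$$

which is exactly the path-splitting statement from the answer. Assuming the induction hypothesis $L_{x,y}^{k-1}=M_{x,y}^{k-1}$ for all $x,y$, substituting into the identity gives

$$L_{i,j}^{k} = M_{i,k}^{k-1}\,(M_{k,k}^{k-1})^{*}\,M_{k,j}^{k-1} \cup M_{i,j}^{k-1} = M_{i,j}^{k}.$$

For the inclusion $\subseteq$ in the identity, split an accepting path for $w$ at every visit to $q_k$; for $\supseteq$, note that concatenating words whose intermediate states all have index at most $k-1$, with the joins occurring at $q_k$, produces a word whose intermediate states have index at most $k$.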
Probabilistic Volcanic Ash Hazard Analysis (PVAHA) I: development of the VAPAH tool for emulating multi-scale volcanic ash fall analysis A. N. Bear-Crozier1, V. Miller1, V. Newey1, N. Horspool1,2 & R. Weber1 Significant advances have been made in recent years in probabilistic analysis of geological hazards. Analyses of this kind are concerned with producing estimates of the probability of occurrence of a hazard at a site given the location, magnitude, and frequency of hazardous events around that site; in particular Probabilistic Seismic Hazard Analysis (PSHA). PSHA is a method for assessing and expressing the probability of earthquake hazard for a site of interest, at multiple spatial scales, in terms of probability of exceeding certain ground motion intensities. Probabilistic methods for multi-scale volcanic ash hazard assessment are less developed. The modelling framework presented here, Probabilistic Volcanic Ash Hazard Analysis (PVAHA), adapts the seismologically based PSHA technique for volcanic ash. PVAHA considers a magnitude-frequency distribution of eruptions and associated volcanic ash load attenuation relationships and integrates across all possible events to arrive at an annual exceedance probability for each site across a region of interest. The development and implementation of the Volcanic Ash Probabilistic Assessment tool for Hazard (VAPAH), as a mechanism for facilitating multi-scale PVAHA, is also introduced. VAPAH outputs are aggregated to generate maps that visualise the expected volcanic ash hazard for sites across a region at timeframes of interest and disaggregated to determine the causal factors which dominate volcanic ash hazard at individual sites. VAPAH can be used to identify priority areas for more detailed PVAHA or local scale ash dispersal modelling that can be used to inform disaster risk reduction efforts. Numerous approaches have been adopted in the past to assess volcanic ash hazard at the local-scale (10s km) including observational, statistical, deterministic and probabilistic techniques (Bonadonna et al. 2002a; Bonadonna et al. 2002b; Blong 2003; Bonadonna and Houghton 2005; Costa et al. 2006; Magill et al. 2006; Jenkins et al. 2008; Costa et al. 2009; Folch et al. 2009; Folch and Sulpizio 2010; Simpson et al. 2011; Bear-Crozier et al. 2012; Jenkins et al. 2012a; Jenkins et al. 2012b). However, incomplete historical data on the magnitude and frequency of eruptions worldwide and the difficulties associated with up-scaling computationally intensive volcanic ash dispersal models have limited regional or global scale assessments (100 s km). Simple assessments of volcanic ash hazard are based on compiling observations of the distribution of volcanic ash from historical eruptions, an approach that is still adopted worldwide (McKee et al. 1985; Barberi et al. 1990; Bonadonna et al. 1998; Costa et al. 2009). These maps discriminate land areas buried by volcanic ash fallout in the past from those that have not. Deterministic methods extend the usefulness of observational methods by utilising the benefits of numerical and computational models and typically consider the causes driving the hazard. The advantage of this approach is that it is computationally straightforward and provides a conservative result, which can be used to maximise safety. 
The disadvantage is that the subjective and implicit assumptions made about the probability of the chosen scenario commonly result in overly conservative hazard values, because the chosen scenario (for example, the largest possible eruption) may be possible but highly unlikely. Probabilistic methods estimate the probability of occurrence of the hazard at a site according to the location, magnitude and frequency of occurrence of hazardous events around that site. They are flexible and can take into account as much data as are available. Probabilistic methods produce hazard curves, which provide information on the level of expected hazard for any given timeframe. Incorporating occurrence rate information into hazard analysis is more complex than deterministic, statistical or observational approaches. However, the resulting hazard curve is more useful for prioritising regions where more detailed analysis is needed. Probabilistic approaches to volcanic ash hazard assessment are the focus of this study. In the past, probabilistic analyses of volcanic ash hazard have focused on quantitative assessments of the frequency and potential consequences of eruptions. Simpson et al. (2011) undertook a quantitative assessment of volcanic ash hazard across the Asia-Pacific region using the Smithsonian Institution's Global Volcanism Program (GVP) database, which enabled the straightforward production of magnitude–frequency plots for each country and, to some extent, provinces within countries, in the region. Quantitative approaches have focused on a single source or site of interest at the local scale (10s of km) using tephra dispersal models (e.g. Campi Flegrei, Italy (Costa et al. 2009); Gunung Gede, Indonesia (Bear-Crozier et al. 2012); Okataina, New Zealand (Jenkins et al. 2008); Somma-Vesuvio, Italy (Folch and Sulpizio 2010); and Tarawera, New Zealand (Bonadonna and Houghton 2005)). Numerical simulations of volcanic ash fallout generally involve running a deterministic eruptive scenario that represents the most likely event (based on historical investigation and/or modern analogues) over a period of time sufficiently long to capture all possible meteorological conditions (Magill et al. 2006; Folch et al. 2008a; Folch et al. 2008b; Folch and Sulpizio 2010; Bear-Crozier et al. 2012). Regional-scale probabilistic volcanic ash hazard assessments are less common (Yokoyama et al. 1984; Hoblitt et al. 1987; Hurst 1994; Hurst and Turner 1999; Magill et al. 2006; Ewert 2007). Jenkins et al. (2012a, 2012b) employed a stochastic simulation technique that up-scales implementation of the ash dispersal model ASHFALL for regional-scale assessments (Hurst 1994; Hurst and Turner 1999). This approach presented a method for assessing regional-scale ash fall hazard that had not been attempted previously and represents an important step forward in the development of techniques of this kind. However, limitations associated with up-scaling conventional ash dispersal modelling methods include the computationally intensive nature of regional-scale applications, which would require significant high-performance computing resources and long simulation times and could potentially constrain the spatial resolution, geographic extent and number of sources considered.
Hazard curves of annual exceedance probability versus volcanic ash hazard are typically not generated for individual sites of interest, and therefore the dominant contributions to the hazard at particular sites (by magnitude, source or distance) cannot be disaggregated.
Motivation for the current work
Workers in other geohazard fields (earthquake, wind, flood etc.) have faced similar limitations associated with quantifying hazard at the regional scale. Major developments were made by seismologists working in this space in the 1960s, with a view to assessing ground motion hazard at multiple sites associated with potential earthquake activity (Cornell 1968). A methodology was developed for quantifying earthquake hazard at the regional scale, named Probabilistic Seismic Hazard Analysis (PSHA; Cornell 1968; McGuire 1995, 2008). PSHA consists of a four-step framework in which uncertainty in the size, location and likelihood of plausible earthquakes can be incorporated to model the potential impact of future events (Robinson et al. 2006). This methodology has since been adapted for tsunami (Lin and Tung 1982; Rikitake and Aida 1988; Geist and Parsons 2006; Thio et al. 2007; Thomas and Burbidge 2009; Sørensen et al. 2012; Power et al. 2013) and applied to regional-scale tsunami hazard assessments (e.g. Indonesia; Horspool et al. 2014). Early attempts at partially adapting PSHA to volcanic ash (on a local scale) were reported by Stirling and Wilson (2002) for two volcanic complexes on the North Island of New Zealand (Okataina and Taupo). This study seeks to further advance the adaptation of PSHA for volcanic ash hazard at regional spatial scales.
Probabilistic Seismic Hazard Assessment (PSHA)
For decades seismologists have been faced with the inherent difficulty of assessing and expressing the probability of earthquake hazard at a site of interest in terms of maximum credible intensity (Cornell 1968; McGuire 1995). PSHA was developed to consider a multitude of earthquake occurrences and ground motions and produce an integrated description of seismic hazard representing all events (Cornell 1968; McGuire 1995). It is derived from the early formulation of seismic hazard analysis by Cornell (1968) and Esteva (1968). Initially, PSHA was developed to assess seismic risk at individual sites; however, over time the methodology was applied systematically to a grid of points, yielding a regional seismic probability map with contours of maximum ground motion for an equal timeframe (Cornell 1968; McGuire 1995). Traditional PSHA considers the contribution of magnitude and distance to the hazard and selects the most likely combination of these to accurately replicate the uniform hazard spectrum (McGuire 1995). Advances in seismic hazard analysis and the proliferation of high-performance computing have led to the development of event-based PSHA, which allows calculation of ground-motion fields from stochastic event sets. The four-step procedure for event-based PSHA reported by Musson (2000) is summarised below and presented in Fig. 1.
Fig. 1 Schematic representation of the four-stage procedure for PSHA: 1. Source; 2. Recurrence; 3. Ground motion; and 4. Probability of exceedance (modified after TERA (1980) and Musson (2000))
1. Seismicity data for the region of interest must be spatially disaggregated into discrete seismic sources.
2. For each seismic source, the seismicity is characterised with respect to time (i.e. the annual rate of occurrence of different magnitudes).
3. A stochastic event set is developed which represents the potential realisation of seismicity over time, and a realisation of the geographic distribution of ground motion is computed for each event (taking into account the aleatory uncertainties in the ground motion model).
4. This database of ground-motion fields, representative of the possible shaking scenarios that the investigated area can experience over a user-specified time span, is used to compute the corresponding hazard curve for each site. Hazard curves are computed for each event individually and aggregated to form probabilistic estimates.
This paper presents a methodology developed at Geoscience Australia which modifies the four-step procedure of PSHA for volcanic ash hazard analysis at a regional scale. The framework, named here Probabilistic Volcanic Ash Hazard Analysis (PVAHA), considers the magnitude-frequency distribution of eruptions and associated volcanic ash load attenuation relationships and produces an integrated description of volcanic ash hazard for all events across a region of interest. An algorithm, named here the Volcanic Ash Probabilistic Assessment tool for Hazard (VAPAH), was developed to facilitate a PVAHA. This approach builds on the previous work of Stirling and Wilson (2002), Simpson et al. (2011) and Jenkins et al. (2012a, 2012b) towards the development of tools and techniques for conducting regional-scale probabilistic volcanic ash hazard assessment. An assessment for the Asia-Pacific region was undertaken during the development of the VAPAH algorithm. The reader is referred to the companion paper (Miller et al. 2016) for a detailed workflow and discussion of the Asia-Pacific region case study. This paper focuses on the adaptation of PSHA for volcanic ash, the development of the PVAHA framework and the VAPAH algorithm itself. Results from sub-regions of the Asia-Pacific study are only included here as needed to illustrate concepts and to describe the advantages and disadvantages of the overall approach. This manuscript is divided into four sections:
1. A description of the proposed framework for PVAHA.
2. The procedure used for identification of source volcanoes, development of eruption statistics and calculation of magnitude-frequency relationships for each source.
3. Derivation and validation of ash load prediction equations derived from volcanic ash dispersal modelling used to inform the PVAHA.
4. Development of the VAPAH algorithm.
A framework for Probabilistic Volcanic Ash Hazard Analysis (PVAHA)
A probabilistic framework for assessing volcanic ash hazard at multiple spatial scales (PVAHA), adapted from PSHA, is presented here (Fig. 2). The modified four-step procedure is outlined below.
Fig. 2 Schematic representation of the modified PSHA procedure for Probabilistic Volcanic Ash Hazard Assessment (PVAHA): 1. Sources; 2. Magnitude-frequency relationships; 3. Ash load attenuation with distance; and 4. Probability of exceedance
1. Volcanic sources with respect to any given site of interest must be identified.
2. For each volcanic source, the annual eruption probability must be calculated based on magnitude-frequency relationships of past events.
3. For a set of stochastic events (a synthetic catalogue), volcanic ash load attenuation relationships must be calculated (derived from conventional ash dispersal modelling).
4. The annual exceedance probability versus volcanic ash hazard must be calculated for each stochastic event at each site across the region of interest (a minimal sketch of this final aggregation step is given below).
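To make step 4 concrete, the following sketch turns a list of (annual rate, ash load) pairs for a single site into an annual exceedance probability curve. It is an illustration only: the event list is invented, and the assumption that events occur independently at constant annual rates (so that the one-year exceedance probability is 1 - exp(-sum of rates)) is ours, not something stated explicitly in the paper.

```python
import numpy as np

# Hypothetical per-event results at one site: (annual rate, ash load in kg/m2).
events = [(1e-2, 0.5), (5e-3, 2.0), (1e-3, 10.0), (2e-4, 60.0)]

def exceedance_curve(events, thresholds):
    """Annual probability that the ash load exceeds each threshold."""
    rates = np.array([r for r, _ in events])
    loads = np.array([a for _, a in events])
    # Sum the rates of all events whose load exceeds the threshold, then
    # convert the combined annual rate to an annual probability.
    total_rate = np.array([rates[loads > x].sum() for x in thresholds])
    return 1.0 - np.exp(-total_rate)

thresholds = [0.1, 1.0, 10.0, 100.0]
print(dict(zip(thresholds, exceedance_curve(events, thresholds).round(5))))
```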
Database development and data completeness
Following a procedure analogous to that of Jenkins et al. (2012a), a database of volcanic sources and events for the region of interest was prepared using entries from the GVP catalogue. The Smithsonian Institution's Global Volcanism Program (GVP) catalogue of Holocene events was used to identify volcanic sources for analysis (Siebert et al. 2010). The GVP reports on current eruptions from active volcanoes around the world and maintains a database repository on historical eruptions over the past 10,000 years. We acknowledge this database is not a complete record and does contain gaps in the eruption record. Factors that contribute to these gaps, particularly in data-sparse regions like the Asia-Pacific, include incomplete or non-existent historical records, poor preservation of deposits or lack of accessibility to geographically remote sources. However, the GVP database is widely recognised as the most complete global resource currently available and represents the authoritative source for information of this kind. Other sources of data can be used to augment an analysis of this kind, including volcano observatory archive data and other databases such as the Large Magnitude Explosive Volcanic Eruptions database (LaMEVE; Crosweller et al. 2012). The procedure for creating a database of volcanic sources and events for a PVAHA is described below, using the Asia-Pacific examples to provide context where needed (Miller et al. 2016). Database fields including volcano ID, region, sub-region, volcano type, volcano name, latitude, longitude, eruption year and Volcano Explosivity Index (VEI; Newhall and Self 1982) were captured for each eruption at each volcano in a region of interest. These entries were further examined, and volcanic sources classified as submarine, hydrothermal or fumarolic, or of unknown type, were discarded. Before calculating magnitude-frequency relationships for each source in the database, the record must be assessed for completeness (Simpson et al. 2011). The eruption record consists of all known events for each volcano in the database. Different magnitude eruptions have different time periods for which the record is considered complete, and these periods may vary significantly across a region (Simpson et al. 2011). Additionally, larger eruptions are better preserved in the record than smaller eruptions, and this has important implications for data completeness (Jenkins et al. 2012a). With this in mind, events in the database are grouped into sub-regions defined by geographic boundaries already adopted by the GVP catalogue for consistency and are further subdivided into magnitude classes, VEI 2–3 for smaller magnitude events and VEI 4–7 for larger magnitude events, following the methodology of Jenkins et al. (2012a). The Jenkins et al. (2012a) approach for calculating an individual record of completeness for each magnitude class was based on reporting by Simkin and Siebert (1994), who declared smaller magnitude eruptions globally complete from the 1960s and larger magnitude eruptions globally complete over the last century. Sources with no assigned VEI but designated caldera 'C' or Plinian 'P' are allocated to the larger magnitude class as arbitrary VEI 4 events. This does not include all the remaining caldera and Plinian eruptions in the database, which were assigned a specific VEI (typically in the range VEI 4–7).
It is important to note here that only those events classified 'C' or 'P' with no assigned VEI were arbitrarily allocated to VEI 4. We acknowledge that Plinian-style, caldera-forming eruptions are commonly associated with magnitudes greater than VEI 4; however, VEI 4 is selected as a minimum magnitude, representing a conservative estimate for the small number of these events given the absence of further information (only 13 events were identified where this is the case for the Asia-Pacific region). The record of completeness (ROC) was then assessed for each sub-region using the 'break in slope' method, by plotting the cumulative number of eruptions against time for each magnitude class for each sub-region (Simpson et al. 2011). Similar to Jenkins et al. (2012a), completeness was identified by a linear increase in the cumulative number of eruptions per unit time. The reader is referred to Miller et al. (2016) for the individual ROC values for the Asia-Pacific. An example is provided here for the Indonesia sub-region (Fig. 3).
Fig. 3 Example record of completeness plots for events in the sub-region of Indonesia: (a) record of completeness (red line) for small magnitude eruptions; (b) record of completeness (red line) for large magnitude eruptions
Magnitude-frequency relationships
Having established the record of completeness for each source in the database, a procedure analogous to developing earthquake magnitude-frequency distributions for PSHA is adopted here for assessing the annual rate of occurrence of eruptions of different magnitudes at each source (Musson 2000). Where traditional probabilistic techniques focus on a single volcano, for which the hazard is estimated independently of the probability of the eruption occurring, the framework reported here is based on the premise that the ash fall hazard associated with a given site may represent a maximum expected hazard from multiple sources. By extension, each source is likely to have varying eruption probabilities, styles and magnitudes, and therefore traditional approaches must be modified to accommodate this heterogeneity (Connor et al. 2001; Bonadonna and Houghton 2005; Jenkins et al. 2008; Jenkins et al. 2012a). In order to calculate the annual eruption probability for each volcanic source at each magnitude, the probability of an event of any magnitude occurring and the conditional probability of an event of a particular magnitude occurring must be calculated first.
Probability of an event of any magnitude
Firstly, the annual eruption probability for each volcanic source (λ) must be determined by dividing the total number of events (N) by the time period for which the catalogue is thought to be complete (T):
$$ \lambda = N/T $$
Equation 1 must be solved for each volcanic source in the small and large magnitude classes separately, using the associated record of completeness values calculated in the previous section (i.e. λ(VEI 2–3) and λ(VEI 4–7); Table 1). In order to arrive at the likelihood of an event of 'any magnitude' occurring (i.e. λ(VEI 2–7)) at each volcanic source (analogous to PSHA), the total number of events (N) and the ROC (T) for each magnitude class in a sub-region must be aggregated into a single value for each. This is achieved by normalising the occurrence interval for the small magnitude class (240 years) to the occurrence interval of the large magnitude class (420 years), with an assumed constant eruption rate.
The conversion factor for the number of events is calculated by taking the ratio of the large to the small magnitude completeness periods (e.g. Indonesia; 420:240 = 1.75). The normalised number of small magnitude events, 'N (VEI 2–3) normalised', is then calculated for each source by multiplying the conversion factor (e.g. Indonesia = 1.75) by the number of small magnitude events, 'N (VEI 2–3)' (Table 1). The final value of 'N (VEI 2–7)' for each source is calculated by adding the normalised number of small magnitude events, 'N (VEI 2–3) normalised', to the number of large magnitude events, 'N (VEI 4–7)' (Table 1). Sources with no events during the time period for which their record is deemed complete (e.g. Besar) are assigned records from analogous volcanoes (those of the same type category), following the method of Jenkins et al. (2012a), in order to provide some insight into likely eruption behaviour in the absence of empirical data. Equation 1 can then be solved for the annual eruption probability of each source at any magnitude, 'λ(VEI 2–7)', using the calculated values for 'N (VEI 2–7)' and 'T' (e.g. Indonesia = 420 years).
Table 1 Annual eruption probability 'λ (VEI 2–7)' for each volcanic source in the Indonesian sub-region using a conversion factor of 1.75
Conditional probability of an event of a particular magnitude
In the previous step, the probability of an event of any magnitude occurring, λ(VEI 2–7), at each source was ascertained. The next step is to ascertain the conditional probability of an event being a particular magnitude (e.g. VEI 2, 3, 4, 5, 6 or 7). This calculation is performed using the database of events and a classification scheme for volcano morphology (shape) used by Jenkins et al. (2012a). Firstly, all source volcanoes in the database (e.g. Asia-Pacific) are assigned a type category related to their morphology and previous eruption style. Five type categories are considered: lava dome, small cone, large cone, shield and caldera (Table 1). The conditional probability of an event at each magnitude is equal to the number of events for a type category at a particular magnitude divided by the total number of events in the magnitude class in the database (i.e. small magnitude class (VEI 2–3), Table 2; large magnitude class (VEI 4–7), Table 3).
Table 2 Conditional probability of an event of VEI 3 or less occurring for each of the five volcano types in the database (e.g. Asia-Pacific)
Table 3 Conditional probability of an event of VEI 4 or greater occurring for each of the five volcano types in the database (e.g. Asia-Pacific)
Annual probability of an event
The annual probability of an event of a given magnitude for each source, needed for the PVAHA, can now be calculated by multiplying the annual probability of an event of any magnitude for a source by the probability that the event will be a particular magnitude (e.g. Indonesia; Table 4). Metadata are developed to preserve the distinction between annual probability values based on historical data and those based on analogues (e.g. Besar), so that the uncertainty associated with these assumptions is carried through the remainder of the PVAHA. These magnitude-frequency relationship calculations are repeated for all sub-regions in the database.
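The calculations above can be condensed into a short script. The completeness periods (240 and 420 years) and the conversion factor (1.75) reproduce the Indonesian example in the text; the per-source event counts and the conditional VEI probabilities are placeholder values, since the full tables are not reproduced here.

```python
# Record-of-completeness periods for the sub-region (years), from the text.
ROC_SMALL, ROC_LARGE = 240.0, 420.0          # VEI 2-3 and VEI 4-7 classes
CONVERSION = ROC_LARGE / ROC_SMALL           # 420 / 240 = 1.75

# Placeholder event counts for one source within its completeness periods.
n_small, n_large = 14, 2                     # hypothetical values

# Aggregate to a single "any magnitude" rate, normalising the small-magnitude
# count onto the large-magnitude completeness period (constant-rate assumption).
n_all = n_small * CONVERSION + n_large
annual_prob_any = n_all / ROC_LARGE          # lambda(VEI 2-7) = N / T

# Placeholder conditional probabilities that an event at this volcano type has
# a given VEI (these would come from Tables 2 and 3).
conditional = {2: 0.55, 3: 0.30, 4: 0.10, 5: 0.04, 6: 0.008, 7: 0.002}

annual_prob_by_vei = {vei: annual_prob_any * p for vei, p in conditional.items()}
print(round(annual_prob_any, 4), {v: round(p, 5) for v, p in annual_prob_by_vei.items()})
```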
Table 4 Annual probability of an event of a particular magnitude occurring for each volcanic source in the Indonesian sub-region
Emulating volcanic ash load attenuation relationships
Earthquake hazard is measured in terms of the level of ground motion that has a certain probability of being exceeded over a given time period (McGuire 1995, 2008). Ground motion prediction equations (GMPEs), or attenuation relationships, are used to provide a means of predicting the level of ground shaking and its associated uncertainty at any given site or location (Fig. 4). GMPEs are based on earthquake magnitude, source-to-site distance, local soil conditions and fault mechanism, and are an integral part of PSHA. A process was developed here for adapting the GMPE approach used for seismic hazard to volcanic ash hazard. The process involves derivation of a mathematical expression for volcanic ash load attenuation with distance from source, for an event, using a gridded hazard footprint generated by an ash dispersal model. The resulting equation, named here an Ash Load Prediction Equation (ALPE), statistically emulates the volcanic ash attenuation relationship (Fig. 4). Where up-scaling of conventional volcanic ash dispersal modelling techniques would be computationally intensive and time consuming, ALPEs can be used to emulate generalised volcanic ash hazard (derived from dispersal models) for any given event(s) at any location(s), from any volcanic source of interest, as a function of the distance of the site from each source.
Fig. 4 (a) Example plot of predicted peak ground acceleration with changing distance using two European GMPEs for different magnitude ranges, modified after Akkar and Bommer (2007) and Bommer et al. (2007); (b) example plot of volcanic ash load attenuation relationships using three ALPEs generated through dispersal modelling for this study (ALPE 100, VEI 3; ALPE 150, VEI 4; ALPE 201, VEI 5)
The procedure for calculating ALPEs for a PVAHA is described below and includes the following:
- development of a synthetic catalogue of events
- volcanic ash dispersal modelling
- derivation of an ALPE
- validation of an ALPE
Development of a synthetic catalogue of events
Similar to the approach taken for PSHA, a synthetic catalogue of events is developed as a basis for the generation of the ALPEs (one ALPE per event) needed for the PVAHA. A relationship for the rate of volcanic ash load decay with distance from the source, as a function of magnitude, column height, duration, wind turbulence, direction and speed, must be established for each event of interest. The dispersal of volcanic ash through the atmosphere produces deposits at ground level that diminish gradually in load (kg/m2) with distance from the source, but in directions controlled by the wind. Consequently, ash load attenuation is a complex function of distance and azimuth from source (Stirling and Wilson 2002). The synthetic events developed here were based on a logic-tree data structure (Fig. 5), enumerated over the column-height, duration and eruption-style nodes described below. The purpose of this structure was to capture all possible variations in volcanological conditions and to quantify the uncertainty associated with the inputs for each event (Bommer and Scherbaum 2008). The influences of site-specific meteorological conditions are considered separately at a later stage in the procedure.
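As a minimal sketch of how such a logic tree can be enumerated, the snippet below builds a synthetic catalogue as the product of the three nodes (column height, duration and eruption style). The particular discretisation and the equal weights are assumptions for illustration; the actual catalogue of 1056 events described below uses a different grid.

```python
from itertools import product

# Assumed discretisation of the three logic-tree nodes (illustrative only).
column_heights_m = range(2000, 42000, 2000)                 # eruption column height
durations_h = range(1, 13)                                  # eruption duration, hours
styles = ["Strombolian", "Vulcanian", "Sub-Plinian", "Plinian"]

catalogue = [
    {"column_height_m": h, "duration_h": d, "style": s}
    for h, d, s in product(column_heights_m, durations_h, styles)
]

# Equal weighting across all synthetic events, as described in the text.
weight = 1.0 / len(catalogue)
for event in catalogue:
    event["weight"] = weight

print(len(catalogue), catalogue[0])
```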
Fig. 5 Schematic representation of the logic tree structure adopted for creation of the synthetic catalogue of events; Node 1: eruption column height (1000–40,000 m); Node 2: eruption duration (1–12 h); Node 3: source term (Plinian, Sub-Plinian, Vulcanian, Strombolian or Violent Strombolian)
Fig. 6 (a) Schematic representation of a FALL3D volcanic ash load hazard footprint; (b) example plot of the volcanic ash load attenuation relationship derived from the ALPE extracted from the generic hazard footprint in (a)
A simplified, schematic representation of the logic tree data structure used is presented in Fig. 5. Input parameters included eruption column height (in meters, between 1000 and 40,000), eruption duration (in hours, between 1 and 12) and eruption style (Strombolian, Vulcanian, Sub-Plinian and Plinian). A total of 1056 events were developed and assigned an equal weighting for probability of occurrence. Events are not volcano-specific but rather represent a suite of synthetic eruptions which, when coupled with magnitude-frequency statistics and prevailing meteorological conditions for a region of interest, can be used to assess a range of potential events at any volcanic source. It is important to note that the assignment of weightings can be modified where the prevalent eruption behaviour for a region of interest is well known. The volcanic ash dispersal model FALL3D was used here to computationally model the volcanic ash fall hazard footprints needed for the calculation of ALPEs. FALL3D is a time-dependent Eulerian model that solves the advection-diffusion-sedimentation (ADS) equation on a structured terrain-following mesh. FALL3D outputs time-dependent deposit load at ground level as a hazard footprint of changing ash load with distance from source (Folch et al. 2012). It is acknowledged that FALL3D is one of a number of suitable ash dispersal models that could be utilised for an analysis of this kind. An assessment of the mass eruption rate, height and shape of the eruption column is made for each stochastic event in the catalogue. These parameters together describe the eruptive source term needed to simulate the dispersal of volcanic ash using FALL3D. The source term can be defined either as a 1-D buoyant plume model (Bursik 2001) or as an empirical relationship (Suzuki 1983). The empirical relationship (Suzuki 1983) used here estimates the mass eruption rate (MER) given an eruption column height (H) using known best-fit relationships of MER versus H (Sparks et al. 1997). A generalised total grainsize distribution (TGSD) is used to account for a range of potential eruption styles and includes minimum and maximum grainsize (phi), average grainsize (phi), sorting, density range (kg/cm3) and sphericity of clasts. FALL3D was used to simulate the 1056 events in the synthetic catalogue.
Derivation of ash load prediction equations
A script was developed for extracting the volcanic ash attenuation relationship (changing ash load with distance) from each hazard footprint generated by the dispersal model for the synthetic catalogue (Fig. 6). Each ALPE represents a single event (1056 in total). When coupled with magnitude-frequency statistics and prevailing meteorological conditions for a region of interest, each ALPE can be used to statistically emulate the expected volcanic ash hazard from an event of this kind at any location of interest, from any volcanic source, as a function of the distance of the site from the source.
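One possible way to reduce a gridded hazard footprint to an ash load prediction equation is sketched below: bin the simulated deposit load by distance from the vent and fit a simple exponential decay to the bin maxima. The footprint array, grid spacing and the exponential functional form are assumptions for illustration; the paper does not specify the exact functional form fitted by its extraction script.

```python
import numpy as np

def fit_alpe(load, dx_km, vent_rc, n_bins=50):
    """Fit load ~ exp(a - r/b) to a gridded footprint of deposit load (kg/m2).

    load    : 2-D array of simulated ash load at ground level
    dx_km   : grid spacing in km
    vent_rc : (row, col) index of the vent
    """
    rows, cols = np.indices(load.shape)
    r = np.hypot(rows - vent_rc[0], cols - vent_rc[1]) * dx_km   # distance from vent
    edges = np.linspace(0, r.max(), n_bins + 1)
    centres, maxima = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi) & (load > 0)
        if sel.any():
            centres.append(0.5 * (lo + hi))
            maxima.append(load[sel].max())           # envelope of the footprint
    coeffs = np.polyfit(centres, np.log(maxima), 1)  # log(load) = a - r/b
    b, a = -1.0 / coeffs[0], coeffs[1]
    return lambda distance_km: np.exp(a - distance_km / b)

# Toy footprint: exponential decay plus noise, with the vent at the grid centre.
rows, cols = np.indices((101, 101))
r = np.hypot(rows - 50, cols - 50) * 2.0
footprint = 200.0 * np.exp(-r / 35.0) * np.random.uniform(0.5, 1.0, r.shape)
alpe = fit_alpe(footprint, dx_km=2.0, vent_rc=(50, 50))
print(alpe(10.0), alpe(100.0))
```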
Not unlike the GMPEs used to conduct PSHA, the generation and application of ALPEs will have a considerable influence on the outcome of the PVAHA. The ALPEs developed here use the dispersal model FALL3D; however, other dispersal models could be used (e.g. ASHFALL (Hurst 1994; Hurst and Turner 1999), HAZMAP (Costa et al. 2009) or TEPHRA (Bonadonna and Houghton 2005)), and the authors would encourage the development of ALPEs using the range of dispersal models currently available to build on and compare with the current work. In order to determine the uncertainty of the ALPEs, or the degree to which they accurately reproduce both the simulated ash fallout generated by FALL3D and the observed deposit data gathered from field studies of historical eruptions, a validation is presented for the F2 Plinian fall deposit generated by the 1815 eruption of Tambora, on the island of Sumbawa, Indonesia. FALL3D has already been widely validated against several tephra deposits and airborne ash cloud observations from different eruptions (Costa et al. 2006; Macedonio et al. 2008; Scollo et al. 2008a; Scollo et al. 2008b; Scollo et al. 2009; Corradini et al. 2011; Costa et al. 2012; Kandlbauer et al. 2013; Costa et al. 2014; Kandlbauer and Sparks 2014). An inversion simulation for the Tambora F2 deposit data is presented here using FALL3D, comparing observed measurements of the F2 deposit at 27 sample locations with simulated ash measurements generated by FALL3D at the corresponding sample sites (Sigurdsson and Carey 1989). We then generate an ALPE from the FALL3D data following the methodology outlined above and use this equation to calculate the expected ash fallout for the F2 deposit. We present difference plots to compare observed versus simulated (FALL3D), simulated versus calculated (ALPE) and calculated versus observed estimates (Sigurdsson and Carey 1989) for volcanic ash load, and comment on the performance of the ALPE for emulating the Tambora F2 Plinian fall deposit.
Validation of FALL3D for the Tambora F2 Plinian fall deposit
To inverse model the Tambora F2 Plinian fall deposit, the validation method of Costa et al. (2012) is adopted. Costa et al. (2012) constrained the eruption dynamics and ash dispersal characteristics associated with the Campanian Ignimbrite (39 ka) eruption in Italy and later the Youngest Toba Tuff (75 ka) super-eruption in Indonesia. This approach combines time-dependent meteorological fields for the region, a spectrum of volcanological parameters (erupted mass, mass eruption rate (MER), column height and total grainsize distribution) and over 100 simulations of the ash dispersal model FALL3D. Optimal values of the input parameters are obtained by best-fitting the measured Tambora F2 tephra thicknesses over the entire dispersal area (27 locations) and minimising the deviation of the regression. The explored range of input parameters for the Tambora F2 eruption is reported in Table 5.
Table 5 The explored range and best-fit input parameters for the modelled dispersion of the Tambora F2 fall deposit using FALL3D
Ten years of wind data (January 2000 to December 2010) were obtained from the National Centers for Environmental Prediction (NCEP) and National Center for Atmospheric Research (NCAR) global reanalysis project (Kistler et al. 2001). The NCEP/NCAR reanalysis data archive contains six-hourly data at 17 pressure levels ranging from 1000 to 10 mb with a 2.5° horizontal resolution. The methodology of Costa et al.
(2012) used here assumes that this collection of modern meteorological fields can statistically represent a proxy for those at the time of the Tambora F2 eruption (~200 years ago). Vertical meteorological profiles were extracted from the wind data at the gridded location closest to the source and interpolated to the FALL3D computational grid. In order to reduce the computational requirements, vertical, but not horizontal, changes in wind conditions with distance from the source were accounted for here at six-hourly intervals using linear temporal and spatial interpolations. The computational domain was discretised by a horizontal grid step of Δx = Δy = 1.35 km and a vertical step of Δz = 1 km. The computational domain extended from 9° S to 7° N and from 117° E to 118° W. The distribution of mass within the column was calculated using an empirical parameterisation based on that of Suzuki (1983) and Pfeiffer et al. (2005). In order to account for aggregation processes (clustering of fine ash particles), an aggregation model similar to that of Cornell et al. (1983), and used by Costa et al. (2012, 2014), was adopted here. This aggregation model assumes that 50 % of the 63–44 μm (4–4.5Ø) ash, 75 % of the 44–31 μm (4.5–5Ø) ash and 95 % of the sub-31 μm (<5Ø) ash fell as aggregated particles. The diameter and density of the aggregates were determined by best fit in the simulations. An approach developed for best-fitting the spatial variation between recorded and simulated tsunami heights, the Aida indices (Aida 1978), was adopted by Costa et al. (2014) for tephra and is similarly used here to measure the reliability of the modelled results. The Aida index K represents the geometric average of the distribution and the second index k is the associated standard deviation of the distribution:
$$ \log K = \frac{1}{n}\sum_{i=1}^{n} \log K_{i} $$
$$ \log k = \left[ \frac{1}{n}\sum_{i=1}^{n} \left( \log K_{i} \right)^{2} - \left( \log K \right)^{2} \right]^{\frac{1}{2}} $$
where n is the total number of measurements and \(K_{i} = M_{i}/H_{i}\) is the ratio of the measured thickness (load) \(M_{i}\) at the i-th location to the simulated thickness (load) \(H_{i}\) at the same site. In keeping with the approach for tsunami and with Costa et al. (2014), we consider the simulated tephra thickness results satisfactory when:
$$ 0.95 < K < 1.05\ \text{and}\ k < 1.45 $$
The best-fit results from FALL3D are reported in Table 5 and indicate that the MER was ~1.2 × 10^8 kg/s, the eruption column height was ~33 km and the total mass deposited as fallout was ~1.2 × 10^14 kg. These results are in good agreement with estimates made by Sigurdsson and Carey (1989) based on field observations. The corresponding simulated best-fit deposit thicknesses are depicted in a difference plot and reported in Fig. 7. The correlation coefficient between log (measured thickness) and log (simulated thickness) is 0.87 for the best meteorological fit. All simulated thicknesses are between 1/5 and 5 times the observed thicknesses, and the reliability of the best-fit results is further emphasised by the Aida index values, reflecting a geometric average K = 0.98 and a geometric standard deviation k = 1.29.
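The Aida indices are straightforward to compute; the sketch below does so for hypothetical measured and simulated loads at a handful of sample sites (the actual 27-point Tambora dataset is not reproduced here).

```python
import numpy as np

def aida_indices(measured, simulated):
    """Geometric average K and geometric standard deviation k of M_i / H_i."""
    log_K_i = np.log10(np.asarray(measured) / np.asarray(simulated))
    log_K = log_K_i.mean()
    log_k = np.sqrt((log_K_i ** 2).mean() - log_K ** 2)
    return 10 ** log_K, 10 ** log_k

# Hypothetical ash loads (kg/m2) at a few sample sites.
measured = [120.0, 45.0, 8.0, 2.5]
simulated = [110.0, 50.0, 9.5, 2.0]

K, k = aida_indices(measured, simulated)
print(round(K, 3), round(k, 3), 0.95 < K < 1.05 and k < 1.45)  # acceptance check
```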
Fig. 7 (a) Comparison between the thicknesses (cm) from the best-fit FALL3D simulations and field data for the F2 Plinian fall event, Tambora, at each of the 27 sample points (modified after Sigurdsson and Carey 1989); (b) difference plot for the best-fit FALL3D simulation against observed thicknesses converted to load (kg/m2; Sigurdsson and Carey 1989). The solid line represents perfect agreement and the dotted and dashed black lines mark the region that differs from the observed values by a factor of 10 (1/10) and 5 (1/5), respectively; (c) difference plot for calculated ash load (ALPE) against simulated load (FALL3D); (d) difference plot for calculated load (ALPE) against observed load (kg/m2)
Validation of an ALPE for the Tambora F2 Plinian fall deposit
Following the procedure outlined above, the volcanic ash attenuation relationship for the best-fit FALL3D simulation was considered and an ALPE was derived for this event. For the purposes of this validation, all volcanic ash thicknesses were converted to load (kg/m2), the preferred unit of measurement for PVAHA. Validation of the ALPE was a two-step process involving, first, verification that the ALPE could statistically emulate the simulated deposit load generated by FALL3D in the previous step and, second, verification that the calculated ash load values were in good agreement with the observed data of Sigurdsson and Carey (1989) for this event at the 27 sample localities. Following the methodology of Costa et al. (2012), the modelled results are considered to be in good agreement with the measured observations when they are between 1/5 and 5 times the observed thickness (Fig. 7). A difference plot indicates good agreement (within 1/5 to 5 times) between the simulated best-fit (FALL3D) load (kg/m2) and the observed load (converted from thickness; Sigurdsson and Carey 1989), plotted here at the 27 sample localities (Fig. 7b). To verify that the calculated load from the ALPE closely approximates the simulated load from FALL3D (from which it was derived), a difference plot was generated. As expected, the calculated load and the simulated best-fit load are in good agreement (Fig. 7c). Finally, to complete the validation process, the calculated load must be compared with the observed field data. A difference plot is generated and the calculated load (ALPE) and the observed load are found to be in good agreement (Fig. 7d).
Volcanic Ash Probabilistic Assessment tool for Hazard (VAPAH)
An algorithm was developed to facilitate the fourth step of the PVAHA framework, named here the Volcanic Ash Probabilistic Assessment tool for Hazard (VAPAH). VAPAH utilises a scripted interface and high-performance computing technology in order to undertake assessments at multiple spatial scales. The VAPAH algorithm reads in magnitude-frequency relationships, an ALPE catalogue and global-scale meteorological conditions for a region of interest and integrates across all possible events to arrive at a preliminary annual exceedance probability for each site across the region of interest. Other algorithms of this kind have been developed for probabilistic earthquake hazard assessment (e.g. the Earthquake Risk Model, EQRM; Robinson et al. 2005); however, this algorithm is the first of its kind specifically designed for volcanic ash. Inputs for the VAPAH algorithm include:
- identification of volcanic sources for analysis
- characterisation of magnitude-frequency relationships for each volcanic source
- characterisation of the volcanic ash load attenuation relationship (the ALPE catalogue)
- a spatial grid of pre-determined resolution clipped to the domain extent (default = auto-generated)
- characterisation of meteorological conditions: prevailing wind direction (degrees) and wind speed (m/s)
Identification of volcanic sources, characterisation of magnitude-frequency relationships for each source and development of an ALPE catalogue have all been described previously. Characterisation of the meteorological conditions for the region of interest is discussed below using the Indonesian sub-region as an example.
Characterisation of meteorological conditions
The VAPAH algorithm requires an estimate of the prevailing wind direction (degrees) and speed (m/s) at a single pressure level for each source. Meteorological data can be sourced from direct observations (e.g. weather balloons, anemometers and wind vanes) or from modelled data available at multiple spatial scales, depending on the purpose of the PVAHA (e.g. the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis, the Weather Research and Forecasting Model (WRF) and the Australian Community Climate and Earth System Simulator (ACCESS)). The algorithm treats these estimates as the prevailing conditions at each location and assigns them a probability weighting according to a Gaussian distribution with a standard deviation specified by the user. An example of deriving these variables for the Indonesia sub-region, using one potential source of meteorological data, is provided below. Sixty-four years of meteorological data (January 1950 to December 2014) were sourced from the NCEP/NCAR re-analysis for the Indonesia sub-region, available at grid intervals of 2.5° globally. Among other variables, wind vector components are available for 17 pressure levels to a height of 40 km. Monthly mean zonal (u) and meridional (v) wind components were extracted at 16 locations across Indonesia, from the NCEP grid points closest to each volcanic source, at the 250 mb pressure level (tropopause) for the 64-year period (Fig. 8). Monthly mean wind direction (degrees) and wind speed (m/s) were derived from the u and v wind components and aggregated first for each year and then for the 64-year period using the freeware meteorological analysis and plotting tool WRPLOT (Table 6). The prevailing wind direction and wind speed were assigned to each source from the closest NCEP point.
Fig. 8 Monthly mean wind direction (deg) and wind speed (m/s) for the 250 mb pressure level (tropopause) aggregated over a 64-year period (1950–2014) for four of the 16 NCEP grid points used for the Indonesian sub-region. Wind rose diagrams depict the frequency of winds blowing 'from' a particular direction over the 64-year period. The length of each spoke is related to the frequency with which the wind blows from a particular direction (e.g. N, S, E or W) over the 64-year period, and each concentric circle represents a different frequency (e.g. 10 %, 20 %, 30 %) emanating from zero at the centre.
Table 6 Monthly mean wind direction and wind speed aggregated for a 64-year period (1950–2014) for 16 NCEP grid points across the Indonesian sub-region
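Deriving a prevailing wind direction and speed from monthly mean u (zonal) and v (meridional) components can also be done directly in a few lines, without external plotting tools; the component values below are invented for illustration, and WRPLOT or any equivalent tool would give the same conversion.

```python
import numpy as np

# Hypothetical monthly mean wind components (m/s) at one NCEP grid point, 250 mb.
u = np.array([-8.0, -6.5, -3.0, 2.0, 5.5, 7.0])   # zonal (positive = wind toward the east)
v = np.array([1.0, 0.5, -0.5, -1.5, -2.0, -1.0])  # meridional (positive = wind toward the north)

speed = np.hypot(u, v)
# Meteorological convention: direction the wind blows FROM, clockwise from north.
direction_from = np.degrees(np.arctan2(-u, -v)) % 360.0

# A simple "prevailing" summary: mean speed and the direction of the mean vector.
prevailing_speed = speed.mean()
prevailing_dir = np.degrees(np.arctan2(-u.mean(), -v.mean())) % 360.0
print(np.round(direction_from), round(prevailing_dir, 1), round(prevailing_speed, 1))
```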
VAPAH algorithm procedure

The operational procedure for the VAPAH algorithm is presented in Fig. 9. A configuration file is used to customise the extent of the assessment (Attachment 1). The configuration file reads in a series of CSV (comma-separated value) files including:
the ALPE catalogue,
the volcano sources (including prevailing wind speed and direction), and
the sites of interest (pre-processed spatial grid).

Schematic representation of the operational procedure for the VAPAH tool. * VAPAH will auto-generate a spatial grid using user-specified parameters in the configuration file if one is not provided as a pre-processed input

The user can then customise the run further through the configuration file. If no sites file is provided by the user, geographic coordinates for the region of interest and the resolution can be input for auto-generation of the spatial grid by VAPAH. The user must define the timeframes of interest (e.g. 100, 500, 1000 years etc.) in the configuration file. Mean wind direction and mean wind speed for each source are pre-configured (see previous section); however, wind direction and wind speed probabilities vary seasonally, particularly in equatorial regions. The configuration file captures this uncertainty through the wind direction distribution parameter (e.g. normal or bimodal), the wind direction distribution standard deviation parameter (degrees), the wind speed distribution parameter (e.g. normal or bimodal distribution), the wind speed distribution standard deviation parameter (m/s) and the number of wind speeds. Finally, hazard thresholds such as maximum distance from source, maximum ash load value at source and minimum sum of ash load needed to generate hazard curves or histograms can be set here by the user. The VAPAH algorithm can be run in serial or parallel computing environments (i.e. on one or many processors simultaneously) but is optimised for high-performance computing platforms utilising thousands of CPUs. Simulation time will vary according to the number of events, the number of sources, the resolution of the hazard grid and the distribution and standard deviation of meteorological conditions. A script is used to execute the procedure as follows (a minimal sketch of this loop is given after the list):

1. The first site is located on the hazard grid and the distance is calculated in kilometres between this site and the first source in the catalogue.
2. The distance value is used to evaluate the first ALPE for the first synthetic event in the catalogue in order to derive the expected ash load (kg/m2) at that site for the first event.
3. The calculated ash load for the first event and its associated probability, derived from the magnitude-frequency relationships for the first source in the catalogue, are written to a results file.
4. The algorithm repeats (2) and (3) for each ALPE for the first source.
5. The algorithm then moves to the next source in the catalogue and repeats steps (2), (3) and (4) until all ALPEs have been assessed for every source for the first site.
6. The algorithm then calculates the cumulative probability of each event (e.g. each instance where a volcanic ash load was recorded from one or more sources) for the first site and generates a hazard curve of volcanic ash load (kg/m2) versus annual probability of exceedance, together with the maximum expected ash hazard for the timeframes of interest (specified by the user in the configuration file). This process captures all instances where the first site might experience multiple ash hazards from more than one source.
7. The algorithm then loops to the next site and repeats steps (1)–(6) until all sites have been evaluated.
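The following Python skeleton outlines the site–source–ALPE loop described above. It is an illustrative sketch only, not the VAPAH source code: the ALPE evaluation, the magnitude-frequency probabilities, the great-circle distance helper and the Poissonian combination of event rates are stand-ins for the real inputs and implementation choices.

```python
# Illustrative skeleton of the VAPAH site loop, assuming each ALPE is a callable
# load(distance_km) -> kg/m2 and carries the annual probability of its synthetic event.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres (one possible choice)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def assess_site(site, sources, min_load=0.1):
    """Evaluate every ALPE of every source at one site; return (load, annual_prob) pairs."""
    records = []
    for src in sources:                      # step 5: loop over sources
        d_km = haversine_km(site["lat"], site["lon"], src["lat"], src["lon"])  # step 1
        for alpe in src["alpes"]:            # step 4: loop over ALPEs (synthetic events)
            load = alpe["load_fn"](d_km)     # step 2: expected ash load at this distance
            if load >= min_load:             # apply a user-set hazard threshold
                records.append((load, alpe["annual_prob"]))  # step 3: write to results
    return records

def hazard_curve(records, thresholds):
    """Step 6: annual exceedance probability for each ash-load threshold, combining
    annual event rates under an assumed independent Poissonian occurrence model."""
    curve = []
    for t in thresholds:
        rate = sum(p for load, p in records if load >= t)
        curve.append((t, 1.0 - math.exp(-rate)))
    return curve
```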
VAPAH results

VAPAH generates a database file of hazard calculations, collectively referred to here as the PVAHA. Like PSHA, the PVAHA results can be post-processed using VAPAH to generate hazard curves of annual probability of exceedance at sites of interest (or all sites), maps of maximum expected ash hazard for timeframes of interest and histograms which disaggregate the hazard (e.g. by location, magnitude, source etc.) for determining the primary causal factors at sites of interest. Examples of each for the Indonesian sub-region are reported in Fig. 10. Hazard curves report the annual exceedance probability versus volcanic ash load for a site of interest. These curves capture all instances where the site experienced ash loading from events originating from one or more sources (potentially thousands of events) and give the probability that the ash load will exceed a particular value (Fig. 10). The expected maximum ash load for timeframes of interest is also calculated. By aggregating the hazard calculations for each site, hazard maps can be generated which display the maximum expected ash load (kg/m2) at each site across the region for a timeframe of interest (e.g. a 1-in-100 year event; Fig. 10). It is important to clarify that a 1-in-100 year event does not suggest that the maximum expected ash load for a site will occur regularly every 100 years, or only once in 100 years, but rather that, given any 100-year period, the maximum expected ash load for a particular site may occur once, twice, more, or not at all.

(a) Hazard map of volcanic ash load (kg/m2) for Indonesia at 1 km resolution for the 1000-year timeframe; (b) annual exceedance probability curve versus ash load for a site in Jakarta and (c) histogram disaggregating the % contribution to ash load hazard of magnitude (VEI) for each source at the site in Jakarta for the 1000-year timeframe

Disaggregation of the hazard calculations can ascertain which events dominate the hazard at a particular site. The ability to disaggregate the primary causal factors contributing to the hazard at a given site (i.e. magnitude, source, distance, ash load etc.) is an inherent strength of this approach. The user can specify disaggregation parameters in the configuration file prior to undertaking an assessment and the VAPAH algorithm will generate histograms as part of the PVAHA results set (Fig. 10). Alternatively, histograms can be generated using a pre-generated database of events from assessments already computed. This functionality is useful for demonstrating, for example, the percentage contribution to hazard of different volcanic sources at a site located in Jakarta, and, of those volcanic sources, what proportion are VEI 2, 3, 4, 5, 6 and 7 (Fig. 10).
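A minimal sketch of this post-processing step is given below. It is not the VAPAH code: the way the maximum expected load is read off the hazard curve and the load-times-probability weighting used for the disaggregation are illustrative choices, and the field names and example values are hypothetical.

```python
# Post-processing sketch: maximum expected load for a timeframe and a simple
# disaggregation by source and VEI. `curve` is a list of (load, annual_exceedance_prob)
# pairs; `events` carry hypothetical metadata for one site.
from collections import defaultdict

def max_expected_load(curve, timeframe_years):
    """Largest load whose annual exceedance probability is at least 1/timeframe."""
    target = 1.0 / timeframe_years
    exceeded = [load for load, p in curve if p >= target]
    return max(exceeded) if exceeded else 0.0

def disaggregate(events):
    """Percentage contribution of each (source, VEI) pair to the hazard at a site,
    weighting each event by load x annual probability purely for illustration."""
    totals = defaultdict(float)
    for ev in events:
        totals[(ev["source"], ev["vei"])] += ev["load"] * ev["annual_prob"]
    grand_total = sum(totals.values()) or 1.0
    return {key: 100.0 * value / grand_total for key, value in totals.items()}

# Hypothetical example for a single site near Jakarta
curve = [(0.1, 0.05), (1.0, 0.02), (10.0, 0.004), (100.0, 0.0002)]
events = [
    {"source": "Krakatau", "vei": 4, "load": 2.0, "annual_prob": 0.01},
    {"source": "Krakatau", "vei": 6, "load": 40.0, "annual_prob": 0.0005},
    {"source": "Gede", "vei": 3, "load": 0.5, "annual_prob": 0.02},
]
print(max_expected_load(curve, timeframe_years=100))   # -> 1.0 (largest load with AEP >= 0.01)
print(disaggregate(events))
```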
The PVAHA methodology presented here integrates across all possible events and expected volcanic ash loads to arrive at a combined probability of exceedance for a site of interest. The analysis incorporates the relative frequencies of occurrence of different events and a wide range of volcanic ash dispersal characteristics. Like PSHA, this quantitative method of estimating volcanic ash hazard has the advantage of providing consistent estimates of hazard and can be prepared for one site or many, all in the same region but in significantly different geographic orientations with respect to potential hazard sources (McGuire 1995, 2008). The methodology is highly customisable, allowing for the flexible integration of ALPEs generated using numerous ash dispersal models and eruption statistics derived from a variety of sources (Whelley et al. 2015). The capacity to build flexibility into the input assumptions highlights the power of this approach for quantifying uncertainty. The VAPAH algorithm effectively replaces the computationally intensive and time-consuming ash dispersal modelling that would otherwise be required to survey the number of events, spatial scale and resolution achievable using this technique. Similar to PSHA, the PVAHA framework proposed here is not intended to replace conventional ash modelling techniques. PVAHA considers all possible events, from all possible sources, for a region of interest and produces a broad-brush, first approximation of the hazard (like PSHA) which can be updated and re-run regularly. It provides a quantitative mechanism for constraining the causal factors of ash hazard globally and can be used to underpin prioritisation of sources for further local-scale dispersal modelling work. While the PVAHA procedure, through utilisation of the VAPAH algorithm, is primarily concerned with aggregating the hazard contributions from all sources, disaggregating the volcanic ash hazard has two important implications for the usefulness of the technique over other approaches to multi-scale assessments. First, the causal factors which dominate the volcanic ash hazard for each site, including magnitude, distance and source, are captured in a results file that can be easily interrogated (a limitation of dispersal modelling outputs typically generated as gridded data). Second, disaggregation can be used to identify priority areas (sites or sources) from the multitude of volcanic events for subsequent, more detailed analysis at the local scale that could be used to inform decision-making (e.g. targeted ash modelling at Merapi for a Yogyakarta hazard assessment). The benefit of disaggregating the analysis is a better overall understanding of the factors contributing to volcanic ash hazard for a region and evidence-based targeting of disaster risk reduction efforts.

Addressing uncertainty

The quality and value of the resulting assessment is controlled by the quality of the input models, and it is critical that the uncertainties in parameter values, as well as those associated with the dispersal model itself, are suitably accounted for. Uncertainty is addressed here through the development of a suite of ALPEs that account for the full spectrum of input parameters and the associated uncertainty of the dispersal model in use. This process also allows multiple competing hypotheses on models and parameters to be incorporated into the analysis (i.e. ALPEs based on other ash dispersal models). Integrating the synthetic catalogue of events with magnitude-frequency relationships derived for each source and prevailing meteorological conditions for the region, and carrying those probabilities through to the analysis outputs using the VAPAH algorithm, also mitigates the uncertainty in the assumptions made for annual eruption probability. Sensitivity analyses should be periodically carried out for all parameters and models and updated as new data and information become available in order to refine the resulting analysis.

Limitations, assumptions and caveats

The PVAHA methodology presented here incorporates a number of assumptions and is subject to limitations on what is produced and how the information can be interpreted and used. All assumptions are made explicit and are open to review and refinement with new evidence.
Key assumptions made, limitations and caveats on the resulting assessment include the following:
Determination of the record of completeness for a sub-region is difficult and can have a significant impact on the probability values derived for those sources (e.g. volcanoes with long repose periods might be under-represented).
The probability of an event is based on the type of source (e.g. caldera, large cone etc.) and this study assumes a static source type (i.e. a caldera remains a caldera); however, morphologies are typically dynamic and evolve over the history of a source (e.g. large cones can become calderas). This has implications for estimating the probability of events.
All events are assumed to follow a memory-less Poisson process, meaning the probability of an event occurring today is not contingent on whether or not an event occurred yesterday.
A 2 km radius is applied to each source area and volcanic ash load estimates within this zone are not utilised due to over-estimation of proximal deposits (a feature of the dispersal model, considered acceptable due to the general absence of population, buildings and infrastructure within 2 km of a source, i.e. on the edifice slopes).

This procedure and the VAPAH tool are intended to be one in a range of tools and techniques used to provide a consistent, fit-for-purpose approach to hazard assessment across multiple spatial scales. The PVAHA methodology presented here is fully customisable and can be modified to reflect advances in our understanding of the dynamics of volcanic ash dispersal, improvements in statistical analysis techniques for historical eruptive events and ever-increasing capabilities in high-performance computing. There is no limit to the number of ash dispersal models which could be used to generate ALPEs for consideration, and this allows multiple competing hypotheses on models and parameters to be incorporated into the analysis. The VAPAH algorithm currently addresses hazard; however, the modular nature of the tool supports a framework for risk analysis. For example, a Python module for damage (i.e. building damage, infrastructure damage or agricultural crop damage), containing vulnerability functions for volcanic ash, could be developed and implemented as part of the VAPAH algorithm. Vulnerability functions are defined here as the relationship between the potential damage to exposed elements (e.g. buildings, agricultural crops, critical infrastructure, airports) and the amount of ash load (Blong 1981; Casadevall et al. 1996; Blong 2003; Spence et al. 2005; Guffanti et al. 2010; Wilson et al. 2012). Through integrating the hazard module (presented here) with a damage module, the conditional probability of damage (or loss in dollars) for an exposed element could be calculated for a given threshold of volcanic ash load. The resulting damage curves could be integrated with an exposure data module (e.g. population density, building footprints and crop extents) for the region of interest and the potential impact of events could be quantified in a risk framework. Significant advances have been made in the field of probabilistic natural hazard analysis in recent decades, leading to the development of PSHA, a method for assessing and expressing the probability of earthquake hazard at a site of interest in terms of maximum credible ground-shaking intensity for timeframes of interest. The PVAHA methodology presented here modifies the four-step procedure of PSHA for volcanic ash hazard assessment at multiple spatial scales.
This technique considers a magnitude-frequency distribution of eruptions, associated volcanic ash load attenuation relationships and prevailing meteorological conditions and integrates across all possible events to arrive at a combined probability of exceedance for a site of interest. The analysis incorporates the relative frequencies of occurrence of different events and a wide range of volcanic ash dispersal characteristics. This quantitative procedure can provide rigorous and consistent estimates of volcanic ash hazard across multiple spatial scales in less time and using far fewer computational resources than those needed to up-scale conventional ash dispersal modelling. The VAPAH algorithm, developed here to facilitate this procedure calculates the probability of exceeding a given ash load for a site of interest and generates a hazard curve of annual probability of exceedance versus volcanic ash load (kg/m2). VAPAH also calculates the maximum expected ash load at a given site for timeframes of interest. The database of results obtained for a grid of sites can be aggregated to generate maps of expected volcanic ash load at different timeframes or disaggregated by event in order to determine the percentage contribution to the hazard by magnitude, distance, source etc. This has important implications for understanding the causal factors, which dominate volcanic ash hazard at a given site, and identifying priority areas for more detailed, localised modelling that can be used to inform disaster risk reduction efforts. Aida I. Reliability of a tsunami source model derived from fault parameters. J Phys Earth. 1978;26(1):57–73. Akkar, S, Bommer, JJ. Empirical prediction equations for peak ground velocity driven from strong motion records in Europe and the Middle East. Bulletin of Seismological Society of America. 2007;97(2):511-530. Barberi F, Macedonio G, Pareshci MT, Santacroce R. Mapping tephra fallout risk: an example from Vesuvius, Italy. Nature. 1990;344:142–4. Bear-Crozier A, Kartadinata N, Heriwaseso A, Nielsen O. Development of python-FALL3D: a modified procedure for modelling volcanic ash dispersal in the Asia-Pacific region. Nat Hazards. 2012;64(1):821–38. Blong R (1981) Some effects of tephra falls on buildings. In: Tephra Studies. Springer, pp 405–420 Blong R. Building damage in Rabaul, Papua New Guinea, 1994. Bull Volcanol. 2003;65(1):43–54. Bommer, JJ, Stafford, PJ, Alarcon, JE, Akkar, S. The influence of magnitude ranges on ground motion prediction. Bulletin of the Seismological Society of America. 2007;97(6):2152-2170. Bommer JJ, Scherbaum F. The Use and Misuse of Logic Trees in Probabilistic Seismic Hazard Analysis. Earthquake Spectra. 2008;24(4):997–1009. Bonadonna C, Houghton BF. Total grain-size distribution and volume of tephra-fall deposits. Bull Volcanol. 2005;67:441–56. Bonadonna C, Ernst G, Sparks RSJ. Thickness varations and volume estimates of tephra fall deposits: the importance of particle Renolds number. J Volcanol Geotherm Res. 1998;81:173–87. Bonadonna C, Macedonio G, Sparks RSJ (2002a) Numerical modelling of tephra fallout associated with dome collapses and Vulcanian explosions: application to hazard assessment on Montserrat. In: Druitt TH, Kokelaar BP (eds) The eruption of Soufrière Hills Volcano, Montserrat, from 1995 to 1999. Geological Society London Memoir, pp 517–537 Bonadonna C, Mayberry GC, Calder ES, Sparks RSJ, Choux C, Jackson P, et al. (2002b) Tephra fallout in the eruption of Soufrière Hills Volcano, Montserrat. 
In: Druitt TH, Kokelaar BP (eds) The eruption of Soufrière Hills Volcano, Montserrat, from 1995 to 1999. Geological Society London Memoir, pp 483–516 Bursik M. Effect of wind on the rise height of volcanic plumes. Geophys Res Lett. 2001;18:3621–4. Casadevall TJ, Delos Reyes P, Schneider DJ (1996) The 1991 Pinatubo eruptions and their effects on aircraft operations. Fire and Mud: eruptions and lahars of Mount Pinatubo. University of Washington Press. Philippines:625–636 Connor CB, Hill BE, Winfrey B, Franklin NM, Femina PCL. Estimation of volcanic hazards from tephra fallout. Nat Hazard Rev. 2001;2(1):33–42. Cornell CA. Engineering seismic risk analysis. Bull Seismol Soc Am. 1968;58(5):1583–606. Cornell W, Carey S, Sigurdsson H. Computer-simulation of transport and deposition of the Campanian Y-5 Ash. J Volcanol Geotherm Res. 1983;17(1–4):89–109. Corradini S, Merucci L, Folch A. Volcanic ash cloud properties: Comparison between MODIS satellite retrievals and FALL3D transport model. IEEE Geosci Remote Sens Lett. 2011;8(2):248–52. Costa A, Macedonio G, Folch A. A three-dimensional Eulerian model for transport and deposition of volcanic ashes. Earth Planet Sci Lett. 2006;241:634–47. Costa A, Dell'Erba F, Di Vito MA, Isaia R, Macedonio G, Orsi G, et al. Tephra fallout hazard assessment at Campi Flegrei caldera (Italy). Bull Volcanol. 2009;71:259–73. Costa A, Folch A, Macedonio G, Giaccio B, Isaia R, Smith V (2012) Quantifying volcanic ash dispersal and impact of the Campanian Ignimbrite super‐eruption. Geophys Res Lett 39 (10) http://dx.doi.org/10.1029/2012GL051605. Costa A, Smith VC, Macedonio G, Matthews NE. The magnitude and impact of the Youngest Toba Tuff super-eruption. Front Earth Sci. 2014;2:16. Crosweller HS, Arora B, Brown SK, Cottrell E, Deligne NI, Guerrero NO, et al. Global database on large magnitude explosive volcanic eruptions (LaMEVE). J Appl Volcanol. 2012;1(1):1–13. Esteva L. Bases para la formulacion de decisiones de disen ̃o sismico. Mexico: Universidad Autonoma Nacional de Me ́xico; 1968. Ewert JW. System for ranking relative threats of US volcanoes. Nat Hazard Rev. 2007;8(4):112–24. Folch A, Sulpizio R. Evaluating long-range volcanic ash hazard using supercomputing facilities: application to Somma-Vesuvius (Itay), and consequences for civil aviation over the Central Mediterranean Area. Bull Volcanol. 2010;72:1039–59. Folch A, Cavazzoni C, Costa A, Macedonio G. An automatic procedure to forecast tephra fallout. J Volcanol Geotherm Res. 2008a;177:767–77. Folch A, Jorba O, Viramonte J. Volcanic ash forecast - application to the May 2008 Chaiten eruption. Nat Hazards Earth Syst Sci. 2008b;8:927–40. Folch A, Costa A, Macedonio G. FALL3D: A computational model for transport and deposition of volcanic ash. Comput Geosci. 2009;35:1334–42. Folch A, Costa A, Basart S (2012) Validation of the FALL3D ash dispersion model using obsrvations of the 2010 Eyjafjallajokull volcanic ash clouds. Atmos Environ in press:1–19 Geist EL, Parsons T. Probabilistic Analysis of Tsunami Hazards*. Nat Hazards. 2006;37(3):277–314. Guffanti M, Casadevall TJ, Budding K (2010) Encounters of aircraft with volcanic ash clouds; A compilation of known incidents, 1953–2009. US Geological Survey. US Numbered Series. Series Number 545. Hoblitt RP, Miller CD, Scott WE (1987) Volcanic hazards with regard to siting nuclear-power plants in the Pacific Northwest. US Geological Survey. US Numbered Series. Series Number 87-297. Horspool N, Pranantyo I, Griffin J, Latief H, Natawidjaja D, Kongko W, et al. 
A probabilistic tsunami hazard assessment for Indonesia. Nat Hazard Earth Syst Sci. 2014;14(11):3105–22. Hurst AW (1994) ASHFALL - A computer program for estimating volcanic ash fallout (Report and User Guide). Institute of Geological & Nuclear Sciences science report 94 (23). pp 23. Hurst AW, Turner R. Performance of the program ASHFALL for forecasting ashfall during the 1995 and 1996 eruptions of Ruapehu volcano. N Z J Geol Geophys. 1999;42:615–22. Jenkins S, Magill C, McAneney J, Hurst AW (2008) Multi-stage volcanic events: tephra hazard simulations for the Okataina Volcanic Centre, New Zealand. J Geophys Res 113 (F04012) Jenkins S, Magill C, McAneney J, Blong R. Regional ash fall hazard I: a probabilistic assessment methodology. Bull Volcanol. 2012a;74(7):1699–712. Jenkins S, McAneney J, Magill C, Blong R. Regional ash fall hazard II: Asia-Pacific modelling results and implications. Bull Volcanol. 2012b;74(7):1713–27. Kandlbauer J, Sparks R. New estimates of the 1815 Tambora eruption volume. J Volcanol Geotherm Res. 2014;286:93–100. Kandlbauer J, Carey S, Sparks RS. The 1815 Tambora ash fall: implications for transport and deposition of distal ash on land and in the deep sea. Bull Volcanol. 2013;75(4):1–11. doi:10.1007/s00445-013-0708-3. Kistler R, Collins W, Saha S, White G, Woollen J, Kalnay E, et al. The NCEP-NCAR 50-year reanalysis: Monthly means CD-ROM and documentation. Bull Am Meteorol Soc. 2001;82(2):247–67. Lin I-C, Tung CC. A preliminary investigation of tsunami hazard. Bull Seismol Soc Am. 1982;72(6A):2323–37. Macedonio G, Costa A, Folch A. Ash fallout scenarios at Vesuvius: Numerical simulations and implications for hazard assessment. J Volcanol Geotherm Res. 2008;178:366–77. Magill CR, Hurst AW, Hunter LJ, Blong RJ. Probabilistic tephra fall simulation for the Auckland Region, New Zealand. J Volcanol Geotherm Res. 2006;153(3–4):370–86. McGuire RK. Probabilistic seismic hazard analysis and design earthquakes: closing the loop. Bull Seismol Soc Am. 1995;85(5):1275–84. McGuire RK. Probabilistic seismic hazard analysis: Early history. Earthquake Eng Struct Dyn. 2008;37(3):329–38. McKee C, Johnson RW, Lowenstein PL, Riley SJ, Blong RJ, De Saint OP, et al. Rabaul Caldera, Papua New Guinea: volcanic hazards, surveillance, and eruption contingency planning. J Volcanol Geotherm Res. 1985;23:195–237. Miller V, Bear-Crozier A, Newey V, Horspool N, Weber R. Probabilistic Volcanic Ash Hazard Analysis (PVAHA) II: Assessment of the Asia-Pacific region using VAPAH. J Appl Volcanol. 2016; doi:10.1186/s13617-016-0044-3. Musson R. The use of Monte Carlo simulations for seismic hazard assessment in the UK. Annali di Geofisica. 2000;43(1):1-9. Newhall CG, Self S. The volcanic explosivity index/VEI/- An estimate of explosive magnitude for historical volcanism. J Geophys Res. 1982;87(C2):1231–8. Pfeiffer T, Costa A, Macedonio G. A model for the numerical simulation of tephra fall deposits. J Volcanol Geotherm Res. 2005;140:273–94. Power W, Wang X, Lane E, Gillibrand P. A probabilistic tsunami hazard study of the auckland region, part I: propagation modelling and tsunami hazard assessment at the shoreline. Pure Appl Geophys. 2013;170(9–10):1621–34. Rikitake T, Aida I. Tsunami hazard probability in Japan. Bull Seismol Soc Am. 1988;78(3):1268–78. Robinson D, Fulford G, Dhu T (2005) EQRM: Geoscience Australia's Earthquake Risk Model: Technical Manual Version 3.0. Geoscience Australia Record 2006/01. Geoscience Australia. Canberra. pp. 148. Robinson, D, Dhu, T, Schnieder, J. 
Practical probabilistic seismic risk analysis: a demonstration of capability. Seismol Res Lett. 2006;77(4):453-459. Scollo S, Folch A, Costa A. A parametric and comparative study of different tephra fallout models. J Volcanol Geotherm Res. 2008a;176:199–211. Scollo S, Tarantola S, Bonadonna C, Coltelli M, Saltelli A (2008b) Sensitivity analysis and uncertainty estimatio for tephra dispersal models. J Geophys Res. doi:10.1029/2006JB004864. Scollo S, Prestifilippo M, Spata G, D'Agostino M, Coltelli M. Monitoring and forecasting Etna volcanic plumes. Nat Hazards Earth Syst Sci. 2009;9:1573-1585. Siebert L, Simkin T, Kimberley P (2010) Volcanoes of the World. 3rd edn. Smithsonian Institution, Washington DC. University of California, Berkeley Sigurdsson H, Carey S. Plinian and co-ignimbrite tephra fall from the. Bull Volcanol. 1989;51(4):243–70. Simkin T, Siebert L. Volcanoes of the World: A Regional Directory, Gazetteer, and Chronology of Volcanism During the Last 10,000 Years. Tucson, Ariz: Geoscience; 1994. p. 349. Simpson A, Johnson RW, Cummins P. Volcanic threat in developing countries of the Asia-Pacific region: probabilistic hazard assessment, population risks, and information gaps. Nat Hazards. 2011;57:151–65. Sørensen MB, Spada M, Babeyko A, Wiemer S, Grünthal G. Probabilistic tsunami hazard in the Mediterranean Sea. J Geophys Res Solid Earth. 2012;1978–2012:117 (B1). Sparks RSJ, Bursik M, Carey S, Gilbert JS, Graze LS, Sigurdsson H, et al. Volcanic Plumes. Chichester: Wiley and Sons; 1997. Spence RJS, Kelman I, Baxter PJ, Zuccaro G, Petrazzuoli S. Residential building and occupant vulnerability to tephra fall. Nat Hazards Earth Syst Sci. 2005;5(4):477–94. Stirling M, Wilson C. Development of a volcanic hazard model for New Zealand: first approaches from the methods of probabilistic seismic hazard analysis. Bull NZ Soc Earthq Eng. 2002;35(4):266–77. Suzuki T. A theoretical model for dispersion of tephra. In: Shimozuru D, I Y (eds) Arc volcanism: Physics and tectonics. Tokyo: Terra Scientific Publishing Company; 1983. TERA (1980). Seismic hazard analysis: a methodology for the Eastern United States. US Nuclear Regulatory Commisson Report No. NUREG/CR-1582. Thio HK, Somerville P, Ichinose G. Probabilistic analysis of strong ground motion and tsunami hazards in Southeast Asia. J Earthquake Tsunami. 2007;1(02):119–37. Thomas C, Burbidge D (2009) A Probabilistic Tsunami Hazard Assessment of the Southwest Pacific Nations. Geoscience Australia Professional Opinion 2009/02 Whelley P, Newhall C, Bradley K. The frequency of explosive volcanic eruptions in Southeast Asia. Bull Volcanol. 2015;77(1):1–11. doi:10.1007/s00445-014-0893-8. Wilson TM, Stewart C, Sword-Daniels V, Leonard GS, Johnston DM, Cole JW, et al. Volcanic ash impacts on critical infrastructure. Phys Chem Earth Parts A/B/C. 2012;45:5–23. Yokoyama I, Tilling RI, Scarpa R (1984) International Mobile Early-Warning System (s) for Volcanic Eruptions and Related Seismic Activities. Paris; UNESCO FP/2106-82-01 (2296). pp. 102. This research was undertaken with the assistance of resources from the Australian National Computational Infrastructure (NCI), which is supported by the Australian Government. The authors gratefully acknowledge our partners at Global Volcano Model (GVM) for numerous scientific discussions throughout the development of this manuscript. The authors also wish to thank D. Robinson, J. Griffin, A. Jones and J. Schneider for their constructive reviews that greatly improved the quality of this manuscript. 
This paper is published with the permission of the Chief Executive Officer of Geoscience Australia.

Geoscience Australia, GPO Box 378, Canberra, ACT, 2601, Australia: A. N. Bear-Crozier, V. Miller, V. Newey, N. Horspool & R. Weber. Present address: GNS Science, PO Box 30368 Avalon, Lower Hutt, 5040, New Zealand: N. Horspool. Correspondence to A. N. Bear-Crozier.

Authors' contributions: ABC contributed to the conceptual framework for the PVAHA, the technical development of the VAPAH tool and executing the Asia-Pacific case study, and was responsible for the preparation of this manuscript. VM contributed to the conceptual framework for the PVAHA, the technical development of the VAPAH tool and executing the Asia-Pacific case study. VN developed the scientific coding behind the VAPAH tool and assisted in the execution of the Asia-Pacific case study. NH contributed to the conceptual framework for the PVAHA and RW contributed to the statistical analysis for the Asia-Pacific case study. All authors read and approved the final manuscript.

Bear-Crozier, A.N., Miller, V., Newey, V. et al. Probabilistic Volcanic Ash Hazard Analysis (PVAHA) I: development of the VAPAH tool for emulating multi-scale volcanic ash fall analysis. J Appl. Volcanol. 5, 3 (2016). doi:10.1186/s13617-016-0043-4

Keywords: Probabilistic; Statistical emulator; Computational modelling
End-to-end Named Entity Recognition and Relation Extraction using Pre-trained Language Models

by John Giorgi, Xindi Wang, Nicola Sahar, Won Young Shin, Gary D. Bader, et al.

Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR). Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance. However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well. The few neural, end-to-end models that have been proposed are trained almost completely from scratch. In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model. Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train. On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin.

1 Introduction

The extraction of named entities (named entity recognition, NER) and their semantic relations (relation extraction, RE) are key tasks in information extraction and retrieval (IE & IR). Given a sequence of text (usually a sentence), the objective is to identify both the named entities and the relations between them. This information is useful in a variety of NLP tasks such as question answering, knowledge base population, and semantic search (Jiang, 2012). In the biomedical domain, NER and RE facilitate large-scale biomedical data analysis, such as network biology (Zhou et al., 2014), gene prioritization (Aerts et al., 2006), drug repositioning (Wang and Zhang, 2013) and the creation of curated databases (Li et al., 2015). In the clinical domain, NER and RE can aid in disease and treatment prediction, readmission prediction, de-identification, and patient cohort identification (Miotto et al., 2017).
Most commonly, the tasks of NER and RE are approached as a pipeline, with NER preceding RE. There are two main drawbacks to this approach: (1) Pipeline systems are prone to error propagation between the NER and RE systems. (2) One task is not able to exploit useful information from the other (e.g. the type of relation identified by the RE system may be useful to the NER system for determining the type of entities involved in the relation, and vice versa). More recently, joint models that simultaneously learn to extract entities and relations have been proposed, alleviating the aforementioned issues and achieving state-of-the-art performance (Miwa and Sasaki, 2014; Miwa and Bansal, 2016; Gupta et al., 2016; Li et al., 2016, 2017; Zhang et al., 2017; Adel and Schütze, 2017; Bekoulis et al., 2018b, a; Nguyen and Verspoor, 2019; Li et al., 2019) . Many of the proposed joint models for entity and relation extraction rely heavily on external natural language processing (NLP) tools such as dependency parsers. For instance, Miwa and Bansal (2016) propose a recurrent neural network (RNN)-based joint model that uses a bidirectional long-short term memory network (BiLSTM) to model the entities and a tree-LSTM to model the relations between entities; Li et al. (2017) propose a similar model for biomedical text. The tree-LSTM uses dependency tree information extracted using an external dependency parser to model relations between entities. The use of these external NLP tools limits the effectiveness of a model to domains (e.g. news) where those NLP tools perform well. As a remedy to this problem, Bekoulis et al. (2018b) proposes a neural, end-to-end system that jointly learns to extract entities and relations without relying on external NLP tools. In Bekoulis et al. (2018a) , they augment this model with adversarial training. Nguyen and Verspoor (2019) propose a different, albeit similar end-to-end neural model which makes use of deep biaffine attention (Dozat and Manning, 2016) . Li et al. (2019) approach the problem with multi-turn question answering, posing templated queries to a BERT-based QA model (Devlin et al., 2018) whose answers constitute extracted entities and their relations and achieve state-of-the-art results on three popular benchmark datasets. While demonstrating strong performance, end-to-end systems like Bekoulis et al. (2018b, a) and Nguyen and Verspoor (2019) suffer from two main drawbacks. The first is that most of the models parameters are trained from scratch. For large datasets, this can lead to long training times. For small datasets, which are common in the biomedical and clinical domains where it is particularly challenging to acquire labelled data, this can lead to poor performance and/or overfitting. The second is that these systems typically contain RNNs, which are sequential in nature and cannot be parallelized within training examples. The multi-pass QA model proposed in Li et al. (2019) alleviates these issues by incorporating a pre-trained language model, BERT (Devlin et al., 2018) , which eschews recurrence for self-attention. The main limitation of their approach is that it relies on hand-crafted question templates to achieve maximum performance. This may become a limiting factor where domain expertise is required to craft such questions (e.g., for biomedical or clinical corpora). Additionally, one has to create a question template for each entity and relation type of interest. 
In this study, we propose an end-to-end model for joint NER and RE which addresses all of these issues. Similar to past work, our model can be viewed as a mixture of a NER module and a RE module (Figure 1). Unlike most previous works, we include a pre-trained, transformer-based language model, specifically BERT (Devlin et al., 2018), which achieved state-of-the-art performance across many NLP tasks. The weights of the BERT model are fine-tuned during training, and the entire model is trained in an end-to-end fashion. Our main contributions are as follows: (1) Our solution is truly end-to-end, relying on no hand-crafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers). (2) Our model is fast to train (e.g. under 10 minutes on a single GPU for the CoNLL04 corpus), as most of its parameters are pre-trained and we avoid recurrence. (3) We match or exceed state-of-the-art performance for joint NER and RE on 5 datasets across 3 domains.

2 The Model

Figure 1 illustrates the architecture of our approach. Our model is composed of an NER module and an RE module. The NER module is identical to the one proposed by Devlin et al. (2018). For a given input sequence $s$ of $N$ word tokens $w_1, w_2, \ldots, w_N$, the pre-trained BERT_BASE model first produces a sequence of vectors, $x^{(NER)}_1, x^{(NER)}_2, \ldots, x^{(NER)}_N$, which are then fed to a feed-forward neural network (FFNN) for classification.

Figure 1: Joint named entity recognition (NER) and relation extraction (RE) model architecture.

$$s^{(NER)}_i = \mathrm{FFNN}_{NER}(x^{(NER)}_i) \quad (1)$$

The output size of this layer is the number of BIOES-based NER labels in the training data, $|C^{(NER)}|$. In the BIOES tag scheme, each token is assigned a label, where the B- tag indicates the beginning of an entity span, I- the inside, E- the end and S- is used for any single-token entity. All other tokens are assigned the label O. During training, a cross-entropy loss is computed for the NER objective,

$$\mathcal{L}_{NER} = -\sum_{n=1}^{N} \log \left( \frac{e^{s^{(NER)}_n}}{\sum_{c}^{C^{(NER)}} e^{s^{(NER)}_{n,c}}} \right) \quad (2)$$

where $s^{(NER)}_n$ is the predicted score that token $n \in N$ belongs to the ground-truth entity class and $s^{(NER)}_{n,c}$ is the predicted score for token $n$ belonging to the entity class $c \in C^{(NER)}$.

In the RE module, the predicted entity labels are obtained by taking the argmax of each score vector $s^{(NER)}_1, s^{(NER)}_2, \ldots, s^{(NER)}_N$. The predicted entity labels are then embedded to produce a sequence of fixed-length, continuous vectors, $e^{(NER)}_1, e^{(NER)}_2, \ldots, e^{(NER)}_N$, which are concatenated with the hidden states from the final layer in the BERT model and learned jointly with the rest of the model's parameters.

$$x^{(RE)}_i = x^{(NER)}_i \parallel e^{(NER)}_i \quad (3)$$

Following Miwa and Bansal (2016) and Nguyen and Verspoor (2019), we incrementally construct the set of relation candidates, $R$, using all possible combinations of the last word tokens of predicted entities, i.e. words with E- or S- labels. An entity pair is assigned to a negative relation class (NEG) when the pair has no relation or when the predicted entities are not correct. Once relation candidates are constructed, classification is performed with a deep bilinear attention mechanism (Dozat and Manning, 2016), as proposed by Nguyen and Verspoor (2019).
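The following PyTorch sketch illustrates the NER scoring layer (eq. 1) and the entity-label embedding concatenation (eq. 3) described above. It is a simplified reading of this section, not the authors' released implementation; the layer sizes and class name are illustrative.

```python
# Simplified sketch of the NER head and entity-label embedding described above
# (not the authors' released code). Assumes BERT hidden states are already computed.
import torch
import torch.nn as nn

class NERHeadWithLabelEmbedding(nn.Module):
    def __init__(self, hidden_size=768, num_ner_labels=9, label_embedding_dim=25):
        super().__init__()
        # Eq. (1): a feed-forward layer mapping each token vector to BIOES label scores
        self.ner_ffnn = nn.Linear(hidden_size, num_ner_labels)
        # Embedding table for the *predicted* entity labels, learned jointly
        self.label_embedding = nn.Embedding(num_ner_labels, label_embedding_dim)

    def forward(self, bert_hidden_states):
        # bert_hidden_states: (batch, seq_len, hidden_size) from the final BERT layer
        ner_scores = self.ner_ffnn(bert_hidden_states)           # (batch, seq_len, |C_NER|)
        predicted_labels = ner_scores.argmax(dim=-1)              # hard label predictions
        label_vectors = self.label_embedding(predicted_labels)    # (batch, seq_len, emb_dim)
        # Eq. (3): concatenate contextual vectors with label embeddings for the RE module
        re_inputs = torch.cat([bert_hidden_states, label_vectors], dim=-1)
        return ner_scores, re_inputs

# Example with random stand-ins for BERT outputs: a batch of 2 sentences of 12 tokens
x = torch.randn(2, 12, 768)
ner_scores, re_inputs = NERHeadWithLabelEmbedding()(x)
print(ner_scores.shape, re_inputs.shape)  # torch.Size([2, 12, 9]) torch.Size([2, 12, 793])
```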
To encode directionality, the mechanism uses FFNNs to project each $x^{(RE)}_i$ into head and tail vector representations, corresponding to whether the $i$th word serves as head or tail argument of the relation.

$$h^{(head)}_i = \mathrm{FFNN}_{head}(x^{(RE)}_i) \quad (4)$$
$$h^{(tail)}_i = \mathrm{FFNN}_{tail}(x^{(RE)}_i) \quad (5)$$

These projections are then fed to a biaffine classifier,

$$s^{(RE)}_{j,k} = \mathrm{Biaffine}(h^{(head)}_j, h^{(tail)}_k) \quad (6)$$
$$\mathrm{Biaffine}(x_1, x_2) = x_1^{T} U x_2 + W (x_1 \parallel x_2) + b \quad (7)$$

where $U$ is an $m \times |C^{(RE)}| \times m$ tensor, $W$ is a $|C^{(RE)}| \times (2m)$ matrix, and $b$ is a bias vector. Here, $m$ is the size of the output layers of $\mathrm{FFNN}_{head}$ and $\mathrm{FFNN}_{tail}$ and $C^{(RE)}$ is the set of all relation classes (including NEG). During training, a second cross-entropy loss is computed for the RE objective,

$$\mathcal{L}_{RE} = -\sum_{r=1}^{R} \log \left( \frac{e^{s^{(RE)}_r}}{\sum_{c}^{C^{(RE)}} e^{s^{(RE)}_{r,c}}} \right) \quad (8)$$

where $s^{(RE)}_r$ is the predicted score that relation candidate $r \in R$ belongs to the ground-truth relation class and $s^{(RE)}_{r,c}$ is the predicted score for relation $r$ belonging to the relation class $c \in C^{(RE)}$. The model is trained in an end-to-end fashion to minimize the sum of the NER and RE losses.

$$\mathcal{L} = \mathcal{L}_{NER} + \mathcal{L}_{RE} \quad (9)$$

2.1 Entity Pre-training

In Miwa and Bansal (2016), entity pre-training is proposed as a solution to the problem of low-performance entity detection in the early stages of training. It is implemented by delaying the training of the RE module by some number of epochs, before training the entire model jointly. Our implementation of entity pre-training is slightly different. Instead of delaying training of the RE module by some number of epochs, we weight the contribution of $\mathcal{L}_{RE}$ to the total loss during the first epoch of training,

$$\mathcal{L} = \mathcal{L}_{NER} + \lambda \mathcal{L}_{RE} \quad (10)$$

where $\lambda$ is increased linearly from 0 to 1 during the first epoch and set to 1 for the remaining epochs. We chose this scheme because the NER module quickly achieves good performance for all datasets (i.e. within one epoch). In early experiments, we found this scheme to outperform a delay of a full epoch.

2.2 Implementation

We implemented our model in PyTorch (Paszke et al., 2017) using the BERT_BASE model from the PyTorch Transformers library (https://github.com/huggingface/pytorch-transformers). Our model is available at our GitHub repository (https://github.com/bowang-lab/joint-ner-and-re). Furthermore, we use NVIDIA's automatic mixed precision (AMP) library Apex (https://github.com/NVIDIA/apex) to speed up training and reduce memory usage without affecting task-specific performance.

3 Experimental Setup

3.1 Datasets and evaluation

To demonstrate the generalizability of our model, we evaluate it on 5 commonly used benchmark corpora across 3 domains. All corpora are in English. Detailed corpus statistics are presented in Table A.1 of the appendix.

3.1.1 ACE04/05

The Automatic Content Extraction (ACE04) corpus was introduced by Doddington et al. (2004), and is commonly used to benchmark NER and RE methods. There are 7 entity types and 7 relation types. ACE05 builds on ACE04, splitting the Physical relation into two classes (Physical and Part-Whole), removing the Discourse relation class and merging Employment-Membership-Subsidiary and Person-Organization-Affiliation into one class (Employment-Membership-Subsidiary). For ACE04, we follow Miwa and Bansal (2016) by removing the Discourse relation and evaluating our model using 5-fold cross-validation on the bnews and nwire subsets, where 10% of the data was held out within each fold as a validation set. For ACE05, we use the same test split as Miwa and Bansal (2016). We use 5-fold cross-validation on the remaining data to choose the hyperparameters. Once hyperparameters are chosen, we train on the combined data from all the folds and evaluate on the test set. For both corpora, we report the micro-averaged F1 score.
We obtained the pre-processing scripts from Miwa and Bansal (2016) 444https://github.com/tticoin/LSTM-ER/tree/master/data. 3.1.2 CoNLL04 The CoNLL04 corpus was introduced in Roth and Yih (2004) and consists of articles from the Wall Street Journal (WSJ) and Associated Press (AP). There are 4 entity types and 5 relation types. We use the same test set split as Miwa and Sasaki (2014) 555https://github.com/pgcool/TF-MTRNN/tree/master/data/CoNLL04. We use 5-fold cross-validation on the remaining data to choose hyperparameters. Once hyperparameters are chosen, we train on the combined data from all folds and evaluate on the test set, reporting the micro-averaged F1 score. 3.1.3 Ade The adverse drug event corpus was introduced by Gurulingappa et al. (2012) to serve as a benchmark for systems that aim to identify adverse drug events from free-text. It consists of the abstracts of medical case reports retrieved from PubMed666https://www.ncbi.nlm.nih.gov/pubmed. There are two entity types, Drug and Adverse effect and one relation type, Adverse drug event. Similar to previous work (Li et al., 2016, 2017; Bekoulis et al., 2018a) , we remove ∼130 relations with overlapping entities and evaluate our model using 10-fold cross-validation, where 10% of the data within each fold was used as a validation set, 10% as a test set and the remaining data is used as a train set. We report the macro F1 score averaged across all folds. 3.1.4 i2b2 The 2010 i2b2/VA dataset was introduced by Uzuner et al. (2011) for the 2010 i2b2/Va Workshop on Natural Language Processing Challenges for Clinical Records. The workshop contained an NER task focused on the extraction of 3 medical entity types (Problem, Treatment, Test) and an RE task for 8 relation types. In the official splits, the test set contains roughly twice as many examples as the train set. To increase the number of training examples while maintaining a rigorous evaluation, we elected to perform 5-fold cross-validation on the combined data from both partitions. We used 10% of the data within each fold as a validation set, 20% as a test set and the remaining data was used as a train set. We report the micro F1 score averaged across all folds. To the best of our knowledge, we are the first to evaluate a joint NER and RE model on the 2010 i2b2/VA dataset. Therefore, we decided to compare to scores obtained by independent NER and RE systems. We note, however, that the scores of independent RE systems are not directly comparable to the scores we report in this paper. This is because RE is traditionally framed as a sentence-level classification problem. During pre-processing, each example is permutated into processed examples containing two blinded entities and labelled for one relation class. E.g. the example: His PCP had recently started ciprofloxacinTREATMENT for a UTIPROBLEM becomes His PCP had recently started @TREATMENT$ for a @PROBLEM$, where the model is trained to predict the target relation type, Treatment is administered for medical problem (TrAP). This task is inherently easier than the joint setup, for two reasons: relation predictions are made on ground-truth entities, as opposed to predicted entities (which are noisy) and the model is only required to make one classification decision per pre-processed sentence. In the joint setup, a model must identify any number of relations (or the lack thereof) between all unique pairs of predicted entities in a given input sentence. 
To control for the first of these differences, we report scores from our model in two settings, once when predicted entities are used as input to the RE module, and once when ground-truth entities are used.

3.2 Hyperparameters

Besides batch size, learning rate and number of training epochs, we used the same hyperparameters across all experiments (see Table A.2). Similar to Devlin et al. (2018), learning rate and batch size were selected for each dataset using a minimal grid search (see Table A.3). One hyperparameter selected by hand was the choice of the pre-trained weights used to initialize the BERT_BASE model. For general domain corpora, we found the cased BERT_BASE weights from Devlin et al. (2018) to work well. For biomedical corpora, we used the weights from BioBERT (Lee et al., 2019), which recently demonstrated state-of-the-art performance for biomedical NER, RE and QA. Similarly, for clinical corpora we use the weights provided by Peng et al. (2019), who pre-trained BERT_BASE on PubMed abstracts and clinical notes from MIMIC-III (https://mimic.physionet.org/).

4.1 Jointly learning NER and RE

Table 1 shows our results in comparison to previously published results, grouped by the domain of the evaluated corpus. We find that on every dataset besides i2b2, our model improves NER performance, for an average improvement of ∼2%. This improvement is particularly large on the ACE04 and ACE05 corpora (3.98% and 2.41% respectively). On i2b2, our joint model performs within 0.29% of the best independent NER solution. For relation extraction, we outperform previous methods on 2 datasets and come within ∼2% on both ACE05 and CoNLL04. In two cases, our performance improvement is substantial, with improvements of 4.59% and 10.25% on the ACE04 and ADE corpora respectively. For i2b2, our score is not directly comparable to previous systems (as discussed in section 3.1.4) but will facilitate future comparisons of joint NER and RE methods on this dataset. By comparing overall performance, we find that our approach achieves new state-of-the-art performance for 3 popular benchmark datasets (ACE04, ACE05, ADE) and comes within 0.2% for CoNLL04.

Domain      Corpus   Method                    NER          RE           Overall  Δ
General     ACE04    Miwa and Bansal (2016)    81.80        48.40        65.10    -5.69
                     Bekoulis et al. (2018a)   81.64        47.45        64.54    -6.25
                     Li et al. (2019)          83.60        49.40        66.50    -4.29
                     Ours                      87.58 (0.2)  53.99 (0.1)  70.79    –
            ACE05    Miwa and Bansal (2016)    83.40        55.60        69.50    -3.42
                     Zhang et al. (2017)       83.50        57.50        70.50    -2.42
            CoNLL04  Miwa and Sasaki (2014)    80.70        61.00        70.85    -7.30
                     Li et al. (2019)          87.80        68.90        78.35    0.20
Biomedical  ADE      Li et al. (2016)          79.50        63.40        71.45    -16.21
Clinical    i2b2*    Si et al. (2019)**        89.55        –            –        –
                     Peng et al. (2019)        –            76.40        –        –
                     Ours (gold)               –            72.03 (0.1)  –        –

* To the best of our knowledge, there are no published joint NER and RE models that evaluate on the i2b2 2010 corpus. We compare our model to the state-of-the-art for each individual task (see section 3.1.4). ** We compare to the scores achieved by their BERT_BASE model.

Table 1: Comparison to previously published F1 scores for joint named entity recognition (NER) and relation extraction (RE). Ours (gold): our model, when gold entity labels are used as input to the RE module. Ours: full, end-to-end model (see section 2). Values in parentheses denote the standard deviation across three runs. Δ: difference to our overall score.
Full model (a) w/o Entity pre-training 85.910.1 59.440.2 72.67 -0.63 (b) w/o Entity embeddings 86.070.1 59.520.2 72.80 -0.51 (c) Single FFNN 86.020.1 60.130.0 73.07 -0.23 (d) w/o Head/Tail 83.930.2 54.670.8 69.30 -4.00 (e) w/o Bilinear 86.350.1 59.600.0 72.98 -0.33 Table 2: Ablation experiment results on the CoNLL04 corpus. Scores are reported as a micro-averaged F1 score on the validation set, averaged across three runs of 5-fold cross-validation. (a) Without entity pre-training (section 2.1). (b) Without entity embeddings (eq. 3). (c) Using a single FFNN in place of FFNNhead and FFNNtail (eq. 4 and 5) (d) Without FFNNhead and FFNNtail (e) Without the bilinear operation (eq. 7). Bold: best scores. Subscripts denote standard deviation across three runs. Δ: difference to the full models score. 4.2 Ablation Analysis To determine which training strategies and components are responsible for our models performance, we conduct an ablation analysis on the CoNLL04 corpus (Table 2). We perform five different ablations: (a) Without entity pre-training (see section 2.1 ), i.e. the loss function is given by equation 9. (b) Without entity embeddings, i.e. equation 3 becomes x(RE)i=x(NER)i. (c) Replacing the two feed-forward neural networks, FFNNhead and FFNNtail with a single FFNN (see equation 4 and 5). (d) Removing FFNNhead and FFNNtail entirely. (e) Without the bilinear operation, i.e. equation 7 becomes a simple linear transformation. Removing FFNNhead and FFNNtail has, by far, the largest negative impact on performance. Interestingly, however, replacing FFNNhead and FFNNtail with a single FFNN has only a small negative impact. This suggests that while these layers are very important for model performance, using distinct FFNNs for the projection of head and tail entities (as opposed to the same FFNN) is relatively much less important. The next most impactful ablation was entity pre-training, suggesting that low-performance entity detection during the early stages of training is detrimental to learning (see section 2.1). Finally, we note that the importance of entity embeddings is surprising, as a previous study has found that entity embeddings did not help performance on the CoNLL04 corpus (Bekoulis et al., 2018b) , although their architecture was markedly different. We conclude that each of our ablated components is necessary to achieve maximum performance. Figure 2: Visualization of the attention weights from select layers and heads of BERT after it was fine-tuned within our model on the CoNLL04 corpus. Darker squares indicate larger attention weights. Attention weights are shown for the input sentence: "Ruby fatally shot Oswald two days after Kennedy was assassinated.". The CLS and SEP tokens have been removed. Four major patterns are displayed: paying attention to the next word (first image from the left) and previous word (second from the left), paying attention to the word itself (third from the left) and the end of the sentence (fourth from the left). 4.3 Analysis of the word-level attention weights One advantage of including a transformer-based language model is that we can easily visualize the attention weights with respect to some input. This visualization is useful, for example, in detecting model bias and locating relevant attention heads (Vig, 2019) . 
Previous works have used such visualizations to demonstrate that specific attention heads mark syntactic dependency relations and that lower layers tend to learn more about syntax while higher layers tend to encode more semantics (Raganato and Tiedemann, 2018) . In Figure 2 we visualize the attention weights of select layers and attention heads from an instance of BERT fine-tuned within our model on the CoNLL04 corpus. We display four patterns that are easily interpreted: paying attention to the next and previous words, paying attention to the word itself, and paying attention to the end of the sentence. These same patterns have been found in pre-trained BERT models that have not been fine-tuned on a specific, supervised task (Vig, 2019; Raganato and Tiedemann, 2018) , and therefore, are retained after our fine-tuning procedure. To facilitate further analysis of our learned model, we make available Jupyter and Google Colaboratory notebooks on our GitHub repository888https://github.com/bowang-lab/joint-ner-and-re, where users can use multiple views to explore the learned attention weights of our models. We use the BertViz library (Vig, 2019) to render the interactive, HTML-based views and to access the attention weights used to plot the heat maps. 5 Discussion and Conclusion In this paper, we introduced an end-to-end model for entity and relation extraction. Our key contributions are: (1) No reliance on any hand-crafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers). (2) Integration of a pre-trained, transformer-based language model. (3) State-of-the-art performance on 5 datasets across 3 domains. Furthermore, our model is inherently modular. One can easily initialize the language model with pre-trained weights better suited for a domain of interest (e.g. BioBERT for biomedical corpora) or swap BERT for a comparable language model (e.g. XLNet (Yang et al., 2019) ). Finally, because of (2), our model is fast to train, converging in approximately 1 hour or less on a single GPU for all datasets used in this study. Our model out-performed previous state-of-the-art performance on ADE by the largest margin (6.53%). While exciting, we believe this corpus was particularly easy to learn. The majority of sentences (∼68%) are annotated for two entities (drug and adverse effect, and one relation (adverse drug event). Ostensibly, a model should be able to exploit this pattern to get near-perfect performance on the majority of sentences in the corpus. As a test, we ran our model again, this time using ground-truth entities in the RE module (as opposed to predicted entities) and found that the model very quickly reached almost perfect performance for RE on the test set (∼98%). As such, high performance on the ADE corpus is not likely to transfer to real-world scenarios involving the large-scale annotation of diverse biomedical articles. In our experiments, we consider only intra-sentence relations. However, the multiple entities within a document generally exhibit complex, inter-sentence relations. Our model is not currently capable of extracting such inter-sentence relations and therefore our restriction to intra-sentence relations will limit its usefulness for certain downstream tasks, such as knowledge base creation. We also ignore the problem of nested entities, which are common in biomedical corpora. In the future, we would like to extend our model to handle both nested entities and inter-sentence relations. 
Finally, given that multilingual, pre-trained weights for BERT exist, we would also expect our model's performance to hold across multiple languages. We leave this question to future work. H. Adel and H. Schütze (2017) Global normalization of convolutional neural networks for joint entity and relation classification . In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark. Cited by: §1. S. Aerts, D. Lambrechts, S. Maity, P. V. Loo, B. Coessens, F. D. Smet, L. Tranchevent, B. D. Moor, P. Marynen, B. Hassan, P. Carmeliet, and Y. Moreau (2006) Gene prioritization through genomic data fusion. Nature Biotechnology 24 (5), pp. 537–544. External Links: Document Cited by: §1. G. Bekoulis, J. Deleu, T. Demeester, and C. Develder (2018a) Adversarial training for multi-context joint entity and relation extraction. arXiv preprint arXiv:1808.06876. Cited by: §1, §1, §1, §3.1.3, Table 1. G. Bekoulis, J. Deleu, T. Demeester, and C. Develder (2018b) Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems with Applications 114, pp. 34–45. Cited by: §1, §1, §1, §4.2. J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: Table A.3, §1, §1, §1, §2, §3.2, §3.2. G. Doddington, A. Mitchell, M. Przybocki, L. Ramshaw, S. Strassel, and R. Weischedel (2004) The automatic content extraction (ACE) program – tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. External Links: Link Cited by: §3.1.1. T. Dozat and C. D. Manning (2016) Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734. Cited by: §1, §2. P. Gupta, H. Schütze, and B. Andrassy (2016) Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 2537–2547. Cited by: §1. H. Gurulingappa, A. M. Rajput, A. Roberts, J. Fluck, M. Hofmann-Apitius, and L. Toldo (2012) Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of Biomedical Informatics 45 (5), pp. 885 – 892. Note: Text Mining and Natural Language Processing in Pharmacogenomics External Links: ISSN 1532-0464, Document, Link Cited by: §3.1.3. J. Jiang (2012) Information extraction from text. In Mining Text Data, pp. 11–41. External Links: ISBN 978-1-4614-3223-4, Document, Link Cited by: §1. J. Lee, W. Yoon, S. Kim, D. Kim, S. Kim, C. H. So, and J. Kang (2019) Biobert: pre-trained biomedical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746. Cited by: Table A.3, §3.2. F. Li, M. Zhang, G. Fu, and D. Ji (2017) A neural joint model for entity and relation extraction from biomedical text. BMC bioinformatics 18 (1), pp. 198. Cited by: §1, §1, §3.1.3, Table 1. F. Li, Y. Zhang, M. Zhang, and D. Ji (2016) Joint models for extracting adverse drug events from biomedical text.. In IJCAI, Vol. 2016, pp. 2838–2844. Cited by: §1, §3.1.3, Table 1. G. Li, K. E. Ross, C. N. Arighi, Y. Peng, C. H. Wu, and K. Vijay-Shanker (2015) miRTex: a text mining system for miRNA-gene relation extraction. PLOS Computational Biology 11 (9), pp. e1004391. External Links: Document Cited by: §1. X. Li, F. Yin, Z. Sun, X. Li, A. 
Yuan, D. Chai, M. Zhou, and J. Li (2019) Entity-relation extraction as multi-turn question answering. arXiv preprint arXiv:1905.05529. Cited by: §1, §1, §1, Table 1. I. Loshchilov and F. Hutter (2017) Fixing weight decay regularization in adam. CoRR abs/1711.05101. External Links: Link, 1711.05101 Cited by: Table A.2. R. Miotto, F. Wang, S. Wang, X. Jiang, and J. T. Dudley (2017) Deep learning for healthcare: review, opportunities and challenges. Briefings in bioinformatics 19 (6), pp. 1236–1246. Cited by: §1. M. Miwa and M. Bansal (2016) End-to-end relation extraction using lstms on sequences and tree structures. arXiv preprint arXiv:1601.00770. Cited by: §1, §1, §2.1, §2, §3.1.1, Table 1. M. Miwa and Y. Sasaki (2014) Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1858–1869. Cited by: §1, §3.1.2, Table 1. D. Q. Nguyen and K. Verspoor (2019) End-to-end neural relation extraction using deep biaffine attention. In Advances in Information Retrieval, L. Azzopardi, B. Stein, N. Fuhr, P. Mayr, C. Hauff, and D. Hiemstra (Eds.), Cham, pp. 729–738. External Links: ISBN 978-3-030-15712-8 Cited by: §1, §1, §1, §2. R. Pascanu, T. Mikolov, and Y. Bengio (2013) On the difficulty of training recurrent neural networks. In International Conference on Machine Learning , pp. 1310–1318. Cited by: Table A.2. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, Cited by: §2.2. Y. Peng, S. Yan, and Z. Lu (2019) Transfer learning in biomedical natural language processing: an evaluation of BERT and elmo on ten benchmarking datasets. CoRR abs/1906.05474. External Links: Link, 1906.05474 Cited by: Table A.3, §3.2, Table 1. A. Raganato and J. Tiedemann (2018) An analysis of encoder representations in transformer-based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 287–297. Cited by: §4.3, §4.3. D. Roth and W. Yih (2004) A linear programming formulation for global inference in natural language tasks . In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, pp. 1–8. Cited by: §3.1.2. Y. Si, J. Wang, H. Xu, and K. Roberts (2019) Enhancing clinical concept extraction with contextual embedding. CoRR abs/1902.08691. External Links: Link, 1902.08691 Cited by: Table 1. Ö. Uzuner, B. R. South, S. Shen, and S. L. DuVall (2011) 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association 18 (5), pp. 552–556. Cited by: §3.1.4. J. Vig (2019) A multiscale visualization of attention in the transformer model. CoRR abs/1906.05714. External Links: Link, 1906.05714 Cited by: §4.3, §4.3, §4.3. Z. Wang and H. Zhang (2013) Rational drug repositioning by medical genetics. Nature Biotechnology 31 (12), pp. 1080–1082. External Links: Document Cited by: §1. Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le (2019) XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §5. M. Zhang, Y. Zhang, and G. Fu (2017) End-to-end neural relation extraction with global optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1730–1740. 
Cited by: §1, Table 1. X. Zhou, J. Menche, A. Barabási, and A. Sharma (2014) Human symptoms–disease network. Nature Communications 5. External Links: Document Cited by: §1. Appendix A Appendix a.1 Corpus Statistics Table A.1 lists detailed statistics for each corpus used in this study. Entity classes (count) Relation classes (count) General ACE04 Person (12508), Organization (4405), Geographical Entities (4425), Location (614), Facility (688), Weapon (119), and Vehicle (209) Physical (1202), Person-Social (362), Employment-Membership-Subsidiary (1591), Agent-Artifact (212), Person-Organization-Affiliation (141), Geopolitical Entity-Affiliation (517) ACE05 Person (20891), Organization (5627), Geographical Entities (7455), Location(1119), Facility (1461), Weapon (911), and Vehicle (919) Physical (1612), Part-Whole (1060), Person-Social (615), Agent-Artifact (703), Employment-Membership-Subsidiary (1922), Geopolitical Entity-Affiliation (730) CoNLL04 Location (4765), Organization (2499), People (3918), Other (3011) Kill (268), Live in (521), Located in (406), OrgBased in (452), Work for (401) Biomedical ADE Drug (4979), Adverse effect (5669) Adverse drug event (6682) Clinical i2b2 Problem (19664), Test (13831), Treatment (14186) PIP (2203), TeCP (504), TeRP (3053), TrAP (2617), TrCP (526), TrIP (203), TrNAP (174), TrWP (133) Table A.1: Detailed entity and relation counts for each corpus used in this study. a.2 Hyperparameters and Model Details Table A.2 lists hyperparameters and model details that were held constant across all experiments. Table A.3 lists those that were specific to each evaluated corpus. Hyperparameter Tagging scheme BIOES Single token entities are tagged with an S- tag, the beginning of an entity span with a B- tag, the last token of an entity span with an E- tag, and tokens inside an entity span with an I- tag. Dropout rate 0.1 Dropout rate applied to the output of all FFNNs and the attention heads of the BERT model. Entity embeddings 128 Output dimension of the entity embedding layer. FFNNhead / FFNNtail 512 Output dimension of the FFNNhead and FFNNtail layers No. layers (NER module) 1 Number of layers used in the FFNN of the NER module. No. layers (RE module) 2 Number of layers used in the FFNNs of the RE module. Optimizer AdamW Adam with fixed weight decay regulatization (Loshchilov and Hutter, 2017) . Gradient normalization Γ=1 Rescales the gradient whenever the norm goes over some threshold Γ (Pascanu et al., 2013) . Weight decay 0.1 L2 weight decay. Table A.2: Hyperparameter values and model details used across all experiments. No. Epochs Batch size Learning rate Initial BERT weights ACE04 15 16 2e-5 BERT-Base (cased) (Devlin et al., 2018) ACE05 15 32 3e-5 BERT-Base (cased) CoNLL04 10 16 3e-5 BERT-Base (cased) ADE 7 16 2e-5 BioBERT (cased) (Lee et al., 2019) i2b2 12 16 2e-5 NCBI-BERT (uncased) (Peng et al., 2019) Table A.3: Hyperparameter values specific to individual datasets. Similar to Devlin et al. (2018) , a minimal grid search was performed over the values 16, 32 for batch size and 2e-5, 3e-5, and 5e-5 for learning rate.
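As an illustration of the BIOES tagging scheme summarized in Table A.2, the following small helper tags a single entity span; the label strings are hypothetical examples rather than the exact class names used in the corpora above.

```python
def bioes_tags(n_tokens, span_start, span_end, label):
    """Tag one entity span [span_start, span_end) with the BIOES scheme:
    S- for single-token entities, B-/I-/E- for multi-token spans, O elsewhere."""
    tags = ["O"] * n_tokens
    length = span_end - span_start
    if length == 1:
        tags[span_start] = f"S-{label}"
    elif length > 1:
        tags[span_start] = f"B-{label}"
        tags[span_end - 1] = f"E-{label}"
        for i in range(span_start + 1, span_end - 1):
            tags[i] = f"I-{label}"
    return tags

# "Ruby fatally shot Oswald ..." -> "Ruby" is a single-token person entity
print(bioes_tags(10, 0, 1, "Peop"))   # ['S-Peop', 'O', 'O', ...]
```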
Throwing a micro black hole into the sun: does it collapse into a black hole or does it result in a supernova? What do we know about accretion rates of micro black holes? Suppose a relative small black hole (mass about $10^9$ kilograms) would be thrown into the sun. Eventually this black hole will swallow all matter into the star, but how much time will pass before this happens? Are there any circumstances where the black hole would trigger a gravitational collapse in the core, and result in a supernova? There seems to be some margin for the accretion heating to counter or exceed the heating from fusion, so it could throw the star over the temperature threshold for carbon-12 fusion and above. The black hole is converting nearly 80% - 90% of the rest-mass of the accretion matter to heat, while fusion is barely getting about 0.5% - 1%. Bonus question: Could this be used to estimate a bound on primordial micro black holes with the fraction of low-mass stars going supernova? gravitational-collapse Peter Mortensen lurscherlurscher $\begingroup$ Related: worldbuilding.stackexchange.com/q/6426. $\endgroup$ – HDE 226868 $\begingroup$ dat feeling when answers on a scifi SE have better physics than on the physics SE ;) $\endgroup$ – lurscher $\begingroup$ Worldbuilding. Not Sci-fi. We get touchy about that. :-) $\endgroup$ $\begingroup$ Supernovae occur due to two processes: core-collapse or thermonuclear run-away; both require a more-massive-than-our-sun star (even with the paltry 1e9 kg added) $\endgroup$ – Kyle Kanos $\begingroup$ I think his question was more along the lines if a black hole eats the inside of a star, could the star collapse into that space that the black hole consumed. A kind of smaller core-collapse scenario. I'm pretty sure the answer is no, cause a mini black hole and it's accretion disk would heat up the inside of the star. It would be very different than a core collapse, it would probobly expand the star. And you'd need a bigger black hole than 1e9 KG. One that wouldn't evaporate in a poof of Hawking radiation. $\endgroup$ – userLTK The micro black hole would be unable to accrete very quickly at all due to intense radiation pressure. The intense Hawking radiation would have an luminosity of $3.6 \times 10^{14}$ W, and a roughly isotropic flux at the event horizon of $\sim 10^{48}$ W m$^{-2}$. The Eddington limit for such an object is only $6 \times 10^{9}$ W. In other words, at this luminosity (or above), the accretion stalls as matter is driven away by radiation pressure. There is no way that any matter from the Sun would get anywhere near the event horizon. If the black hole was rotating close to the maximum possible then the Hawking radiation would be suppressed and accretion at the Eddington rate would be allowed. But this would then drop the black hole below its maximum spin rate, leading to swiftly increasing Hawking radiation again. As the black hole evaporates, the luminosity increases, so the accretion problem could only become more severe. The black hole will entirely evaporate in about 2000 years. Its final seconds would minutely increase the amount of power generated inside the Sun, but assuming that the ultra-high energy gamma rays thermalised, this would be undetectable. EDIT: The Eddington limit may not be the appropriate number to consider, since we might think that the external pressure of gas inside the Sun might be capable of squeezing material into the black hole. 
The usual Eddington limit is calculated assuming that the gas pressure is small compared with the radiation pressure. And indeed that is probably the case here. The gas pressure inside the Sun is $2.6 \times 10^{16}$ Pa. The outward radiation pressure near the event horizon would be $\sim 10^{40}$ Pa. The problem is that the length scales are so small here that it is unclear to me that these classical arguments will work at all. However, even if we were to go for a more macroscopic 1 micron from the black hole, the radiation pressure still significantly exceeds the external gas pressure. Short answer: we wouldn't even notice - nothing would happen. Bonus Question: The answer to this is it doesn't have a bearing on the supernova rate, because the mechanism wouldn't cause supernovae. Even if the black hole were more massive and could grow, the growth rate would be slow and no explosive nucleosynthesis would occur because the gas would not be dense enough to be degenerate. Things change in a degenerate white dwarf, where the enhanced temperatures around an accreting mini-black hole could set off runaway thermonuclear fusion of carbon, since the pressure in a degenerate gas is largely independent of temperature. This possibility has been explored by Graham et al (2015) (thanks Timmy), who indeed conclude that type Ia supernova rates could constrain the density of micro black holes in the range $10^{16}$ to $10^{21}$ kg. Brian Moths ProfRobProfRob $\begingroup$ that relies heavily on the micro black hole being Schwarzschild. If the micro black hole is spinning near extremality, it would be cooler and allow accretion $\endgroup$ $\begingroup$ @lurscher see my edit. The luminosity would need to be reduced by a factor of $10^5$ to allow accretion. Even then the accretion timescale would be very long and if it did accrete something it would no longer be maximally rotating and would start to evaporate. I think the black hole needs to be several orders of magnitude more massive. $\endgroup$ – ProfRob $\begingroup$ that sounds good! but I'm a little unconvinced that accretion would make the black hole necessarily drift away from extremality. Wouldn't the accretion rates that counterrotate the black hole be lower than the accretion rate of the mass rotating in the same direction? $\endgroup$ $\begingroup$ @lurscher Where would the specific angular momentum come from to ensure that J/M remained maximal? $\endgroup$ $\begingroup$ I don't see how the Eddington limit is relevant here. That measures what the black hole can suck in, it says nothing about what the pressure of the star's matter can press in. $\endgroup$ – Loren Pechtel The intense flux of Hawking radiation of about $10^{13}$ Watt will prevent any solar matter from coming close to the event horizon. So, the Hawking radiation creates a small bubble preventing it from growing by accretion. Count IblisCount Iblis $\begingroup$ Power output of the sun is $10^{26}\,\rm W$ so I'm not convinced something so small as $10^{13}\,\rm W$ could be described as intense or prevent accretion. $\endgroup$ $\begingroup$ I think his point was, a black hole of that size would be much hotter than the inside of the sun and being that much hotter, it would lose mass to the sun faster than it can absorb matter from it. A black hole that small couldn't form an accretion disk either, so any matter eating would be more random collisions. (an accretion disk speeds up the rate a black hole takes in matter). Also, the heat of the black hole would push matter away from it. 
For the entire sun, the additional hawking radiation would be irrelevant but around the black hole, it would prevent the hole from eating much. $\endgroup$ $\begingroup$ the assumption that it will radiate Hawking radiation relies heavily on the micro black hole being of the Schwarzschild type. If the micro black hole is spinning near extremality, it might be cooler and allow accretion $\endgroup$ $\begingroup$ @KyleKanos you are confusing power with power density. The radius of the sun is about 7 x 10^8 m. From your link, surface emission is ~60 MW/sq m. For a black hole at a distance of 1 nm, emission power is about 8 x 10^29 W/sq m. That's a factor of about 10^21 greater for the black hole. $\endgroup$ – WhatRoughBeast $\begingroup$ Ah. Sorry. Perils of the Socratic Method, and all that. $\endgroup$ This might help: http://xaonon.dyndns.org/hawking/ 10^9 KG gives it: a temperature of 1.227203e+14 Kelvin and a luminosity of 3.563442e+14 watts and a size about 500 times smaller than a proton by radius - that would make an absorption rate equivalent to its Hawking radiation pretty difficult because it's over five orders of magnitude hotter than the inside of the sun and at the same time, much smaller than an atom. At that mass, a black hole wouldn't even create a good pocket of very dense material gravitationally pulled material around it. At just the distance of one atomic radius, even in the densely packed center of the sun, its gravity would drop off well over a million fold. At that size, it's hard to imagine that it would have even significant tidal effects either. If such a black hole existed and you were able to approach it (ignoring the Hawking radiation it shoots out), you'd have to get about 3 inches from it to even feel a 1 G force from it - which would feel strange because the tidal forces would drop off the gravitation rapidly, but as long as you kept a reasonable distance, it wouldn't feel dangerous - perhaps like what it feels like holding a magnet, but you're the magnet. Now if it was to pass through you it would likely leave a bullet sized hole - so that wouldn't be fun - and its radiation would also be lethal, but if you keep your distance, it would seem gravitationally pretty wimpy until you were very close. So, if you want a black hole that would eat the sun, I think you have to go bigger - as a ballpark guess, maybe 10^13 or 10^14 kg - give or take and even then, I expect it would take a long time to eat the sun. Now as to eating the core leading to collapse, a black hole that small wouldn't have a noticeable effect, but as it gets bigger, two things would happen. It could create a small area of higher pressure, essentially an accretion disk inside the sun and, the formation of the accretion disk would create additional heat as well as those lovely jets that shoot out the poles. The extra heat would likely push matter away from the center of the sun faster than the pocket of high gravity would drag things towards it. The net effect would be complicated because in the localized area you'd have more energy, but that more energy would heat up the sun, causing the sun to expand. It would also have a stirring effect of sorts from the jets of energy. The total effect is, for me, very hard to say. Now, as the micro black hole gets bigger, the sun would eventually look less and less like a sun and more and more like an accretion disk with two jets shooting out. 
The intermediate stages are complicated, but the beginning (not much difference) and end (black hole accretion disk) aren't hard to predict. Now, on going supernova: I don't think so, because black holes, while eating, shoot out too much heat in the process. A star goes supernova because the core cools, and in cooling it collapses, and in collapsing - well, you know the rest. A black hole would provide steady and consistent heat while it eats, so I see no mechanism for a supernova moment - and that's basically how a supernova works - it happens kind of all at once. A supernova is like a perfect storm, where everything falls in very fast and then all that matter bounces off of itself and explodes outwards. A core collapse is a very different event than a black hole with an accretion disk. Maybe I missed something, but that's my take on this rather improbable scenario, and for the record, I don't believe micro black holes exist. userLTK $\begingroup$ Low-mass stars don't collapse, they go through runaway nuclear fusion. $\endgroup$ $\begingroup$ @lurscher, Oh, I'm sure locally, near the black hole inside the star, it could create all kinds of reactions, perhaps even making some heavy molecules. The problem is one of how the energy would behave. The jets would react with the matter in the star, but eventually the jets would break through the star. I think this scenario would eventually stir up the star a lot, but I don't see how it would look like a supernova. Granted, my answer is purely speculative. $\endgroup$ $\begingroup$ Are the 7 significant digits warranted for the temperature and luminosity? $\endgroup$ – Peter Mortensen $\begingroup$ That would be a no. I just copy/pasted from the website. $\endgroup$ It appears that for white dwarfs, the answer is supernova, if the masses are large enough: see http://arxiv.org/abs/1505.04444; a blog discussing the paper is here: http://astrobites.org/2015/06/03/detonating-white-dwarfs-with-black-holes/ On the grounds that the link above specifically discusses white dwarfs, I am guessing that for the lower density of a normal star, a micro black hole actually passes straight through, presumably gaining some mass. The paper does indeed discuss primordial micro black holes, and states "primordial black holes with masses ∼ $10^{20}$ gm - $10^{24}$ gm cannot be a significant component of dark matter." $\begingroup$ The gas in the Sun is neither degenerate, nor made of carbon, so the conditions that might ignite a runaway detonation in a white dwarf are not present. $\endgroup$ $\begingroup$ Just be patient and wait a few billion years then :-) $\endgroup$ $\begingroup$ Yes, I see your (tongue in cheek?) thinking, but the problem is that a small black hole of $10^{9}$ kg evaporates in 2000 years. $\endgroup$
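The headline figures quoted in the answers above (a Hawking temperature of ~1.2e14 K, a Hawking luminosity of ~3.6e14 W, an Eddington luminosity of ~6e9 W, a horizon far smaller than a proton, roughly 1 g of pull at about 3 inches, and an evaporation time of a couple of thousand years) all follow from the standard formulas for an uncharged, non-rotating black hole. A quick numerical check; the simple evaporation formula used here ignores particle-species corrections, so it gives ~2700 years rather than exactly 2000, the same order of magnitude:

```python
import math

# Physical constants (SI units)
hbar, c, G = 1.0546e-34, 2.998e8, 6.674e-11
k_B, m_p, sigma_T = 1.381e-23, 1.673e-27, 6.652e-29

M = 1e9  # black-hole mass in kg, as in the question

r_s   = 2 * G * M / c**2                               # Schwarzschild radius
T_H   = hbar * c**3 / (8 * math.pi * G * M * k_B)      # Hawking temperature
L_H   = hbar * c**6 / (15360 * math.pi * G**2 * M**2)  # Hawking luminosity (photon-only estimate)
t_ev  = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # evaporation time, same approximation
L_Edd = 4 * math.pi * G * M * m_p * c / sigma_T        # Eddington luminosity
r_1g  = math.sqrt(G * M / 9.81)                        # distance at which the pull equals 1 g

print(f"r_s   = {r_s:.2e} m  (~{8.4e-16 / r_s:.0f}x smaller than the proton charge radius)")
print(f"T_H   = {T_H:.3e} K")
print(f"L_H   = {L_H:.3e} W")
print(f"L_Edd = {L_Edd:.1e} W")
print(f"r_1g  = {r_1g * 100:.1f} cm (~3 inches)")
print(f"t_ev  = {t_ev / 3.156e7:.0f} years")
```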
Preparation of Alcohols from G... The product of the following reaction: Hint: The Grignard reagent (RMgX) is a highly polar molecule. It is a versatile reagent that reacts with aldehydes, ketones, and esters to form addition products, which are then decomposed to give alcohols. With a ketone, the Grignard reagent attacks the carbonyl carbon atom and ultimately generates a tertiary alcohol. Complete step by step answer: Grignard reagents (RMgX) are alkyl or aryl magnesium halides. The C-Mg bond in the Grignard reagent, R(δ-)-Mg(δ+)-X, is highly polar because carbon is electronegative relative to the electropositive magnesium. Because of this polar C-Mg bond, Grignard reagents are very versatile reagents in organic synthesis. They react with aldehydes, ketones, and esters to form addition products, which decompose with dil. HCl or dil. H2SO4 to give alcohols. Here, the π electrons of the carbonyl C=O double bond shift onto the oxygen atom, leaving a positively charged (electron-deficient) carbon atom. In the next step, the methyl group of the Grignard reagent CH3MgBr attacks this positively charged carbon atom, so the methyl group adds to the electron-deficient carbon. Finally, a water molecule protonates the negatively charged oxygen atom (the oxygen abstracts a hydrogen atom from water), giving the alcohol. This results in the formation of 1-methylcyclohex-2-en-1-ol as the product; the overall transformation is sketched below. Note: The Grignard reagent acts as a nucleophile because of the partial negative charge on its carbon atom and the partial positive charge on the carbon of the carbonyl compound. The attack takes place in such a manner that ketones produce tertiary alcohols.
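A small RDKit sketch encoding the overall transformation as SMILES. The starting ketone is not shown in this text-only copy of the question, so cyclohex-2-en-1-one is inferred from the stated product, and the SMILES strings are my own encodings rather than anything from the original answer.

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# CH3MgBr adds 1,2 to the carbonyl of cyclohex-2-en-1-one; aqueous workup
# then gives the tertiary allylic alcohol 1-methylcyclohex-2-en-1-ol.
species = {
    "cyclohex-2-en-1-one (assumed substrate)": "O=C1C=CCCC1",
    "1-methylcyclohex-2-en-1-ol (product)":    "CC1(O)C=CCCC1",
}
for name, smiles in species.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name}: {rdMolDescriptors.CalcMolFormula(mol)}")
# cyclohex-2-en-1-one (assumed substrate): C6H8O
# 1-methylcyclohex-2-en-1-ol (product): C7H12O
```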
Advances in Intelligent Computing and Communication, pp 447-457
Frequency Regulation in an Islanded Microgrid with Optimal Fractional Order PID Controller
Narendra Kumar Jena, Subhadra Sahoo, Amar Bijaya Nanda, Binod Kumar Sahu, Kanungo B. Mohanty
The aim of this paper is to address load frequency control (LFC) in an isolated AC microgrid using a fractional order PID controller. The microgrid comprises several distributed generators (DGs) along with energy-storing elements: a photovoltaic panel (PV), a wind turbine generator (WTG), a diesel generator, a fuel cell (FC), a microturbine generator (MTG), a flywheel (FH) storage unit, and a battery energy storage (BES) unit. Because of the intermittent nature of the renewable sources (e.g., fluctuating wind speed and varying irradiance), the active power fluctuates, and with it the frequency of the AC microgrid. Besides this, the low inertia of the MTG and diesel generator, and the lack of inertia of the energy-storing elements, cause frequency fluctuations in the microgrid in the wake of capricious load demand, so a robust secondary controller is essential. In the proposed model, a fractional order PID controller is designed by applying the symbiotic organisms search (SOS) computational technique. To validate the performance of the FOPID controller, its results are compared with those of a PID controller. Further, the LFC of the system is investigated by enforcing different combinations of chaotic load perturbation, wind power variation, and solar power variation, which confirms the robustness of the controller.
Keywords: Load frequency control; Microgrid; Fractional order PID controller; SOS algorithm
Nomenclature (parameter values used in the model): D = damping coefficient = 0.012; M = inertia constant = 0.2; T_PV = time constant of PV = 1.8 s; T_MT = time constant of MT = 2 s; T_Di = time constant of diesel engine = 2 s; T_FC = time constant of FC = 4 s; T_FH = time constant of FH = 0.1 s; T_BES = time constant of BES = 0.1 s; T_wtg = time constant of WTG = 1.5 s.
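For reference, the fractional order PID (PI^λ D^μ) control law named in the title has the standard form introduced by Podlubny (1999, cited below); the SOS algorithm would tune the five parameters K_p, K_i, K_d, λ and μ. This is only the generic controller form, a sketch: the paper's actual objective function and parameter bounds are not given in this excerpt.

$$ C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\, s^{\mu}, \qquad \lambda, \mu > 0, $$

which reduces to the conventional PID controller when λ = μ = 1.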
ISA Trans 52(4):539–549CrossRefGoogle Scholar Li X, Song YJ, Han SB (2008) Frequency control in micro-grid power system combined with electrolyzer system and fuzzy PI controller. J Power Sources 180(1):468–475CrossRefGoogle Scholar Engin Y (2014) Interval type-2 fuzzy PID load frequency controller using big bang-big crunch optimization. Appl Soft Comput 15:100–112CrossRefGoogle Scholar Khooban MH, Niknam T, Blaabjerg F, Davari P, Dragicevic T (2016) A robust adaptive load frequency control for micro-grids. ISA Trans 65:220–229CrossRefGoogle Scholar Sahu PC, Mishra S, Prusty RC, Panda S (2018) Improved-salp swarm optimized type-II fuzzy controller in load frequency control of multi area islanded AC microgrid. Sustain Energy Grids Netw 16:380–392CrossRefGoogle Scholar Singh VP, Mohanty SR, Kishor N, Ray PK (2013) Robust H-infinity load frequency control in hybrid distributed generation system. Int J Elect Power Energy Syst 46:294–305CrossRefGoogle Scholar Bevrani H, Feizi MR, Ataee S (2015) Robust frequency control in an islanded microgrid: H∞, and μ-synthesis approaches. IEEE Trans Smart Grid 7(2):706–717Google Scholar Khadanga RK, Padhy S, Panda S, Kumar A (2018) Design and analysis of tilt integral derivative controller for frequency control in an islanded microgrid: a novel hybrid dragonfly and pattern search algorithm approach. Arab J Sci Eng 43(6):3103–3114CrossRefGoogle Scholar Kerdphol T, Rahman FS, Watanabe M, Mitani Y, Turschner D, Beck HP (2019) Enhanced virtual inertia control based on derivative technique to emulate simultaneous inertia and damping properties for microgrid frequency regulation. IEEE Access 7:14422–14433CrossRefGoogle Scholar Barik AK, Das DC (2018) Expeditious frequency control of solar photovoltaic/biogas/biodiesel generator based isolated renewable microgrid using grasshopper optimisation algorithm. IET Renew Power Gener 12(14):1659–1667CrossRefGoogle Scholar Podlubny I (1999) Fractional-order systems and PI/sup/spl/lambda//D/sup/spl/mu//-controllers. IEEE Trans Autom Control 44(1):208–214MathSciNetCrossRefGoogle Scholar Min-Yuan C, Prayogo D (2014) Symbiotic organisms search: a new metaheuristic optimization algorithm. Comput Struct 139:98–112CrossRefGoogle Scholar © Springer Nature Singapore Pte Ltd. 2020 1.Department of Electrical EngineeringSiksha 'O' Anusandhan Deemed to be UniversityBhubaneswarIndia 2.Department of Electrical EngineeringNIT, RourkelaRourkelaIndia Jena N.K., Sahoo S., Nanda A.B., Sahu B.K., Mohanty K.B. (2020) Frequency Regulation in an Islanded Microgrid with Optimal Fractional Order PID Controller. In: Mohanty M., Das S. (eds) Advances in Intelligent Computing and Communication. Lecture Notes in Networks and Systems, vol 109. Springer, Singapore DOI https://doi.org/10.1007/978-981-15-2774-6_53 Publisher Name Springer, Singapore Print ISBN 978-981-15-2773-9 Online ISBN 978-981-15-2774-6 eBook Packages Intelligent Technologies and Robotics
Took pill around 6 PM; I had a very long drive to and from an airport ahead of me, ideal for Adderall. In case it was Adderall, I chewed up the pill - by making it absorb faster, more of the effect would be there when I needed it, during driving, and not lingering in my system past midnight. Was it? I didn't notice any change in my pulse, I yawned several times on the way back, my conversation was not more voluminous than usual. I did stay up later than usual, but that's fully explained by walking to get ice cream. All in all, my best guess was that the pill was placebo, and I feel fairly confident but not hugely confident that it was placebo. I'd give it ~70%. And checking the next morning… I was right! Finally. Furthermore, there is no certain way to know whether you'll have an adverse reaction to a particular substance, even if it's natural. This risk is heightened when stacking multiple substances because substances can have synergistic effects, meaning one substance can heighten the effects of another. However, using nootropic stacks that are known to have been frequently used can reduce the chances of any negative side effects. I can't try either of the products myself – I am pregnant and my doctor doesn't recommend it – but my husband agrees to. He describes the effect of the Nootrobox product as like having a cup of coffee but not feeling as jittery. "I had a very productive day, but I don't know if that was why," he says. His Nootroo experience ends after one capsule. He gets a headache, which he is convinced is related, and refuses to take more. "It is just not a beginner friendly cocktail," offers Noehr. As Sulbutiamine crosses the blood-brain barrier very easily, it has a positive effect on the cholinergic and the glutamatergic receptors that are responsible for essential activities impacting memory, concentration, and mood. The compound is also fat-soluble, which means it circulates rapidly and widely throughout the body and the brain, ensuring positive results. Thus, patients with schizophrenia and Parkinson's disease will find the drug to be very effective. Blinding stymied me for a few months since the nasty taste was unmistakable and I couldn't think of any gums with a similar flavor to serve as placebo. (The nasty taste does not seem to be due to the nicotine despite what one might expect; Vaniver plausibly suggested the bad taste might be intended to prevent over-consumption, but nothing in the Habitrol ingredient list seemed to be noted for its bad taste, and a number of ingredients were sweetening sugars of various sorts. So I couldn't simply flavor some gum.) For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular. 
But there are some potential side effects, including headaches, anxiety and insomnia. Part of the way modafinil works is by shifting the brain's levels of norepinephrine, dopamine, serotonin and other neurotransmitters; it's not clear what effects these shifts may have on a person's health in the long run, and some research on young people who use modafinil has found changes in brain plasticity that are associated with poorer cognitive function. Attention-deficit/hyperactivity disorder (ADHD), a behavioral syndrome characterized by inattention and distractibility, restlessness, inability to sit still, and difficulty concentrating on one thing for any period of time. ADHD most commonly occurs in children, though an increasing number of adults are being diagnosed with the disorder. ADHD is three times more… "It is important to note that Abilify MyCite's prescribing information (labeling) notes that the ability of the product to improve patient compliance with their treatment regimen has not been shown. Abilify MyCite should not be used to track drug ingestion in "real-time" or during an emergency because detection may be delayed or may not occur," the FDA said in a statement. Serotonin, or 5-hydroxytryptamine (5-HTP), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite. Serotonin is produced within the body by exposure, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners. Phenylpiracetam (Phenotropil) is one of the best smart drugs in the racetam family. It has the highest potency and bioavailability among racetam nootropics. This substance is almost the same as Piracetam; only it contains a phenyl group molecule. The addition to its chemical structure improves blood-brain barrier permeability. This modification allows Phenylpiracetam to work faster than other racetams. Its cognitive enhancing effects can last longer as well. On 8 April 2011, I purchased from Smart Powders (20g for $8); as before, some light searching seemed to turn up SP as the best seller given shipping overhead; it was on sale and I planned to cap it so I got 80g. This may seem like a lot, but I was highly confident that theanine and I would get along since I already drink so much tea and was a tad annoyed at the edge I got with straight caffeine. So far I'm pretty happy with it. My goal was to eliminate the physical & mental twitchiness of caffeine, which subjectively it seems to do. A total of 330 randomly selected Saudi adolescents were included. Anthropometrics were recorded and fasting blood samples were analyzed for routine analysis of fasting glucose, lipid levels, calcium, albumin and phosphorous. Frequency of coffee and tea intake was noted. 25-hydroxyvitamin D levels were measured using enzyme-linked immunosorbent assays…Vitamin D levels were significantly highest among those consuming 9-12 cups of tea/week in all subjects (p-value 0.009) independent of age, gender, BMI, physical activity and sun exposure. 
Hall, Irwin, Bowman, Frankenberger, & Jewett (2005) Large public university undergraduates (N = 379) 13.7% (lifetime) 27%: use during finals week; 12%: use when party; 15.4%: use before tests; 14%: believe stimulants have a positive effect on academic achievement in the long run M = 2.06 (SD = 1.19) purchased stimulants from other students; M = 2.81 (SD = 1.40) have been given stimulants by other studentsb What worries me about amphetamine is its addictive potential, and the fact that it can cause stress and anxiety. Research says it's only slightly likely to cause addiction in people with ADHD, [7] but we don't know much about its addictive potential in healthy adults. We all know the addictive potential of methamphetamine, and amphetamine is closely related enough to make me nervous about so many people giving it to their children. Amphetamines cause withdrawal symptoms, so the potential for addiction is there. Although piracetam has a history of "relatively few side effects," it has fallen far short of its initial promise for treating any of the illnesses associated with cognitive decline, according to Lon Schneider, a professor of psychiatry and behavioral sciences at the Keck School of Medicine at the University of Southern California. "We don't use it at all and never have." Modafinil is a prescription smart drug most commonly given to narcolepsy patients, as it promotes wakefulness. In addition, users indicate that this smart pill helps them concentrate and boosts their motivation. Owing to Modafinil, the feeling of fatigue is reduced, and people report that their everyday functions improve because they can manage their time and resources better, as a result reaching their goals easier. Googling, you sometimes see correlational studies like Intake of Flavonoid-Rich Wine, Tea, and Chocolate by Elderly Men and Women Is Associated with Better Cognitive Test Performance; in this one, the correlated performance increase from eating chocolate was generally fairly modest (say, <10%), and the maximum effects were at 10g/day of what was probably milk chocolate, which generally has 10-40% chocolate liquor in it, suggesting any experiment use 1-4g. More interesting is the blind RCT experiment Consumption of cocoa flavanols results in acute improvements in mood and cognitive performance during sustained mental effort11, which found improvements at ~1g; the most dramatic improvement of the 4 tasks (on the Threes correct) saw a difference of 2 to 6 at the end of the hour of testing, while several of the other tests converged by the end or saw the controls winning (Sevens correct). Crews et al 2008 found no cognitive benefit, and an fMRI experiment found the change in brain oxygen levels it wanted but no improvement to reaction times. Noopept is a nootropic that belongs to the ampakine family. It is known for promoting learning, boosting mood, and improving logical thinking. It has been popular as a study drug for a long time but has recently become a popular supplement for improving vision. Users report seeing colors more brightly and feeling as if their vision is more vivid after taking noopept. The data from 2-back and 3-back tasks are more complex. Three studies examined performance in these more challenging tasks and found no effect of d-AMP on average performance (Mattay et al., 2000, 2003; Mintzer & Griffiths, 2007). However, in at least two of the studies, the overall null result reflected a mixture of reliably enhancing and impairing effects. Mattay et al. 
(2000) examined the performance of subjects with better and worse working memory capacity separately and found that subjects whose performance on placebo was low performed better on d-AMP, whereas subjects whose performance on placebo was high were unaffected by d-AMP on the 2-back and impaired on the 3-back tasks. Mattay et al. (2003) replicated this general pattern of data with subjects divided according to genotype. The specific gene of interest codes for the production of Catechol-O-methyltransferase (COMT), an enzyme that breaks down dopamine and norepinephrine. A common polymorphism determines the activity of the enzyme, with a substitution of methionine for valine at Codon 158 resulting in a less active form of COMT. The met allele is thus associated with less breakdown of dopamine and hence higher levels of synaptic dopamine than the val allele. Mattay et al. (2003) found that subjects who were homozygous for the val allele were able to perform the n-back faster with d-AMP; those homozygous for met were not helped by the drug and became significantly less accurate in the 3-back condition with d-AMP. In the case of the third study finding no overall effect, analyses of individual differences were not reported (Mintzer & Griffiths, 2007). In August 2011, after winning the spaced repetition contest and finishing up the Adderall double-blind testing, I decided the time was right to try nicotine again. I had since learned that e-cigarettes use nicotine dissolved in water, and that nicotine-water was a vastly cheaper source of nicotine than either gum or patches. So I ordered 250ml of water at 12mg/ml (total cost: $18.20). A cigarette apparently delivers around 1mg of nicotine, so half a ml would be a solid dose of nicotine, making that ~500 doses. Plenty to experiment with. The question is, besides the stimulant effect, nicotine also causes habit formation; what habits should I reinforce with nicotine? Exercise, and spaced repetition seem like 2 good targets. Fish oil (Examine.com, buyer's guide) provides benefits relating to general mood (eg. inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched. Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below. Finally, two tasks measuring subjects' ability to control their responses to monetary rewards were used by de Wit et al. (2002) to assess the effects of d-AMP. When subjects were offered the choice between waiting 10 s between button presses for high-probability rewards, which would ultimately result in more money, and pressing a button immediately for lower probability rewards, d-AMP did not affect performance. However, when subjects were offered choices between smaller rewards delivered immediately and larger rewards to be delivered at later times, the normal preference for immediate rewards was weakened by d-AMP. 
That is, subjects were more able to resist the impulse to choose the immediate reward in favor of the larger reward. If you happen to purchase anything recommended on this or affiliated websites, we will likely receive some kind of affiliate compensation. We only recommend stuff that we truly believe in and share with our friends and family. If you ever have an issue with anything we recommend please let us know. We want to make sure we are always serving you at the highest level. If you are purchasing using our affiliate link, you will not pay a different price for the products and/or services, but your purchase helps support our ongoing work. Thanks for your support! REPUTATION: We were blown away by the top-notch reputation that Thrive Naturals has in the industry. From the consumers we interviewed, we found that this company has a legion of loyal brand advocates. Their customers frequently told us that they found Thrive Naturals easy to communicate with, and quick to process and deliver their orders. The company has an amazing track record of customer service and prides itself on its Risk-Free No Questions Asked 1-Year Money Back Guarantee. As an online advocate for consumer rights, we were happy to see that they have no hidden fees nor ongoing monthly billing programs that many others try to trap consumers into. Barbaresi WJ, Katusic SK, Colligan RC, Weaver AL, Jacobsen SJ. Modifiers of long-term school outcomes for children with attention-deficit/hyperactivity disorder: Does treatment with stimulant medication make a difference? Results from a population-based study. Journal of Developmental and Behavioral Pediatrics. 2007;28:274–287. doi: 10.1097/DBP.0b013e3180cabc28. [PubMed] [CrossRef] Somewhat ironically given the stereotypes, while I was in college I dabbled very little in nootropics, sticking to melatonin and tea. Since then I have come to find nootropics useful, and intellectually interesting: they shed light on issues in philosophy of biology & evolution, argue against naive psychological dualism and for materialism, offer cases in point on the history of technology & civilization or recent psychology theories about addiction & willpower, challenge our understanding of the validity of statistics and psychology - where they don't offer nifty little problems in statistics and economics themselves, and are excellent fodder for the young Quantified Self movement4; modafinil itself demonstrates the little-known fact that sleep has no accepted evolutionary explanation. (The hard drugs also have more ramifications than one might expect: how can one understand the history of Southeast Asia and the Vietnamese War without reference to heroin, or more contemporaneously, how can one understand the lasting appeal of the Taliban in Afghanistan and the unpopularity & corruption of the central government without reference to the Taliban's frequent anti-drug campaigns or the drug-funded warlords of the Northern Alliance?) The compound is one of the best brain enhancement supplements that includes memory enhancement and protection against brain aging. Some studies suggest that the compound is an effective treatment for disorders like vascular dementia, Alzheimer's, brain stroke, anxiety, and depression. However, there are some side effects associated with Alpha GPC, like a headache, heartburn, dizziness, skin rashes, insomnia, and confusion. Many of the most popular "smart drugs" (Piracetam, Sulbutiamine, Ginkgo Biloba, etc.) 
have been around for decades or even millenia but are still known only in medical circles or among esoteric practicioners of herbal medicine. Why is this? If these compounds have proven cognitive benefits, why are they not ubiquitous? How come every grade-school child gets fluoride for the development of their teeth (despite fluoride's being a known neurotoxin) but not, say, Piracetam for the development of their brains? Why does the nightly news slant stories to appeal more to a fear-of-change than the promise of a richer cognitive future? Most research on these nootropics suggest they have some benefits, sure, but as Barbara Sahakian and Sharon Morein-Zamir explain in the journal Nature, nobody knows their long-term effects. And we don't know how extended use might change your brain chemistry in the long run. Researchers are getting closer to what makes these substances do what they do, but very little is certain right now. If you're looking to live out your own Limitless fantasy, do your research first, and proceed with caution. An entirely different set of questions concerns cognitive enhancement in younger students, including elementary school and even preschool children. Some children can function adequately in school without stimulants but perform better with them; medicating such children could be considered a form of cognitive enhancement. How often does this occur? What are the roles and motives of parents, teachers, and pediatricians in these cases? These questions have been discussed elsewhere and deserve continued attention (Diller, 1996; Singh & Keller, 2010). Long-term use is different, and research-backed efficacy is another question altogether. The nootropic market is not regulated, so a company can make claims without getting in trouble for making those claims because they're not technically selling a drug. This is why it's important to look for well-known brands and standardized nootropic herbs where it's easier to calculate the suggested dose and be fairly confident about what you're taking. Two additional studies assessed the effects of d-AMP on visual–motor sequence learning, a form of nondeclarative, procedural learning, and found no effect (Kumari et al., 1997; Makris, Rush, Frederich, Taylor, & Kelly, 2007). In a related experimental paradigm, Ward, Kelly, Foltin, and Fischman (1997) assessed the effect of d-AMP on the learning of motor sequences from immediate feedback and also failed to find an effect. Smart pills are defined as drugs or prescription medication used to treat certain mental disorders, from milder ones such as brain fog, to some more severe like ADHD. They are often referred to as 'nootropics' but even though the two terms are often used interchangeably, smart pills and nootropics represent two different types of cognitive enhancers. Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of 20% \times \frac{1}{\text{dozens}} of being iodine! I may be unduly optimistic if I give this as much as 10%. 
They can cause severe side effects, and their long-term effects aren't well-researched. They're also illegal to sell, so they must be made outside of the UK and imported. That means their manufacture isn't regulated, and they could contain anything. And, as 'smart drugs' in 2018 are still illegal, you might run into legal issues from possessing some 'smart drugs' without a prescription. With just 16 predictions, I can't simply bin the predictions and say yep, that looks good. Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule which pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number: Stimulants are drugs that accelerate the central nervous system (CNS) activity. They have the power to make us feel more awake, alert and focused, providing us with a needed energy boost. Unfortunately, this class encompasses a wide range of drugs, some which are known solely for their side-effects and addictive properties. This is the reason why many steer away from any stimulants, when in fact some greatly benefit our cognitive functioning and can help treat some brain-related impairments and health issues. "As a neuro-optometrist who cares for many brain-injured patients experiencing visual challenges that negatively impact the progress of many of their other therapies, Cavin's book is a god-send! The very basic concept of good nutrition among all the conflicting advertisements and various "new" food plans and diets can be enough to put anyone into a brain fog much less a brain injured survivor! Cavin's book is straightforward and written from not only personal experience but the validation of so many well-respected contemporary health care researchers and practitioners! I will certainly be recommending this book as a "Survival/Recovery 101" resource for all my patients including those without brain injuries because we all need optimum health and well-being and it starts with proper nourishment! Kudos to Cavin Balaster!" Nootropics – sometimes called smart drugs – are compounds that enhance brain function. They're becoming a popular way to give your mind an extra boost. According to one Telegraph report, up to 25% of students at leading UK universities have taken the prescription smart drug modafinil [1], and California tech startup employees are trying everything from Adderall to LSD to push their brains into a higher gear [2].
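Returning to the logarithmic scoring rule described a few paragraphs above ("we feed in a list and get back a number"): the original computation is not preserved in this copy, so here is a minimal stand-in, with made-up (prediction, outcome) pairs rather than the actual 16 Adderall guesses.

```python
import math

# Hypothetical (predicted probability, was-the-guess-correct) pairs.
guesses = [(0.70, True), (0.65, True), (0.60, False), (0.80, True)]

def log_score(prob, correct):
    """Logarithmic scoring rule: log(p) if right, log(1 - p) if wrong."""
    return math.log(prob) if correct else math.log(1.0 - prob)

total = sum(log_score(p, o) for p, o in guesses)
print(round(total, 3))   # closer to 0 (less negative) = better-calibrated guesses
```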
Conclusion and policy implications Abbas Ali Chandio1Email authorView ORCID ID profile, Yuansheng Jiang1 and Abdul Rehman2 This study examines the access to credit, credit investment, and credit fungibility for small-holder farmers and medium- and large-scale farmers in the agricultural sector of the Shikarpur District of Sindh, Pakistan. A standardized questionnaire was used to collect data from 87 farmers in the Shikarpur District. We investigated the availability of credit and the use of credit fungibility by farmers with small-, medium-, and large-scale holdings by applying a credit fungibility ratio and an ANOVA technique. The factors that influence the farmers' access to agricultural credit were analyzed using a probit regression model. The results revealed that farmers in both study groups used some amount of their agricultural credit for non-agricultural activities. Further, the results of the probit regression analysis showed that formal education, farming experience, household size, and farm size had a positive and significant influence on the farmers' access to agricultural credit. Based on these findings, our study suggests that a strong monitoring of farmers is needed in the study area. Agricultural credit Fungibility Investment of Credit Credit margin The agriculture sector has an important role in the economy of Pakistan. About 42.3% of employment and near about 19.5% of the GDP were generated by this sector (GOP 2017a; Rehman et al. 2015). Agricultural credit is an essential element of agricultural growth in the developing countries. It is a temporary substitute for personal saving by accelerating technological change by stimulating smallholder productivity, asset formation, food security and the subsequent rural agricultural income to stimulate agricultural production (Kimuyu and Omiti 2000). The World Bank has also promoted agricultural credit through its private finance department and other banks such as the International Finance Corporation (IFC). Small-scale peasant farmers must be provided formal funding if they are able to generate a marketable surplus that can contribute to the development process (WB 2008). Different studies have been done regarding agricultural credit in the Pakistan and its impact on agricultural growth as well as for the economic growth. The access of agricultural credit has a vital role for smallholder farmers in the Pakistan, furthermore, Rehman et al. (2017a) study on fertilizers consumption, water availability and credit distribution results revealed that the credit distribution had a positive influence to agricultural production in the Pakistan. Similarly, Saqib et al. (2016) study results show that the small-scale farmers have limited access to the agricultural credit as compared to the medium and large-scale farmers. The limited access to agricultural credit has been identified as a major constraint on the agricultural development of smallholder farmers in many developing countries (Chandio et al., 2016a; Dercon and Christiaensen 2011; Guirkinger and Boucher 2008; Karlan et al. 2014; Keramati et al. 2016; Rehman et al., 2017b). Smallholder farmers, that consider the important sectorial drivers, have low access to the credit, one of the key constraints. A research study in the key areas conducted in Kenya shows that low credit access is one of the main constraints highlighted to improve access, increase productivity and overcome rural poverty (RoK 2006). 
The chances for small farmers to increase their output and eventually improve their income depend largely on their access to credit and their ability to make effective use of it (Chandio et al. 2017c; Mahmood et al. 2009; Siddiqi and Baluch 2009). In recognition of this, the governments of developing countries provide subsidized credit to small farmers (Ellis 1992). Hussein and Ohlmer (2008) revealed that where individuals are subject to credit constraints, they cannot take full advantage of credit and of favorable market conditions at a given moment. Market imperfections, institutions, and individual or family-related factors may limit access to credit markets. Inefficiencies or imperfections in credit markets in developing countries are often caused by government interest-rate ceilings, monopoly power, large transaction costs and moral hazard (Bell et al. 1997; Li et al. 2014; Wu et al. 2016). Access to and utilization of agricultural credit are therefore considered an important means of increasing agricultural production and improving rural livelihoods (Gatti and Love 2008; Shimamura and Lastarria-Cornhiel 2010). At the macro level, limited credit has been identified as a major constraint preventing people from getting out of poverty (Kumar et al. 2013). Several studies have found that a large share of agricultural credit is used for non-agricultural purposes, including the purchase of consumer goods and the celebration of festivals (Muhumuza 1997; Siddiqi and Baluch 2009). The production and development loans provided by financial institutions are important for the growth and development of the agriculture sector (Chandio et al. 2017d). Production loans are used to pay for seeds, chemical fertilizers, insecticides, water charges, labor, animal feed, and medicines. Similarly, development loans are used to purchase agricultural equipment such as tractors, threshers, trolleys, cutting machines, spray machinery and pipe-handling equipment. The agricultural output of small-scale farmers is very low, their landholdings are small and their capital investment is likewise small; consequently, the role of agricultural credit is crucial for agricultural development (Chandio et al. 2016b; Chandio et al., 2017e; Fayaz et al. 2006). Simon (2013) suggested that, among other things, the age and sex of household heads and the size of their families are the main determinants of rural credit usage in Zimbabwe. Similarly, Amjad and Hasnu (2007) analyzed the use of rural credit by small farmers in Pakistan and found that household labor and the literacy of household heads are among the factors affecting farmers' use of credit. The diversion of credit has also been explored in several studies; for instance, credit is spent on consumption and festivals, education and healthcare, and the repayment of loans (Akram and Hussain 2008; Hussain and Thapa 2016). Nosiru (2010) and Enimu et al. (2017) showed that in Nigeria microfinance was provided to support farmers' investment in purchasing more agricultural inputs to enhance their agricultural productivity; their findings revealed that microcredit had a negative effect on agricultural productivity, owing to the utilization of microcredit for other necessities. In various parts of the world, an ample body of empirical literature has discussed smallholders' access to formal credit and its effects on agricultural productivity as well as on smallholder livelihoods.
Limited empirical literature is available in Pakistan, and particularly in Sindh, on agricultural credit fungibility and the utilization of credit in the agricultural sector. In the study area, small-, medium- and large-scale farmers acquired agricultural credit from both formal financial institutions and informal financial channels. The main objective of this study was to examine agricultural credit investment in the agricultural sector and credit fungibility, defined as the utilization of agricultural credit in the non-agricultural sector.

This study was conducted in the Shikarpur District of Sindh Province, Pakistan. The total area of the Shikarpur District is 2,512 square kilometers. The 2017 census showed that the total population of the district was 1,231,481, and the total number of households was 207,555. Of the total population, approximately 303,249 people were living in urban areas, and 928,232 people resided in rural areas (GOP 2017b). Shikarpur District is situated in the northern part of the province and plays an important role in rice cultivation. The majority of rural households in this region rely on rice cultivation as their major source of employment and livelihood.

Sample size and data
For this study, a three-stage random sampling technique was adopted. In the first stage, we selected the Shikarpur District over several other possible districts because Shikarpur is the main rice-growing district of Sindh Province. In the second stage, Lakhi Gullam Shah, an administrative subdivision (taluka) of Shikarpur, was selected at random for the study. In the final stage, 15 landholder farmers were selected randomly from each of 6 villages. These farmers were interviewed personally by means of a pretested questionnaire. Thus, the total sample size was 90 landholder farmers, of whom 87 were included in the sample for data analysis in this research. In the district, smallholder farmers need more credit to purchase farm inputs (e.g., feed and fertilizer) and farming implements. In this study, landholder farmers were identified specifically for sampling, and the sample size was set according to the method described by Yamane (1967). Primary data were collected from the respondents by means of a questionnaire. The survey included inquiries about the age of the head of the household, education level, farming experience, the amount of credit obtained from different sources, the amount of credit used for agriculture, and the amount used for other purposes. The sample size was determined with a margin of error as specified below:

$$ n=\frac{N}{1+N{e}^2} $$

where n indicates the sample size, N indicates the total number of landholder farmers, and e is the margin of error.

Analytical techniques
The methodology for the credit margin of investment and credit fungibility in the agricultural sector follows Hussain (2012) and is specified as:

$$ {CR}_F=\frac{CR_f}{{\widehat{CR}}_t}\times 100 $$

where CR_F indicates the credit fungibility in percent, CR_f indicates the annual average of credit used for other needs, and \( {\widehat{CR}}_t \) represents the annual average of credit obtained from different sources. The credit margin of investment is specified in Eqs. (3) and (4) below:

$$ {CR}_m={\widehat{CR}}_t-{CR}_f $$

$$ {CR}_{in}=\frac{CR_m}{{\widehat{CR}}_t}\times 100 $$

where CR_m indicates the annual credit margin of investment and CR_in indicates the credit margin of investment in percent.
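As a quick numerical check on the formulas above, the sketch below implements the Yamane sample-size rule and the fungibility and credit-margin ratios in Python. The smallholder figures used in the example are the averages reported in Table 2 (Rs. 58,547.37 received and Rs. 28,379.68 used for other needs); everything else is generic.

```python
def yamane_sample_size(N, e):
    """Yamane (1967): n = N / (1 + N * e**2)."""
    return N / (1 + N * e ** 2)

def credit_fungibility(cr_total_avg, cr_other_avg):
    """CR_F: share of the average credit obtained that was used for
    non-agricultural needs, in percent."""
    return cr_other_avg / cr_total_avg * 100

def credit_margin(cr_total_avg, cr_other_avg):
    """CR_m and CR_in: absolute and percentage credit margin of investment."""
    cr_m = cr_total_avg - cr_other_avg
    cr_in = cr_m / cr_total_avg * 100
    return cr_m, cr_in

# Smallholder averages reported in Table 2.
cr_t, cr_f = 58547.37, 28379.68
print(credit_fungibility(cr_t, cr_f))  # ~48.47%
print(credit_margin(cr_t, cr_f))       # (~30167.69, ~51.53%)
```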
In this study, the dependent variable is a dummy variable equal to 1 for access to credit from formal sources and 0 for access to credit from informal sources. Consequently, a probit regression model was used to examine the important factors that influence farmers' access to credit:

$$ {Y}_i={\uppsi}_0+{\psi}_1{X}_1+{\psi}_2{X}_2+{\psi}_3{X}_3+{\psi}_4{X}_4+{\psi}_5{X}_5+{\mu}_i $$

where Y is access to credit (the binary dependent variable), X_1 denotes the age of the household head, X_2 represents education level, X_3 represents farming experience, X_4 represents household size, X_5 represents landholding size, ψ_0 to ψ_5 represent the parameters of the model to be estimated, and μ_i denotes the error term.

Table 1 reports the differences between the mean demographic characteristics of the sample of eighty-seven landholder farmers. The average age of the whole sample was 38.29 years, with smallholder farmers averaging 36.54 years and medium- and large-scale farmers 41.63 years. The average years of formal education for the whole sample were 6.36; smallholder farmers had 5.35 years of education and medium- and large-scale farmers up to 8.30 years. Furthermore, the farming experience of smallholder farmers and of medium- and large-scale farmers was 26.22 and 27.43 years, respectively. Additionally, the average farm size for the whole sample was 11.20 acres, while smallholder farmers and medium- and large-scale farmers had farm sizes of 5.67 and 21.73 acres, respectively. The t-test values showed a significant difference between smallholder farmers and medium- and large-scale farmers with respect to age, education level, and farm size. On the other hand, there was no significant difference in farming experience between the groups of farmers.

Table 1 Characteristics of farmers by group: smallholder farmers (n = 55), medium- and large-scale farmers (n = 32), and the total sample (n = 87); rows cover age (t = 3.03***), education level (years), farming experience (years) and farm size (acres). Note: *** shows significance at P < 0.01

Table 2 reports the results for the credit margin of investment in agriculture and credit fungibility by farmer group. The total annual average credit received was Rs. 58,547.37 by smallholder farmers and Rs. 135,833.3 by medium- and large-scale farmers. Out of this total credit, Rs. 30,167.68 and Rs. 76,763.27 per year were invested in agriculture by smallholder farmers and by medium- and large-scale farmers, respectively (see Table 2). Additionally, the results revealed a considerable amount of fungibility in the total credit acquired by both groups of farmers in the study area. The majority of farmers used credit for non-agricultural purposes; in the study area, farmers used their credit for family expenditures, health, education and other businesses. Credit fungibility was around 48.47% for smallholder farmers and 43.49% for medium- and large-scale farmers, so fungibility was higher among smallholder farmers than among medium- and large-scale farmers. Our findings are consistent with those of Ayaz and Hussain (2011), who found that, compared with medium and large farmers, smallholders used more credit for consumption, social activities and off-farm activities other than agriculture. Out of the total amount of credit, 51.53% of funds were invested in agriculture by smallholder farmers, whereas 56.51% of credit was invested by medium- and large-scale farmers.
In the study area, it was observed that medium- and large-scale farmers invested more in agriculture than smallholder farmers. The t-test values indicated highly significant differences in credit investment in agriculture and in credit fungibility between smallholder farmers and medium- and large-scale farmers.

Table 2 Credit margin of investment in the agricultural sector and credit fungibility by farmer group. Columns: credit received by farmers (\( {\widehat{CR}}_t \)), credit margin of investment (CR_m), percentage of credit investment (CR_in), credit used for other needs (CR_f), and percentage of credit fungibility. Reported values include Rs. 58,547.368 received and Rs. 28,379.68 used for other needs by smallholder farmers (55), and Rs. 135,833.3 received by medium- and large-scale farmers (32); T-test = 11.93789***, P-value (0.000), and 1.41739*, P-value (0.0831). NB: *** and * show significance at 1% and 10%. Source: computed from field survey data, 2016

The amounts of agricultural credit invested in agriculture by the two groups of farmers were analyzed further, and the results are presented in Table 3. A highly significant difference between the farmer groups was observed for investment in land preparation: on average, medium- and large-scale farmers invested more agricultural credit than smallholder farmers (p < 0.01). Similarly, the farmer groups differed significantly in their investment in seeds, chemical fertilizers, insecticides, irrigation and labor (p < 0.01). Medium and large farmers invested more than smallholder farmers, and smallholder farmers invested more in land preparation (about Rs. 8,288.59) than in other activities. Likewise, medium and large farmers invested Rs. 22,140.00 more in land preparation than in seeds, chemical fertilizers, insecticides, irrigation and labor.

Table 3 Investment of credit in the agricultural sector by farmer group; rows cover land preparation (t = 13.11***), seeds, chemical fertilizers, insecticides, irrigation and labor. NB: a t-test was applied to test for differences in the investment of credit by farmer group; *** shows significance at P < 0.01. Source: field survey data, 2016

Determinants of rice farmers' access to credit
The determinants of rice farmers' access to credit were estimated using the probit regression model, and the estimates are presented in Table 4. The analysis shows that formal education, household size, and farm size were the important factors influencing rice farmers' access to credit in the study area. However, the age of the rural household head had a negative effect on access to credit, while rice farming experience had a statistically insignificant influence on access to credit.

Table 4 Results of the probit regression (coefficients, z-values, P > z and marginal effects); reported values include −0.0059, 0.0909** and 0.1452*** (farm size), with 87 observations, log likelihood −41.0622, LR chi2(5) = 33.36, Prob > chi2 = 0.000 and pseudo R2 = 0.2889. *** and ** show significance at P < 0.01 and P < 0.05

Agricultural credit is an important component of all economic activities, including agriculture. Proper utilization of agricultural credit plays a dominant role in achieving high crop productivity. The results in Tables 2 and 3 reveal that medium- and large-scale farmers invested more in purchasing the main farm inputs, such as seeds, fertilizers and pesticides, as well as in land preparation, irrigation and labor. The results further show that medium- and large-scale farmers had relatively lower fungibility than smallholder farmers in the study area.
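For readers who want to reproduce this kind of estimation, the following is a minimal Python sketch of the probit specification given earlier and of the marginal effects reported in Table 4, using statsmodels. The file name and column names are hypothetical placeholders: the authors' survey data are not distributed with the paper, so this illustrates the technique rather than their exact workflow.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey file: one row per farmer. 'access' is 1 for credit
# from formal sources and 0 for informal sources; the regressors mirror
# X1-X5 in the model (age, education, experience, household size, farm size).
df = pd.read_csv("shikarpur_survey.csv")  # placeholder file name

X = sm.add_constant(df[["age", "education", "experience",
                        "household_size", "farm_size"]])
y = df["access"]

probit_results = sm.Probit(y, X).fit()
print(probit_results.summary())                # coefficients, z-values, pseudo R2
print(probit_results.get_margeff().summary())  # average marginal effects
```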
Our results are consistent with the findings of Hussain (2012), who highlighted that smallholder farmers exhibit more credit fungibility than large-scale farmers in Punjab, Pakistan. The findings of the study are also consistent with Akram and Hussain (2008), Hussain and Thapa (2012, 2016), Nosiru (2010) and Saqib et al. (2017), who reported that agricultural credit was used for non-agricultural purposes, for instance education, health, consumption, festivals and the repayment of loans. Researchers have adopted different econometric techniques for this kind of data, such as probit, logit and tobit regression models, depending on the nature of the data; in our study, we adopted a probit regression model to examine the determinants of rice farmers' access to credit. Various socioeconomic factors influence access to credit, and the results of the regression analysis are presented in Table 4. The age of the household head has a negative relationship with access to agricultural credit, showing that as the age of the household head increases, access to agricultural credit decreases. This result agrees with the findings of Sebopetji and Belete (2009). The marginal effect of the age of the household head reveals that as age increases by one unit, the probability of access to agricultural credit decreases by 0.0018%. Formal education has a positive and significant association with agricultural credit. The marginal effects indicate that if the education level of the household head increases by one unit, access to credit increases by 0.0277%. This means that formal education plays an important role: farmers with a higher level of education can better understand the terms, conditions and procedures for obtaining loans. Furthermore, household size has a positive and significant linkage with credit; its marginal effect reveals that as household size increases by one unit, the probability of access to agricultural credit increases by 0.0473%. These findings are consistent with the results of Adeagbo and Awoyinka (2006), Duniya and Adinah (2015), Okunade (2007) and Ugwumba and Omojola (2013). Additionally, farm size has a positive and highly significant relationship with access to agricultural credit; the corresponding marginal effect reveals that if farm size increases by one unit, access to credit increases by 0.0443%. Farm size is therefore a very important socioeconomic factor in accessing credit from formal financial sources, and it is also a symbol of high social status in society, which helps in obtaining credit from informal financial channels. These results are consistent with the findings of Ahmad et al. (2016), Hussain and Thapa (2012) and Ugwumba and Omojola (2013).

This study examined the credit margin of investment in the agricultural sector and credit fungibility among different groups of farmers in the Shikarpur District, Sindh, Pakistan. The findings showed that both formal financial institutions and informal financial channels provided agricultural credit to farmers in the study area, and that the majority of smallholder farmers received agricultural credit from informal financial channels. Both groups of farmers showed fungibility in the amount of agricultural credit: smallholder farmers used a considerable proportion of their loans for non-agricultural purposes, while medium- and large-scale farmers invested more in every agricultural activity.
Most of the smallholder farmers had inadequate funds, and the credit they received could not solve their farm problems; therefore, smallholder farmers diverted this amount to other, non-agricultural purposes. Further, the results of the probit regression model reveal that formal education, farming experience, household size and farm size have a positive and significant influence on farmers' access to agricultural credit. Based on these findings, the study recommends that the government ensure a greater supply of agricultural credit to farmers, which can eliminate their dependency on informal financial channels. An increased supply of agricultural credit can enhance agricultural productivity and farmer welfare, provide adequate resources to fulfill farmers' domestic needs, and ultimately decrease credit fungibility. Additionally, strong monitoring by formal financial institutions is needed in order to avoid credit fungibility.

Abbreviations: GDP: Gross Domestic Product; GOP: Government of Pakistan; IFC: International Finance Corporation; WB: World Bank

The authors are very thankful to the College of Economics, Sichuan Agricultural University, Chengdu, China for its financial support. The authors are also grateful to the editor and the anonymous reviewers for their constructive suggestions, which helped to improve the content of this paper. We did not receive any financial support from any agency. All data and materials are available in this paper, so there are no other data to present. Dr. Abbas Ali Chandio designed the study by drafting the introduction and contributed to the data collection, data analysis and discussion sections of the manuscript. Prof. Yuansheng Jiang supervised the entire research process. Dr. Abdul Rehman contributed to summarizing the literature review and critically evaluated and proofread the manuscript. All authors read and approved the final manuscript. Dr. Abbas Ali Chandio is a postdoctoral scientific research fellow in the College of Economics, Sichuan Agricultural University, Chengdu 611130, China; Prof. Dr. Yuansheng Jiang is Dean of the College of Economics, Sichuan Agricultural University, Executive Director of the Sichuan Center for Germany Research, and Deputy Director of the Southwestern Center for Poverty Alleviation & Development, Huimin Rd. 211, Wenjiang District, Chengdu, China; and Dr. Abdul Rehman is a postdoctoral scientific research fellow in the Research Center of Agricultural-Rural-Peasants, Anhui University, Hefei, China. The authors of this paper declare that they have no competing interests.

College of Economics, Sichuan Agricultural University, Huimin Rd. 211, Wenjiang District, Chengdu, 611130, China
Research Center of Agricultural-Rural-Peasants, Anhui University, Hefei, China

Adeagbo S, Awoyinka Y (2006) Analysis of demand for informal and formal credit among small-scale cassava farmers in Oyo state, Nigeria. Journal of Agriculture, Forestry and the Social Sciences 4:50–59
Ahmad M, Sanaullah P, Khattak K (2016) Access to credit and its adequacy to farmers in Khyber Pakhtunkhwa: the case of Mardan district. Sarhad J Agriculture 32:184–191
Akram W, Hussain Z (2008) Agricultural credit constraints and borrowing behavior of farmers in rural Punjab. Eur J Sci Res 23:294–304
Amjad S, Hasnu S (2007) Smallholders' access to rural credit: evidence from Pakistan.
Lahore J Econ 12:1–25
Ayaz S, Hussain Z (2011) Impact of institutional credit on production efficiency of farming sector: a case study of district Faisalabad. Pak Econ Soc Rev 49(2):149–162
Bell C, Srintvasan T, Udry C (1997) Rationing, spillover, and interlinking in credit markets: the case of rural Punjab. Oxf Econ Pap 49:557–585. https://doi.org/10.1093/oxfordjournals.oep.a028625
Chandio AA, Jiang Y, Joyo MA, Rehman A (2016a) Impact of area under cultivation, water availability, credit disbursement, and fertilizer off-take on wheat production in Pakistan. J App Environ Biol Sci 6:10–18
Chandio AA, Jiang Y, Wei F, Rehman A, Liu D (2017c) Farmers' access to credit: does collateral matter or cash flow matter?—evidence from Sindh, Pakistan. Cogent Econ Finance 5:1369383. https://doi.org/10.1080/23322039.2017.1369383
Chandio AA, Magsi H, Rehman A, Sahito JGM (2017d) Types, sources and importance of agricultural credits in Pakistan. J Appl Environ Biol Sci 7:144–149
Chandio AA, Yuansheng J, Gessesse AT, Dunya A (2017e) The nexus of agricultural credit, farm size and technical efficiency in Sindh, Pakistan: a stochastic production frontier approach. J Saudi Soc Agric Sci. https://doi.org/10.1016/j.jssas.2017.11.001
Chandio AA, Yuansheng J, Sahito JGM, Larik SA (2016b) Impact of formal credit on agricultural output: evidence from Pakistan. Afr J Bus Manag 10:162–168
Dercon S, Christiaensen L (2011) Consumption risk, technology adoption and poverty traps: evidence from Ethiopia. J Dev Econ 96:159–173. https://doi.org/10.1016/j.jdeveco.2010.08.003
Duniya K, Adinah I (2015) Probit analysis of cotton farmers' accessibility to credit in northern Guinea savannah of Nigeria. Asian Journal of Agricultural Extension, Economics & Sociology 4:296–301
Ellis F (1992) Agriculture policies in developing countries. Book chapter no. 7, credit policy. Cambridge University Press, Cambridge, pp 152–174
Enimu S, Eyo EO, Ajah EA (2017) Determinants of loan repayment among agricultural microcredit finance group members in Delta state, Nigeria. Finan Innov 3(1):21
Fayaz M, Jan D, Jan AU, Hussain B (2006) Effects of short term credit advanced by ZTBL for enhancement of crop productivity and income of growers. J Agricult Biol Sci 1:15–18
Gatti R, Love I (2008) Does access to credit improve productivity? Evidence from Bulgaria. Econ Trans 16:445–465
GOP (2017a) Economic Survey of Pakistan 2016–17. Agricultural Statistics of Pakistan. Ministry of Food Agriculture and Livestock Division, Islamabad
GOP (2017b) Population Census 2017. Pakistan Bureau of Statistics
Guirkinger C, Boucher SR (2008) Credit constraints and productivity in Peruvian agriculture. Agric Econ 39:295–308
Hussain A (2012) Smallholder access to agricultural credit and its effects on farm productivity, income and household food security in Pakistan. PhD thesis, Asian Institute of Technology, Thailand
Hussain A, Thapa GB (2012) Smallholders' access to agricultural credit in Pakistan. Food Security 4:73–85
Hussain A, Thapa GB (2016) Fungibility of smallholder agricultural credit: empirical evidence from Pakistan.
Eur J Dev Res 28:826–846
Hussein H, Ohlmer B (2008) Influence of credit constraint on production efficiency: the case of farm households in southern Ethiopia. Swedish University of Agricultural Sciences, Sweden
Karlan D, Osei R, Osei-Akoto I, Udry C (2014) Agricultural decisions after relaxing credit and risk constraints. Q J Econ 129:597–652
Keramati A, Ghaneei H, Mirmohammadi SM (2016) Developing a prediction model for customer churn from electronic banking services using data mining. Finan Innov 2(1):10
Kimuyu P, Omiti J (2000) Institutional impediments to access to credit by micro and small scale enterprises in Kenya. Institute of Policy Analysis and Research, Nairobi
Kumar CS, Turvey CG, Kropp JD (2013) The impact of credit constraints on farm households: survey results from India and China. App Econ Perspect Policy 35:508–527
Li W (2014) Credit coordinate ratings with corresponding credit rating agencies and regulations. J Finan Eng 1(01):1450002
Mahmood N, Khalid M, Kouser S (2009) The role of agricultural credit in the growth of livestock sector: a case study of Faisalabad. Pak Vet J 29(2):81–84
Muhumuza W (1997) The interface between structural adjustment, poverty and state managed credit programmes in Uganda. Makerere Political Science Review 2
Nosiru MO (2010) Microcredits and agricultural productivity in Ogun state, Nigeria. World J Agricult Sci 6:290–296
Okunade E (2007) Accessibility of agricultural credit and inputs to women farmers of Isoya rural development project. Res J Agricult Biol Serv 3:138–142
Rehman A, Chandio AA, Hussain I, Jingdong L (2017a) Fertilizer consumption, water availability and credit distribution: major factors affecting agricultural productivity in Pakistan. J Saudi Soc Agric Sci. https://doi.org/10.1016/j.jssas.2017.08.002
Rehman A, Chandio AA, Hussain I, Jingdong L (2017b) Is credit the devil in the agriculture? The role of credit in Pakistan's agricultural sector. J Finance Data Sci 3(1–4):38–44. https://doi.org/10.1016/j.jfds.2017.07.001
Rehman A, Jingdong L, Shahzad B, Chandio AA, Hussain I, Nabi G, Iqbal MS (2015) Economic perspectives of major field crops of Pakistan: an empirical study. Pacific Sci Rev B Humanities Soc Sci 1(3):145–158. https://doi.org/10.1016/j.psrb.2016.09.002
RoK (2006) Central Bank of Kenya Monthly Economic Review, July 2006. Government Printer, Nairobi, p 13
Saqib S, Ahmad MM, Panezai S, Khattak KK (2016) Access to credit and its adequacy to farmers in Khyber Pakhtunkhwa: the case of Mardan District. Sarhad J Agriculture 32(3):1–8
Saqib S, Khan H, Panezai S, Ali U, Ali M (2017) Credit fungibility and credit margin of investment: the case of subsistence farmers in Khyber Pakhtunkhwa. Sarhad J Agriculture 33(4):661–667
Sebopetji T, Belete A (2009) An application of probit analysis to factors affecting small-scale farmers' decision to take credit: a case study of the Greater Letaba local municipality in South Africa. Afr J Agric Res 4:718–723
Shimamura Y, Lastarria-Cornhiel S (2010) Credit program participation and child schooling in rural Malawi. World Dev 38:567–580
Siddiqi MW, Baluch KN (2009)
Institutional credit: a policy tool for enhancement of agricultural income of Pakistan. International Research Journal of Arts & Humanities (IRJAH) 37
Simon M (2013) Determinants of farmers' decision to access credit: the case of Zimbabwe. Russian Journal of Agricultural and Socio-Economic Sciences 17(5):7–11
Ugwumba C, Omojola J (2013) Credit access and productivity growth among subsistence food crop farmers in Ikole local government area of Ekiti state, Nigeria. ARPN J Agricultural Biol Sci 8:351–356
WB (2008) Agriculture for development. World Development Report 2008. World Bank and OUP Press, Washington DC
Wu W, Kou G (2016) A group consensus model for evaluating real estate investment alternatives. Finan Innov 2(1):8
Yamane T (1967) Statistics: an introductory analysis. Harper and Row, New York, Evanston and London; John Weatherhill, Inc., Tokyo
Early warning indicators for mesophilic anaerobic digestion of corn stalk: a combined experimental and simulation approach
Yiran Wu1, Adam Kovalovszki2, Jiahao Pan1, Cong Lin1, Hongbin Liu3, Na Duan1 & Irini Angelidaki2

Monitoring and providing early warning are essential operations in the anaerobic digestion (AD) process. However, several challenges remain in identifying early warning indicators and their thresholds. One particular challenge is that proposed strategies are only valid under certain conditions; another is the feasibility and universality of the detailed threshold values obtained from different AD systems. In this article, we report a novel strategy for identifying early warning indicators and defining threshold values via a combined experimental and simulation approach. The AD of corn stalk (CS) was conducted using mesophilic, completely stirred anaerobic reactors. Two overload modes (organic and hydraulic) and two overload types (sudden and gradual) were applied in order to identify early warning indicators of the process and determine their threshold values. To verify the selection of experimental indicators, a combined experimental and simulation approach was adopted, using a modified anaerobic bioconversion mathematical model (BioModel). The results revealed that the model simulations agreed well with the experimental data. Furthermore, the ratio of intermediate alkalinity to bicarbonate alkalinity (IA/BA) and volatile fatty acids (VFAs) were selected as the most potent early warning indicators, with warning times of 7 days and 5–8 days, respectively. In addition, IA, BA, and VFA/BA were identified as potential auxiliary indicators for diagnosing imbalances in the AD system. Relative variations of the indicators with respect to their steady-state values were used instead of absolute threshold values, which makes the early warning approach more feasible and universal. The combined approach showed that the model is a promising tool for selecting and monitoring early warning indicators in various corn stalk AD scenarios. This study may offer insight into the industrial application of model-based early warning in AD systems.

Anaerobic digestion (AD), as an efficient technology for organic waste treatment, has been widely adopted worldwide [1]. Among other feedstocks, straw has great potential for anaerobic methane (CH4) production, owing to its abundance and suitable bioconversion characteristics [2]. In China, straw is produced at a high annual rate of approximately 1 billion metric tons [3], of which about 30% is underutilized [3, 4]. In addition, the methanogenic potential of some major straws is 2.86–3.78 × 10^5 Nm3 CH4/kg VS [5]. However, straw consists of cellulose, hemicellulose, and lignin, making it a typical high carbon-to-nitrogen (C/N) substrate [2, 6], and excessive volatile fatty acids (VFAs), which are intermediates of AD, may be produced when feedstock overloading occurs, especially at high C/N ratios (e.g., > 30) [2, 5, 7]. Besides, the complex structure of lignocellulose is difficult for microbial cellulolytic enzymes to access, which limits degradation [8]. Therefore, previous studies have reported that moderate organic loading rates (OLRs) are vital for avoiding system acidification: the thermophilic AD process of straw can only be operated stably at relatively low OLRs, below 2 kg VS/(m3 day) [9]. Meanwhile, Li et al.
[10] suggested that the mesophilic anaerobic co-digestion of rice stalk with cow manure should be operated at an OLR of 3–6 kg VS/(m3 day). Ward et al. [11] found that biogas projects using straw as substrate also tend to be controlled at suboptimal OLRs to prevent process inhibition. Besides optimal operational parameters such as the OLR, reliable early warning and regulation systems are also favorable for the AD process. Previously, many studies have been carried out to explore feasible warning indicators in different AD systems. Examples of proposed indicators include VFA concentrations, alkalinity, biogas composition, the concentrations of specific intermediate metabolites (such as glycerol and aromatic compounds), microbial community composition, and enzyme activity [12,13,14,15,16,17]. In addition, some coupled indicators, such as the ratio between intermediate and partial alkalinities or between VFA concentrations and bicarbonate alkalinity (BA), showed better performance than individual indicators [18]. However, when comparing the results of previous studies, it must be pointed out that the proposed strategies are only valid under certain conditions, as some parameters may have different sensitivities to environmental fluctuations in different AD systems. For instance, Castellano et al. [19] suggested that hydrogen (H2) concentrations have a high discriminatory ability for process state identification. On the contrary, Kleyböcker et al. [20] showed that H2 partial pressure was not an ideal indicator, because of its unstable responses under organic overload conditions in an AD system treating rapeseed oil. In addition, indicator threshold values might vary among AD systems with different substrates and operating conditions. In a specific case, Pullammanappallil et al. [21] suggested that the critical value of propionic acid was 2750 mg/L, while Holm-Nielsen et al. [22] found it to be 1500 mg/L. Conversely, propionic acid did not provide early warning in some trials [20]. Consequently, finding effective indicators and rational threshold values for early warning and inhibition diagnosis is a challenge when working with AD systems. Compared with traditional early warning methods, which only monitor process indicators through chemical analysis, modeling the AD process can provide a flexible and rapid solution for comparing and evaluating large numbers of such indicators, and it opens the possibility of automated warning for industrial applications. Mathematical models have long been used for the simulation of various AD scenarios, with several computer-aided implementations in existence [23, 24]. Unlike simple models that are mainly used for calculating theoretical biogas and CH4 yields, complex bioconversion models can be used to generate insights about process kinetics, microbial growth inhibition, substrate conversion, and product generation rates, to mention a few of their functionalities. In addition, these models can handle extensive amounts of numerical data and provide qualitative or quantitative comparisons between measured and simulated datasets. Hence, using these tools for the evaluation of early warning indicators appears to be a promising approach.
The aims of the present study were therefore to (1) evaluate the response of various process parameters during the AD of corn stalk fed to continuously stirred tank reactors under different overload modes (organic or hydraulic retention time (HRT) overload) and types (gradual or sudden overload); (2) compare simulations using a proven bioconversion model with experimental results under the same operational conditions; (3) identify and evaluate response parameters that are sufficiently sensitive to environmental disturbances; and (4) define threshold values for the sensitive indicators identified in (3), for the different overload conditions tested.

Digester performance
Gaseous parameters
The CH4 yield and content of the two reactors stabilized in the full-load phase, at 0.20 L CH4/g VS (62.20% CH4) on average at an OLR of 1.50–2.24 g VS/(L day) in R1, and at 0.20 L CH4/g VS (58.73% CH4) at an OLR between 1.87 and 2.24 g VS/(L day) in R2. During the gradual organic overload phase in R1 (days 101–113), both the CH4 yield and the CH4 content decreased significantly, to 0.02 L CH4/g VS and 51.76% on day 113. In the sudden overload phase of R1, the CH4 yield rose stepwise with elevated OLR, up to 0.37 L CH4/g VS on day 155 at an OLR of 3.37 g VS/(L day), and then decreased sharply to 0.12 L CH4/g VS (day 161). With respect to the gradual HRT overload phase of R2, the CH4 yield showed an acidification response at OLRs of 2.81–3.74 g VS/(L day); CH4 production subsequently increased slowly and then decreased sharply after day 161. For each overload condition, the CH4 content decreased slightly in the beginning and then returned to the steady-state level (Fig. 1a1, a2). This result is consistent with a previous study, in which the CH4 content did not decrease significantly as long as the pH remained above 5.5 [1]; moreover, that study reported that raising the OLR did not result in a significant change in CH4 content. However, a clear shift in the archaeal population from acetotrophic to hydrogenotrophic methanogenesis may have occurred [25].

Fig. 1 A comparison of experimental and simulated CH4 yields (a1, a2) and CH4/CO2 ratios (b1, b2) related to the laboratory experiments. Data presented in subplots a1 and b1 represent R1, while those in subplots a2 and b2 represent R2

On the other hand, solids evidently accumulated in the reactors, probably due to straw floating and inadequate stirring, reaching approximately 5–6% in the last phase of the experiment (Additional file 1: Fig. S1). Accumulated solids could lead to delayed gas–liquid phase transfer and oversaturation of CH4 in the liquid phase [26]. As a result, the gaseous parameters above were relatively insensitive. Meanwhile, the ratio of methane to carbon dioxide (CH4/CO2) showed a more pronounced response to perturbations (Figs. 1b and 2b), with sudden drops observed on days 112 and 161 in R1, and on day 161 in R2. Further, it showed remarkable early warning potential in comparison with the CH4 yield (8 days earlier) under sudden overload, but barely provided any early warning under gradual overload. Previously, the H2 concentration was also suggested as a useful variable for detecting disturbances in both carbohydrate- and protein-based wastewaters [27]. However, in the current study, H2 was only detected in the overload stage, on days 112 and 155 in R1 and on day 159 in R2. In addition, these fluctuations were faint and short (1–2 days), with the maximum H2 content of 0.17% being reached in the sudden overload phase of R1.
It is speculated that the pH and total solid (TS) concentration of the effluent can affect the discriminatory power of H2.

Fig. 2 A comparison of experimental and simulated acetic acid (a1, a2), propionic acid (b1, b2), butyric acid (c1, c2), and total VFA (d1, d2) concentrations related to the laboratory experiments. Data presented in subplots a1, b1, c1, and d1 represent R1, and those in subplots a2, b2, c2, and d2 represent R2

VFA parameters
During the full-load phase, the total VFA concentrations of the two reactors were stable, with average values of 0.49 g/L (R1) and 0.60 g/L (R2), respectively. However, the total VFA concentration increased rapidly during the overload phase. Acetic acid was the most abundant acid, accounting for approximately 40% of the total VFA, and hence it dominated the changes in total VFA concentration. Its sharp increase occurred on day 109 (from 0.21 to 0.39 g/L) and day 147 (from 0.23 to 0.59 g/L) in R1, and on day 155 (from 0.34 to 0.52 g/L) in R2. Sudden rises of total VFA also occurred on day 109 (from 0.37 to 0.80 g/L) and day 147 (from 0.56 to 0.95 g/L) in R1, and on day 155 (from 0.82 to 2.15 g/L) in R2. The quick accumulation of acetic acid indicated an imbalance between the acid-forming and methane-forming phases of the digestion process, in line with the changes seen in the CH4 yield and CH4/CO2 (Fig. 1). The propionic acid concentration was fairly stable during the full-load phase, especially in R2 (Fig. 2c1, c2). Its sharp increase was observed on day 109 (from 0.06 to 0.30 g/L) and day 153 (from 0.32 to 0.50 g/L) in R1, and on day 155 (from 0.18 to 0.98 g/L) in R2. It is noticeable that in R2 the sharp increases in VFA concentration were always followed by a gradual return to the steady-state concentration, with propionic acid showing the slowest recovery (Fig. 2). This agrees with a previous study [22], but differs from the finding of Boe et al. [28] that acetate recovered while propionate persisted. In the case of the methanogenic populations, adaptation to the higher VFA concentrations may have occurred, which was probably the reason for the reduced acidification [29]. Moreover, recent research reported a methanogen that can take up not only acetate but also propionate directly, which might be another contributing reason [30]. The findings on slow propionic acid reduction are also supported by Ahring et al. [31], who showed that propionate degraders are the slowest-growing and most sensitive VFA-degrading microorganisms in the AD process.

pH, alkalinity, and VFA/BA
During the full-load period of the experiment, pH values ranged from 6.77 to 6.90 (R1) and from 6.67 to 6.83 (R2), respectively (Additional file 1: Fig. S2). In the overload period, the pH value decreased below 6.4 on day 112 (R1, gradual overload), day 159 (R1, sudden overload), and day 160 (R2, gradual overload), respectively, which we defined as the limit for process failure. High total alkalinity (TA) (about 6000 mg CaCO3/L) was detected in both reactors at the beginning, due to the high alkalinity contained in the inoculum originating from the full-scale biogas plant. During the experiment, TA was maintained at an average of 1814.3 mg CaCO3/L (R1) and 1871.9 mg CaCO3/L (R2) (Fig. 3a1, a2). This result is also in accordance with a previous study, in that TA can remain stable until the pH falls below 4.3 as a consequence of high VFA concentrations [32].
BA dropped suddenly on days 109 and 158 in R1, and on day 154 in R2 (Fig. 3c1, c2). Conversely, intermediate alkalinity (IA) rose suddenly on days 105 and 157 in R1, and on day 153 in R2 (Fig. 3b1, b2). Although both indicators fluctuated under overload conditions, IA proved to be more sensitive, since it moved out of balance earlier.

Fig. 3 A comparison of experimental and simulated total alkalinities (a1, a2), intermediate alkalinities (b1, b2), bicarbonate alkalinities (c1, c2), BA/TA (d1, d2), IA/BA (e1, e2), and VFA/BA (f1, f2) ratios related to the laboratory experiments. Data presented in subplots a1, b1, c1, d1, e1, and f1 represent R1, while those in subplots a2, b2, c2, d2, e2, and f2 represent R2

From another perspective, coupled indexes appeared to be more sensitive than single indicators, and their distinct fluctuations made the identification of the acidification response easier. BA/TA decreased sharply on days 105 and 158 in R1, and on day 154 in R2 (Fig. 3d1, d2). Conversely, IA/BA increased sharply on days 105 (from 0.28 to 0.53) and 157 (from 0.44 to 0.54) in R1, and on day 153 (from 0.48 to 0.68) in R2 (Fig. 3e1, e2). Unfortunately, all alkalinity indicators showed obvious response delays under sudden overload. This is similar to the report of Li et al. [33], where the warning times of IA/PA (where PA is the partial alkalinity, analogous to BA) and BA/TA were shortened by 6 days and 2 days, respectively. Ahring et al. [12] also reported that most indicators were suitable for detecting gradual overloads, but were too slow to respond under sudden overload. A significant increase in VFA/BA was found on days 113 and 151 in R1, and on day 155 in R2 (Fig. 3f1, f2); this was confirmed by the work of Li et al. [33], who also found a VFA/BA increase due to higher OLR in the mesophilic AD of vegetable waste.

Simulation results using the BioModel
The results of the experimental reactor operation simulations are presented in terms of CH4 yield and CH4/CO2 (Fig. 1), and individual and total VFA concentrations (Fig. 2). From a qualitative point of view, judging by the fits between the experimental and simulated data curves, the model was mostly successful in capturing the overall gas and VFA production trends of the two experiments with high accuracy. At the same time, evaluating the goodness of these fits using statistical measures, the relative root mean squared error (rRMSE) and mean absolute percentage error (MAPE) values provide a more detailed overview of simulation accuracy (Table 1). Comparing the visual and statistical measures, the biogas and CH4 yield simulations, along with the butyric and acetic acid concentration simulations, appear to be the most accurate, with the lowest rRMSE and MAPE values only slightly above the feasible range (Table 1, values for "Day 0–165"). This deviance is due to a few sections of the dataset where the simulation–measurement fit was not satisfactory. For example, the gas yield and VFA concentration levels during the reactor startup periods were not matched in the simulations. However, reactor startup periods inherently involve significant stochasticity and, depending on the substrates and reactor history, potential microbial growth lag [34], which are hard to simulate with fixed kinetic models. Deeming this period irrelevant from the perspective of early warning indication and excluding it from the statistical evaluation, the rRMSE and MAPE values of the CH4 yield (Table 1, values for "Day 30–165") improved significantly.
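The paper does not spell out the rRMSE and MAPE formulas, so the sketch below uses their common definitions (RMSE normalized by the mean of the observations, and the mean absolute percentage error); these should be checked against the authors' exact choices before any quantitative comparison. The day-30 cut-off mirrors the exclusion of the startup period described above.

```python
import numpy as np

def rrmse(observed, simulated):
    """Relative root mean squared error: RMSE divided by the mean of the
    observations, expressed in percent (one common definition)."""
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    return np.sqrt(np.mean((sim - obs) ** 2)) / np.mean(obs) * 100

def mape(observed, simulated):
    """Mean absolute percentage error, in percent (observations must be nonzero)."""
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    return np.mean(np.abs((sim - obs) / obs)) * 100

# Example usage: restrict the evaluation to days 30-165 to exclude startup.
days = np.arange(0, 166)
mask = days >= 30
# ch4_obs and ch4_sim would hold the daily experimental and simulated CH4
# yields (placeholders here, hence the call is commented out):
# print(rrmse(ch4_obs[mask], ch4_sim[mask]), mape(ch4_obs[mask], ch4_sim[mask]))
```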
On the other hand, the VFA simulations showed varying rates of change, with those of R1 being more positive than those of R2. A more important difference between the measured and simulated values was seen in R1, around days 110–120. Here, the drop in CH4 yield and the consecutive process failure were slower in the simulation results than in the laboratory, by approximately 2 days. At the same time, the drop was preceded by a slight increase in the respective yields, potentially owing to an initial positive response of the simulated acetoclastic methanogens to an increased OLR. This means that in the model, the immediate response of microbial groups to an increase in substrate availability is a proportional increase in their productivity, which is in most cases followed by a negative response to the gradual accumulation of certain inhibitory compounds. In the case of acetoclastic methanogens, model inhibition was assumed to be caused by free ammonia and the saturation of volatile fatty acids, of which the latter was seen both experimentally and during the simulations. However, as the levels of acid saturation were significantly lower in the model simulations than in the physical reactors, inhibition during the simulations appeared to take effect more slowly than observed experimentally. Nevertheless, the simulations for both R1 and R2 were in general found to be in good agreement with the experimental data, and they could therefore be used for comparing the experimentally defined early warning indicators with their simulated counterparts. In the short term, this provided an additional method for verifying the quality of indicator selection. In addition, the long-term benefits of such simulations lie in their generality and continuous interpretability, both of which contribute to reducing the necessary analytical measurements and process data density. Thus, by means of simulated early warning indicators, monitoring, and forecasting, the management of the experimental processes could be improved significantly.

Table 1 rRMSE and MAPE values for goodness-of-fit analyses

Early warning indicators and threshold values
The procedure for screening early warning indicators
Experimental and simulated data were used to screen potentially optimal warning indicators, based on three important and mostly qualitative criteria. First, optimal indicators had to show high sensitivity to changes, in the sense that there was enough warning time between an indicator's response point and process failure. A stable acidification response was another vital aspect, as excessive indicator sensitivity to acids may lead to false alarms. Finally, low monitoring cost is essential in practice [20, 35], so indicators had to be measurable economically. Given these selection criteria, reference points for reactor failure were needed for measuring the warning times of the different parameters. In the current study, these points of failure were declared when at least one of two events happened. The first was the reduction of the reactor pH to less than 6.4, which is well below the optimal pH range commonly reported for methanogenic archaea [36]. The other was a significant reduction of the CH4 yield (relative standard deviation (RSD) > 20%). pH and CH4 yield are the most intuitive and easily measured indicators, but they are relatively insensitive to process destabilization; their changes have therefore also been recommended as markers of process failure in a previous study [37].
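The two failure criteria just described (pH below 6.4, or a CH4-yield RSD above 20%) are straightforward to encode. The sketch below is one possible interpretation: it assumes the RSD is computed over a short moving window of recent days, a detail the text does not specify.

```python
import numpy as np

def detect_failure(days, ph, ch4_yield, window=5):
    """Return the first day on which either failure criterion is met:
    pH < 6.4, or the relative standard deviation (RSD) of the CH4 yield
    over the preceding `window` days exceeding 20%."""
    ph = np.asarray(ph, float)
    ch4 = np.asarray(ch4_yield, float)
    for i in range(len(days)):
        if ph[i] < 6.4:
            return days[i], "pH below 6.4"
        if i >= window:
            recent = ch4[i - window:i]
            rsd = recent.std(ddof=1) / recent.mean() * 100
            if rsd > 20:
                return days[i], "CH4 yield RSD above 20%"
    return None, "no failure detected"
```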
In order to quantify the changes of the different indicators at the points of failure, their variation amplitudes were calculated. These amplitudes were expressed on a percentage basis, relative to the steady-state values. Specifically, by comparing the variation amplitudes of the abrupt changes throughout the different overload modes and types applied in the experimental reactors, maximum and minimum values of each indicator were defined.

Screening early warning indicators
According to the experimental data, days 112 and 159 were identified as reference points for measuring the early warning times of the different indicators under gradual and sudden overload conditions in R1, and day 160 was the reference point for R2. The dates and values of the abrupt changes, and the warning times of the different indicators, are shown in Tables 2 and 3. Almost all of the parameters showed some response to the overload shock. Overall, gas-phase indicators responded later than liquid-phase indicators; only CH4/CO2 provided a warning time of 6 days under sudden overload. This is probably due to the properties of corn stalk (CS): the floating problem caused by its low specific gravity can lead to stirring issues and, eventually, delays in mass transfer from the liquid to the gas phase [38, 39].

Table 2 The early warning indicators of R1

It is obvious that a parameter's sensitivity is influenced by the overload mode. For gradual overload, the acidification responses of the various indicators did not differ markedly (3–7 days for individual VFAs and 6–7 days for alkalinity) (Tables 2 and 3), regardless of the overload type. Compared with gradual overload, the indicators demonstrated more varied acidification responses during sudden overload. More specifically, VFA-related indicators had longer warning times (6–12 days), while alkalinity and its coupled indicators had shorter warning times (1–2 days) (Tables 2 and 3). This contrast is probably related to the change in alkalinity. At the beginning of the AD process, the effluent taken from the full-scale plant for inoculation provided a high initial alkalinity in both reactors. Later, the gradual overload applied to R1 generated a large amount of VFA, most of which was neutralized by the alkalinity, which delayed VFA accumulation. By contrast, as the buffer capacity of the AD system had declined before the second (sudden) overload phase, VFA accumulated rapidly and showed a longer warning time. Besides early warning time, the stability and measurability of indicators are also important for monitoring biogas plants [19]. For example, IA and IA/BA, as well as BA and BA/TA, showed similar warning times in the two reactors (Tables 2 and 3). However, the average RSD values of these indicators during the full-load phase were 9.57 and 11.59 for IA, 8.23 and 10.31 for IA/BA, 6.26 and 6.94 for BA, and 2.83 and 3.45 for BA/TA, in R1 and R2, respectively. The RSD values of the coupled indicators are relatively smaller, indicating that the coupled indicators are more consistent before an overload shock, which also implies fewer misjudgments. Furthermore, AD is an interrelated process, and the relationships between certain process variables might be the reason why coupled indicators show better stability than individual ones. Although Li et al. [18] also proposed using coupled indicators to achieve early warning, the present study showed that total VFA, acetic acid, and propionic acid can provide similar warning times in the reactors (Tables 2 and 3).
Nevertheless, total VFA can be determined more easily than individual VFAs in biogas plants [40], making total VFA a more widely acceptable variable for early warning indication. Fortunately, owing to developments in detection technology, total VFA and alkalinity can both be monitored online by transducers, titration, or infrared spectroscopy, as well as via online sampling and gas chromatography [41,42,43,44]. In addition to the experiment-based identification of the indicator variables with the highest early warning potential, the simulation results were used to calculate the same variables numerically and evaluate their warning efficiency against the empirical values. This approach was unlike any previous work on the topic that the authors reviewed [28, 45,46,47,48,49]. Until now, early warning indicators have been selected mostly based on the offline or online monitoring of different values of VFA, pH, alkalinity, biogas fractions, trace elements, and their various combinations, although, in one instance, stable carbon isotopes of CH4 were also named as potential indicators of process imbalance [50]. These indicator values could then be either evaluated comparatively or used as inputs to sophisticated control systems. Despite their usually good early warning potential, the majority of these solutions have so far remained inapplicable in industry, mainly due to their limited scope or the significant costs involved [51]. Therefore, the present simulation-based system could offer a competitive solution for the identification and monitoring of early warning indicators, through its validation against relevant experimental data, its application flexibility, its customizability at scale, and its relatively low price point compared with laboratory-intensive processes. The experimental and simulated variables were compared separately for reactors R1 and R2 and for the gradual (Rg) and sudden overload (Rs) modes, thus creating three distinct indicator groups of interest, referred to as R1g, R1s, and R2g from here onwards. In order to provide a reasonable basis for indicator comparison, reference values for identifying the early warning time in the simulations were chosen according to the experimental points defined earlier. Figure 4 shows a comparison of the experimental and simulated indicators for the three indicator groups described above. For each group and individual indicator, the orange bars enclosed by red (lower) and blue (higher) horizontal lines represent the ranges of percentage differences calculated between the reference (critical) indicator values and the highest values measured (exp) or simulated (sim) during the different steady-state periods (Tables 2 and 3). As an example, for the indicator group R2g (Fig. 4c1, c2), the reference value of the experimental propionate indicator (Propionate_exp) was found to be 0.98 g/L on day 155. During the five steady-state periods identified from the gradual-mode operational data of R2, the highest experimental values measured were 0.20, 0.23, 0.22, 0.20, and 0.19 g/L, representing, respectively, 390, 326, 345, 390, and 416% changes between those values and the reference value of 0.98 g/L. Finally, by selecting the lowest (326%) and the highest (416%) changes, the minimum and maximum boundaries of the bar belonging to R2g Propionate_exp (Fig. 4c1, c2) were obtained. The ranges for the simulated propionate indicator (Propionate_sim), along with all the other indicators, were calculated in an identical manner.
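The variation-amplitude calculation described above can be reproduced directly from the worked propionate example. The sketch below assumes the percentage change is taken relative to each steady-state value, which is the definition that reproduces the 390, 326, 345, 390 and 416% figures quoted in the text.

```python
def variation_amplitudes(reference_value, steady_state_maxima):
    """Percentage change between a reference (critical) indicator value and
    the highest value observed in each steady-state period."""
    return [(reference_value - v) / v * 100 for v in steady_state_maxima]

# Worked example from the text: experimental propionate in R2 under gradual
# overload (reference value 0.98 g/L on day 155).
changes = variation_amplitudes(0.98, [0.20, 0.23, 0.22, 0.20, 0.19])
print([round(c) for c in changes])  # -> [390, 326, 345, 390, 416]
print(min(changes), max(changes))   # range spanned by the bar in Fig. 4
```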
In cases where the minimum and maximum indicator values were equal, average indicator values were used for the calculation (see the standalone red horizontal lines).

Fig. 4 A comparison of experimental and simulated early warning indicator values in R1 during gradual overload (a) and sudden overload (b), and in R2 during gradual overload (c). Negative values indicate a decrease, while positive values indicate an increase in the value of the respective indicators, at the points of abrupt change and relative to steady-state values

Figure 4 shows that the experimental and simulated difference ranges agreed well for some indicators, while for others they were significantly different. Given that most of the indicators were partially or completely dependent on VFA concentrations or alkalinity (VFA and alkalinity being interrelated), changes in these two types of process variables could influence the results to a great extent. Regarding the generation of the experimental and simulated VFA data points, a major influencing factor was how accurately the model considered the conversion of initial compounds to intermediate and terminal products, compared with reality. Under experimental conditions, the conversion of complex organic substrates to VFA, and then further to gases, is a function of a series of stochastic and microbial community-driven events, while model simulations, which are relatively simplified descriptions of reality, assume structured kinetic equations and stoichiometric yield coefficients. Such fundamental differences could inherently lead to deviations between measured and simulated VFA concentrations, even though the good agreement of the experimental and simulated CH4 yields (Fig. 1a1, a2) shows that the overall model mass balances are otherwise reliable. On the other hand, the simulation of the various alkalinity fractions posed considerable difficulties. Under laboratory conditions, the alkalinity of a sample is commonly measured by titrimetry, which is also the preferable method for the routine analysis of anaerobic digestion samples, owing to its speed, simplicity and competitive price [52]. Nonetheless, it involves a significant level of uncertainty when used for the offline measurement of dissolved carbonate concentrations in AD samples with inhomogeneous matrices [53]. Meanwhile, as the model cannot simulate titration, a simplification was necessary: based on the simulation results, the BA of the reactor was calculated from the bicarbonate and carbonate ion fractions computed by the model, while IA was represented by the sum of acetic, propionic and butyric acids, expressed in terms of acetic acid equivalents. However, this simplification meant that any inaccuracies in the simulation of these compounds, together with the uncertainties introduced by the titration method, could cause disagreements between the experimental and simulated alkalinity results. For this reason, this factor was considered during the evaluation of the comparative results. Considering the overall sensitivity, stability and measurability of the indicators, and based on the experimental and simulation results, IA/BA and VFA were selected as the optimal early warning indicators. Further, IA, BA and VFA/BA were defined as auxiliary indicators for the diagnosis of the AD system treating CS.
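The simplification used for the simulated alkalinity fractions can be written down compactly. The sketch below converts simulated acetic, propionic and butyric acid concentrations into the acetic-acid-equivalent sum used as IA; the molar masses are standard values, and any comparison with BA additionally requires both quantities to be expressed on a common basis (e.g., CaCO3 equivalents), a step the text does not detail.

```python
# Molar masses (g/mol) of the acids included in the simulated IA.
MOLAR_MASS = {"acetic": 60.05, "propionic": 74.08, "butyric": 88.11}

def ia_as_acetic_equivalents(vfa_g_per_l):
    """Sum of acetic, propionic and butyric acid concentrations (g/L),
    expressed as acetic acid equivalents (g/L): each acid is converted to
    moles and then back to an equivalent mass of acetic acid."""
    return sum(conc / MOLAR_MASS[acid] * MOLAR_MASS["acetic"]
               for acid, conc in vfa_g_per_l.items())

# Example with arbitrary simulated concentrations (g/L).
print(ia_as_acetic_equivalents({"acetic": 0.34, "propionic": 0.18, "butyric": 0.10}))
```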
IA/BA was also suggested as a warning indicator in other lab-scale research [32] and in industrial-scale research [38], owing to its sensitivity to pronounced changes under overloading conditions. VFA has also commonly been recommended as a warning indicator [12, 18].

Threshold value

The abrupt change values and the change amplitudes of the previously selected main (IA/BA and VFA) and auxiliary (IA, BA and VFA/BA) early warning indicators were investigated in the current study. The variation amplitudes of the abrupt changes in the indicators monitored during the experiments and simulations are shown in Fig. 4, and the exact values are given in Tables 2 and 3. The abrupt change value of IA/BA was below 0.7: 0.53 for organic gradual overload (R1 g), 0.54 for organic sudden overload (R1 s) and 0.68 for hydraulic gradual overload (R2 g). Meanwhile, Martín-González et al. [32] proposed a critical IA/PA value of 0.24 for municipal waste (37 °C), while in another study Ferrer et al. [38] found this number to be 0.72 for sewage sludge (55 °C). This divergence, however, may be the result of differences in feedstock composition and AD operating conditions. Consequently, rather than providing a fixed threshold, observing the relative variation of the indicators might be a more promising strategy when evaluating the effectiveness of early warning indicators. This conclusion is further supported by previous studies [12, 18]. Accordingly, an acidification risk requiring attention would appear in the present AD processes when IA/BA changed by more than 10% in the experiment or 20% in the simulation, relative to the steady-state data (see the sketch below for a numerical illustration). The abrupt change values of VFA were 0.80 g/L (R1 g), 0.95 g/L (R1 s) and 2.15 g/L (R2 g). Compared to the average values during the steady-state period, if the VFA increased by more than 51% (experimental data) or 19% (simulation data), the biogas system would potentially be at risk of instability. The abrupt change values of IA, BA and VFA/BA in this study were lower than 0.90, higher than 1.05 and lower than 0.80, respectively. When the IA value increased by nearly 20%, the BA value decreased by about 11% and the VFA/BA value increased by approximately 30%, this implied that the biogas system was imbalanced; attention should then be paid and, if necessary, actions taken to regulate the AD process. Although the above discussion is based on the assessment of several relevant publications, it could be extended by future analyses of the available literature on early warning indicators in AD. These, together with experiments carried out and evaluated in a manner similar to the one presented here, could offer further model verification, deeper insights into the dynamic behavior of such interconnected processes and, eventually, optimized early warning indicators for biogas plants.

Monitoring and providing early warning are essential operations in the AD process. Using a mathematical model to simulate the selected experimental process variables provided a good data fit and played a key role in the evaluation of the early warning indicators. Based on both the experimental and simulated results, the optimal early warning indicators were identified to be IA/BA and VFA. In addition, IA, BA, and VFA/BA could be used as auxiliary indicators for diagnosing the AD system treating CS. It is concluded that this modeling approach can be a promising tool for monitoring the change signals from early warning indicators and improving the standards of AD plant operation.
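As referenced above, the relative-variation rule can be written as a few lines of code. The sketch below assumes the thresholds reported in this study (IA/BA change above 10% experimental or 20% simulated, and total VFA increase above 51% experimental or 19% simulated, relative to steady state); the function name and the example input values are hypothetical.

```python
def warning_flags(ia_ba_now, ia_ba_steady, vfa_now, vfa_steady, simulated=False):
    """Flag acidification/instability risks from relative changes of IA/BA and total VFA.

    Thresholds follow the values reported in this study, for experimental or
    simulated data respectively; all inputs are current vs. steady-state values.
    """
    ia_ba_limit = 0.20 if simulated else 0.10
    vfa_limit = 0.19 if simulated else 0.51

    ia_ba_change = abs(ia_ba_now - ia_ba_steady) / ia_ba_steady
    vfa_increase = (vfa_now - vfa_steady) / vfa_steady

    return {
        "acidification_risk": ia_ba_change > ia_ba_limit,
        "instability_risk": vfa_increase > vfa_limit,
    }


# Hypothetical example: IA/BA rising from 0.40 to 0.53 and VFA from 0.50 to 0.80 g/L
print(warning_flags(0.53, 0.40, 0.80, 0.50))  # both flags True for experimental data
```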
Feedstock and inoculum

Corn stalk was obtained from Weichang County, Hebei, China. The collected corn stalk was dried and ground to a particle size of approximately 3 mm. The inoculum, obtained from an anaerobic reactor in a wastewater treatment plant (Beijing, China), was acclimated at mesophilic temperature by feeding with pig manure for 2 weeks. The properties of the corn stalk and inoculum are shown in Table 4.

Table 4 The properties of the corn stalk and seeding sludge

The experiment was carried out using two identical 20 L continuously stirred tank reactors (CSTR) with a 17 L working volume. The reactors were maintained at mesophilic conditions (35 ± 1 °C) by a heating water bath (SY-200, Changfeng Instrument and Apparatus Company, Beijing, China) and were continuously mixed at a stirring speed of 60 rpm. Reactor 1 (R1) was operated at a stepwise elevated OLR by increasing the influent feedstock concentration. The experiment in R1 was divided into two phases, denoted as gradual overload (day 0–113, with the OLR increasing from 1.50 to 2.99 g VS/(L day)) and sudden overload (day 121–165, with the OLR increasing from 1.50 to 3.37 g VS/(L day)). Due to process inhibition, the AD system was allowed to recover from day 114 to 120, during which period a re-inoculation was made, substrate feeding was stopped, and the reactor effluent was recycled as feed. R1 was operated at a fixed HRT of 25 days during the whole experiment. At the same time, Reactor 2 (R2) was operated at a gradually increasing OLR, by shortening the HRT while keeping the influent feedstock concentration at 6% TS. Both reactors were kept in operation until the process completely failed. The operational parameters and periods of the experiment are presented in Table 5.

Table 5 The operational parameters and duration of the experiment

Produced biogas was collected in a gas bag, and the gas volume was measured by a gas flowmeter (LML-1, Changchun auto filter co., Ltd, Jilin, China). The effluent was drawn daily for the analysis of pH, VFA, TA, BA, IA, TS and VS. Total solids (TS) and volatile solids (VS) were measured according to the standard methods [54]. Crude protein was estimated by multiplying the total Kjeldahl nitrogen by 6.25; the total Kjeldahl nitrogen was measured with a Kjeldahl apparatus (K1305A, Sonnen Automated Analysis Instrument Co., Ltd., Shanghai, China). Crude fiber was determined using a fiber analyzer (Model A220, ANKOM Technology Corporation, NY, USA). The organic elemental composition of the corn stalk was determined using an elemental analyzer (Exeter Analytical, Inc. CE-440 Elemental Analyzer, Chicago, USA). Biogas composition (CH4, H2 and carbon dioxide) was determined by a gas chromatograph (1490, Agilent Technologies, USA) equipped with a thermal conductivity detector, as previously described [55]. Liquid samples were centrifuged at 4000 rpm for 10 min and then used for the chemical analyses. Before VFA analysis, samples were filtered through a 0.22 μm membrane. The VFA concentrations were measured by a high performance liquid chromatograph (LC-10A, Shimadzu Corporation, Kyoto, Japan), according to the method described in [55]. Alkalinity and pH were measured with an automatic potentiometric titrator (ZDJ-4B, Shanghai INESA Scientific Instrument Co., Ltd, China), with a glass electrode and a calomel electrode used as the indicator and reference electrode, respectively. For titration, 0.20 mol/L HCl was used as the titrant, and the system was calibrated with anhydrous Na2CO3.
IA, PA, and TA were determined using a three-point titration method [56], by recording the HCl consumption at the pH end points of 5.75, 4.3, and 3.8, respectively, and converting those values to calcium carbonate (CaCO3) equivalents according to Eq. (1):

$$\text{Alkalinity}\ (\text{mg CaCO}_3/\text{L}) = \frac{\text{HCl concentration} \times \text{HCl consumption volume} \times 50.05}{\text{Sample volume}} \times 1000,$$

where 50.05 is a coefficient used to convert alkalinity units from mEq/L to mg CaCO3/L. BA was estimated by multiplying PA by 1.25, according to Anderson and Yang [56]. To simplify the data analysis, only BA was analyzed in the current study.

Statistical method

The state of the reactor was defined as steady when the daily biogas production varied by less than 10% for at least 6 consecutive days [57]. The date of process failure was determined as the time point at which a significant decrease (RSD > 20%) appeared in the CH4 yield. Relative standard deviations (RSD) were calculated according to Eq. (2), for a quantitative assessment of the fluctuation of daily indicator values compared with the previous day:

$$\text{RSD} = \frac{S}{\overline{x}} \times 100\% = \frac{\sqrt{\sum_{i=1}^{2} \left( x_i - \overline{x} \right)^2}}{\overline{x}} \times 100\%,$$

where S is the standard deviation of the measured indicator value relative to the previous day, and $\overline{x}$ is the average of the values at day i and day i−1. The larger the RSD value, the greater the fluctuation. In the current study, RSD > 10% (yellow symbols in all figures) and RSD > 20% (red symbols in all figures) were identified as the signs of a slightly and a highly unstable process, respectively. Accordingly, the date on which the RSD exceeded 20% was taken as the time point at which sudden changes took place. For the quantification of the goodness of fit between the simulations and the experimental data, the relative root mean-squared error (rRMSE) and the mean absolute percentage error (MAPE) were used, according to Eqs. (3) and (4):

$$\text{rRMSE} = \frac{1}{\overline{y}_{\text{exp}}} \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_{\text{exp},i} - y_{\text{sim},i} \right)^2},$$

$$\text{MAPE} = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{y_{\text{exp},i} - y_{\text{sim},i}}{y_{\text{exp},i}} \right|,$$

where $y_{\text{sim},i}$ and $y_{\text{exp},i}$ represent a single simulated or measured (experimental) data value, respectively, $\overline{y}_{\text{exp}}$ is the average of all experimental data values and $n$ is the total number of experimental data points available. RMSE is commonly used for AD model evaluation [58, 59] and, in certain cases, offers advantages over other measures, especially when sensitivity to large errors between experimental and simulated data points is required [60]. After dividing the RMSE by the mean of the experimental data points involved, the resulting rRMSE can be compared across different variable datasets, without having to consider the specific units. While rRMSE is defined in the range from 0 (no error) to infinity (no fit) and can therefore take significantly different values for different variables, MAPE is expressed in percentages. By design, it takes values between 0 and 100% for simulated data points that are at most twice as large as the experimental ones, and simulations are in general considered reliable when MAPE is less than 50% [61].
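The statistical measures defined above are simple to compute; the minimal Python sketch below implements the day-to-day RSD (Eq. 2) and the rRMSE and MAPE goodness-of-fit metrics (Eqs. 3 and 4). The function names and the example arrays are illustrative only and do not correspond to the data of this study.

```python
import math


def rsd_two_days(x_today, x_yesterday):
    """Day-to-day relative standard deviation (Eq. 2), in percent."""
    mean = (x_today + x_yesterday) / 2.0
    s = math.sqrt((x_today - mean) ** 2 + (x_yesterday - mean) ** 2)
    return 100.0 * s / mean


def rrmse(y_exp, y_sim):
    """Relative root mean-squared error (Eq. 3)."""
    n = len(y_exp)
    rmse = math.sqrt(sum((e - s) ** 2 for e, s in zip(y_exp, y_sim)) / n)
    return rmse / (sum(y_exp) / n)


def mape(y_exp, y_sim):
    """Mean absolute percentage error (Eq. 4), in percent."""
    return 100.0 / len(y_exp) * sum(abs((e - s) / e) for e, s in zip(y_exp, y_sim))


# Illustrative daily CH4 yields (experimental vs. simulated, arbitrary units)
y_exp = [0.28, 0.30, 0.29, 0.25, 0.18]
y_sim = [0.27, 0.29, 0.30, 0.24, 0.20]
print(rsd_two_days(0.18, 0.25))          # > 20% would mark a sudden change
print(rrmse(y_exp, y_sim), mape(y_exp, y_sim))
```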
Consequently, MAPE offers a complementary metric for error assessment that is less distorted by extreme data points, and provides a holistic view of the quality of the dynamic simulation [62].

Mathematical model simulations

For the simulation of the two experimental reactors and the different overload conditions, an advanced bioconversion model (BioModel) was used. The model was developed by Angelidaki et al. [63, 64], and was later extended by Kovalovszki et al. [65] and Lovato et al. [66], considering various anaerobic co-digestion scenarios for validation. Compared to the extended model established earlier, however, two minor changes were made in the present model implementation. The first involved the removal of ammonia as a microbial growth-limiting substrate from the model's kinetic equations, while in the second, the acetic acid inhibition effect on aceticlastic methanogens was replaced by a total VFA inhibition effect. The former change was considered reasonable, given that the single substrate in these experiments, corn stalk, is a negligible source of ammonia. Meanwhile, the argument for extending the range of VFA inhibition on methanogens lies in their sensitivity to undissociated organic acids [67], as well as in the dilute reactor medium, which in general might limit the availability of physical shelter and granule-forming sites for methanogens [68, 69]. The VFA inhibition effect described above was kinetically controlled in the same way as the original acetic acid inhibition effect, using inhibition constants with manually estimated values of 315 mg/L and 365 mg/L for R1 and R2, respectively. By comparison, similar inhibition constants found in the published literature cover a wide range, from tens to thousands of mg/L, considering either acetic and propionic acid as the main inhibitors or all VFA collectively [64, 70,71,72,73]. The reported values depend largely on the specific reactor conditions and substrates; therefore, the low values used in this work can be justified by the highly dilute reactor contents, the lack of solid buffers (potentially leading to faster acidification) and the general sensitivity of methanogenic archaea to the accumulation of acids. In addition, the slight difference in the magnitude of the two estimated values appears negligible compared to the above literature sources, and can be attributed to reactor-specific conditions.

BA: bicarbonate alkalinity
BioModel: bioconversion model
CS: corn stalk
CSTR: continuously stirred tank reactors
HRT: hydraulic retention time
IA: intermediate alkalinity
OLR: organic loading rates
RSD: relative standard deviations
TA: total alkalinity
TS: total solid
VFA: volatile fatty acid
VS: volatile solid

Sawatdeenarunat C, Surendra KC, Takara D, Oechsner H, Khanal SK. Anaerobic digestion of lignocellulosic biomass: challenges and opportunities. Bioresour Technol. 2015;178:178–86. Yang L, Xu F, Ge X, Li Y. Challenges and strategies for solid-state anaerobic digestion of lignocellulosic biomass. Renew Sustain Energy Rev. 2015;44:824–34. Shi ZL, Jia T, Wang YJ, Wang JC, Sun RH, Wang F, Li X, Bi YY. Comprehensive utilization status of crop straw and estimation of carbon from burning in China. J Agric Resour Reg Plann. 2017;38:32–7. Hong J, Ren L, Hong J, Xu C. Environmental impact assessment of corn straw utilization in China. J Cleaner Prod. 2016;112:1700–8. Weiland P. Biogas production: current state and perspectives. Appl Microbiol Biotechnol. 2010;85:849–60. Paul S, Dutta A.
Challenges and opportunities of lignocellulosic biomass for anaerobic digestion. Resour Conserv Recycl. 2018;130:164–74. Bhatnagar A, Sain M. Processing of cellulose nanofiber-reinforced composites. J Reinf Plast Compos. 2016;24(12):1259–68. Sun L, Liu T, Müller B, Schnürer A. The microbial community structure in industrial biogas plants influences the degradation rate of straw and cellulose in batch tests. Biotechnol Biofuels. 2016;9(1):128. Zhou J, Yang J, Yu Q, Yong X, Xie X, Zhang L, Wei P, Jia H. Different organic loading rates on the biogas production during the anaerobic digestion of rice straw: a pilot study. Bioresour Technol. 2017;244:865–71. Li D, Liu S, Mi L, Li Z, Yuan Y, Yan Z, Liu X. Effects of feedstock ratio and organic loading rate on the anaerobic mesophilic co-digestion of rice straw and cow manure. Bioresour Technol. 2015;189:319–26. Ward AJ, Hobbs PJ, Holliman PJ, Jones DL. Optimization of the anaerobic digestion of agricultural resources. Bioresour Technol. 2008;99:7928–40. Ahring B, Sandberg M, Angelidaki I. Volatile fatty acids as indicators of process imbalance in anaerobic digestors. Appl Microbiol Biotechnol. 1995;43(3):559–65. Hecht C, Griehl C. Investigation of the accumulation of aromatic compounds during biogas production from kitchen waste. Bioresour Technol. 2009;100:654–8. Cook S, Skerlos S, Raskin L, Love N. A stability assessment tool for anaerobic co-digestion. Water Res. 2007;112:19–28. Ugwuanyi J, Harvey L, Mcneil B. Protease and xylanase activities and thermophilic populations as potential process monitoring tools during thermophilic aerobic digestion. J Chem Technol Biotechnol. 2004;79:30–8. Munk B, Lebuhn M. Process diagnosis using methanogenic Archaea in maize-fed, trace element depleted fermenters. Anaerobe. 2014;29:22–8. Poirier S, Bize A, Bureau C, Bouchez T, Chapleur O. Community shifts within anaerobic digestion microbiota facing phenol inhibition: towards early warning microbial indicators? Water Res. 2016;100:296–305. Li L, He QM, Wei YM, He Q, Peng XY. Early warning indicators for monitoring the process failure of anaerobicigestion system of food waste. Bioresour Technol. 2014;171:491–4. Castellano M, Ruiz-Filippi G, Gonzalez W, Roca E, Lema J. Selection of variables using factorial discriminant analysis for the state identification of an anaerobic UASB-UAF hybrid pilot plant, fed with winery effluents. Water Sci Technol. 2007;56:139–45. Kleyböcker A, Liebrich M, Verstraete W, Kraume M, Würdemann H. Early warning indicators for process failure due to organic overloading by rapeseed oil in one-stage continuously stirred tank reactor, sewage sludge and waste digesters. Bioresour Technol. 2012;123:534–41. Pullammanappallil PC, Chynoweth DP, Lyberatos G, Svoronos SA. Stable performance of anaerobic digestion in the presence of a high concentration of propionic acid. Bioresour Technol. 2001;78:165–9. Holm-Nielsen J, Al Seadi T, Oleskowicz-Popiel P. The future of anaerobic digestion and biogas utilization. Bioresour Technol. 2009;100:5478–84. Lyberatos G, Skiadas IV. Modelling of anaerobic digestion—a review. Global Nest Int J. 1999;1:63–76. Goux X, Calusinska M, Lemaigre S, Marynowska M, Klocke M, Delfosse P, et al. Microbial community dynamics in replicate anaerobic digesters exposed sequentially to increasing organic loading rate, acidosis, and process recovery. Biotechnol Biofuels. 2015;8:122. Donoso-Bravo A, Mailier J, Martin C, Rodríguez J, Aceves-Lara CA, Wouwer AV. 
Model selection, identification and validation in anaerobic digestion: a review. Water Res. 2011;45(17):5347–64. Zealand AM, Roskilly AP, Graham DW. The effect of feeding frequency and organic loading rate on the anaerobic digestion of Chinese rice straw. Energy Procedia. 2017;105:62–7. Molina F, Castellano M, García C, Roca E, Lema JM. Selection of variables for on-line monitoring, diagnosis, and control of anaerobic digestion processes. Water Sci Technol. 2009;60:615–22. Boe K, Batstone DJ, Steyer JP, Angelidaki I. State indicators for monitoring the anaerobic digestion process. Water Res. 2010;44(20):5973–80. Xiao KK, Guo CH, Zhou Y, Maspolim Y, Ng WJ. Acetic acid effects on methanogens in the second stage of a two-stage anaerobic system. Chemosphere. 2016;144:1498–504. Azim AA, Rittmann KMR, Fino D, Bochmann G. The physiological effect of heavy metals and volatile fatty acids on Methanococcus maripaludis S2. Biotechnol Biofuels. 2018;11(1):301. Ahring B, Ibrahim A, Mladenovska Z. Effect of temperature increase from 55 to 65°C on performance and microbial population dynamics of an anaerobic reactor treating cattle manure. Water Res. 2001;35(10):2446–52. Martín-González L, Font X, Vicent T. Alkalinity ratios to identify process imbalances in anaerobic digesters treating source-sorted organic fraction of municipal wastes. Biochem Eng J. 2013;76:1–5. Li D, Ran Y, Chen L, Cao Q, Li Z, Liu X. Instability diagnosis and syntrophic acetate oxidation during thermophilic digestion of vegetable waste. Water Res. 2018;139:263–71. Vadasz P, Vadasz AS. Predictive modeling of microorganisms: LAG and LIP in monotonic growth. Int J Food Microbiol. 2005;102(3):257–75. Switzenbaum MS, Giraldo-Gomez E, Hickey RF. Monitoring of the anaerobic methane fermentation process. Enzyme Microb Technol. 1990;12:722–30. Liu Y, Whitman WB. Metabolic, phylogenetic, and ecological diversity of the methanogenic archaea. Ann N Y Acad Sci. 2008;1125(1):171–89. Chen Y, Cheng JJ, Creamer K. Inhibition of anaerobic digestion process: a review. Bioresour Technol. 2008;99:4044–64. Ferrer I, Vazquez F, Font X. Long term operation of a thermophilic anaerobic reactor: process stability and efficiency at decreasing sludge retention time. Bioresour Technol. 2010;101:2972–80. Sukhesh M, Rao P. Anaerobic digestion of crop residues: technological developments and environmental impact in the Indian context. Biocatal Agric Biotechnol. 2018;16:513–28. Madsen M, Holm-Nielsen JB, Esbensen KH. Monitoring of anaerobic digestion processes: a review perspective. Renew Sustain Energy Rev. 2011;15:3141–55. Rudnitskaya A, Legin A. Sensor systems, electronic tongues and electronic noses, for the monitoring of biotechnological processes. J Ind Microbiol Biotechnol. 2008;35:443–51. Steyer JP, Bouvier JC, Conte T, Gras P, Harmand J, Delgenes JP. On-line measurements of COD, TOC, VFA, total and partial alkalinity in anaerobic digestion processes using infra-red spectrometry. Water Sci Technol. 2002;45:133–8. Sun H, Guo J, Wu S, Liu F, Dong R. Development and validation of a simplified titration method for monitoring volatile fatty acids in anaerobic digestion. Waste Manage. 2017;67:43–50. Boe K, John D, Batstone DJ, Angelidaki I. An innovative online VFA monitoring system for the anerobic process, based on headspace gas chromatography. Biotechnol Bioeng. 2010;96(4):712–21. Dong F, Zhao QB, Li WW, Sheng GP, Zhao JB, Tang Y, Yu HQ, Kubota K, Li YY, Harada H. Novel online monitoring and alert system for anaerobic digestion reactors. Environ Sci Technol. 
2011;45(20):9093–100. Moeller L, Zehnsdorf A. Process upsets in a full-scale anaerobic digestion bioreactor: over-acidification and foam formation during biogas production. Energy Sustain Soc. 2016;6(1):30. He Q, Li L, Peng X. Early warning indicators and microbial mechanisms for process failure due to organic overloading in food waste digesters. J Environl Eng. 2017;143(12):04017077. Li D, Chen L, Liu X, Mei Z, Ren H, Cao Q, Yan Z. Instability mechanisms and early warning indicators for mesophilic anaerobic digestion of vegetable waste. Bioresour Technol. 2017;245:90–7. Röhlen DL, Pilas J, Dahmen M, Keusgen M, Selmer T, Schöning MJ. Toward a hybrid biosensor system for analysis of organic and volatile fatty acids in fermentation processes. Front Chem. 2018;6:284. Polag D, May T, Müller L, König H, Jacobi F, Laukenmann S, Keppler F. Online monitoring of stable carbon isotopes of methane in anaerobic digestion as a new tool for early warning of process instability. Bioresour Technol. 2015;197:161–70. Wu D, Li L, Zhao X, Peng Y, Yang P, Peng X. Anaerobic digestion: a review on process monitoring. Renew Sustain Energy Rev. 2019;103:1–2. Lahav O, Morgan BE. Titration methodologies for monitoring of anaerobic digestion in developing countries—a review. J Appl Chem Biotechnol. 2004;79(12):11. Sun H, Wu S, Dong R. Monitoring volatile fatty acids and carbonate alkalinity in anaerobic digestion: titration methodologies. Chem Eng Technol. 2016;39(4):599–610. APHA. Standard methods for the examination of water and wastewater. Washington, DC: American Public Health Association; 2005. Li RR, Duan N, Zhang DM, Li BM, Zhang YH, Liu ZD, Dong TL. Anaerobic co-digestion of chicken manure and microalgae Chlorella sp.: methane potential, microbial diversity and synergistic impact evaluation. Waste Manage. 2017;68:120–7. Anderson GK, Yang G. Determination of bicarbonate and total volatile acid concentration in anaerobic digesters using a simple titration. Water Environ Res. 1992;64(1):53–9. De Francisci D, Kougias P, Treu L, Campanaro S, Angelidaki I. Microbial diversity and dynamicity of biogas reactors due to radical changes of feedstock composition. Bioresour Technol. 2015;176:56–64. Velázquez-Martí B, Meneses-Quelal OW, Gaibor-Chavez J, Niño-Ruiz Z. Review of mathematical models for the anaerobic digestion process. In: Biogas. IntechOpen; 2018. https://doi.org/10.5772/intechopen.80815. Owhondah RO, Walker M, Ma L, Nimmo B, Ingham DB, Poggio D, Pourkashanian M. Assessment and parameter identification of simplified models to describe the kinetics of semi-continuous biomethane production from anaerobic digestion of green and food waste. Bioprocess Biosyst Eng. 2016;39(6):977–92. Chai T, Draxler RR. Root mean square error (RMSE) or mean absolute error (MAE)?-Arguments against avoiding RMSE in the literature. Geosci Model Dev. 2014;7(3):1247–50. Tsai SB, Xue Y, Zhang J, Chen Q, Liu Y, Zhou J, Dong W. Models for forecasting growth trends in renewable energy. Renew Sustain Energy Rev. 2017;77:1169–78. Hamawand I, Baillie C. Anaerobic digestion and biogas potential: simulation of lab and industrial-scale processes. Energies. 2015;8(1):454–74. Angelidaki I, Ellegaard L, Ahring BK. A mathematical model for dynamic simulation of anaerobic digestion of complex substrates: focusing on ammonia inhibition. Biotechnol Bioeng. 1993;42(2):159–66. Angelidaki I, Ellegaard L, Ahring BK. A comprehensive model of anaerobic bioconversion of complex substrates to biogas. Biotechnol Bioeng. 1999;63(3):363–72. 
Kovalovszki A, Alvarado-Morales M, Fotidis IA, Angelidaki I. A systematic methodology to extend the applicability of a bioconversion model for the simulation of various co-digestion scenarios. Bioresour Technol. 2017;235:157–66. Lovato G, Alvarado-Morales M, Kovalovszki A, Peprah M, Kougias PG, Rodrigues JAD, Angelidaki I. In-situ biogas upgrading process: modeling and simulations aspects. Bioresour Technol. 2017;245:332–41. Xiao KK, Guo CH, Zhou Y, Maspolim Y, Wang JY, Ng WJ. Acetic acid inhibition on methanogens in a two-phase anaerobic process. Biochem Eng J. 2013;75:1–7. Doloman A, Varghese H, Miller CD, Flann NS. Modeling de novo granulation of anaerobic sludge. BMC Syst Biol. 2017;11(1):69. Jiang J, Wu J, Poncin S, Li HZ. Effect of hydrodynamic shear on biogas production and granule characteristics in a continuous stirred tank reactor. Process Biochem. 2016;51(3):345–51. Mawson AJ, Earle RL, Larsen VF. Degradation of acetic and propionic acids in the methane fermentation. Water Res. 1991;25(12):1549–54. Mussati MC, Fuentes M, Aguirre PA, Scenna NJ. A steady-state module for modeling anaerobic biofilm reactors. Lat Am Appl Res. 2005;35(4):255–63. Demitry ME. Anaerobic digestion process stability and the extension of the ADM1 for municipal sludge co-digested with bakery waste. PhD dissertation. 2016. Demitry ME, Zhong J, Hansen C, McFarland M. Modifying the ADM1 model to predict the operation of an anaerobic digester Co-digesting municipal sludge with Bakery waste. Environ Pollut. 2015. https://doi.org/10.5539/ep.v4n4p38. ND and YW designed the entire research project; YW and JP executed the experimental work; AK executed the computational work; YW and AK prepared the manuscript; ND, CL, HL. and IA reviewed the manuscript; and ND, YW, IA, and AK revised the manuscript. All authors read and approved the final manuscript. All datasets supporting the conclusions of this study are included in this submitted article and its Additional files. This work was supported by the National Natural Science Foundation of China (51506217), the National Key Research and Development Program (2018YFD0800803), and by the open fund of Key Laboratory of Nonpoint Source Pollution Control, Ministry of Agriculture, P.R.China. College of Water Resources and Civil Engineering, China Agricultural University, Beijing, 100083, China Yiran Wu, Jiahao Pan, Cong Lin & Na Duan Department of Environmental Engineering, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark Adam Kovalovszki & Irini Angelidaki Key Laboratory of Nonpoint Source Pollution Control, Ministry of Agriculture/Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing, 100081, China Hongbin Liu Yiran Wu Adam Kovalovszki Jiahao Pan Cong Lin Na Duan Irini Angelidaki Correspondence to Na Duan. Additional file 1: Fig. S1. Variation of effluent TS and VS in (a) R1 and (b) R2. Fig. S2. Variation of pH in (a) R1 and (b) R2. Wu, Y., Kovalovszki, A., Pan, J. et al. Early warning indicators for mesophilic anaerobic digestion of corn stalk: a combined experimental and simulation approach. Biotechnol Biofuels 12, 106 (2019). https://doi.org/10.1186/s13068-019-1442-7 BioModel
Nanosecond laser coloration on stainless steel surface

Yan Lu, Xinying Shi, Zhongjia Huang, Taohai Li, Meng Zhang, Jakub Czajkowski, Tapio Fabritius, Marko Huttula & Wei Cao

Laser material processing

In this work, we present laser coloration of 304 stainless steel using a nanosecond laser. Surface modifications are tuned by adjusting the laser parameters of scanning speed, repetition rate, and pulse width. A comprehensive study of the physical mechanism leading to the appearance is presented. Microscopic patterns are measured and employed as input to simulate light-matter interference, while the chemical states and crystal structures of the surface compounds are analyzed to determine their intrinsic colors. Quantitative analysis clarifies that the final colors and RGB values are combinations of structural colors and the intrinsic colors of the oxidized pigments, with the latter dominating. The engineering and scientific insights into nanosecond laser coloration therefore support large-scale utilization of the present route for colorful and resistant steels.

Beautifying and passivating metal surfaces is a highly time-, energy- and material-consuming process in modern industry. Recently developed laser coloration has been achieved on stainless steel, but it has been greatly limited by expensive laser sources. In addition, the lack of physical knowledge of the mechanism behind the color appearance and performance has prevented wider adoption of this promising technique. Colorful stainless steel has drawn much attention due to its excellent performance in architectural and decorative applications1. Conventional chemical erosion and electrochemical methods2, 3 to introduce colors on steel surfaces are gradually being abandoned due to environmental pollution problems. The hunt for a facile and low-cost method of fabricating colored stainless steel has been active for years in both industrial and academic fields. One feasible route was brought about by the femtosecond laser technique, which has been applied successfully in preparing black silicon, iridescent aluminum and blue titanium surfaces4,5,6.
The femtosecond laser, with its high photon energy, is efficient at producing laser-induced periodic surface structures (LIPSS) on steel surfaces7. Such LIPSS surfaces show enhanced hydrophobicity and corrosion resistance, and are also endowed with abundant colors8,9,10. However, the high-cost femtosecond laser is not well suited to industrial production, while the alternative nanosecond laser was believed to produce only a limited range of colors11. Current research on these routes mainly stays at the engineering level, producing series of colors by tuning laser parameters12, 13. Beyond the limitations arising from materials engineering, the mechanisms leading to the surface coloration remain unclear. Coloration of femtosecond laser markings is primarily attributed to the structural color effect, but for markings fabricated by nanosecond lasers such light-matter interference is greatly reduced, because the surface structure is on a much larger scale than the wavelength of visible light. As for the route employing nanosecond sources, although the color range has been extended gradually14, the coloration mechanism has not been sufficiently studied. Several reports referred to the intrinsic colors of the metal oxides in the laser markings12,13,14, but lacked further verification and discussion. To the best of our knowledge, the contributions of different pigments or interference effects to the RGB values of the decorated steels have not been quantified. Therefore, understanding the impact of laser-induced structural and compositional variations on the apparent colors becomes crucial to promote the nanosecond laser marking technique in large-scale industrial applications.

In this work, we carried out a novel study to explore the mechanism of nanosecond laser coloration on a 304 stainless steel surface, and quantitatively matched the RGB values of the colored steels to the corresponding abundances of oxides determined at the atomic level. Square markings with straw-yellow, cornflower-blue and fuchsia colors were prepared using a master-oscillator power-amplifier (MOPA) laser system in ambient air at room temperature. The structural color effect was studied both by simulation of thin-film interference and by measurements of the optical reflectance properties. Meanwhile, elemental and chemical-state analysis of the marking samples was carried out by X-ray photoelectron spectroscopy (XPS). The compositional contribution to the colors was thus quantitatively discussed.

Various colors were formed on the mechanically polished 304 steel surfaces via the low-cost and fast nanosecond pulsed laser coloration route. A typical irradiation time is around 1 minute to color each square with an area of 5 × 5 mm2. Different colors were reached by semi-empirically varying the laser scanning speed, repetition rate, and pulse width. The relations between nanosecond laser parameters and the final surface performance have been discussed elsewhere12,13,14,15, and are outside the main focus of the present work. Color information of the laser markings was obtained through a polarized light microscope without filters. During the measurements, the incident white light beam was kept perpendicular to the sample surface. As shown in Fig. 1, three colorized samples were prepared, displaying straw-yellow, cornflower-blue and fuchsia colors, respectively. The insets were taken by a digital camera at tilted angles (raw photo shown in Supplementary Fig. 1) and appear in slightly different colors.
To quantitatively describe the color information, the commonly used RGB color model is adopted here. The RGB model is built following the way in which cone cells in the human retina perceive red, green and blue colors16, and all three color components vary from 0 to 255. A larger component value indicates higher brightness and saturation. In order to obtain the average RGB values of each photo, the red, green and blue values of each pixel were counted and averaged. Table 1 indeed shows that all the generated colors have relatively low RGB values; in other words, the colors are not fully saturated.

Fig. 1 Optical microscopy images of the colorized surface. (a) Sample 1, straw-yellow, (b) Sample 2, cornflower-blue, (c) Sample 3, fuchsia. Photos of insets were taken by digital camera.

Table 1 Color information of the laser markings.

Optical reflectance

Optical reflectance was measured at incident angles of 30° and 70°, respectively. With a xenon lamp as the light source, the incident light spectrum agreed well with the solar spectrum in the visible range17. In both Fig. 2a and b, low reflectance values were recorded, yet it is noteworthy that the reflectance increased slightly with the incident angle. This is characteristic behavior of the structural color effect. In general, the presented colors vary within a limited scope. Differences between Fig. 1a–c and the insets suggest a weak influence of light-matter scattering on the final colors. Among the three samples, sample 1 exhibits the highest reflectance. Meanwhile, a rise of ~20% in the reflectance indicates that sample 1 is more influenced by the structural color. For each sample, the reflectance roughly increases with the light wavelength, in accordance with the results of previous research18.

Fig. 2 Optical reflectance measured at incident angles of: (a) 30°, (b) 70°.

Contribution of structural color

To clarify the light-matter interaction, it is essential to acquire information on the morphology and thickness of the laser-induced layer on the steel surface. The sample morphologies depicted in Fig. 3a–c show that the surfaces are basically composed of irregular micro-wrinkles with lateral dimensions on the 10 µm scale, much larger than the wavelength of visible light. Thus, the possibility of light interference with the microstructure is rather low19. Although samples 1 and 2 show distinct colors, their microstructures are quite similar, except that the wrinkles in sample 1 are larger and arranged more sparsely. Short-range ordered periodic grooves appear in sample 2 (Fig. 3b), but their influence on the coloration can still be neglected because of their small number. In Fig. 3c, there are uniformly arranged gratings. However, their period is around 8–12 µm, far too large to interfere coherently with visible light. Therefore, the structural color effect can only originate from light interference with the laser-generated film.

Fig. 3 Morphologies of the laser markings and the simulation results of reflectance. (a)–(c) SEM images of samples 1–3, respectively. (d) Simulation results of optical reflectance.

The film on top of the steel surface was formed during the laser treatment, and its thickness can be estimated through surface profile measurements. The layer thickness of sample 2 is ~300 nm, while the other two are approximately 800 nm (Supplementary Fig. 2). As an alternative, non-destructive technique, the ellipsometry method20 was also employed to double-check the thickness.
For sample 2, the ellipsometry method gave a thickness of 314.4 ± 37.6 nm (Supplementary Fig. 3), which is very close to the surface profiler result (300 nm). However, this method could not be applied to samples 1 and 3, even with the AutoRetarder, which accommodates depolarization from surfaces of a certain roughness. This is probably due to the rather high roughness and limited area of the laser-induced films, which are beyond the applicability of the surface-sensitive ellipsometry method21. In this case, we carried out cross-sectional scanning electron microscopy (SEM) measurements in order to obtain the film thicknesses directly (Supplementary Fig. 4). Values of 764.1 ± 31.7 nm, 338.1 ± 20.9 nm and 807.6 ± 54.2 nm were found for samples 1–3, respectively, in good agreement with the optical profilometer results.

Simulation work was carried out to evaluate the reflectance arising from thin-film interference. Smooth film models were built, and the morphology information of the surface was discarded. The layer thickness was set according to the surface profile results, while the incident light was set perpendicular to the surface. The refractive indices and extinction coefficients were set based on the ellipsometry measurement results. The simulated results in Fig. 3d have the same trend of wavelength dependence as the experimental determinations in Fig. 2. However, discrepancies of ~10% were found between the calculated and measured reflectance for samples 2 and 3. This can be attributed to the typical simplifications of theoretical models and the assumed homogeneity of the materials employed in the simulations22, 23. Indeed, even for a surface composed of a single element (e.g., a bare silicon surface), theoretical results22 can still deviate by more than 10% from experimental ones24. In spite of the discrepancies, no obvious peaks can be observed in the reflectance curves. Thus, structural coloration does not dominate the final colors on the nanosecond-laser-marked surfaces.

Quantification of compositions

Quantitative analysis of the surface compositions is the prerequisite for evaluating the composition-based coloration mechanism. Using energy dispersive spectroscopy (EDS), we roughly examined the elemental composition of the samples. Fe, Cr, Ni and Mn were the primary elements, and more Cr was detected in sample 1 (see Supplementary Table 1). Since the detection depth of EDS (~1 µm)25 is much larger than the film thickness, the EDS results were mixed with elemental information coming from the substrate, making them inappropriate for analyzing the thin films on the surface. XPS is a surface-sensitive technique and is thus suitable for focusing on the thin markings. Figure 4 shows the XPS spectra of Cr 2p and Fe 2p; the details of the peak fitting are listed in Supplementary Table 2. In the Cr 2p spectra, two groups of peaks were identified in Fig. 4a, corresponding to Cr2O3 at ~576.5 eV and a spinel structure at ~575.4 eV, respectively26, 27. The typical spinel structure is XY2O4, where X and Y represent ions in the +2 and +3 valence states28. In this work, its form can be specified as (Mn2+ x1Ni2+ x2Fe2+ x3)(Fe3+ x4Cr3+ x5)O4, where x1 + x2 + x3 = 1 and x4 + x5 = 2. In contrast, the spinel peaks disappeared in the spectra of the steel substrates, and chromium in metallic form was detected (Supplementary Fig. 5). Similar results are found in Fig. 4b, where the spinel compound and Fe2O3 were identified. As a typical feature of Fe 2p spectra of stainless steel, the pair of satellite peaks of the spinel compound was also identified.
This suggests that the spinel structure was formed during the laser treatment process.

Fig. 4 XPS spectra of: (a) Cr 2p, (b) Fe 2p. In each panel, the spectra of the three samples are illustrated. Each pair of doublet peaks is shown in an identical color. The scatter plots are experimental results, while the magenta and grey lines represent the fitting envelopes and backgrounds, respectively.

Indeed, the native passivation layer on a stainless steel surface is mainly composed of Cr2O3, Fe3O4 and Fe2O3 29. Alloying elements were driven to diffuse from the substrate to the passivation layer by the heating effect of the pulsed laser, where they could be more easily oxidized by oxygen. The oxygen partial pressure was therefore reduced, producing a reducing atmosphere in the local region. In this case, Cr and Mn react preferentially with the remaining oxygen and form their metal oxides. After that, Fe was also oxidized to FeO; meanwhile, the laser beam moved away and the temperature, as well as the reaction rate, dropped accordingly. The oxygen partial pressure rose again, which facilitated the simultaneous oxidation of Fe, Ni and the metal oxides. The spinel compound was then formed. It was also possible for Cr3+ to be substituted by Fe3+, making the spinel structure even more complicated30. It is noteworthy that only a very small amount of Ni was detected, although the commercial 304 steel substrate contains 8–11% Ni (see Supplementary Table 2 and Supplementary Fig. 6). Ni is nearly absent from the native passivation layer, while Ni in the substrate is usually completely dissolved in γ–Fe, making it difficult to escape31. The EDS results indicated more than 6% Ni, most of which should come from the substrate.

Obvious discrepancies between the real colors on the surfaces and the optical determinations demand further investigation of the color origins of the laser-treated steel surface. It has been noted that metal oxides, especially well-crystallized ones, can also act as stable pigments. Thus, in the following, we carefully examined the chemical components on the surfaces and estimated their contributions to the RGB values. The XPS fitting results provide quantitative information on each element in the sample. The Cr 2p, Fe 2p, Mn 2p and Ni 2p spectra were each fitted with two components: the metal oxide and the spinel compound. Based on the atomic percentages of the metal ions in the metal oxides, the relative contents of the metal oxides can be obtained by dividing by their subscript numbers. That is,

$$C_{\text{MO}} = \frac{C_{\text{ion}}}{n_i},$$

where $C_{\text{MO}}$, $C_{\text{ion}}$ and $n_i$ are the relative contents of the metal oxides, the elemental percentages of the ions in the metal oxides, and their subscript numbers, respectively. Similarly, for the ions in the spinel (Mn2+ x1Ni2+ x2Fe2+ x3)(Fe3+ x4Cr3+ x5)O4, dividing by the corresponding subscripts, the relative content of the spinel can be calculated from any of the metal ions:

$$C_{\text{spinel}} = \frac{C_{s\text{-}ion}}{x_i},$$

where $C_{\text{spinel}}$, $C_{s\text{-}ion}$ and $x_i$ are the relative content of the spinel, the elemental percentage of the metal ions in the spinel and their subscript numbers in the molecular formula. Through equations (1) and (2), we can calculate the relative contents of each metal oxide and the spinel, as well as the ratios of each ion in the spinel. The relative contents of all the metal oxides and spinel compounds were normalized and tabulated in Table 2.
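As a worked illustration of equations (1) and (2) above, the short Python sketch below converts hypothetical XPS-derived ion percentages into normalized relative contents of the metal oxides and the spinel. All input numbers, subscripts and names are invented for illustration and are not the values reported in Table 2; where the different spinel ions yield slightly different estimates, the sketch simply averages them.

```python
def relative_contents(oxide_ion_pct, oxide_subscripts, spinel_ion_pct, spinel_subscripts):
    """Relative contents of metal oxides and the spinel from XPS atomic percentages.

    C_MO = C_ion / n_i for each oxide (Eq. 1); C_spinel = C_s-ion / x_i, averaged
    over the ions used (Eq. 2); the results are normalized to sum to 1.
    """
    contents = {ox: oxide_ion_pct[ox] / oxide_subscripts[ox] for ox in oxide_ion_pct}
    spinel_estimates = [spinel_ion_pct[ion] / spinel_subscripts[ion] for ion in spinel_ion_pct]
    contents["spinel"] = sum(spinel_estimates) / len(spinel_estimates)
    total = sum(contents.values())
    return {name: value / total for name, value in contents.items()}


# Hypothetical ion percentages (at.%) and the subscripts of the metal in each oxide formula
oxide_pct = {"Cr2O3": 12.0, "Fe2O3": 18.0, "MnO": 1.0, "NiO": 0.5}
oxide_n = {"Cr2O3": 2, "Fe2O3": 2, "MnO": 1, "NiO": 1}
# Hypothetical spinel ion percentages and their subscripts x_i in (A2+)(B3+)2O4
spinel_pct = {"Fe2+": 8.0, "Fe3+": 15.0, "Cr3+": 2.0}
spinel_x = {"Fe2+": 0.9, "Fe3+": 1.8, "Cr3+": 0.2}
print(relative_contents(oxide_pct, oxide_n, spinel_pct, spinel_x))
```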
The quantified molecular formulas of the spinels are (Mn2+ 0.12Fe2+ 0.88)(Fe3+ 1.12Cr3+ 0.88)O4, (Mn2+ 0.07Ni2+ 0.09Fe2+ 0.84)(Fe3+ 1.82Cr3+ 0.18)O4 and (Mn2+ 0.07Ni2+ 0.04Fe2+ 0.89)(Fe3+ 1.80Cr3+ 0.20)O4 for samples 1–3, respectively.

Table 2 Contents and color information of the compositions, and the calculated and measured RGB values of each sample. The percentage numbers show the contents of the compositions. Each cell is shaded with its own RGB values.

The compositions in the laser marking film, namely the metal oxides and the spinel compound, are natural pigments presenting distinct colors. The RGB values of the metal oxides were set according to previous studies of their pure forms32, 33. However, determining the color of the spinel is more complicated. Its color is strongly influenced by the Cr and Fe content, yet definitive relationships between the ion ratios and the colors remain unknown. Moreover, Fe2+ and Fe3+ influence the color differently, and the color changes drastically even with small variations in their amounts. We therefore propose that the metal oxides and the spinel each contribute their own color (in RGB values) weighted by their relative contents, while the color of the spinel is qualitatively dominated by the Fe2+, Fe3+ and Cr3+ present. Fe2+ in the spinel results in a blue color species, varying among grey-blue, violet-blue and even dark green34. In contrast, enrichment of Fe3+ gives dark brown, while a small amount of Fe3+ shifts the color toward green or even blue-green35 when Fe2+ is oxidized to Fe3+. On the other hand, Cr3+ is the key factor for a red appearance, usually giving red or pale pink colors to the spinel36. The joint effect of Fe2+ and Cr3+ then produces violet-red or dark violet colors, but the color tends toward orange when Fe3+ replaces part of the Cr3+. The effects of Mn2+ and Ni2+ are not significant due to their very small amounts, as given by the EDS and XPS results. Mn2+ enhances the saturation of the blue color to a limited extent and thus shifts it closer to yellow37, while the presence of Ni2+ usually produces a yellow-green color similar to the intrinsic color of NiO38. In general, the spinel presents a brown color when Fe2+, Fe3+ and Cr3+ exist simultaneously.

Considering the specific amounts of metal ions in each sample, we are able to set initial RGB values for the spinels. For sample 1, Fe2+, Fe3+ and Cr3+ are present in almost equal amounts, and many of the Cr3+ ions occupy the octahedral centers of the spinel, giving a reddish-brown color. Meanwhile, Fe3+ raised the green value, and Mn2+ improved the color saturation, mainly increasing the blue and green values. Thus, the spinel color could be revised from the standard reddish brown (165, 42, 42) to (165, 110, 95). In samples 2 and 3, the amount of Cr3+ dropped greatly, so the red values would be much smaller than in sample 1. Compared to sample 3, sample 2 contains more Fe3+, which gives a higher saturation of blue. It also contains relatively more Ni2+, shifting the color closer to green. The spinel color was then set as (65, 130, 150). In sample 3, Fe3+ is about twice as abundant as Fe2+, which usually presents dark grey (105, 105, 105). With more Cr3+ and less Ni2+ than in sample 2, the red value would be larger and the green value smaller; (90, 95, 105) is therefore a reasonable value for the spinel in this sample.
With the contents as well as the RGB values of the metal oxides and the spinel, the RGB values of each sample can be calculated as the product of its color matrix and content matrix,

$$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = \begin{pmatrix} R_1 & R_2 & R_3 & R_4 & R_5 \\ G_1 & G_2 & G_3 & G_4 & G_5 \\ B_1 & B_2 & B_3 & B_4 & B_5 \end{pmatrix} \begin{pmatrix} C_1 \\ C_2 \\ C_3 \\ C_4 \\ C_5 \end{pmatrix},$$

where $R_i$, $G_i$, $B_i$ are the red, green and blue values of Cr2O3, Fe2O3, MnO, NiO and the spinel compound, respectively, and $C_i$ represents the contents of those compositions. The calculated RGB values agree well with the experimental results (see Table 2). For each sample, both the calculated and measured RGB results fall within the same color species, although minor differences exist. Such differences may come from the specification of the initial spinel colors, but the structural color effect should also be accounted for. Independent of the surface microstructure of the laser markings, interference between the thin film and visible light contributes to the final colors, depending on the film thickness.

In conclusion, we treated the surface of 304 stainless steel with a nanosecond laser and studied the coloration mechanism. Different from the femtosecond laser source, the nanosecond laser fabricated surface structures on the ~20 µm scale, far from the wavelength range of visible light. Therefore, the structural color effect was quite weak, and the colors of the laser markings were dominantly contributed by the colorful metal oxides and the spinel compound. The contribution should be weighted by the relative content of each component. Such findings would be beneficial to the application of low-cost nanosecond lasers in the surface treatment of steel and other metallic materials.

Methods

In this work, commercial AISI 304 stainless steel was used as the substrate (18–20% Cr, 8–11% Ni, ≤2% Mn and ≤1% Si) with a thickness of 1 mm. Mechanical polishing was applied before the laser treatment in order to produce a smooth surface, and the samples were then cleaned with ethanol. The laser treatment was carried out with a nanosecond MF20-E-A fiber laser marking machine in ambient air at room temperature at Han's Laser Inc., China. The output power was set to 20 W, generating a pulsed laser beam with a central wavelength of 1064 nm. During the laser marking process, the maximum pulse energy was 1.10 mJ at pulse repetition rates of 45–500 kHz and scanning speeds of 100 mm/s–1300 mm/s, while the pulse width was tuned from 4 ns to 260 ns. Scanning steps varied during the laser treatment. A polarized light microscope (Nikon Eclipse LV100DA-U) was employed to acquire the color information of the marked steel surface. The light source covers the visible wavelength range, and no optical filters were used. Low magnification was adopted so that the colorized surface could be recorded at a relatively large scale. Optical properties including the reflectance, refractive index and extinction coefficient were measured with a UVISEL-VASE Horiba Jobin-Yvon ellipsometer at a fixed incident angle of 70°. The reflectance was also measured with a UV-vis-NIR Varian Cary 500 spectrophotometer. For the purpose of comparative analysis, the incident angle of the spectrophotometer was set to 30°.
The microstructure morphology was characterized with a Zeiss ULTRA plus FESEM. Surface roughness and film thickness were measured with an optical profilometer (Bruker ContourGT-K). Ellipsometry (VASE) and cross-sectional SEM were also applied to double-check the thickness. During the ellipsometry measurements, the polarization orientation (Ψ) and polarization phase (Δ) were measured with the AutoRetarder. Regression analysis was used to match the experimental data and calculate the thickness. For the cross-sectional SEM measurements, the laser marking samples were cut and embedded in epoxy resin. The image quality was limited due to the weak conductivity of the metal oxides in the laser marking film. We roughly examined the elemental contents of each laser marking through EDS. A Thermo Fisher Scientific ESCALAB 250Xi XPS spectrometer was used for detailed analysis of the final chemical states. All XPS spectra were calibrated to C 1s (284.8 eV) and then fitted.

Numerical simulation of light scattering was carried out using the finite-difference time-domain (FDTD) method. The OptiFDTD software (free version) was employed, and the simulations were based on the space and time discretization of the time-dependent Maxwell's equations. From the experimental point of view, the lateral dimensions of the surface morphologies interfere only weakly with visible light, which was also confirmed by a series of simulation tests. Therefore, models of the laser markings were established in thin-film interference mode with smooth surfaces in order to improve the simulation speed. In the two-layer models, the lower layer was the stainless steel substrate and the upper one was the laser marking film. The refractive indices of these two layers were defined according to the experimental results. In the simulations, we employed the refractive indices at the wavelength of 550 nm, as recommended by the OptiFDTD software. These are n1 = 1.104 + 1.607i, n2 = 2.198 − 1.045i, n3 = 1.929 + 0.775i and n4 = 2.586 + 2.413i for the laser marking films of samples 1–3 and the steel substrate, respectively. The thickness of the steel substrate was fixed at 1 µm, while the film thicknesses were 800 nm, 300 nm and 800 nm for samples 1–3, respectively. A perpendicular input plane light source was set up as a Gaussian-modulated continuous wave with wavelengths from 400 nm to 700 nm. The boundary conditions in the X and Y directions were set as periodic boundary conditions (PBC), while anisotropic perfectly matched layers (APML) were used in the Z direction. To record the reflected light, observation points were set directly above the models.

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Chater, J. Polychrome: the many colours of stainless steel. Stainless Steel World 3, 1–4 (2014). Kikuti, E., Conrrado, R., Bocchi, N., Biaggio, S. R. & Rocha-Filho, R. C. Chemical and electrochemical coloration of stainless steel and pitting corrosion resistance studies. J. Braz. Chem. Soc. 15, 472–480 (2004). Taveira, L. V., Kikuti, E., Bocchi, N. & Dick, L. F. Microcharacterization of colored films formed on AISI 304 by different electrochemical methods. J. Electrochem. Soc. 153, B411–B416 (2006). Vorobyev, A. Y. & Guo, C. Direct creation of black silicon using femtosecond laser pulses. Appl. Sur. Sci. 257, 7291–7294 (2011). Vorobyev, A. Y. & Guo, C. Colorizing metals with femtosecond laser pulses. Appl. Phys. Lett. 92, 041914 (2008). Vorobyev, A. Y. & Guo, C.
Metal colorization with femtosecond laser pulses. Proc. SPIE 7005, 70051T (2008). Milovanović, D. S. et al. Femtosecond laser surface patterning of steel and titanium alloy. Phys. Scr. T162, 014017 (2014). Gu, Z.-Z. et al. Structural color and the lotus effect. Angew. Chem. Int. Ed. 42, 894–897 (2003). Łęcka, K. M. et al. Effects of laser-induced oxidation on the corrosion resistance of AISI 304 stainless steel. J. Laser Appl. 28, 032009 (2016). Ahsan, M. S., Ahmed, F., Kim, Y. G., Lee, M. S. & Jun, M. B. Colorizing stainless steel surface by femtosecond laser induced micro/nano-structures. Appl. Sur. Sci. 257, 7771–7777 (2011). Guo, Q. L., Li, X. H., Zhang, Y. & Huang, W. The comparison of fabricating colorful patterns on stainless steel with femtosecond and nanosecond laser pulses. ICEOE 2011 2, 69–71 (2011). Veiko, V. et al. Development of complete color palette based on spectrophotometric measurements of steel oxidation results for enhancement of color laser marking technology. Materials & Design 89, 684–688 (2016). Veiko, V. et al. Controlled oxide films formation by nanosecond laser pulses for color marking. Optics Express 22, 24342–24347 (2014). Li, Z. L. et al. Analysis of oxide formation induced by UV laser coloration of stainless steel. Appl. Sur. Sci. 256, 1582–1588 (2009). Antończak, A. J. et al. The influence of process parameters on the laser-induced coloring of titanium. Appl. Phys. A 115, 1003–1013 (2014). Hunt, R. W. G. The Reproduction of Colour. (Wiley–IS&T Series in Imaging Science and Technology, 2004). Ye, C. et al. Rapid cycling of reactive nitrogen in the marine boundary layer. Nature 532, 489–491 (2016). Luo, F. et al. Study of micro/nanostructures formed by a nanosecond laser in gaseous environments for stainless steel surface coloring. Appl. Sur. Sci. 328, 405–409 (2015). Ball, P. This season's colours. Nature Mater. 15, 709–709 (2016). Zheng, H. Y., Lim, G. C., Wang, X. C., Tan, J. L. & Hilfiker, J. Process study for laser-induced surface coloration. J. Laser Appl. 14, 215–220 (2002). Losurdo, M., Hingerl, K. Ellipsometry at the Nanoscale. (Springer, 2013). Huang, Z. et al. A computational study of antireflection structures bio-mimicked from leaf surface morphologies. Solar Energy 131, 131–137 (2016). Huang, Z. et al. Structural color model based on surface morphology of morpho butterfly wing scale. Surf. Rev. Lett. 23, 1650046 (2016). Sardana, S. K. & Komarala, V. K. Optical properties of hybrid plasmonic structure on silicon using transparent conducting-silver nanoparticles–silicon dioxide layers: the role of conducting oxide layer thickness in antireflection. J. Opt. 18, 075004 (2016). Vickerman, J. C. & Gilmore, I. S. Surface Analysis - The Principal Techniques. (John Wiley & Sons, 2009). Allen, G. C., Harris, J. A., Jutson, J. A. & Dyke, J. M. A study of number of mixed transition metal oxide spinels using X-ray photoelectron spectroscopy. Appl. Sur. Sci. 37, 111–134 (1998). Huntz, A. M. et al. Oxidation of AISI 304 and AISI 439 stainless steels. Mater. Sci. Eng. A 447, 266–276 (2007). Biagioni, C. & Marco, P. The systematics of the spinel-type minerals: An overview. Am. Mineral. 99, 1254–1264 (2014). Simões, A. M. P., Ferreira, M. G. S., Rondot, B. & da Cunha Belo, M. Study of passive films formed on AISI 304 stainless steel by impedance measurements and photoelectrochemistry. J. Electrochem. Soc. 137, 82–87 (1990). Levinstein, H. J., Robbins, M. & Capio, C. A crystallographic study of the system FeCr2O4-Fe3O4 (Fe2+ Fe3+ xCr2−xO4). Mater. Res. Bull. 
The authors thank Han's Laser Inc., China, for preparations, discussions, and parameter optimizations of the nanosecond laser marking. This work is financially supported by an Oulu University Strategic Grant and the Academy of Finland. Y.L. acknowledges support from the National Natural Science Foundation of China (Grant No. 51201061), the International Cooperation Project from Henan Province (Grant No. 172102410032), and the Science and Technology Innovation Team of Henan University of Science and Technology (2015XTD006). Y.L. and X.S. acknowledge the scholarship sponsored by the China Scholarship Council. The Center of Microscopy and Nanotechnology of Oulu University and Q. Wang at Tsinghua University are also acknowledged for sample characterizations.

Yan Lu and Xinying Shi contributed equally to this work.

Affiliations:
School of Materials Science and Engineering, Henan University of Science and Technology, Luoyang, 471023, China: Yan Lu
Nano and Molecular Systems Research Unit, University of Oulu, P.O. Box 3000, FIN-90014, Oulu, Finland: Yan Lu, Xinying Shi, Marko Huttula & Wei Cao
School of Mechanical and Automotive Engineering, Anhui Polytechnic University, Wuhu, 241000, China: Zhongjia Huang
College of Chemistry, Key Lab of Environment Friendly Chemistry and Application in Ministry of Education, Xiangtan University, Yuhu District, Xiangtan, 411105, China: Taohai Li
Department of Physics, East China University of Science and Technology, Meilong Road 130, Shanghai, 200237, China: Meng Zhang
Optoelectronics and Measurement Techniques Research Unit, University of Oulu, P.O. Box 4500, FIN-90014, Oulu, Finland: Jakub Czajkowski & Tapio Fabritius

Author contributions: The project was initiated, conceived and conducted by W.C. Y.L. and X.S. performed the SEM, XPS and optical reflectance measurements, and X.S. computed RGB values through chemical state determinations. J.C., Y.L. and T.F. took the surface profile and optical microscope photos. Y.L. and Z.H. carried out the FDTD simulations, while T.L. and M.Z. searched for connections between pigments and coloration. X.S., Y.L. and W.C. wrote the manuscript, with general supervision from M.H. All authors have read and approved the final manuscript.

Correspondence to Yan Lu or Wei Cao.

Lu, Y., Shi, X., Huang, Z. et al. Nanosecond laser coloration on stainless steel surface. Sci Rep 7, 7092 (2017).
https://doi.org/10.1038/s41598-017-07373-8
Accepted: 27 June 2017
MSc in Mathematical Modelling and Scientific Computing

Whatever your background, we expect that you will be familiar with all the material listed below. We also expect that you will have experience in more advanced areas such as differential equations, fluid mechanics, numerical analysis, statistics, etc. The following gives details of the minimum prerequisite material.

Basic vector manipulation. Ideas of position vector, velocity, acceleration. Scalar and vector products. Parametrisation of a curve; tangent vector, arc length. Parametrisation of a surface; normal to a surface.
Background reading and exercises: Jordan & Smith chapters 9-11; Kreyszig sections 8.1-8.6.

Systems of linear equations and their interpretation via matrices. Elementary row operations, Gauss and Gauss-Jordan elimination, linear dependence and independence. Matrix multiplication, transpose, determinant, trace, inverse. Identities such as $(AB)^T = B^T A^T$. Definition and concepts of eigenvalues and eigenvectors. Finding them by hand for up to 3x3 matrices. Rotation of coordinates, orthogonal matrices. Diagonalisation. Possibility of non-diagonalisable matrices. Properties of real symmetric matrices.
Jordan & Smith chapters 7, 8, 12, 13; Kreyszig chapters 6, 7.

Real analysis
Concepts and practice of differentiation and integration. Taylor's theorem. Limits and L'Hôpital's rule. Solution of simple ODEs: first-order separable, integrating factors, linear constant-coefficient ODEs, complementary function and particular integral. Sturm-Liouville problem for second-order linear ODEs. Elements of phase plane analysis: critical points and their classification. Standard sequences and series. Fourier series and eigenfunction expansions.
Jordan & Smith chapters 1-5, 14-19, 23, 26; Kreyszig sections 1.1-1.6, 2.1-2.3, 2.7-2.10, chapters 3, 4, sections 10.1-10.4, A3.3.

Calculus of several variables
Concept and practice of partial differentiation. Change of coordinates, chain rule. div, grad and curl. Simple manipulation rules: $\nabla \cdot (\phi \mathbf{u}) = \nabla \phi \cdot \mathbf{u} + \phi \nabla \cdot \mathbf{u}$, and so forth. Line, surface and volume integrals. Change of variables, Jacobian. Classification of stationary points: local minima, maxima and saddle points. Lagrange multipliers. Divergence theorem and Stokes' theorem.
Jordan & Smith chapters 28-34; Kreyszig sections A3.2, 8.8-8.11, chapter 9.

Basic treatment of Laplace, heat and wave equations. Their solution via separation of variables. D'Alembert's solution of the wave equation. Use of Fourier series, Fourier transform and Laplace transform to solve linear constant-coefficient ODEs and PDEs.
Kreyszig chapter 11.

Basic manipulation of complex numbers and complex variables. Properties of complex functions: $z^n$, $e^z$, $\log z$, $z^{\alpha}$. Analytic functions and power series. Convergence, divergence and Cauchy sequences. Cauchy-Riemann equations. Analysis and classification of isolated singularities and branch points. Contour integration and residue calculus. Conformal mapping.
Jordan & Smith chapter 6; Kreyszig chapters 12-15; Priestley.

D. W. Jordan and P. Smith, Mathematical Techniques, 3rd Edition (2002). Oxford University Press.
E. Kreyszig, Advanced Engineering Mathematics, 8th Edition (1999). Wiley.
H. A. Priestley, Introduction to Complex Analysis, revised edition (1990). Oxford University Press.
Harmonic Mean

What Is a Harmonic Mean?

The harmonic mean is a type of numerical average. It is calculated by dividing the number of observations by the sum of the reciprocals of each number in the series. Thus, the harmonic mean is the reciprocal of the arithmetic mean of the reciprocals. The harmonic mean of 1, 4, and 4 is:

3 / (1/1 + 1/4 + 1/4) = 3 / 1.5 = 2

[Important: the reciprocal of a number n is simply 1 / n.]

The Basics of a Harmonic Mean

The harmonic mean helps to find multiplicative or divisor relationships between fractions without worrying about common denominators. Harmonic means are often used in averaging things like rates (e.g., the average travel speed given a duration of several trips).

The weighted harmonic mean is used in finance to average multiples like the price-earnings ratio because it gives equal weight to each data point. Using a weighted arithmetic mean to average these ratios would give greater weight to high data points than low data points because price-earnings ratios aren't price-normalized while the earnings are equalized. The harmonic mean is the special case of the weighted harmonic mean in which all the weights are equal to 1. The weighted harmonic mean of x1, x2, x3 with the corresponding weights w1, w2, w3 is given as:

(w1 + w2 + w3) / (w1/x1 + w2/x2 + w3/x3)

The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals. Harmonic means are used in finance to average data like price multiples. Harmonic means can also be used by market technicians to identify patterns such as Fibonacci sequences.

Harmonic Mean Versus Arithmetic Mean and Geometric Mean

Other ways to calculate averages include the simple arithmetic mean and the geometric mean. An arithmetic average is the sum of a series of numbers divided by the count of that series of numbers. If you were asked to find the class (arithmetic) average of test scores, you would simply add up all the test scores of the students, and then divide that sum by the number of students. For example, if five students took an exam and their scores were 60%, 70%, 80%, 90%, and 100%, the arithmetic class average would be 80%.

The geometric mean is the average of a set of products, the calculation of which is commonly used to determine the performance results of an investment or portfolio. It is technically defined as "the nth root product of n numbers." The geometric mean must be used when working with percentages, which are derived from values, while the standard arithmetic mean works with the values themselves. The harmonic mean is best used for fractions such as rates or multiples.

Example of the Harmonic Mean

As an example, take two firms. One has a market capitalization of $100 billion and earnings of $4 billion (P/E of 25) and one with a market capitalization of $1 billion and earnings of $4 million (P/E of 250).
In an index made of the two stocks, with 10% invested in the first and 90% invested in the second, the P/E ratio of the index is:

Using the WAM: P/E = 0.1 × 25 + 0.9 × 250 = 227.5
Using the WHM: P/E = (0.1 + 0.9) / (0.1/25 + 0.9/250) ≈ 131.6

where WAM = weighted arithmetic mean, WHM = weighted harmonic mean, and P/E = price-to-earnings ratio.

As can be seen, the weighted arithmetic mean significantly overestimates the mean price-earnings ratio.
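For readers who want to reproduce these numbers, the short C program below is an illustrative sketch, not part of the original article. It computes the plain harmonic mean of 1, 4 and 4, and the weighted arithmetic and weighted harmonic means of the two P/E ratios with weights 0.1 and 0.9.

#include <stdio.h>

/* Harmonic mean: n divided by the sum of the reciprocals. */
static double harmonic_mean(const double *x, int n)
{
    double recip_sum = 0.0;
    for (int i = 0; i < n; i++)
        recip_sum += 1.0 / x[i];
    return n / recip_sum;
}

/* Weighted harmonic mean: sum of weights divided by sum of weight/value. */
static double weighted_harmonic_mean(const double *x, const double *w, int n)
{
    double wsum = 0.0, wrecip = 0.0;
    for (int i = 0; i < n; i++) {
        wsum   += w[i];
        wrecip += w[i] / x[i];
    }
    return wsum / wrecip;
}

int main(void)
{
    double a[] = { 1.0, 4.0, 4.0 };
    printf("harmonic mean of 1, 4, 4 = %.2f\n", harmonic_mean(a, 3));   /* 2.00 */

    double pe[] = { 25.0, 250.0 }, w[] = { 0.1, 0.9 };
    double wam = w[0] * pe[0] + w[1] * pe[1];
    printf("index P/E, weighted arithmetic mean = %.1f\n", wam);        /* 227.5 */
    printf("index P/E, weighted harmonic mean   = %.1f\n",
           weighted_harmonic_mean(pe, w, 2));                           /* about 131.6 */
    return 0;
}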
MIT-Harvard-MSR Combinatorics Seminar Schedule 2002 Spring Organizer: Sara C. Billey 2-338 - Wednesdays and Fridays 4:15pm Refreshments at 3:45pm David Galvin, Rutgers University Phase transition in the hard-core model on Z^d We show that for large dimension d, the hard-core model on the integer lattice Z^d with all activities equal to lambda exhibits a phase transition when lambda > C (log^3(d)/d)^(1/4) We define the distance between any two points in the lattice to be the L^1 distance and say a subset I of the lattice is independent if no two points of I are adjacent (at distance 1). Let B = [-N,N]^d be a large box in the lattice. For lambda >0, we consider the probability distribution on the independent subsets of B where each such I has probability proportional to lambda^|I|. Let p_e (resp. p_o) be the probability that I contains the origin given that all points on the boundary of B lying at an even (resp. odd) distance from the origin are in I. Jim Propp, "University of Wisconsin Reciprocity theorems for enumeration of perfect matchings There are many cases in which a naturally-defined sequence of graphs $G_1,G_2,\dots$ has the property that if one defines $f(n)$ as the number of perfect matchings of $G_n$ and extends the function $f$ in a natural way so that its domain contains all the integers, the resulting extension satisfies a reciprocity relation of the form $f(-n)=\pm f(n-c)$ for all $n$ (for some fixed integer $c$ and for some specific sign-rule). I will survey known results of this type, some of which can be explained by Ehrhart reciprocity but most of which cannot. For example, let $G_n$ denote the $m$-by-$n$ grid, with $m$ fixed. The associated numbers $f(n)$ satisfy a linear recurrence relation, and so may be extrapolated to negative values of $n$; these extrapolated values satisfy the relation $f(-n)=\epsilon_{m,n}f(n-2)$, where $\epsilon_{m,n}=-1$ if $m \equiv 2$ (mod 4) and $n$ is odd, and $\epsilon_{m,n}=+1$ otherwise. This result was proved algebraically by Stanley in 1985, but a new, combinatorial proof yields a generalization to the case where $G_n$ is the box-product of any finite graph $G$ with a path of $n$ vertices. Andrei Zelevinsky, Northeastern University Y-systems and generalized associahedra I This is the first of two talks based on a joint paper with Sergey Fomin. I will concentrate on Y-systems, a particular class of birational recurrence relations associated with an arbitrary finite root system. These relations were introduced in 1991 by Al.B.Zamolodchikov, in connection with the theory of thermodynamic Bethe ansatz. We prove Zamolodchikov's conjecture that this system exhibits periodicity with period h+2, where h is the Coxeter number of the root system. Our proof is based on the study of a piecewise-linear version of the Y-system obtained from it by the "tropicalization" procedure. This version allows us to introduce a new family of simple polytopes (generalized associahedra) associated with root systems. These polytopes will be discussed by Sergey Fomin in the second talk of this series. Maps and the Jack symmetric function A map is a 2-cell embedding of a graph in a surface. The determination of the counting series for maps is a question that arises in combinatorics, in geometry, and in physics as a partition function. Tutte, for example. studied the enumeration of maps in the sphere as part of his attack on the Four Colour Problem. 
In this talk I will discuss an algebraic approach to determining the counting series for maps in orientable surfaces and in all surfaces (including non-orientable surfaces) in terms of Jack symmetric functions. The indeterminate b in the Jack parameter a=1+b is conjectured to mark an invariant of rooted maps. The counting series for maps with one vertex is central to the determination of the Euler characteristic of the moduli spaces of complex (b=0; Schur functions) and real algebraic curves (b=1; zonal polynomials). The complex case was treated by Harer and Zagier. If time permits I shall refer briefly to the conjectured combinatorial role of b in this context. Sergey Fomin, University of Michigan Y-systems and generalized associahedra II This is the second of two talks based on a joint paper with Andrei Zelevinsky, which was motivated by the theory of cluster algebras , on one hand, and by Zamolodchikov's periodicity conjecture for Y-systems, on another. The presentation will be self-contained, and will focus on the second part of the title ("generalized associahedra"). We introduce and study a family of simplicial complexes which can be viewed as a generalization of the Stasheff polytope (a.k.a. associahedron) for an arbitrary root system. In types A and B, our construction recovers, respectively, the ordinary associahedron and the Bott-Taubes polytope, or cyclohedron. In a follow-up joint project with Frederic Chapoton, we present explicit polytopal realizations of these generalized associahedra. On the enumerative side, these constructions provide natural root system analogues to noncrossing/nonnesting partitions. Mihai Ciucu, Georgia Tech Perfect matchings and perfect powers In the last decade several results have been found concerning lattice regions in the plane whose number of tilings by certain tiles is a perfect power or a near-perfect power. We review some of them and present new simple, unified proofs. We also discuss and generalize a conjecture due to Matt Blum on enumerating the tilings of a certain family of regions on the lattice obtained from the triangular lattice by drawing in all altitudes. Panel discussion "The Future of Combinatorics" by members of the department What is Combinatorics in 2002 and where are we going? The subject of combinatorics has been transformed immensely in the last fifty years. It has gone from being an recreational hobby to one of the major areas of mathematics today. In fact, Combinatorics is applied to almost all areas of mathematics: algebra, algebraic geometry, probability, topology, symplectic geometry, operations research, mathematical biology, as well as computational complexity, design and analysis of algorithm, and statistical physics. In this discussion, we hope to clarify the role of Combinatorics in the present and future. The panel will consist of five people: Sara Billey (MIT), David Jackson (University of Waterloo/MIT), Igor Pak (MIT), Jim Propp (University of Wisconsin/Harvard) and Santosh Vempala (MIT). Dan Kleitman will moderate the discussion and Richard Stanley will act as the supreme arbitrator. Much of the time will be allotted to questions and opinions from the audience. Everyone is encouraged to participate in this lively debate. Robin Forman, Rice University A topological aproach to the game of "20 questions" In the usual game of 20 questions, one player tries to determine a hidden object by asking a series of "yes or no" questions. A number of real life search problems have this general form. 
However, in applications one is usually limited to a predetermined set of questions, and one is not required to determine the hidden object precisely, but rather only to determine the object up to some sort of equivalence. For example: Suppose one has a communications network, and a storm (or a terrorist attack, or...) knocks out some of the arcs in the network. For each arc of the network, one may test "Is this arc still working?" (This is the predetermined list of allowable questions.) The goal may not be to determine the surviving network precisely, but instead to determine the answers to a small list of questions of the form "Is the network still connected?" "Is there still a direct connection between Point A and the Point B?", etc. In the problems we examine, we will assume that one can complete the task if one is permitted to ask all of the allowable questions. The question we will aexamine is - Is it possible to do better? That is, can one complete the task without asking all of the allowable questions? Our approach is to restate the problem in a more topological form. We will then define a new homology theory that captures the difficulty of solving this problem. The link between the homology theory and the original search problem is provided by a slight generalization of "Discrete Morse Theory." Kohji Yanagawa, Osaka University, Japan Local cohomology modules of Stanley-Reisner rings and Alexander duality Stanley-Reisner rings and affine semigroup rings are central concepts of combinatorial commutative algebra. For these studies, explicit descriptions of the local cohomology modules with supports in maximal monomial ideals are fundamental. In the Stanley-Reisner ring case, Hochster's formula gives a topological description. But recently, for affine semigroup rings, several authors investigated the case where the support ideal is not maximal. In this case, the local cohomology modules are neither noetherian nor artinian, but still have nice graded structure. This talk will concern the local cohomologies of a Stanley-Reisner ring with supports in a general monomial ideal. A monomial ideal of a Stanley-Reisner ring associated to a subcomplex of an (abstract) simplicial complex. I will give a combinatorial topological formula for the multigraded Hilbert series, and in the case where the ambient complex is Gorenstein, compare this with a second formula that generalizes results of Mustata and Terai. The agreement between these two formulae is seen to be a disguised form of Alexander duality. This is a joint work with V. Reiner and V. Welker. Hoang Ngoc Minh, University of Lille 2 - CNRS (France) The algorithmic and combinatorial aspects of functional equations on polylogarithms The algebra of polylogarithms is the smallest C-algebra that contains the constants and that is stable under integration with respect to the differential forms dz/z and dz/(1-z). It is known that this algebra is isomorphic to the algebra of the noncommutative polynomials equipped with the shuffle product. As a consequence, the polylogarithms Li_n(g(z)), where the g(z) belong to the group of biratios, are polynomial on the polylogarithms indexed by Lyndon words with coefficients in a certain transcendental extension of Q : MZV, the algebra of the Euler-Zagier sums. The question of knowing whether the polylogarithms Li_n(g(z)) satisfy a linear functional equation is then effectively decidable up to a construction of a basis for the algebra MZV. 
Jeb Willenbring, Yale University Representation theory of the orthogonal group from a combinatorial point of view In the first part of this talk I will address some problems in classical invariant theory from a combinatorial point of view. Consequences of Schur-Weyl duality, the theory of symmetric pairs and Roger Howe's theory of dual reductive pairs will serve as tools to connect the combinatorial ideas with invariant theory. During the second part of the talk, research done in collaboration with Thomas Enright will be described. This work is based on results of Thomas Enright, Roger Howe and Nolan Wallach concerning unitary highest weight representations. Specifically, we will see how these results provide a modern context for the Littlewood restriction formula (which is a branching rule for decomposing finite dimensional representations of GL(n) into irreducible representations of an orthogonal or symplectic subgroup). This context provides a stronger formulation of Littlewood's result. The results from the second part of the talk shed light on the problems addressed in the first part. Van H. Vu, University of California, San Diego Economical Waring bases A few hundred years ago, Waring asserted (without proof) that every natural number can be represented as the sum of a few powers. More precisely, for every natural number k, there is a natural number s such that every natural number n can be written as a sum of s kth powers (of non-negative integers). For instance, every natural number is a sum of 4 squares, 9 cubes and so on. Waring's assertion has turned into one of the main research problems in number theory. It was first proved by Hilbert, and a little bit later by Hardy and Littlewood, using the circle method. Works on Waring's problem still continue even today. About 20 years ago, Nathanson posed the question of whether one can represent all natural numbers using only "few" powers (by few we mean a sparse subset of the set of all kth powers). Many results have been obtained by Erdos, Nathanson, Zollner and Wirsing for the case k=2. In this talk, we will solve Nathanson's question for general k. The proof is a combination of number theory, probability and combinatorics, and some of the tools are of independent interest. Robust combinatorial optimization We propose an approach to address data uncertainty for discrete optimization problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows one to control the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0-1 discrete optimization problem on n variables, then we solve the robust counterpart by solving n+1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0-1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. Moreover, we show that the robust counterpart of an NP-hard $\alpha$-approximable 0-1 discrete optimization problem remains $\alpha$-approximable.
(joint work with Melvyn Sim) Gilles Schaeffer, Loria -- CNRS, France Planar triangulations and a Brownian snake A planar map is a proper embedding of a graph in the plane. W.T. Tutte gave in the 60's beautiful formulas for the number of (combinatorially distinct) planar maps with n edges but also for subclasses like triangulations, quadrangulations, etc. Some of these formulas were also found by physicists in the 70's. David Jackson recently discussed some algebraic aspects of this theory in this seminar (Feb. 20). We shall instead consider bijective approaches to enumerative and probabilistic questions. First, I will present a bijective proof of Tutte's formulas, building on the cycle lemma used by Raney in his bijective proof of the Lagrange inversion formula. Second, following physicists, we shall put the uniform distribution on quadrangulations and view them as random surfaces. Bijections again allow to study the geometry of these random surfaces and lead to a surprising connection with a probabilistic process introduced by David Aldous (the Brownian snake constructed on a standard Brownian excursion). As a consequence the diameter (in graph theoretic sense) of a random quadrangulation with n vertices grows like n^{1/4}. Egon Schulte, Northeastern University Locally unitary groups and regular polytopes Complex groups generated by involutory reflections arise naturally in the modern theory of abstract regular polytopes. These groups preserve a hermitian form on complex n-space and are generated by n involutory hyperplane reflections. We are particularly interested in the case that the subgroups generated by all but a few generating reflections are finite (unitary) groups. For example, all the subgroups generated by n-1 reflections may be finite. We explain how the enumeration of certain finite universal regular polytopes can be accomplished through the enumeration of certain types of finite complex reflection groups, and describe all the finite groups and their diagrams which arise in this context. This is joint work with Peter McMullen. Ernesto Vallejo, Instituto de Matematicas, Morelia, Mexico 3-dimensional matrices and Kronecker products A formula, due to Snapper, gives the number of 3-dimensional (0,1)-matrices with fixed plane sums as an inner product of certain characters of the symmetric group. Using this formula we give a criterion, in the spirit of the Gale-Ryser theorem, for deciding when such number is one. We also establish a conexion with the problem of determining the minimal components, in the dominance order, of the Kronecker product of two irreducible characters of the symmetric group. In looking for ways of computing the multiplicity of the minimal components we are led to a one-to-one correspondence between 3-dimensional (0,1)-matrices and certain triples, which uses and generalizes the dual RSK-correspondence. One application of it is a combinatorial description of those multiplicities in terms of matrices. Tom Bohman, Carnegie Mellon Linear Versus Hereditary Discrepancy The concept of hypergraph discrepancy provides a unified combinatorial framework for handling classical discrepancy problems from geometry and number theory. Since the discrepancy of a hypergraph can be small for somewhat arbitrary reasons, variants of hypergraph discrepancy have been introduced in the hopes of getting a better measure of the `intrinsic balance' of a hypergraph. Two such variants are linear discrepancy and hereditary discrepancy. 
Lov\'asz, Spencer and Vesztergombi proved that the linear discrepancy of a hypergraph \({\mathcal H}\) is bounded above by the hereditary discrepancy of \({\mathcal H}\), and conjectured a sharper bound that involves the number of vertices in \({\mathcal H}\). In this talk we give a short proof of this conjecture for hypergraphs of hereditary discrepancy 1. For hypergraphs of higher hereditary discrepancy we give some partial results and propose a sharpening of the conjecture. Akos Seress, The Ohio State University Finite Groups and Probabilistic Combinatorics A number of combinatorial problems with the symmetric group can be resolved using probabilistic methods. In this talk we discuss some of these of problems, including the following two: 1) Given a permutation s \in S_n, and a random conjugate t of it, what is the probability that s and t commute? 2) Given a word w(X,Y), what is the probability that two random elements x,y \in S_n satisfy w(x,y)=1? Applications include recognition algorithms for finite groups, and a connection to Magnus's conjecture about a residual property of free groups. The talk assumes no group theoretic background and should be accessible to a general audience. Peter Winkler, Bell Labs Building Uniformly Random Objects Finite combinatorial objects often come with notions of size and containment. It is natural to ask whether they can be constructed step by step in such a way that each object contains the last and is uniformly random among objects of its size. For example: starting with a triangle drawn on the plane, can you add line segments two at a time so that at every stage you have a uniformly random triangulation of a polygon? We will describe such processes for this and some more general structures related to Catalan numbers. (Joint work with Malwina Luczak, Cambridge.) Cris Moore, University of New Mexico and the Santa Fe Institute Almost all graphs of degree 4 are 3-colorable A graph coloring is called proper if no two nodes of the same color are connected by an edge. Deciding whether a graph has a proper coloring is a classical problem, of interest in Combinatorics and Computer Science. In this talk we study the probability that a random graph on n vertices with average degree d has a proper 3-coloring. This probability decreases as d grows, and it is conjectured that it jumps from 1 to 0 as n tends to infinity, when d passes a threshold t of about 4.7. We survey known results on the subject and prove that t > 4.03. We also show that almost all 4-regular graphs are 3-colorable. The proof uses analysis of differential equations which arise in the study of a greedy heuristic algorithm for graph coloring. This is joint work with Dimitris Achlioptas, and will appear at the Symposium on the Theory of Computing (STOC) 2002. Darla Kremer, Gettysburg College A q,t-Schroder polynomial In 1994 Garsia and Haiman conjectured that the Hilbert series for the space of diagonal harmonic alternates can be described by a certain rational function $C_n(q,t)$. They showed that $ C_n(1,1) =c_n$, the $n$th Catalan number; that $C_n(1,q)$ satisfies the recurrence of the Carlitz-Riordan $q$-Catalan polynomial $C_n(q)$; and that $ q^{\binom{n}{2}}C_n(q,1/q)$ evaluates to the $q$-Catalan polynomial. Haiman used techniques from algebraic geometry to show that $C_n(q,t)$ is a polynomial. In 2000, Garsia and Haglund gave a combinatorial proof that $C_n(q,t)$ has nonnegative integer coefficients. The 1994 conjecture was confirmed by Haiman in 2001. 
Garsia and Haglund's proof of nonnegativity gave a combinatorial interpretation of $C_n(q,t)$ as the generating series for two statistics $area$ and $dmaj$ defined on the set of Catalan paths of length $2n$. (A Catalan path is a lattice path from $(0,0)$ to $(n,n)$ which remains weakly above the line $y=x$, and which takes only horizontal and vertical steps.) After giving some background on the $q,t$-Catalan polynomial, I will extend the statistics $area$ and $dmaj$ to Schr\"oder paths: lattice paths from $(0,0)$ to $(n,n)$ which remain weakly above the line $y=x$, and which take horizontal, vertical, and diagonal steps. The generating series for $area$ and $dmaj$ over Schr\"oder paths having $d$ diagonal steps $S_{n,d}(q,t)$ defines a polynomial which is believed to be symmetric in $q$ and $t$. That is, $S_{n,d}(q,t)=S_{n,d}(t,q)$. I will discuss properties of $S_{n,d}(q,t)$ which support this conjecture. Michael Kleber, Brandeis University Quantum group representations: a combinatorist's-eye view A recent result in the representation theory of quantum groups describes a familty of finite-dimensional representations which, it turns out, are completely determined by combinatorial properties. The story of the conjecture and proof involves symmetric functions in a way appealing to both combinatorists and representation theorists. Joint work with Ian Grojnowski. Cristian Lenart, SUNY Albany Multiplication formulas in the K-theory of complex flag varieties The main object of the talk is to present some explicit formulas for expanding the product of certain Grothendieck polynomials in the basis of such polynomials. Grothendieck polynomials are representatives for Schubert classes in the K-theory of the variety of complete flags in the complex $n$-dimensional space; thus, they generalize Schubert polynomials, which are representatives for Schubert classes in cohomology. Our formulas are concerned with the multiplication of an arbitrary Grothendieck polynomial by one indexed by a simple transposition, and, more generally, by a cycle of the form $(i,i+1,...,i+p)$; in other words, we generalize Monk's and Pieri's formulas for Schubert polynomials. Our formulas are in terms of chains in a suborder of the Bruhat order on the symmetric group (known as k-Bruhat order) with certain labels on its covers. We deduce as corollaries A. Lascoux's transition formula for Grothendieck polynomials, and give a new formula for the product of a dominant line bundle and a Schubert class in K-theory; a previous formula of this type was given by W. Fulton and A. Lascoux, and later generalized by H. Pittie and A. Ram. Part of this work is joint with F. Sottile. Norman Wildberger, University of New South Wales, Sydney Quarks, diamonds, and unexpected symmetries The representations of sl(3), crucial to the description of elementary particles (quarks, colour and all that), can be displayed completely explicity, over the integers, by studying remarkable polytopes in three dimensional space, called diamonds, and associated directed graphs. No knowledge of Lie theory required. 
Tatiana Gateva-Ivanova A Combinatorial Approach to the set theoretic solutions of the Yang-Baxter equation A bijective map $r: X^2 \longrightarrow X^2$, where $X = \{x_1, \cdots , x_n \}$ is a finite set, is called a \emph{set-theoretic solution of the Yang-Baxter equation} (YBE) if the braid relation $r_{12}r_{23}r_{12} = r_{23}r_{12}r_{23}$ holds in $X^3.$ Each such a solution (we denote it by $(X;r)$) determines a semigroup $S(r)$ and a group $G(r)$ with a set of generators $X$ and a set of quadratic relations $R(r)$ determined by $r$. The problem of studying the set-theoretic solutions of YBE was posed by Drinfeld in 1990. In this talk I shall discuss the relation between the set-theoretic solutions of YBE, and a special class of standard finitely presented semigroups $S_0$, called \emph{semigroups of skew polynomial type} which I introduced in 1990. In a joint work with Michel Van den Bergh we prove that the set of relations $R_0$ of each semigroup $S_0$ of skew polynomial type determines a (\emph{non-degenerate involutive square-free}) solution of YBE. The corresponding semigroup rings also present a new class of Noetherian Artin-Schelter regular domains. In connection with this, in 1996, I made the conjecture that for each such solution $(X;r)$ the set $X$ can be ordered so, that the relations determined by $r$ form a Groebner basis, and the semigroup $S(r)$ is of skew polynomial type. Using combinatorial methods, recently I verified the conjecture for $n \leq 31$. I shall also discuss the relation between my conjecture and a conjecture of Etingof and Schedler, and various cases in which I verify their conjecture for large $n$. All announcements since Fall 2007 are in the Google Calendar 2005 IAP Seminar Home MIT Mathematics Accessibility
Slugging Percentage

Hitting statistic in baseball

[Image] Babe Ruth holds the MLB career slugging percentage record (.690).[1]

In baseball statistics, slugging percentage (SLG) is a measure of the batting productivity of a hitter. It is calculated as total bases divided by at bats, through the following formula, where AB is the number of at bats for a given player, and 1B, 2B, 3B, and HR are the number of singles, doubles, triples, and home runs, respectively:

SLG = ((1B) + (2 × 2B) + (3 × 3B) + (4 × HR)) / AB

Unlike batting average, slugging percentage gives more weight to extra-base hits such as doubles and home runs, relative to singles. Plate appearances resulting in walks, hit-by-pitches, catcher's interference, and sacrifice bunts or flies are specifically excluded from this calculation, as such an appearance is not counted as an at bat (these are not factored into batting average either).

The name is a misnomer, as the statistic is not a percentage but an average of how many bases a player achieves per at bat. It is a scale of measure whose computed value is a number from 0 to 4. This might not be readily apparent given that a Major League Baseball player's slugging percentage is almost always less than 1 (as a majority of at bats result in either 0 or 1 base). The statistic gives a double twice the value of a single, a triple three times the value, and a home run four times.[2] The slugging percentage would have to be divided by 4 to actually be a percentage (of bases achieved per at bat out of total bases possible). As a result, it is occasionally called slugging average, or simply slugging, instead.[3]

A slugging percentage is always expressed as a decimal to three decimal places, and is generally spoken as if multiplied by 1000. For example, a slugging percentage of .589 would be spoken as "five eighty nine," and one of 1.127 would be spoken as "eleven twenty seven."

Facts about slugging percentage

A slugging percentage is not used only to measure the productivity of a hitter; it can also be applied as an evaluative tool for pitchers. This is less common, and in that case it is referred to as slugging percentage against.[4]

In 2019, the mean SLG among all teams in Major League Baseball was .435.[5]

The maximum slugging percentage has a numerical value of 4.000. However, no player in the history of the MLB has ever retired with a 4.000 slugging percentage. Five players tripled in their only at bat and therefore share the Major League record, when calculated without respect to games played or plate appearances, of a career slugging percentage of 3.000. This list includes Eric Cammack (2000 Mets); Scott Munninghoff (1980 Phillies); Eduardo Rodríguez (1973 Brewers); and Charlie Lindstrom (1958 White Sox).[6]

For example, in 1920, Babe Ruth played his first season for the New York Yankees. In 458 at bats, Ruth had 172 hits, comprising 73 singles, 36 doubles, 9 triples, and 54 home runs, which brings the total base count to (73 × 1) + (36 × 2) + (9 × 3) + (54 × 4) = 388. His total number of bases (388) divided by his total at bats (458) is .847, which constitutes his slugging percentage for the season.
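The calculation above is easy to script. The short C program below is an illustrative sketch, not from the original article: it implements the slugging percentage formula and checks it against Ruth's 1920 season.

#include <stdio.h>

/* Slugging percentage: total bases divided by at bats. */
static double slugging(int singles, int doubles, int triples, int hr, int at_bats)
{
    int total_bases = singles + 2 * doubles + 3 * triples + 4 * hr;
    return (double) total_bases / at_bats;
}

int main(void)
{
    /* Ruth, 1920: 73 singles, 36 doubles, 9 triples, 54 home runs in 458 at bats. */
    printf("Ruth 1920 SLG = %.3f\n", slugging(73, 36, 9, 54, 458));   /* prints 0.847 */
    return 0;
}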
This also set a record for Ruth which stood until 2001, when Barry Bonds achieved 411 bases in 476 at bats, bringing his slugging percentage to .863, which has been unmatched since.[7]

Long after it was first invented, slugging percentage gained new significance when baseball analysts realized that it combined with on-base percentage (OBP) to form a very good measure of a player's overall offensive production (in fact, OBP + SLG was originally referred to as "production" by baseball writer and statistician Bill James). A predecessor metric was developed by Branch Rickey in 1954. Rickey, in Life magazine, suggested that combining OBP with what he called "extra base power" (EBP) would give a better indicator of player performance than typical Triple Crown stats. EBP was a predecessor to slugging percentage.[8]

Allen Barra and George Ignatin were early adopters in combining the two modern-day statistics, multiplying them together to form what is now known as "SLOB" (Slugging × On-Base).[9] Bill James applied this principle to his runs created formula several years later (and perhaps independently), essentially multiplying SLOB × at bats to create the formula:

RC = ((hits + walks) × (total bases)) / (at bats + walks)

In 1984, Pete Palmer and John Thorn developed perhaps the most widespread means of combining slugging and on-base percentage: on-base plus slugging (OPS), which is a simple addition of the two values. Because it is easy to calculate, OPS has been used with increased frequency in recent years as a shorthand form to evaluate contributions as a batter.

In a 2015 article, Bryan Grosnick made the point that "on base" and "slugging" may not be comparable enough to be simply added together. "On base" has a theoretical maximum of 1.000 whereas "slugging" has a theoretical maximum of 4.000. The actual numbers don't show as big a difference, with Grosnick listing .350 as a good "on base" and .430 as a good "slugging." He goes on to say that OPS has the advantages of simplicity and availability and further states, "you'll probably get it 75% right, at least."[10]

Perfect slugging percentage

The maximum numerically possible slugging percentage is 4.000.[2] A number of MLB players (117 through the end of the 2016 season) have momentarily had a 4.000 career slugging percentage by homering in their first major league at bat.

See also: List of Major League Baseball career slugging percentage leaders

References

1. "Career Leaders & Records for Slugging %". Baseball Reference.
2. Wirkmaa, Andres. Baseball Scorekeeping: A Practical Guide to the Rules. Jefferson, North Carolina / London: McFarland & Company, Inc., Publishers, 2003.
3. "Slugging Average All Time Leaders on Baseball Almanac".
4. "What is a Slugging Percentage".
5. "Major League Baseball Batting Year-by-Year Averages".
6. "Slugging Percentage | The ARMory Power Pitching Academy". armorypitching.com.
7. "Single-Season Leaders & Records for Slugging %". Baseball Reference.
8. Lewis, Dan (2001-03-31). "Lies, Damn Lies, and RBIs". nationalreview.com. Archived from the original on 2012-10-20.
9. Barra, Allen (2001-06-20). "The best season ever?". Salon.com.
10. Grosnick, Bryan (2015-09-18). "Separate but not quite equal: Why OPS is a 'bad' statistic". Beyond the Box Score.
Introduction to Software Testing

It is a note from the subject SWEN90006 @ University of Melbourne. Only for self-study. SWEN90006 © Copyright 2021.

Learning outcomes of this chapter

At the end of this chapter, you should be able to:

- Discuss the purpose of software testing.
- Present an argument for why you think software testing is useful or not.
- Discuss how software testing achieves its goals.
- Define faults and failures.
- Specify the input domain of a program.

Chapter Introduction

Recall from Software Processes and Management that the word assurance means having confidence that a program or document or some other artifact is fit for purpose. Later in these notes we will try to translate the informal and vague notion of confidence into a measure of probability or strict bounds.

Testing is an integral part of the process of assuring[1] that a system, program or program module is suitable for the purpose for which it was built. In most textbooks on software engineering, testing is described as part of the process of validation and verification. The definitions of validation and verification as described in IEEE Std 610.12-1990 are briefly given below:

- Validation is the process of evaluating a system or component during or at the end of the development process to determine if it satisfies the requirements of the system. In other words, validation aims to answer the question: Are we building the correct system?
- Verification is the process of evaluating a system or component at the end of a phase to determine if it satisfies the conditions imposed at the start of that phase. In other words, verification aims to answer the question: Are we building the system correctly?

In this subject, testing will be used for much more than just validating and verifying that a program is fit for purpose. We will use testing, and especially random testing methods, to measure the attributes of programs. Note that not all attributes of a program can be quantified. Some attributes, like reliability, performance measures, and availability, are straightforward to measure. Others, such as usability or safety, must be estimated using the engineer's judgement and data gathered from other sources. For example, we may estimate the safety of a computer control system for automated braking from the reliability of the braking computers and their software.

Before looking at testing techniques we will need to understand something of the semantics of programs as well as the processes by which software systems are developed. It is necessary to understand the semantics of programs so that, as a tester, we can be more confident that we have explored the program thoroughly. We discuss programs briefly in Section .

(programs)=
Programs

To be a good tester, it is important to have a sound conceptual understanding of the semantics of programs. When it comes to software there are a number of possibilities, each of which has its own set of complications. Firstly, let's consider the simple function given in Figure 1.1, written in an imperative programming language like C.

(f_1_1)=

    void squeeze(char s[], int c)
    {
        int i, j;

        for (i = j = 0; s[i] != '\0'; i++) {
            if (s[i] != c) {
                s[j++] = s[i];
            }
        }
        s[j] = '\0';
    }

Figure 1.1: The squeeze function from Kernighan and Ritchie implemented in C. The function squeeze removes any occurrence of the character denoted by the variable c from the array s, and squeezes the resulting array together.
The semantics of C are such that integers and characters are somewhat interchangeable, in that an element of the type int can be treated as a char, and vice-versa. If the squeeze function were written in Haskell then its type would be

$squeeze ~:~ string \times char \rightarrow string$

The input type is $string \times char$ and the output type is string. The output type is implicit because the parameter to squeeze is a call-by-reference parameter. The set of values in the input type is called the input domain and the set of values in the output type is called the output domain. For functions like the squeeze function, the input domain is the set of values in the input type, and the output domain is the set of values in the output type.

```{admonition} Footnotes
:class: tip
\[2\]: The Fibonacci numbers are the sequence $1~1~2~3~5~\ldots~f_{n}~\ldots$ where $\sf f_n = f_{n-1}+f_{n-2}$.
```

The fibonacci function, shown in Figure 1.2, takes an integer N and returns the sum of the first N Fibonacci numbers [2].

    unsigned int fibonacci(unsigned int N)
    {
        unsigned int a = 0, b = 1;
        unsigned int sum = 0;

        while (N > 0) {
            unsigned int t = b;
            b = b + a;
            a = t;
            N = N - 1;
            sum = sum + a;
        }
        return sum;
    }

Figure 1.2: A function to sum the first N Fibonacci numbers implemented in C.

Again, if the fibonacci function were written in Haskell its type would be

$\sf fibonacci ~:~ unsigned\ int \rightarrow unsigned\ int$

and so the input domain for the fibonacci function is the set of values of type unsigned int. Likewise, the output domain for the fibonacci function is also the set of values of type unsigned int.

The input domain to a program is the set of values that are accepted by the program as inputs. For example, if we look at the parameters for the squeeze function they define the set of all pairs $\sf (s,~c)$ where s has type array of char and c has type int. Not all elements of an input domain are relevant to the specification. For example, consider a program that divides an integer by another. If the denominator is 0, then the behaviour is undefined, because a number cannot be divided by 0.

Note: The input domain can vary on different machines. For example, the input domain to the fibonacci function is the set of values of type unsigned int, but this can certainly vary on different machines. Table 1.1 shows how the set of values for parameters of type unsigned int and int respectively differ for different machine word sizes.

(t_1_1)=

| Word Size | unsigned int Set of Values | int Set of Values |
|---|---|---|
| 8 Bit Integers | 0 .. 255 | -128 .. 127 |
| 16 Bit Integers | 0 .. 65,535 | -32,768 .. 32,767 |
| 32 Bit Integers | 0 .. 4,294,967,295 | -2,147,483,648 .. 2,147,483,647 |

Table 1.1: The set of values for different word sizes.

The output domain for the fibonacci function is identical. The design specification of the fibonacci function should specify the range of integers that are legal inputs to the function. Determining the input and output domains of a program, or function, is not as easy as it looks, but it is a skill that is vital for effectively selecting test cases. That is why we will spend a good deal of time on input/output domain analysis.

Notice also that in the squeeze and fibonacci functions, for any input to the function (an element of the input domain) there exists a unique output (an element of the output domain) computed by the function. The function in question is deterministic. A function is deterministic if for every input to the function there is a unique output --- the output is completely determined by the input[3].
In this case it is easier to test the program because if we choose an input there is only one output to check. BUT, we do not always have deterministic programs. If a program can return one of a number of possible outputs for any given input then it is non-deterministic. Concurrent and distributed programs are often non-deterministic. Non-deterministic programs are harder to check because for a given input there is a set of outputs to check.

Programs may terminate or not terminate. The squeeze and fibonacci functions both terminate. In the case of fibonacci an output is returned to the calling program, and so to test fibonacci we can simply execute it and examine the returned value. The function squeeze returns a void value, so even if it terminates we must still examine the array that was passed as input, because its value may change. On the other hand, a classic example of a non-terminating program is a control loop for an interactive program or an embedded system. Control loops effectively execute until the system is shut down. In the same way as the squeeze function and the fibonacci function, a non-terminating program may:

- generate observable outputs; or
- not generate any observable outputs at all.

In the former case we can test the program or function by testing that the sequence of values that it produces is what we expect. In the second case we need to examine the internal state somehow.

(what_is_software_testing)=
What is software testing?

Testing, at least in the context of these notes, means executing a program in order to measure its attributes. Measuring a program's attributes means that we want to work out if the program fails in some way, work out its response time or throughput for certain data sets, its mean time to failure (MTTF), or the speed and accuracy with which users complete their designated tasks. Our point of view is closest to that of IEEE Std 610.12-1990, but there are some different points of view. For comparison we mention these now.

- Establishing confidence that a program does what it is supposed to do (W. Hetzel, Program Test Methods, Prentice-Hall)
- The process of executing a program with the intent of finding errors (G.J. Myers, The Art of Software Testing, John-Wiley)
- The process of analysing a software item to detect the difference between existing and required conditions (that is, bugs) and to evaluate the features of the software item (IEEE Standard for Software Test Documentation, IEEE Std 829-1983)
- The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component (IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990)

The viewpoint in these notes is deliberately chosen to be the broadest of the definitions above, that is, the last item in the list above. The theme for this subject is that testing is not just about detecting the presence of faults, but that testing is about evaluating attributes of a system and its components. This means software testing methods are used to evaluate and assure that a program meets all of its requirements, both functional and non-functional.
To be more specific, software testing means executing a program or its components in order to assure:

- The correctness of software with respect to requirements or intent;
- The performance of software under various conditions;
- The robustness of software, that is, its ability to handle erroneous inputs and unanticipated conditions;
- The security of software, that is, the absence of vulnerabilities and its robustness against different kinds of attack;
- The usability of software under various conditions;
- The reliability, availability, survivability or other dependability measures of software; or
- Installability and other facets of a software release.

Note:

- Testing is based on *executing* a program, or its components. Therefore, testing can only be done when parts (at least) of the system have been built.
- Some authors include V&V activities such as audits, technical reviews, inspections and walk-throughs as part of testing. We do not take this view of testing and consider reviews, inspections, walk-throughs and audits as part of the V&V process but not part of testing.

The Purpose of Testing

\[4\]: In *Notes on Structured Programming*.

"Program testing can be used to show the presence of bugs, but never to show their absence!" --- Edsger W. Dijkstra[4]

This quote states that the purpose of testing is to demonstrate that there is a discrepancy between an implementation and its specification, and that it cannot be used to show that the implementation is correct. The aim of most testing methods is to systematically and actively find these discrepancies in the program.

Testing and debugging are NOT the same activity. Debugging is the activity of: (1) determining the exact nature and location of a suspected fault within the program; and (2) fixing that fault. Usually, debugging begins with some indication of the existence of a fault. The purpose of debugging is to locate faults and fix them. Therefore, we say that the aim of testing is to demonstrate that there are faults in a program, while the aim of debugging is to locate the cause of these faults, and remove or repair them. The aim of program proving (aka formal program verification) is to show that the program does not contain any faults. The problem with program proving is that most programmers and quality assurance people are not equipped with the necessary skills to prove programs correct.

(the-language-of-failures-faults-and-errors)=
The Language of Failures, Faults, and Errors

To begin, we need to understand the language of failures, faults and errors.

- **Fault:** A fault is an incorrect step, process, or data definition in a computer program. In systems it can be more complicated and may be the result of an incorrect design step or a problem in manufacture.
- **Failure:** A failure occurs when there is a deviation of the observed behaviour of a program, or a system, from its specification. A failure can also occur if the observed behaviour of a system, or program, deviates from its intended behaviour, which may not be captured in any specification.
- **Error:** An incorrect internal state that is the result of some fault. An error may not result in a failure -- it is possible that an internal state is incorrect but that it does not affect the output.

Failures and errors are the result of faults -- a fault in a program can trigger a failure and/or an error under the right circumstances.
In normal language, software faults are usually referred to as "bugs", but the term "bug" is ambiguous and can refer to faults, failures, or errors; as such, we will avoid this term.

\[5\]: We will not worry about the range just yet.

Consider the following simple program specification: for any integer [5] n, $square(n) = n*n$; and an (incorrect) implementation of this specification in Figure 1.3:

```c
int square(int x)
{
    return x*2;
}
```

Figure 1.3: A faulty squaring function written in C.

Executing square(3) results in 6 -- a failure -- because our specification demands that the computed answer should be $9$. The fault leading to the failure occurs in the statement return x*2, and the first error occurs when the expression x*2 is calculated.

However, executing square(2) would not have resulted in a failure, even though the program still contains the same fault. This is because the square function behaves correctly under the input 2. This is called coincidental correctness: if $2$ is chosen as the test input then, by coincidence, the function happens to exhibit the correct behaviour, even though it is implemented incorrectly. The point here is that there are some inputs that reveal faults and some that do not. While the above example is trivial, any non-trivial piece of software will display coincidental correctness for many test inputs.

In testing we can only ever detect failures. Our ability to find and remove faults in testing is closely tied to our ability to detect failures. The discussion above leads naturally to the following four steps when testing and debugging software components.

1. Detect system failures by choosing test inputs carefully;
2. Determine the faults leading to the failures detected;
3. Repair and remove the faults leading to the failures detected; and
4. Test the system or software component again.

This process is itself error-prone. We must not only guard against errors that can be made at steps (2) and (3), but also note that new faults can be introduced at step (3). It is important to realise here that steps (2) and (3) must be subject to the same quality assurance activities that one may use to develop code.

Test Activities

Ultimately, testing comes down to selecting and executing test cases. A test case for a specific component consists of three essential pieces of information:

- a set of test inputs, or if the program under test is non-terminating, a set of sequences of test inputs;
- the expected results when the inputs are executed; and
- the execution conditions or execution environment in which the inputs are to be executed.

A collection of test cases is a test suite.

As part of the process of testing, there are several steps that need to be performed. These steps remain the same from unit testing to system testing.

Test Case Selection

Given the above definition of a test case, there are two generic steps to test case selection. Firstly, one must select the test inputs. This is typically performed using a test input selection technique, which aims to achieve some form of coverage criterion. Test input selection is covered in Chapters 2--5 of these notes. Secondly, one must provide the expected behaviour of every test input that is generated. This is referred to as the oracle problem. In many cases, the oracle can be derived in a straightforward manner from the requirements of the program being tested. For example, a test case that assesses the performance of a system may be related to a specific requirement about performance in the requirements specification of that system.
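To make the oracle idea concrete, the sketch below pairs the faulty square function of Figure 1.3 with an oracle derived directly from the specification $square(n) = n*n$. The test harness and the particular choice of inputs are illustrative assumptions, not part of the notes; note how some inputs pass coincidentally while others expose the failure.

```c
#include <stdio.h>

/* The faulty implementation from Figure 1.3. */
int square(int x) { return x * 2; }

/* Oracle derived from the specification: square(n) = n * n. */
int oracle(int n) { return n * n; }

int main(void)
{
    int inputs[] = { 0, 1, 2, 3, -4 };   /* illustrative test inputs */
    int n = sizeof(inputs) / sizeof(inputs[0]);

    for (int i = 0; i < n; i++) {
        int actual   = square(inputs[i]);
        int expected = oracle(inputs[i]);
        printf("square(%d) = %d, expected %d: %s\n",
               inputs[i], actual, expected,
               actual == expected ? "pass" : "FAIL");
    }
    /* Inputs 0 and 2 pass coincidentally; 1, 3 and -4 reveal the fault. */
    return 0;
}
```

Deriving the oracle was trivial here because the specification is a one-line formula; for most systems it is considerably harder.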
Despite the amount of research activity on software testing, the oracle problem remains a difficult problem, and it is difficult to automate oracles or to assess their quality.

Like any good software engineering process, test case selection is typically performed at a high level to produce abstract test cases, and these are then refined into executable test cases.

Test Execution

Once executable test cases are prepared, the next step is to execute the test inputs on the program-under-test and record the actual behaviour of the software; for example, record the output produced by a functional test input, or measure the time taken to execute a performance test input. Test execution is one step of the testing process that can generally be automated. This not only saves the tester time, but also allows regression testing to be performed at minimal cost.

Test Evaluation

Compare the actual behaviour of the program under the test input with the expected behaviour of the program under that test input, and determine whether the actual behaviour satisfies the requirements. For example, in the case of performance testing, determine whether the time taken to run a test is less than the required threshold. As with test execution, test evaluation can generally be automated.

Test Reporting

The final step is to report the outcome of the testing. This report may be returned to developers so they can fix the faults that relate to the failures, or it may go to a manager to inform them that this stage of the testing process is complete. Again, certain aspects of test reporting can be automated, depending on the requirements of the report.

Test Planning

Testing is part of quality assurance for a software system, and as such, testing must be planned like any other activity of software engineering. A test plan allows review of the adequacy and feasibility of the testing process, and review of the quality of the test cases, as well as providing support for maintenance. As a minimum requirement, a test plan should be written for every artifact being tested at every level, and should contain at least the following information:

- Purpose: identifies what artifact is being tested, and for what purpose; for example, for functional correctness;
- Assumptions: any assumptions made about the program being tested;
- Strategy: the strategy for test case selection;
- Supporting artifacts: a specification of any supporting artifacts, such as test stubs or drivers; and
- Test Cases: a description of the abstract test cases, and how they were derived.

Other information can be included in a test plan, such as an estimate of the amount of resources required to perform the testing.

The testability of software can have a large impact on the amount of testing that is performed, as well as the amount of time that must be spent on the testing process to achieve certain test requirements, such as achieving a coverage criterion. Testability is composed of two measures: controllability and observability.

1. The *controllability* of a software artifact is the degree to which a tester can provide test inputs to the software.
2. The *observability* of a software artifact is the degree to which a tester can observe the behaviour of a software artifact, such as its outputs and its effect on its environment.

For example, the squeeze function from Figure 1.1 is highly controllable and observable: it is a function whose inputs are restricted to its parameters, and whose observable output is restricted to the modified array that is passed in.
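As a concrete (and assumed) illustration of automating test execution, evaluation and reporting for such a highly controllable and observable function, the sketch below drives squeeze with a small table of test cases. The squeeze implementation shown is a typical one written for this example, since Figure 1.1 is not reproduced here, and the test cases themselves are invented.

```c
#include <stdio.h>
#include <string.h>

/* A typical implementation of squeeze (assumed; Figure 1.1 is not reproduced here):
   deletes every occurrence of the character c from the string s. */
void squeeze(char s[], int c)
{
    int i, j;
    for (i = j = 0; s[i] != '\0'; i++)
        if (s[i] != c)
            s[j++] = s[i];
    s[j] = '\0';
}

/* A test case bundles the test input with the expected result. */
struct test_case {
    const char *input;
    char        c;
    const char *expected;
};

int main(void)
{
    struct test_case suite[] = {          /* an invented test suite */
        { "hello",   'l', "heo"     },
        { "",        'x', ""        },
        { "aaaa",    'a', ""        },
        { "testing", 'z', "testing" },
    };
    int total = sizeof(suite) / sizeof(suite[0]), failures = 0;

    for (int i = 0; i < total; i++) {
        char buffer[64];
        strcpy(buffer, suite[i].input);                   /* test execution  */
        squeeze(buffer, suite[i].c);
        if (strcmp(buffer, suite[i].expected) != 0) {     /* test evaluation */
            failures++;
            printf("FAIL: squeeze(\"%s\", '%c') gave \"%s\", expected \"%s\"\n",
                   suite[i].input, suite[i].c, buffer, suite[i].expected);
        }
    }
    printf("%d of %d test cases passed\n", total - failures, total);  /* reporting */
    return failures != 0;
}
```

Because the suite is just data, re-running it after every change costs little, which is exactly what makes regression testing cheap.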
As the sketch above illustrates, such a function can be controlled and observed using another piece of software. Software with a user interface, on the other hand, is generally more difficult to control and observe. Test automation software exists to record and play back test cases for graphical user interfaces; however, the first run of the tests must be performed manually, and the expected outputs observed manually. In addition, record-and-playback is often unreliable, precisely because of the low controllability and observability.

Embedded software is generally less controllable and observable than software with user interfaces. A piece of embedded software that receives inputs from sensors and produces outputs to actuators is likely to be difficult to monitor in such an environment --- typically much more difficult than via other software or via a keyboard and screen. While the embedded software may be able to be extracted from its environment and tested as a stand-alone component, testing the software in its production environment is still necessary.

Controllability and observability are properties that are difficult to measure, and they must be considered during the design of software --- in other words, software designers must consider designing for testability.

The Psychology of Software Testing

Testing is more an art form than a science. In this subject we look at techniques to make it as scientific as possible, but skill, experience, intuition and psychology all play an important part in software testing. Psychology? How does this impact software testing? We will look at two things related to psychology: (1) the purpose of software testing; and (2) what a successful test case is.

The purpose of software testing

In the section on what software testing is, we defined testing and looked at its purpose. The famous Dijkstra quote -- that testing shows the presence of faults, but not their absence -- is important. What are some other commonly stated goals of software testing?

- To show that our software is correct.
- To find ALL the faults in our software.
- To prove that our software meets its specification.

These may seem like great things to achieve, but should they be our goal in software testing? Let's face some facts about software. Software is complex -- exceedingly complex. Even medium-scale software applications are far more complex than the very largest of other engineering projects. As such, every piece of software has faults. Even the most trivial programs contain faults; perhaps not ones that are likely to occur or have a large impact, but they are there. Consider the standard binary search algorithm that we use in lectures throughout the semester. This algorithm was printed in textbooks, used, discussed, and presented for over four decades before it was noted that it contained a fault. The program contains about 10 logical lines of code! It is naive to think that modern web applications could have fewer than hundreds, if not thousands, of faults that the developers don't know about.

Further to this, another fact is that almost every program has an infinite number of possible inputs. Think of a program that parses HTML documents: there are infinitely many HTML documents, and it needs to parse them ALL. No matter how many tests have been run on this program, there is no guarantee that the very next test will not fail.
As such, Dijkstra's quote is clear: we cannot test every possible input for anything but the most trivial programs (even a program that takes just two 32-bit integers as input has $2^{64}$ -- roughly $1.8 \times 10^{19}$ -- input combinations, which would take centuries to enumerate even at a billion tests per second), so we cannot prove that a program is correct with testing. All we can do is demonstrate that faults are present in the software by running tests that fail. Then we find and fix the faults, trying not to introduce new faults in the process, and hopefully the quality of our software has increased.

Why does psychology matter here? Let's take these two points: (1) we can't prove a program is correct with testing; and (2) any program we test will have faults, and will continue to have faults after we finish testing. If our goal is to find all faults, or to show that our program has no faults, then we are destined to fail at our goal. What is the point of aiming for a goal that we know we cannot achieve? Psychologically, we know that people perform poorly when asked to do things they know are impossible. If you task a team to "show that your software is fault free", what would the response be? Most likely, they would select tests that they know will pass. This does not improve software quality at all. Quality is only improved when we find faults and fix them.

Psychologically, a much better goal is: try to break our program. If we assume that there are faults in the program and set out to find them, we will do a much better job. Further, we will be motivated each time we find a fault, because that was our goal.

Testing should improve quality. By finding no faults, quality is not improved. Testing is a destructive process: we try to break our program. Testing is NOT proof.

Summary: We must aim to find the faults that are present. If we test to show correctness, we will subconsciously steer towards this goal. In addition, we will fail, because testing to show correctness is impossible.

What is a "successful" test case?

This is an interesting question. Our first response may be: "a test is successful if it passes". But given what we have just learnt, I hope you agree that a test is successful if it fails. Any test that does not find a fault is almost a waste. Of course, this is an exaggeration. If we are doing a good job of testing and trying to break our program, a passed test gives us some confidence that our program is working for some inputs. Further, we can keep our tests and run them again later, after debugging, to make sure we have not introduced any new faults. This is called regression testing (Section 7.3).

Any test that fails -- that is, any test that finds a fault -- is a chance to improve quality. This is a successful test. A successful test is a more valuable investment than an unsuccessful test. Much of this subject deals with the question: how do we select tests that are more likely to fail than others?

Consider an analogy with medical diagnosis. If a patient feels unwell and a doctor runs some tests, the successful test is the one that diagnoses the problem. Any test that does not reveal the problem is of little value. We must consider programs to be sick patients -- they contain faults whether we want them to or not!

Summary: A successful test is one that fails. This is how testing can be used to improve quality. Psychologically, we should be pleased when a test fails. It is not a bad thing when a test fails: the fault was there before the test was run; the test just found it!
Principles of software testing psychology

In *The Art of Software Testing* (see the reference list below), Myers lists ten principles of software testing. In this section, we will discuss three of these principles, as they relate to the psychology of software testing.

Principle 1 --- A necessary part of a test case is a definition of the expected output or result.

This means that a test case is not just an input. Before we run the test, we must also know what the expected output should be. It is not sufficient to run the test, see the output, and only then decide whether that output is correct. Why not? Psychologically, we run into a problem: an erroneous result can be interpreted as correct because the mind sees what it wants to see. We may have a desire to see the correct behaviour because we don't want to do any debugging. Or we may consider that the person who wrote the code is much smarter than us, so surely they wouldn't have made this error. Or we may just convince ourselves that this is the correct answer somehow.

On the other hand, if we have the expected output before the test, we compare the actual output with the expected output, and whether the test fails or not is now completely objective: either the observed output is the same as the expected output, or it is not. Of course, the expected output may itself be incorrect -- we make mistakes in testing too. But at the very least, we will now check both the expected output and the program to see which is incorrect.

Principle 2 --- A programmer should avoid attempting to test his or her own program.

This is perhaps the most important of all the principles! First, let me say that I completely disagree with this principle as stated. A programmer should absolutely test their own code! They should not be committing code to repositories that has never been tested. However, what the principle really means is: a programmer should avoid being the *only* person to test his or her own program. Why? There are three main reasons for this, all linked to the psychology of software testing:

1. If a programmer missed important things when coding, such as failing to consider a null pointer, or failing to check for a divide by zero, then it is quite likely they will also not think of these during testing. However, another person is less likely to make exactly the same mistakes. So, this duplication helps to find these types of 'oversight' faults.
2. If a programmer misunderstands the specification they are programming to (e.g. they misunderstand a user requirement), they will implement incorrect code. When it comes to testing, they will still misunderstand the specification, and will therefore write an incorrect test based on this misunderstanding. Both the code and the test are wrong, and in the same way, so the test will pass. However, another person brings an opportunity to interpret the specification correctly. Of course, they may interpret it incorrectly in the same way, but it is less likely that both people will do this than just one.
3. Finally, recall that testing is a destructive process: we aim to find faults, that is, to show that the software is 'broken'. However, programming is a constructive process: we create the software and we try to make it correct. As programmers, once we create something that we have worked so hard on, we do not then want to turn around and break it! Consider some programming assignments that you have worked on. When you were running tests for them, did you hope they would pass, or did you hope they would fail?
Now consider that you were running tests on someone else's assignment. Would you care so much whether they passed or failed? Put simply: a programmer will struggle to switch from the constructive process of coding to the destructive process of testing if the code is their own. Psychologically, they will semi-consciously avoid testing the parts of the code they suspect are faulty.

This principle works at the team level too. I have talked with software engineering teams in an organisation who test each other's code. They take it competitively! They get great pleasure from breaking other teams' code, and they are motivated to find as many problems as possible. Could a team be so pleased about breaking their own code? I would say not.

Principle 8 --- The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.

Faults seem to come in clusters. This is for several reasons: complex bits of code are harder to get right, some parts of code are written hurriedly due to time constraints, and some software engineers are simply not as good as others, so their parts of the code will contain more faults. What does this mean? It means that as we test, we find many faults in one part of a program and fix them, and find comparatively fewer in another part and fix them. We should then invest our time in the more fault-prone regions. This may seem counter-intuitive: after all, if we found only one or two faults in one section, surely there must be more left to find there than in the section where we found 25! However, empirical evidence suggests this is not the case. Therefore, the best investment is made in these error/fault-prone sections.

(input-domains)=

Input Domains

To perform an analysis of the inputs to a program you will need to work out the sets of values making up the input domain. There are essentially two sources from which you can get this information: the software requirements and design specifications; and the external variables of the program you are testing. In the case of white box testing, where we have the program available to us, the input domain can be constructed from the following sources:

- Inputs passed in as parameters;
- Inputs entered by the user via the program interface;
- Inputs that are read in from files;
- Inputs that are constants and precomputed values;
- Aspects of the global system state, including:
  - Variables and data structures shared between programs or components;
  - Operating system variables and data structures, for example, the state of the scheduler or the process stack;
  - The state of files in the file system;
  - Saved data from interrupts and interrupt handlers.

In general, the inputs to a program or a function are stored in program variables. A program variable may be:

- A variable declared in a program, as in the C declarations

```c
int base;
char s[];
```

- A variable resulting from a read statement or similar interaction with the environment, for example:

```c
scanf("%d", &x);
```

Variables that are inputs to a function under test can be:

- atomic data, such as integers and floating point numbers;
- structured data, such as linked lists, files or trees;
- a reference or a value parameter, as in the declaration of a function; or
- constants declared in an enclosing scope of the function under test, for example:

```c
#define PI 3.14159

double circumference(double radius)
{
    return 2*PI*radius;
}
```

We all try running our programs on a few test values to make sure that we have not missed anything obvious, but if that is all we do then we have simply not covered the full input domain with test cases.
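As a small, entirely hypothetical illustration of how these different sources combine, the function below has an input domain made up of its parameter, a constant from an enclosing scope, a global variable shared with other components, and a value read from the environment. None of these names come from the notes; they exist only to make the list above concrete.

```c
#include <stdio.h>

#define TAX_RATE 0.15        /* a constant from an enclosing scope              */

double discount = 0.0;       /* global state, possibly shared with other code   */

/* The input domain of this hypothetical function includes the parameter price,
   the global discount, the constant TAX_RATE, and the quantity read below. */
double total_cost(double price)
{
    int quantity;
    if (scanf("%d", &quantity) != 1)   /* input read from the environment */
        quantity = 1;
    return quantity * price * (1.0 - discount) * (1.0 + TAX_RATE);
}

int main(void)
{
    printf("total: %.2f\n", total_cost(10.0));
    return 0;
}
```

A few spot checks of total_cost with "obvious" values would leave most of this input domain untouched.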
Systematic testing aims to cover the full input domain with test cases so that we have assurance -- confidence in the result -- that we have not missed testing a relevant subset of inputs.

Black-Box and White-Box Testing

If we do have a clear requirements or design specification, then we choose test cases using the specification. Strategies for choosing test cases from a program or function's specification are referred to as specification-based test case selection strategies. Both of the following are specification-based testing strategies:

- Black-box testing, where test cases are derived from the functional specification of the system; and
- White-box testing, where test cases are derived from the internal design specifications or actual code for the program (sometimes referred to as glass-box testing).

Black-box test case selection can be done without any reference to the program design or the program code. Black-box test cases test only the functionality and features of the program, and not any of its internal operations.

\[6\]: COTS stands for **C**ommercial **O**ff **T**he **S**helf.

The real advantage of black-box test case selection is that it can be done before the implementation of a program. This means that black-box test cases can help in getting the design and coding correct with respect to the specification. Black-box testing methods are good at testing for missing functions and program behaviour that deviates from the specification. They are ideal for evaluating products that you intend to use in a system, such as COTS [6] products and third-party software (including open source software). The main disadvantage of black-box testing is that black-box test cases cannot detect additional functions or features that have been added to the code. This is especially important for systems that are safety critical (additional code may interfere with the safety of the system) or that need to be secure (additional code may be used to break security).

White-box test cases are selected using the requirements and design specifications and the code of the program under test. This means that the testing team needs access to the internal designs and code for the program. The chief advantage of white-box testing is that it tests the internal details of the code and tries to check all the paths that a program can execute to determine whether a problem occurs. As a result, white-box test cases can check any additional code that has been implemented but not specified. The main disadvantage of white-box testing is that you must wait until after designing and coding the program under test in order to select test cases. In addition, if some functionality of a system is not implemented, white-box testing may not detect this.

The terms "black-box testing" and "white-box testing" are becoming increasingly blurred. For example, many of the white-box testing techniques that have been used on programs, such as control-flow analysis and mutation analysis, are now being applied to the specifications of programs. That is, given a formal specification of a program, white-box testing techniques are used on that specification to derive test cases for the program under test. Such an approach is clearly black-box, because test cases are selected from the specified behaviour rather than the program; however, the techniques come from the theory of white-box testing. In these notes, we deliberately blur the distinction between black-box and white-box testing for this reason.
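The difference between the two viewpoints can be seen in a small, hypothetical example. The grade function below is written for this sketch only; its specification says nothing about the extra branch that a developer has slipped into the code.

```c
#include <assert.h>

/* Hypothetical specification: grade(mark) returns 'P' if mark >= 50 and 'F'
   otherwise, for marks in the range 0..100. The implementation also contains
   an extra, unspecified branch. */
char grade(int mark)
{
    if (mark == 42)              /* unspecified behaviour added to the code */
        return 'X';
    return mark >= 50 ? 'P' : 'F';
}

int main(void)
{
    /* Black-box tests: derived from the specification alone. */
    assert(grade(50) == 'P');
    assert(grade(49) == 'F');

    /* White-box test: reading the code reveals the branch at mark == 42, which a
       specification-based selection is unlikely to hit. This assertion fails,
       exposing the unspecified behaviour. */
    assert(grade(42) == 'F');

    return 0;
}
```

Black-box selection alone would probably never choose 42, while white-box selection finds the extra branch, but only after the code exists.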
Error Guessing

Before we dive into the world of systematic software testing, it is worth mentioning one highly effective test strategy that is always valuable when combined with any other strategy in these notes. This technique is known as error guessing.

Error guessing is an ad-hoc approach based on intuition and experience. The idea is to identify test cases that are considered likely to expose errors. The general technique is to make a list, or better, a taxonomy (a hierarchy), of possible errors or error-prone situations, and then develop test cases based on the list. The idea is to document common error-prone or error-causing situations and create a defect history. We use the defect history to derive test cases for new programs or systems. There are a number of possible sources for such a defect history, for example:

- The testing history of previous programs --- develop a list of faults detected in previous programs, together with their frequency;
- The results of code reviews and inspections --- inspections are not the same as code reviews, because they require much more detailed defect tracking than code reviews; use the data from inspections to create test cases for the error-prone situations they reveal.

Some examples of common error-prone situations include empty or null strings, array bounds and array arithmetic expressions (such as attempting to divide by zero), and blank values, control characters, or null characters in strings.

Error guessing is not a testing technique that can be assessed for usefulness or effectiveness, because it relies heavily on the person doing the guessing. However, it takes advantage of the fact that programmers and testers generally have extensive experience in locating and fixing the kinds of faults that are introduced into programs, and they can use this knowledge to guess the test inputs that are likely to uncover faults for specific types of data and algorithm. Error guessing is ad-hoc, and therefore not systematic. The rest of the techniques described in these notes are systematic, and can therefore be used more effectively as quality assurance activities.

Some Testing Laws

Here are some interesting "laws" about software testing. They are not really laws per se, but rules of thumb that can be useful for software testing.

- Dijkstra's Law: Testing can only be used to show the presence of errors, but never the absence of errors.
- Hetzel-Myers Law: A combination of different V&V methods out-performs any single method alone.
- Weinberg's Law: A developer is unsuited to test their own code.
- Pareto-Zipf principle: Approximately 80% of the errors are found in 20% of the code.
- Gutjahr's Hypothesis: Partition testing, that is, methods that partition the input domain or the program and test according to those partitions, is better than random testing.
- Weyuker's Hypothesis: The adequacy of a test suite for a coverage criterion can only be defined intuitively.

References

- G.J. Myers, The Art of Software Testing, John Wiley & Sons, 1979.
- D.E. Knuth, The Art of Computer Programming, vol. 2: Semi-numerical Algorithms, 2nd Ed., Addison-Wesley, 1981.
- B. Beizer, Software Testing Techniques, 2nd ed., van Nostrand Reinhold, 1990.
- E. Kit, Software Testing in the Real World, Addison-Wesley, 1995.
- R. Hamlet, Random Testing, in Encyclopedia of Software Engineering, J. Marciniak ed., pp. 970--978, Wiley, 1994.
- A. Endres and D. Rombach, A Handbook of Software and Systems Engineering, Addison-Wesley, 2003.
- J. A. Whittaker, How to Break Software: A Practical Guide to Testing, Addison-Wesley, 2002.
Development Testing Notes. Permalink: /archives/noteintroductiontosoftwaretesting
Arcadian Functor occasional meanderings in physics' brave new world Name: Kea Marni D. Sheppeard New Format Blog Dwarf Mysteries M Theory Lesson 312 Meanwhile II Paper Archive Physics Blogs Carl Brannen Louise Riofrio Matti Pitkanen Phil Gibbs Lieven Le Bruyn NC Geometry Paolo Bertozzini Oxford Science Dave Bacon Nigel Cook Tommaso Dorigo Supernova Condensate Richard Borcherds John Baez n-Category Cafe Theoretical Atlas Unapologetic Math Todd and Vishal Everything Seminar Motivic Stuff Nghbrhd Infinity Web Page Counter FQXi reject mixing matrix MUB arithmetic 2009 reject The AF Book DARPA Challenge At first I thought the problem list was a mildly amusing, handwaving bit of entertainment, but it turns out that the U.S. DARPA Mathematical Challenge has funding opportunities, open also to foreigners! And the 3 page announcement is the coolest I've ever seen, including the words Submissions that merely promise incremental improvements over the existing state of the art will be deemed unresponsive. I feel yet another funding proposal coming on ... HAPPY NEW YEAR! posted by Kea | 8:07 AM | 6 comments Neutrino 08 Registration for Neutrino 08 here at UC is open, so make sure you consider heading up this way! posted by Kea | 4:56 PM | 0 comments Here's to 2008 posted by Kea | 2:37 PM | 16 comments I know it's a bit late for this year, but I found the perfect cheap present for a budding M theorist: the Sudokube! Of course, some basic knowledge of magic squares makes it too easy to solve, but it would look good on the shelf. And if you don't mind me saying so, Santa, I was a bit disappointed with The Golden Compass. Why were all the physicists male? And that extended arm double ice axe arrest was just plain ridiculous. Putting together the Hoffman and Castro expressions, for real $s$ and $t$ in the critical interval with $|s+t| < 1$, we obtain $\sum_{m,n} s^m t^n \zeta (x^m y^n) = t [ \frac{\zeta (s)}{\zeta (1-s)} \frac{\zeta (1+t)}{\zeta (-t)} \frac{\zeta (1-s-t)}{\zeta (s+t)}]$ where the left hand side is the expression $\sum_{m} \frac{s^m}{m!} \sum_{k_1,k_2,\cdots,k_m} \frac{1}{k_1 k_2 \cdots k_m} \sum_{n} \frac{t^n}{(k_1 + k_2 + \cdots + k_m)^{n}}$ Specific values of the zeta function include, for the choice $t = 0.5$, $\zeta (1.5) = 2.612$ and, using the functional equation, $\zeta (- \frac{1}{2}) = \frac{1}{\sqrt{2}} \pi^{\frac{-3}{2}} \Gamma (\frac{3}{2}) \zeta (\frac{3}{2})$ so that the centre ratio in the first equation above becomes $\sqrt{2} \pi^{\frac{3}{2}} \frac{2}{\sqrt{\pi}} = 2 \sqrt{2} \pi$ giving a particularly interesting relation for the parameter $s < \frac{1}{2}$ involving the expression $\frac{\zeta (0.5 - s)}{\zeta (0.5 + s)} \frac{\zeta (s)}{\zeta (1-s)}$ It would be nice to extend this to complex values of the parameters, because zeroes of the zeta function occur in conjugate pairs and the finite positivity of an MZV could then rule out zeroes lying in this region. The original Hoffman post mentioned an expression in $\Gamma$ functions, similar to that appearing in the relation $B(a,b)= \frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)} + \frac{\Gamma(a)\Gamma(c)}{\Gamma(a + c)} + \frac{\Gamma (1 - a - b) \Gamma (b)}{\Gamma (1 - a)}$ $ = \frac{\zeta (1 - a)}{\zeta (a) } \frac{\zeta (1 - b)}{\zeta (b) } \frac{\zeta (a + b)}{\zeta (1 - a - b) }$ which appears in Castro's discussion of the zeroes of the Riemann zeta function. 
The $B$ function is the familiar 4-point amplitude of Veneziano, which we have been expressing in terms of chorded polygons; in this case a square with two diagonals representing the 1 dimensional associahedron, the interval. Hoffman's 1997 paper begins with this example of an MZV relation: $\zeta (2) \zeta (2,1) = 2 \zeta (2,2,1) + \zeta (2,1,2) + \zeta (4,1) + \zeta (2,3)$ which M theorists can try to draw in a number of ways, such as the 2-ordinal picture This suggests that zeta relations are in some sense functorial, or categorified, and arise from relations amongst arguments. In the last post, for instance, the argument of the Riemann zeta function was given by a complex cosmic time coordinate, which is often substituted in M theory for a value of $\hbar$ or $N$th root on the unit circle. posted by Kea | 12:08 PM | 0 comments Riemann's Brane By now we've all heard about the relation between the Riemann zeta function and Hermitian operators associated to matrix models. With CFT/AdS in the air now, it is not surprising to find this paper by McGuigan, which discusses brane partition functions. Somehow, according to McGuigan, on the gravity side we are supposed to end up with modular functions like those appearing in the already notorious Witten paper on 2+1D gravity. In fact, the so-called cosmological constant (just think extra time coordinates) appears as the variable $z$ in a function whose zeroes must lie on the real axis, namely $\Theta (z) = \zeta (iz + \frac{1}{2}) \Gamma (\frac{z}{2} + \frac{1}{4}) \pi^{- \frac{1}{4} - \frac{iz}{2}} (- \frac{z^2}{2} - \frac{1}{8})$ Who would have thought such stuff could get published on the arxiv? posted by Kea | 11:13 AM | 2 comments Following up on the GRG18 news, the LIGO collaboration have posted a paper on the non-observation of gravitational waves from the bright electromagnetic event GRB 070201. Of course, this has been reported in a number of places already. Here are some carols for the celebration of Newton's birthday. I quite like We Three Quarks, which begins We three quarks fine particles are. Bearing charm we travel afar. Fields and forces, spin of course is Multiplied by h-bar. Oh, Quarks are wondrous, quarks are light. Quarks have colors, clear and bright. Still misleading, still exceeding All the physicists' insight. Happy holidays from my three nephews: Connor, Nathan and Aidan. In an increasingly fascinating series of blogposts, the great mathematician Lieven Le Bruyn has finally reached the stringy topic of superpotentials. Apparently Grothendieck's children's drawings are dimers for Dedekind tesselations. Here is the recommended paper by Stienstra. Aside: The very colourful graduation went well, on a stunning day. There were bagpipes, trumpet fanfares, Maori greetings, a Brahms sonata, singing in Maori, English and Latin, and the town hall organ was played. I would like to check my UC mail, but unfortunately somebody has managed to crash the system on the first day of the holidays, as usual, so I may have to wait until the New Year. Switchback Swagger III An intriguing paper [1] by Kalman Gyory discusses the equation $m(m+1) \cdots (m + i - 1) = b k^{l}$ For $b = 1$ Erdös and Selfridge proved in 1975 [2] that this equation has no non-trivial solutions in the positive integers. The $(i,l,b) = (3,2,24)$ case can be seen to correspond to the cannonball problem under the substitution $n \mapsto \frac{m}{2}$. 
In general this suggests that the sequence of switchback expressions $P_i \frac{\textrm{sum of squares}}{in + T_i}$ may hardly ever be expressed in the form $b k^{l}$ for $k \geq 2$, where $T_i$ is the triangular number $\sum_{j=1}^{i} {j}$, even though it is certainly a positive integer. This is an interesting fact about the cardinality of these faces of the permutohedra, and for some mysterious reason the proof for $b=1$ seems to involve the mathematics of Fermat's last theorem. Note also the similarity between the denominator above and terms in the associahedra sequences $F_{n}(i)$. [1] K. Gyory, Acta Arith. 83 (1998) 87-92 [2] P. Erdös and J.L. Selfridge, Illinois J. Math. 19 (1975) 292-301 As reported in New Scientist, one of my esteemed colleagues from Mt Cook Village has expired after eating too much chocolate. Seriously folks, what are you doing throwing chocolate into the garbage can in a National Park? The kea is now officially endangered. Switchback Swagger II Courtesy of a commenter at God Plays Dice we have this nice link about the fact that there are no solutions to the sum of squares problem for $n > 24$. This was proved in 1918 by G. N. Watson, in the paper The problem of the square pyramid. In fact, the only solutions are $n = 1$ and $n = 24$. The equation $\frac{1}{6} n (n+1) (2n + 1) = k^{2}$ originally described a pile of cannonballs, built from a base layer of $k \times k$ balls into a square pyramid of height $n$. So it's really a sphere packing problem. Switchback Swagger The 2-ordinal polytopes associated with the symmetric groups are the permutohedra. The number of codimension $k$ faces of the $n$th permutohedron is given in sequence A019538. The second diagonal has some nice properties. For example, Alexander Povolotsky observed that these numbers $P_{i}$ arise as the right hand coefficients for the following sequence of expressions, indexed by $i$. $n(n+1)[n + (n+1)] = 6(1 + 4 + 9 + \cdots + n^2 )$ $n(n+1)(n+2)[n + (n+1) + (n+2)] = 36(1 + (1+4) + (1+4+9) + \cdots + (1 + 4 + \cdots + n^2 ))$ This brings to mind the Leech sequence $1 + 2^2 + 3^2 + \cdots + 24^{2} = 70^{2}$ for $n=24$, for which the first element of the list is expressed $\frac{1}{3}(\sum_{i=1}^{n} i ) (2n + 1) = n^{2}$ If the squares of integers up to $n > 24$ cannot be summed to a square, it follows that the left hand side can never be a square. Continuing with the wonders of table A033282, the $k = 3$ recursion results in the relation $F_{n+2}(3) F_{n+1}(1) = \frac{1}{2} F_{n+2}(2) [F_{n+1}(2) + \frac{1}{3} n F_{n+1}(1)]$ For example, considering the codimension 3 edges of the 4d polytope we obtain the relation $84 \times 9 = \frac{1}{2} \times 56 \times (21 + \frac{2}{3} \times 9)$ Isn't it wonderful how the combinatorics of the associahedra gives us so many relations between integers? One might be forgiven for guessing that operads can tell us something about factorization of an integer into primes. The expression $F_{n}$ for the codimension 1 faces of the associahedron is $F_{n}(1) = \frac{1}{2} n (n + 3) = \frac{1}{2} n (n + 1) + n = (\sum_{i=1}^{n} i) + n$ This corresponds to rewriting the sequence in the form 2 + 2 + 1 + 3 + 1 2 + 2 + 1 + 3 + 1 + 4 + 1 After eliminating the bold $n$ terms this becomes 1 + 2 + (1 + 2) 1 + 2 + 3 + (1 + 1 + 2) and so on. The previous post also separated the set of $F_{n}(1)$ Young diagrams into a set of $n$ yellow tiled diagrams and another set of $1 + 2 + \cdots + n$ diagrams with at most two purple tiles in a row. 
Now consider the $k = 2$ sequence $F_{n}(2)$, which counts the number of divisions of an $(n+3)$-gon into three pieces by two diagonals. The terms of this sequence are given by the formula $F_{n}(2) = \frac{1}{3} B(n + 4, 2) B(n,2) = \frac{1}{12} n(n-1)(n+3)(n+4) = \frac{1}{6} (n-1)(n+4) F_{n}(1)$ What set of diagrams counts this sequence? The $F_{n}(1)$ factor says that before cutting an $n$-gon into three pieces we must cut it into two. Note also that $F_{2}(2) = F_{2}(1) = 5$ since having cut a pentagon into two pieces, there is only one way to cut it into three. It is therefore more natural to write $F_{n+1}(2) = \frac{1}{6} n(n+5) F_{n+1}(1) = [\frac{1}{3} (\sum_{i=1}^{n} i) + \frac{2}{3} n] F_{n+1}(1) $ $= [ \frac{1}{3} F_{n}(1) + \frac{1}{3} n] F_{n+1}(1) = \frac{1}{3} F_{n+1}(1)(n + F_{n}(1))$ This says that once we have chopped the $(n+4)$-gon into two pieces, either we have a triangle and an $(n+3)$-gon to chop up, or we have at least an $\frac{1}{2}(n+6)$-gon (ignoring odd $n$ for now) for which we can choose $n$ diagonals meeting the existing diagonal. For example, the $21$ edges of the 3d Stasheff polytope arise from the relation $21 = \frac{1}{3} 9 (2 + 5)$ which says that once a hexagon is cut into two, in one of $9$ ways, either there is a pentagon to chop up, in one of $5$ ways, or the hexagon is split into two squares, one of which may be cut up in two ways. The full set of $9$ diagrams for $n=3$ appears, along with the pentagon subset and the $n$ yellow pieces. Note that either $F_{n+1}(1)$ is divisible by $3$, or $F_{n}(1) + n$ is divisible by $3$. For example, $14 + 4 = 18$ and $35 + 7 = 42$. The overcount factor of $3$ comes from the familiar cyclic symmetry of a central triangle in the hexagon, the three bisections of which mark the three possible choices for a square face on the 3d polytope. Similarly, the factor of $2$ in the $F_{n}(1)$ sequence came from the two diagonals of a square, which obey an $S_2$ symmetry under rotation. Young diagrams are usually used to label irreducible representations of the symmetric group $S_n$. If certain collections of Young diagrams are used to label associahedra (which may be obtained from the permutohedra with vertices the elements of $S_n$) then there is a close connection between the collection of groups $\{ S_n \}$ and its representations. However, the whole heirarchy in all dimensions needs to be considered if we want to understand this correspondence between a theory and its models. We have seen how the Catalan numbers come up in many places. For example, they count the number of vertices of the $n$th associahedron, which is fully labelled by chorded $n+3$-gons. The full list of the number of codimension $k$ elements of the $n$th associahedron is given by sequence A033282 at the integer database. For instance, the left diagonal row $2, 5, 9, 14, 20, 27, \cdots$ counts two ends to an interval, five edges on a pentagon, nine faces of the 3d Stasheff polytope, and so on. These codimension 1 faces are labelled by an $n$-gon with a single diagonal, splitting the $n$-gon into two parts. Another way of viewing the particular row above is in terms of Young diagrams with only two rows. For example, the 3d and 4d polytopes have faces labelled by the following Young diagrams. Observe how the recursion is visible in the complements of the white tiles. For $n = 3$, by omitting both the full white diagram and the yellow diagrams one obtains five remaining purple diagrams corresponding to the Young pictures for the $n = 2$ pentagon edges. 
The recursion relation for codimension 1 faces must therefore be $F_{n} = F_{n - 1} + n + 1 = \frac{n(n+3)}{2}$ Recall that codimension 1 elements are also labelled by trees with two internal vertices. For example, the whole pentagon is labelled by a single vertex tree with four leaves, but the pentagon edges are labelled by four leaved trees with two internal vertices. The homework problem is to figure out how these trees correspond to the Young diagrams. The full list of left diagonals for A033282, with the above row as the $k = 1$ entry, gives the number of codimension $k$ faces. The general formula for these numbers takes the form $F_{n} (k) = \frac{1}{k + 1} B(n + k + 2,k) B(n,k)$ for binomial coefficients $B(n,m)$. The sequence A126216 is a mirror image of A033282, which counts the number of Schroeder paths. These arose in the construction of Abel sums and interesting maps between sets of trees. Update: It turns out that R. P. Stanley worked out a relation between trees and Young tableaux (numbered Young diagrams) in the short 1996 paper Polygon dissections and standard Young tableaux in J. Comb. Theory A76, pages 175-177. Having blogged a bit already about Geometric Representation Theory, I was going to leave it to people to watch this lecture series for themselves. But lecture 8 is way too cool to ignore! Here we see how paths in a $q$ deformed Pascal triangle can be counted. First note that in M Theory we usually draw the triangle as a quadrant in a plane, on which we consider paths for the noncommutative Fourier transform. A step to the right picks up multiples of a power of $q$, whereas a step up simply multiplies the entry below by 1. In this way one obtains polynomials in $q$ with integer coefficients. Even though $q$ starts out labelling the number of elements in a finite field, we recall counting trees in a similar fashion, but ending up with complex roots of unity. Physicists may smell a sneaky Wick rotation in the shrubbery. Nerdy Nerdy I couldn't resist: thanks to Clifford ... Motive Madness III A quick note: check out John Baez's TWF 259 for more details on the absolute point! Clearly I'm not the only crazy one who sees ghosts in the machine. To quote the summary: In short, a mathematical phantom is gradually taking solid form before our very eyes! In the process, a grand generalization of algebraic geometry is emerging ... James Dolan speaks of categorification and decategorification, and of information and entropy. In logos land, these processes have a dimension raising or lowering aspect. It is often said that categorification is ill defined, in comparison to decategorification, but with dual processes it should not be so. Therefore, categorification itself must be defined in some canonical way that generalises the turning of natural numbers into sets or spaces. One way to do this would be to put the heirarchy on a loop, such as the loop labelled by the $q$ parameters at roots of unity. There would be $n$-categories for $n \in \mathbb{N}$ and $r$-categories for $r \in \mathbb{Q}$, and $n \rightarrow \infty$ would look like the limit $q \rightarrow 1$ again, where spaces begin to look like sets. After all, projective geometry has its horizons, and the cohomology of motives would move left and right, like the mass interaction, or Stokes' theorem, or the Riemann zeta function. Motive Madness II Recall that Kapranov and Smirnov have also been thinking about the field with one element. They say an affine line over $F_1$ should be zero along with all the roots of unity. 
Looking at polynomials with $F_1$ coefficients, the group $GL(n, F_{1}[x])$ is just the braid group on $n$ strands. For example, $3 \times 3$ matrices are associated with the three strand braid group, as often discussed. Then the map $B_n \rightarrow S_n$ is thought of as the $q \rightarrow 1$ limit, since the symmetric group acts on sets as vector spaces. The field $F_{1}(n)$ extends $F_1$ by containing zero and the set of all nth roots of unity. A vector space over this field is a pointed set (marked by zero) with an action by the roots of unity. Direct sum and smash product become the operations on such spaces. Note that Weber and others have considered the category of pointed sets as a 2-categorical Cat analogue of subobject classifier, and the category Set plays the role here of a one element set. There seem to be a number of ways in which the field $F_1$ introduces new topos theoretic arrows. Motive Madness Whoa! I wasn't expecting it that soon. Motives appear already, at least conjecturally, in John Baez's lecture 6! In the diagram the vertical arrows are the decategorification of either sets or projective spaces. An isomorphism class of $n$ element sets is mapped to the number $n$. Natural numbers become $q$-numbers in the case of spaces, which is to say rational functions in the parameter $q$ corresponding to the number of elements in the finite field, or secretly really polynomials with integer coefficients. But what replaces the category of finite sets? There is more structure to the projective spaces, and we also want to understand the bottom arrow, which considers a set as a space over a one element field. This is a rather delicate mathematical question. Baez mentions a recent paper by Durov (with lots of stuff on monads) about the idea of a one element field. When we understand this properly, do we find motives? Now, that is the question. The plane of a (finite) field $F_{q}^{2}$ is the $q$-analogue of a two element set, which plays an important role in the Boolean topos Set, namely as the subobject classifier. The vector space version of this is commonly known as a qubit. Somehow the reason that a one element field $F_1$ doesn't usually make sense is because the logical 0 and 1 are not distinct. Since $q$ is, in the first instance, just a natural number (= $p^{k}$ for some prime $p$), we can ask ourselves first what it means to collapse a finite plane to a one element field. For the topos Set, this would amount to turning the whole category into the trivial category 1, since there is no way to distinguish a subset of a set $S$ from its complement and all sets have effectively only one element. Now this one element set is like a basis for spaces over $F_1$. But the map that takes a basis to a space is just the functor 1 $\rightarrow$ Set, which picks out a set! But this doesn't sound right. Maybe what we need here is not the trivial 1-category, or a one point 0-category (set), but rather a -1-category. This idea always lurks in the operad heirarchy, where the left hand side of the table starts with the single leaf tree, despite the fact that a point is actually a two leaf tree. Anyway, think of a finite set $S$ lying at the endpoints of the unit vectors in a vector space. The empty set at the origin is the smallest piece of the power set of $S$, and the one element subsets are the next smallest pieces. The power set fills out a cube of dimension $|S|$. Since the field in question is $F_1$ there is no extent to the axes. Only the elements of $S$ really exist. 
Planar Young diagrams represent partitions of a natural number $n = n_1 + n_2 + n_3 + \cdots + n_k$. The $k$ rows are the pieces $n_i$ of the partition. But categorified numbers $n_i$ are actually sets with $n_i$ elements, or perhaps vector spaces of dimension $n_i$, or projective spaces of dimension $n_i - 1$. In this setting the expression $n = n_1 + n_2 + n_3 + \cdots + n_k$ is about the decomposition of a space into subspaces. We have seen something like this before. Let $n_i$ instead represent $V^{\otimes n_{i}}$ for a fixed finite dimensional vector space $V$. Then the $O(n_{i})$ piece of the operad is the space of linear maps from $n_i$ to $V$ in Vect. The operad rules come from compositions $n_1 \otimes n_2 \otimes n_3 \cdots \otimes n_k \rightarrow n$ of these maps. Maybe instead of categorification of $\mathbb{N}$ we can look at operadification. One then wonders what happens for higher dimensional ordinals. Actually, partitions are a lot like 2-tree ordinals. The simplest generalisation would allow each $n_{i}$ of a planar partition to be itself replaced by a partition of $m$ in a third direction. The first permutation group to fill a three dimensional diagram would be $S_4$, with four box partitions. This is the only additional diagram to the planar labels for the $S_4$ barycentrically divided tetrahedron. Similarly, $S_3$ is the first group to fill a truly planar Young diagram. This pattern continues for all $n$. Lecture 3 by James Dolan starts with diagrams for the subgroups of $S_3$. For example, the cyclic group produces a diagram where all three green vertices have been identified by rotations of the triangle, as have the blue vertices. Now we see the need for the Young diagram boxes, which can represent indeterminate elements of a set. This example splits the six elements of the group into two sets of three permutations, represented by the two triangles. One such set appeared as a basis for the mass matrix Fourier transform. M theory is like a child's game of connect the dots. In Geometric Representation Theory lecture 2 James Dolan draws a 2-simplex with a barycentric subdivision, looking like a hexagon with a central point, or rather a cube with a missing hidden vertex. This picture is labelled with Young diagrams associated to the group $S_3$ of permutations on three letters. Diagrams of height 1 are associated with vertices, diagrams of height 2 with edges and diagrams of height 3 with faces. Note that horizontal lists are unordered. So the six faces represent the elements of the group. Dolan wants to think of these diagrams as axiomatic theories in a categorical sense. Such diagrams, and their subdiagrams, are associated with sequences of subspaces of the three element set, in analogy with flag spaces for vector spaces. Sets are just a classical kind of vector space. This is clear when counting vectors in vector spaces over finite fields, as discussed by Baez in lecture 1. For vector spaces over $F$ the permutation groups would naturally be replaced by the example of $GL(n,F)$. The example above generalises as expected. Before we know it, we'll probably be doing motivic cohomology and Langlands geometry using Hecke pictures. Of course, Kontsevich has already been thinking about such things. Riemann Revived II S duality in the guise of the Langlands program is a truly awe inspiring component of stringy triality. Recall that the complex number form of S duality has a modular group symmetry. This group appears all over the place in M theory. 
For example, we looked at the Banach-Tarski paradox in terms of a ternary tiling of hyperbolic space as a Poincare disc. This tiling marks the boundary of the circle with a nice triple pattern of accumulating points. Alain Connes and Matilde Marcolli say that the Riemann zeta function is related to the problem of mass, which in turn we have seen is related to three stranded braids and M theory triality, of which S duality is a piece. It appears that no part of mathematics is left untouched by gravity. Update: a new paper by Witten et al on 3D gravity, prominently featuring the modular group, has appeared on the arxiv. See the picture on page 49, and the J invariant on page 52. The paper shows that the partition function cannot be given a conventional Hilbert space interpretation. Holomorphic factorisation is suggested as a possible mechanism for extending the degrees of freedom. For instance, the complexified Einstein equations are considered. They say: we think another possibility is that the non-perturbative framework of quantum gravity really involves a sum not over ordinary geometries in the usual sense, but over some more abstract structures that can be defined independently for holomorphic and antiholomorphic variables. Only when the two structures coincide can the result be interpreted in terms of a classical geometry. I confess to finding this statement a little ill-phrased, since some more abstract structure presumably does not begin with traditional complex analysis. Riemann Revived This month's conference on Langlands and QFT has naturally drawn some attention. Matti Pitkanen has some comments. Louise Riofrio and David Ben-Zvi (one of the speakers) have been taking notes. Louise points out that Witten is interested in four dimensions, but according to Ben-Zvi's notes from his talk, Witten said it was natural to look at six dimensions (think twistors), or at least four, but for the talk he focused on two. Admittedly, I can't make that much out of Ben-Zvi's notes, and I'm sure that's not his fault! Here's a link to Frenkel's helpful paper. Fading Dark Force Dr Motl reports on a new astro-ph paper on the lack of a Dark Force (cosmological constant). The abstract suggests a new concordance model with 90% dark matter, 10% baryons, no dark energy and 14.8 Gyr as the age of the universe. This sounds familiar. It includes a reference to the now published paper by D. Wiltshire. On Wednesday, Dr Motl reported on the difficulties that Fermi initially faced having his ideas accepted. Meanwhile, Louise Riofrio brings us reports from New York, and a small town nearby.
Bivariate Data

It is important to be able to compare data sets because it helps us make conclusions or judgments about the data. For example, say Jim got $\frac{5}{10}$ in a geography test and $\frac{6}{10}$ in a history test. Which test did he do better in? Just based on those marks, it makes sense to say he did better in history. But what about if everyone else in his class got $\frac{4}{10}$ in geography and $\frac{8}{10}$ in history? If you had the highest score in the class in geography and the lowest score in the class in history, does it really make sense to say you did better in history?

By comparing the measures of central tendency in a data set (that is, the mean, median and mode), as well as measures of spread (range and standard deviation), we can make comparisons between different groups and draw conclusions about our data.

The number of goals scored by Team 1 and Team 2 in a football tournament are recorded.

| Match | Team 1 | Team 2 |
| --- | --- | --- |
| A | 2 | 5 |
| B | 4 | 2 |
| C | 5 | 1 |
| D | 3 | 5 |
| E | 2 | 3 |

Find the total number of goals scored by both teams in Match C. What is the total number of goals scored by Team 1 across all the matches? What is the mean number of goals scored by Team 1?

The weights of a group of men and women were recorded and presented in a stem and leaf plot as shown. Key: 4 | 2 = 42 kg

Calculate the standard deviation of the group of men, correct to two decimal places. Calculate the standard deviation of the group of women, correct to two decimal places. Find the range of the weights for the group of men. Find the range of the weights for the group of women. Which group, on average, has the higher variability in weight?

S5-1 Plan and conduct surveys and experiments using the statistical enquiry cycle: determining appropriate variables and measures; considering sources of variation; gathering and cleaning data; using multiple displays, and re-categorising data to find patterns, variations, relationships, and trends in multivariate data sets; comparing sample distributions visually, using measures of centre, spread, and proportion; presenting a report of findings.
Communications on Pure & Applied Analysis, November 2011, 10(6): 1589-1615. doi: 10.3934/cpaa.2011.10.1589

Vortex interaction dynamics in trapped Bose-Einstein condensates

Pedro J. Torres 1, R. Carretero-González 2, S. Middelkamp 3, P. Schmelcher 3, Dimitri J. Frantzeskakis 4 and P. G. Kevrekidis 5

1. Departamento de Matemática Aplicada, Facultad de Ciencias, Universidad de Granada, Campus de Fuentenueva s/n, 18071 Granada, Spain
2. Nonlinear Physics Group, Departamento de Física Aplicada I, Universidad de Sevilla, 41012 Sevilla, Spain
3. Zentrum für Optische Quantentechnologien, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany
4. Department of Physics, University of Athens, Panepistimiopolis, Zografos, Athens 15784, Greece
5. Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003-9315, United States

Received: July 2010; Revised: May 2011; Published: May 2011

Abstract: Motivated by recent experiments studying the dynamics of configurations bearing a small number of vortices in atomic Bose-Einstein condensates (BECs), we illustrate that such systems can be accurately described by ordinary differential equations (ODEs) incorporating the precession and interaction dynamics of vortices in harmonic traps. This dynamics is tackled in detail at the ODE level, both for the simpler case of equal charge vortices, and for the more complicated (yet also experimentally relevant) case of opposite charge vortices. In the former case, we identify the dynamics as being chiefly quasi-periodic (although potentially periodic), while in the latter, irregular dynamics may ensue when suitable external drive of the BEC cloud is also considered. Our analytical findings are corroborated by numerical computations of the reduced ODE system.

Keywords: vortices, vortex dipoles, Bose-Einstein condensates.

Mathematics Subject Classification: Primary: 34A34, 34D20; Secondary: 37N2.

Citation: Pedro J. Torres, R. Carretero-González, S. Middelkamp, P. Schmelcher, Dimitri J. Frantzeskakis, P. G. Kevrekidis. Vortex interaction dynamics in trapped Bose-Einstein condensates. Communications on Pure & Applied Analysis, 2011, 10 (6): 1589-1615. doi: 10.3934/cpaa.2011.10.1589
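The reduced ODE description referred to in the abstract is, in this setting, a point-vortex model in which each vortex precesses in the harmonic trap and interacts pairwise with the other vortices. The sketch below is only an illustrative toy version of such a system for two vortices, not the authors' equations: the constant precession frequency OMEGA_PR, the interaction strength B and the initial positions are placeholder values, and in the actual model the precession rate generally depends on the distance from the trap centre.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters, not values from the paper.
OMEGA_PR = 1.0    # single-vortex precession frequency in the trap
B = 0.1           # strength of the pairwise vortex-vortex interaction
CHARGES = [1, 1]  # topological charges; use [1, -1] for a vortex dipole

def rhs(t, z):
    """Toy point-vortex equations: precession about the trap centre plus a
    charge-weighted pairwise interaction decaying as 1/r^2."""
    pos = z.reshape(-1, 2)
    dz = np.zeros_like(pos)
    for k, (xk, yk) in enumerate(pos):
        # Precession term: rotation about the trap centre.
        dz[k] += CHARGES[k] * OMEGA_PR * np.array([-yk, xk])
        # Interaction with every other vortex.
        for j, (xj, yj) in enumerate(pos):
            if j == k:
                continue
            dx, dy = xk - xj, yk - yj
            r2 = dx * dx + dy * dy
            dz[k] += B * CHARGES[j] * np.array([-dy, dx]) / r2
    return dz.ravel()

z0 = np.array([0.5, 0.0, -0.5, 0.0])   # initial positions (x1, y1, x2, y2)
sol = solve_ivp(rhs, (0.0, 100.0), z0, max_step=0.05)
print("final vortex positions:", sol.y[:, -1])
```

Setting CHARGES to [1, 1] corresponds qualitatively to the equal-charge case, while [1, -1] mimics the opposite-charge (vortex dipole) case analysed in the paper.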
OSTI.GOV Journal Article

Title: Edge states and local electronic structure around an adsorbed impurity in a topological superconductor

Abstract: Recently, topological superconducting states have attracted much interest. In this work, we consider a topological superconductor with Z2 topological mirror order [Y.-Y. Tai et al., Phys. Rev. B 91, 041111(R) (2015)] and s±-wave superconducting pairing symmetry, within a two-orbital model originally designed for iron-based superconductivity [Y.-Y. Tai et al., Europhys. Lett. 103, 67001 (2013)]. We predict the existence of gapless edge states. We also study the local electronic structure around an adsorbed interstitial magnetic impurity in the system, and find the existence of low-energy in-gap bound states even with a weak spin polarization on the impurity. We also discuss the relevance of our results to a recent scanning tunneling microscopy experiment on a Fe(Te,Se) compound with an adsorbed Fe impurity [J.-X. Yin et al., Nat. Phys. 11, 543 (2015)], for which our density functional calculations show the Fe impurity is spin polarized.

Authors: Tai, Yuan-Yen [1]; Choi, Hongchul [1]; Ahmed, Towfiq [1]; Ting, C. S. [2]; Zhu, Jian-Xin [1]
[1] Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
[2] Univ. of Houston, Houston, TX (United States), Texas Center for Superconductivity and Dept. of Physics

Sponsoring organizations: USDOE Office of Science (SC), Basic Energy Sciences (BES); USDOE Laboratory Directed Research and Development (LDRD) Program; Robert A. Welch Foundation; US Air Force Office of Scientific Research (AFOSR)

Identifiers: Alternate Identifier OSTI ID: 1226095; Report No. LA-UR-15-24366; Journal ID: ISSN 1098-0121; PRBMDO; TRN: US1901329; Contract/Grant Nos. AC52-06NA25396, E-1146, FA9550-09-1-0656

Published in: Physical Review. B, Condensed Matter and Materials Physics (American Physical Society), 2015, accepted manuscript. doi: 10.1103/PhysRevB.92.174514; full text: https://www.osti.gov/servlets/purl/1457246

Subjects: 75 CONDENSED MATTER PHYSICS, SUPERCONDUCTIVITY AND SUPERFLUIDITY; 36 MATERIALS SCIENCE
FIG. 1: The band structure shows the gapless edge states arising from the coexistence of topological and superconducting orders. (a) The folded band structure of a 2-site-per-unit-cell Brillouin zone. (b)-(e) The band structure of a strip of width 20 lattice constants with open and closed boundary conditions in the non-superconducting and superconducting phases. We take the periodicity along the y-direction of the strip, where 2000 $k_y$-points are taken. The inset of the middle panel of (e) is an enlargement of the crossing point of the Fermi surface, where there are in total four degenerate points around $k_y = \pm\frac{\pi}{2}$ (the green inset), while the others around $k_y = 0$ and $\pm\pi$ are not degenerate.
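The in-gap impurity states described above are typically obtained by diagonalising a Bogoliubov-de Gennes (BdG) lattice Hamiltonian with the magnetic impurity entering as a local exchange potential. The snippet below is a deliberately simplified one-dimensional, single-orbital s-wave sketch of that procedure, not the two-orbital s±-wave model used in the paper; the hopping t, gap delta, chemical potential mu and impurity exchange J are illustrative values only.

```python
import numpy as np

# Illustrative parameters only (not the values or model used in the paper).
N, t, mu, delta, J = 200, 1.0, 0.5, 0.3, 2.0
i0 = N // 2  # impurity site in the middle of the chain

# Tight-binding chain: nearest-neighbour hopping plus chemical potential.
h0 = -t * (np.eye(N, k=1) + np.eye(N, k=-1)) - mu * np.eye(N)

# Classical magnetic impurity: opposite exchange shift in the two spin channels.
imp = np.zeros((N, N))
imp[i0, i0] = J
h_up, h_dn = h0 - imp, h0 + imp

# Bogoliubov-de Gennes matrix in the Nambu basis (c_up, c_dn^dagger).
H = np.block([[h_up, delta * np.eye(N)],
              [delta * np.eye(N), -h_dn]])

energies = np.linalg.eigvalsh(H)
in_gap = energies[np.abs(energies) < 0.95 * delta]
print("bound-state energies well inside the bulk gap:", in_gap)
```

Increasing J moves the resulting pair of Yu-Shiba-Rusinov-like levels deeper into the gap (and eventually across zero energy), which is the qualitative behaviour probed around the adsorbed Fe impurity, albeit there with a much more detailed multi-orbital model.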
Similar records:

Proximity-induced superconducting gap in the quantum spin Hall edge state of monolayer WTe2. Journal Article. Lüpke, Felix; Waters, Dacen; de la Barrera, Sergio C.; et al. Nature Physics. The quantum spin Hall insulator is characterized by a bandgap in the two-dimensional (2D) interior and helical 1D edge states [1-3]. Inducing superconductivity in the helical edge state results in a 1D topological superconductor, a highly sought-after state of matter at the core of many proposals for topological quantum computing [4]. In the present study, we report the coexistence of superconductivity and the quantum spin Hall edge state in a van der Waals heterostructure, by placing a monolayer of 1T'-WTe2, a quantum spin Hall insulator [1-3], on a van der Waals superconductor, NbSe2.
Using scanning tunnelling microscopy and spectroscopy (STM/STS), we demonstrate that the WTe2 monolayer exhibits a proximity-induced superconducting gap due to the underlying superconductor and that the spectroscopic features of the quantum spin Hall edge state remain intact. Taken together, these observations provide conclusive evidence for proximity-induced superconductivity in the quantum spin Hall edge state in WTe2, a crucial step towards realizing 1D topological superconductivity and Majorana bound states in this van der Waals material platform. https://doi.org/10.1038/s41567-020-0816-x

Strong ferromagnetic exchange interaction under ambient pressure in BaFe2S3. Journal Article. Wang, Meng; Jin, S. J.; Yi, Ming; et al. Physical Review B. Inelastic neutron scattering measurements have been performed to investigate the spin waves of the quasi-one-dimensional antiferromagnetic ladder compound BaFe2S3, where a superconducting transition was observed under pressure [H. Takahashi et al., Nat. Mater. 14, 1008 (2015); T. Yamauchi et al., Phys. Rev. Lett. 115, 246402 (2015)]. By fitting the spherically averaged experimental data collected on a powder sample to a Heisenberg Hamiltonian, we find that the one-dimensional antiferromagnetic ladder exhibits a strong nearest-neighbor ferromagnetic exchange interaction (SJ_R = -71 ± 4 meV) along the rung direction, an antiferromagnetic SJ_L = 49 ± 3 meV along the leg direction, and a ferromagnetic SJ_2 = -15 ± 2 meV along the diagonal direction. Our data demonstrate that the antiferromagnetic spin excitations are a common characteristic for the iron-based superconductors, while specific relative values for the exchange interactions do not appear to be unique for the parent states of the superconducting materials.

A high-temperature ferromagnetic topological insulating phase by proximity coupling. Journal Article. Katmis, Ferhat; Lauter, Valeria; Nogueira, Flavio S.; et al. Nature (London). Topological insulators are insulating materials that display conducting surface states protected by time-reversal symmetry [1,2], wherein electron spins are locked to their momentum. This unique property opens up new opportunities for creating next-generation electronic, spintronic and quantum computation devices [3-5]. Introducing ferromagnetic order into a topological insulator system without compromising its distinctive quantum coherent features could lead to the realization of several predicted physical phenomena [6,7]. In particular, achieving robust long-range magnetic order at the surface of the topological insulator at specific locations without introducing spin-scattering centres could open up new possibilities for devices. Here we use spin-polarized neutron reflectivity experiments to demonstrate topologically enhanced interface magnetism by coupling a ferromagnetic insulator (EuS) to a topological insulator (Bi2Se3) in a bilayer system. This interfacial ferromagnetism persists up to room temperature, even though the ferromagnetic insulator is known to order ferromagnetically only at low temperatures (<17 K). The magnetism induced at the interface resulting from the large spin–orbit interaction and the spin–momentum locking of the topological insulator surface greatly enhances the magnetic ordering (Curie) temperature of this bilayer system. The ferromagnetism extends ~2 nm into the Bi2Se3 from the interface.
Owing to the short-range nature of the ferromagnetic exchange interaction, the time-reversal symmetry is broken only near the surface of a topological insulator, while leaving its bulk states unaffected. The topological magneto-electric response originating in such an engineered topological insulator [2,8] could allow efficient manipulation of the magnetization dynamics by an electric field, providing an energy-efficient topological control mechanism for future spin-based technologies. https://doi.org/10.1038/nature17635

SISGR: Atom chip microscopy: A novel probe for strongly correlated materials. Technical Report. Lev, Benjamin L. Microscopy techniques co-opted from nonlinear optics and high energy physics have complemented solid-state probes in elucidating the order manifest in condensed matter materials. Up until now, however, no attempts have been made to use modern techniques of ultracold atomic physics to directly explore properties of strongly correlated or topologically protected materials. Our current program is focused on introducing a novel magnetic field microscopy technique into the toolbox of imaging probes. Our prior DOE ESPM program funded the development of a novel instrument using a dilute gas Bose-Einstein condensate (BEC) as a scanning probe capable of measuring tiny magnetic (and electric) DC and AC fields above materials. We successfully built the world's first "scanning cryogenic atom chip microscope" [1], and we now are in the process of characterizing its performance before using the instrument to take the first wide-area images of transport flow within unconventional superconductors, pnictides and oxide interfaces (LAO/STO), topological insulators, and colossal magnetoresistive manganites. We will do so at temperatures outside the capability of scanning SQUIDs, with ~10x better resolution and without 1/f-noise. A notable goal will be to measure the surface-to-bulk conductivity ratio in topological insulators in a relatively model-independent fashion [2]. We have completed the construction of this magnetic microscope, shown in Figure 1. The instrument uses atom chips (substrates supporting micron-sized current-carrying wires that create magnetic microtraps near surfaces for ultracold thermal gases and BECs) to enable single-shot and raster-scanned large-field-of-view detection of magnetic fields. The fields emanating from electronic transport may be detected at the 10^-7 flux quantum (Φ0) level and below (see Fig. 2); that is, few to sub-micron resolution of sub-nanotesla fields over single-shot, millimeter-long detection lengths. By harnessing the extreme sensitivity of atomic clocks and BECs to external perturbations, we are now in a position to use atom chips for imaging transport in new regimes. Scanning quantum gas atom chip microscopy introduces several very important features to the toolbox of high-resolution scanning microscopy of strongly correlated or topological materials: simultaneous detection of magnetic and electric fields (down to the sub-single electron charge level [3,4]); no invasive large magnetic fields or gradients; simultaneous micro- and macroscopic spatial resolution; DC to MHz detection bandwidth; freedom from 1/f flicker noise at low frequencies; and, perhaps most importantly, the complete decoupling of probe and sample temperatures. The atom chip microscope can operate at maximum sensitivity and resolution without regard to the substrate temperature.
While the BEC is among the coldest objects realizable (100 nK temperatures are typical), the atom chip substrate can be positioned 1 μm away from the BEC and be as hot as 400 K or as cold as the cryostat can cool. This is because, unlike superconducting probes, whose temperature is closely coupled to nearby materials, quantum gases are immune to radiative heating. The energy gap between a Rb atom's ground state and first excited state far exceeds the typical energy of room-temperature blackbody radiation; such atoms are therefore transparent to radiative heating by materials at room temperature or below. We experimentally demonstrated a new atom chip trapping system that allows the placement and high-resolution imaging of ultracold atoms within microns from any ≤100 μm-thin, UHV-compatible material, while also allowing sample exchange with minimal experimental downtime [1]. The sample is not connected to the atom chip, allowing rapid exchange without perturbing the atom chip or laser cooling apparatus. Exchange of the sample and retrapping of atoms has been performed within a week turnaround, limited only by chamber baking. Moreover, the decoupling of sample and atom chip provides the ability to independently tune the sample temperature and its position with respect to the trapped ultracold gas, which itself may remain in the focus of a high-resolution imaging system. See Fig. 3. We confine 100-nK BECs of 10^4 87Rb atoms near a gold-mirrored 100-μm-thick silicon substrate. The substrate can be cooled to 35 K without use of a heat shield, while the atom chip, 120 μm away, remains at room temperature. Atoms may be imaged with 1-μm resolution and retrapped every 16 s, allowing rapid data collection. Straightforward improvements will allow us to push sample temperatures close to 4 K, and improve imaging resolution from 1 μm down to a few 100 nm, thereby providing 10^-9 Φ0 detection sensitivity. We will test the utility of this technique by imaging the magnetic fields emanating from electronic transport and domain percolation in several interesting examples of strongly correlated or topologically protected materials. STM, transport, and x-ray scattering experiments have, among others, revealed the existence of a quantum liquid crystal state in iron (pnictide) and cuprate superconductors. This strongly correlated state of matter could also be detected by imaging the fluctuating transport (spatially and in time) of electrons as the phase/regime boundary is crossed between the pnictide non-Fermi liquid (cuprate strange metal) and the pnictide magnetic phase (cuprate pseudogap regime). Our ability to image wide-area inhomogeneous current flow from room temperature to <10 K will allow us to study the developing domain structure and transport near twin boundary interfaces through the T_N ~ 50-150 K nematic transition recently identified in bulk transport experiments by Ian Fisher's group in underdoped Fe-arsenide superconductors [6]. Again, this highlights a main feature of our cryogenic atom chip microscope: the ability to image transport regardless of the sample temperature, since the BEC, at nK temperatures, is transparent to blackbody radiation even when held a few microns from the surface. References: 3) S. Aigner et al., Long-range order in electronic transport through disordered metal films, Science 319 (2008). 4) S. Wildermuth et al., Sensing electric and magnetic fields with Bose-Einstein condensates, Appl. Phys. Lett. 88, 264103 (2006). 5) M. Lu, N. Q. Burdick, S.-H. Youn, and B. L. Lev, Strongly Dipolar Bose-Einstein Condensate of Dysprosium, PRL 107, 190401 (2011).
6) J.-H. Chu, J. Analytis, K. De Greve, P. McMahon, A. Islam, Y. Yamamoto, and I. Fisher, In-Plane Resistivity Anisotropy in an Underdoped Iron Arsenide Superconductor, Science 329, 824 (2010). Publications: 1) M. A. Naides, R. W. Turner, R. A. Lai, J. M. DiSciacca, and B. L. Lev, Trapping ultracold gases near cryogenic materials with rapid reconfigurability, Applied Physics Letters 103, 251112 (2013). 2) B. Dellabetta, T. L. Hughes, M. J. Gilbert, and B. L. Lev, Imaging topologically protected transport with quantum degenerate gases, Phys. Rev. B 85, 205442 (2012).

Formation mechanism of superconducting phase and its three-dimensional architecture in pseudo-single-crystal KxFe2-ySe2. Journal Article. Liu, Yong; Xing, Qingfeng; Straszheim, Warren E.; et al. Physical Review B. Here, we report how the superconducting phase forms in pseudo-single-crystal KxFe2-ySe2. In situ scanning electron microscopy (SEM) observation reveals that, as an order-disorder transition occurs on cooling, most of the high-temperature iron-vacancy-disordered phase gradually changes into the iron-vacancy-ordered phase, whereas a small quantity of the high-temperature phase retains its structure and aggregates into stripes with a higher iron concentration but lower potassium concentration compared to the iron-vacancy-ordered phase. The stripes that are generally recognized as the superconducting phase are actually formed as a remnant of the high-temperature phase with a compositional change after an "imperfect" order-disorder transition. It should be emphasized that the phase separation in pseudo-single-crystal KxFe2-ySe2 is caused by the iron-vacancy order-disorder transition. The shrinkage of the high-temperature phase and the expansion of the newly created iron-vacancy-ordered phase during the phase separation rule out the mechanism of spinodal decomposition proposed in an early report [Wang et al., Phys. Rev. B 91, 064513 (2015)]. Since the formation of the superconducting phase relies on the occurrence of the iron-vacancy order-disorder transition, it is impossible to synthesize a pure superconducting phase by a conventional solid state reaction or melt growth. By focused ion beam-scanning electron microscopy, we further demonstrate that the superconducting phase forms a contiguous three-dimensional architecture composed of parallelepipeds that have a coherent orientation relationship with the iron-vacancy-ordered phase.
24 Nov 2021, 09:00 → 26 Nov 2021, 17:20 Australia/Sydney

ABSTRACT SUBMISSION NOW CLOSED

Due to various COVID restrictions the committee has decided to change this to an online-only event; ANSTO hopes we can commence face-to-face meetings next year. A schedule of online webinars allows scientists who have accessed any of ANSTO's research infrastructure (which includes the beamlines of the Australian Synchrotron, neutron scattering instruments at the Australian Centre for Neutron Scattering, instruments at the Centre for Accelerator Science and deuterated products from the National Deuteration Facility) to showcase their work during the last year. The event will coincide with the combined annual meetings of the Australian Neutron Beam Users Group (ANBUG) and the Australian Synchrotron Users Advisory Committee (UAC). The announcement of excellence in neutron and synchrotron research awards takes place during the meeting. Investigations using the instruments extend to a diverse range of scientific areas including advanced materials, biomedicine, life sciences, food science, physics, surface and condensed matter, chemistry, soft matter and crystallography, manufacturing and engineering, earth, environment and cultural heritage, and research relating to the instruments and instrument techniques. AINSE is offering up to 100 student registrations for students from AINSE Member Universities to attend this event, with preference given to students presenting either an oral or a poster.

Wed, 24 Nov | Thu, 25 Nov | Fri, 26 Nov

09:00 → 09:15 Update: Organisational

Prof Despina Louca - Plenary 30m
To be supplied
Speaker: Prof. Despina Louca (University of Virginia)

Morning Tea 30m

Structural studies of solid-state ionic conductors at the limits of diffraction and beyond 20m
The structures of solid-state ionic conductors are a compromise between long-range (and hence long-term) lattice stability and short-range coordinative flexibility. To rationally design improved versions for applications such as fuel cells and batteries, we need to understand how this compromise is reached. Diffraction methods alone are inadequate – whether using X-rays or neutrons, ex situ or operando, conventional crystallography or total scattering analysis – because of their dynamic nature. The time-averaged structure is not the whole story. In this talk I will show how we use experimental X-ray and neutron spectroscopy, and computational structure and dynamics calculations, to supplement diffraction when studying solid-state oxide, proton and lithium ionic conductors. We can then validate the insights gained by making targeted chemical modifications and testing their effects on structure and functional physical properties.
Speaker: Chris Ling (University of Sydney)

Magnetoelastic coupling as a relaxation pathway for single ion magnets observed using inelastic neutron scattering 15m
Single ion magnets (SIMs) are materials that show an energy barrier to spin reorientation without long-range magnetic order. Such materials have been proposed as potential materials for high-density data storage or as qubits. The origin of the effect lies in the crystal field splitting of the central lanthanoid ion. The determination of crystal field splitting has long been performed using INS, and this has been readily extended to SIMs [1]. In recent years the operating temperature of these SIMs has increased dramatically, with magnetic hysteresis observed above liquid nitrogen temperatures [2].
The limiting factor is no longer the height of the energy barrier for reorientation, but the minimization of relaxation via pathways such as quantum tunneling of magnetization and Orbach relaxation. Such phenomena have previously been shown to be measurable using INS and QENS techniques [3,4,5]. In our recent work we have revisited the INS of Na9[Ho(W5O18)2] to analyse the presence or absence of a QENS signal and so determine whether Orbach relaxation occurs [3]. We have also performed an analysis of the peak widths of the crystal field excitations and modelled these using a magnetoelastic model [4]. This reanalysis demonstrates that INS holds more information than just the energy scale of the system for single ion magnets.
[1] M. A. Dunstan et al., European Journal of Inorganic Chemistry, 1089 (2019).
[2] F.-S. Guo et al., Science, 362, 1400 (2018).
[3] M. Roepke et al., Physical Review B, 60, 9793 (1999).
[4] S. W. Lovesey and U. Staub, Physical Review B, 61, 9130 (2000).
[5] M. Ruminy et al., Physical Review B, 95, 060414(R) (2017).
Speaker: Richard Mole (ANSTO)

Total scattering: science that's better than average 15m
Local-scale defects and disorder are essential in the development of new advanced functional materials. However, such features are often difficult to characterize and understand without suitable probes. Powder diffraction is a powerful technique for understanding atomic structures; however, Bragg peaks alone are limited to information regarding the "average" or long-range structure. The presence of local-scale disorder results in diffuse features that occur beneath and between the Bragg peaks. Hence, the characterisation of nano-scale (0.1 – 3 nm) features in functional materials demands an alternative approach. Total scattering involves the collection of both Bragg and diffuse data over a wide Q-range. This can be Fourier transformed to generate the pair distribution function (PDF), which corresponds to an interatomic histogram of atom-atom pairs in real space. Analyzing such data can enable the development of atomic models that capture the local- and long-range structural features. Such measurements require the use of high-energy X-rays and/or neutrons. With the development of the advanced diffraction and scattering beamline at the Australian Synchrotron, such measurements will become viable in Australia. This presentation will showcase clear-cut examples of the application of total scattering in materials chemistry. This will show the necessity for local structure analysis in developing a complete understanding of structure-property relationships.
Speaker: Frederick Marlton (University of Sydney)

Single-Crystal-to-Single-Crystal Transformations of Metal–Organic-Framework-Supported, Site-Isolated Trigonal-Planar Cu(I) Complexes with Labile Ligands 15m
Transition-metal complexes bearing labile ligands can be difficult to isolate and study in solution because of unwanted dinucleation or ligand substitution reactions. Metal–organic frameworks (MOFs) provide a unique matrix that allows site isolation and stabilization of well-defined transition-metal complexes that may be of importance as moieties for gas adsorption or catalysis. Herein we report the development of an in situ anion metathesis strategy that facilitates the postsynthetic modification of Cu(I) complexes appended to a porous, crystalline MOF.
By exchange of coordinated chloride for weakly coordinating anions in the presence of carbon monoxide (CO) or ethylene, a series of labile MOF-appended Cu(I) complexes featuring CO or ethylene ligands are prepared and structurally characterized using X-ray crystallography. These complexes have an uncommon trigonal planar geometry because of the absence of coordinating solvents. The porous host framework allows small and moderately sized molecules to access the isolated Cu(I) sites and displace the "place-holder" CO ligand, mirroring the ligand-exchange processes involved in Cu-centered catalysis.
Speaker: Ricardo Peralta (University of Adelaide)

Deuteration of Rec1-Resilin and its hydrogel for biomedical applications 15m
Rec1-resilin is a highly hydrophilic protein that exhibits a wide range of multi-responsiveness and is well known for its superelasticity. Self-assembly of Rec1-resilin has been studied in vitro; however, it is difficult to understand the interaction and molecular organisation of the protein in varying biological environments due to the presence of complex systems. Therefore, it is critical to synthesise Rec1-resilin in deuterated form, which would provide a unique neutron scattering length density for neutron scattering experiments. With a view to understanding the self-assembly and co-assembly of Rec1-resilin and tailoring its responsiveness, we successfully synthesised deuterated Rec1-resilin using a modified protocol. Utilising this modified protocol, we hope to develop modular versions of Rec1-resilin with a hydrophobic segment to explore its unique properties, from its conformational structure to its binding and organisation, with complementary and contrasting neutron scattering techniques, with the intention of developing biomimetic gels for adhesion and repair of tissue. Ultimately, this will show the impact of deuteration on the protein through a comparison of the structure and organisation of the deuterated and unlabelled modular versions of Rec1-resilin in different environments, which will not only provide a fundamental understanding of phase behaviour but also lead to the utilisation of isotopically-labelled modular Rec1-resilin protein and its hydrogels for biomedical applications.
Speaker: Nisal Wanasingha

Interfacial spin-structures in Pt/Tb3Fe5O12 bilayer films on Gd3Ga5O12 substrates 15m
The insulating ferrimagnets of the rare-earth iron garnet (ReIG) family are researched intensively owing to their strong magneto-electric responses. Proximity coupling between an insulating ReIG and a heavy metal, such as Pt, has been shown to lead to an anomalous Hall effect (AHE). Amongst the ReIG family, TbIG is less explored than the well-known YIG films. In this article, we report that thin films (40 nm) of the ferrimagnetic insulator Tb3Fe5O12 (TbIG) were grown on (111)-oriented Gd3Ga5O12 (GGG) substrates using the pulsed laser deposition technique, some of which were capped by a thin Pt layer. Scanning transmission electron microscopy and X-ray diffraction show that the oxide films are epitaxial with high crystalline quality and sharp interfaces. Detailed polarized neutron reflectometry (PNR) was used to study the spin structure above, below and near the compensation point to search for interfacial spin effects. The neutron reflectivity in the two spin states (spin up (R+) and spin down (R-)) and the spin asymmetry (SA = (R+ - R-)/(R+ + R-)) show trends above 100 K consistent with the weak ferrimagnetic moment and compensation point.
Remarkably, the PNR spectra at 7 K show an additional splitting of R+ and R-, indicating strong magnetization in the film and the emergence of an additional magnetic layer; STEM mapping shows that this additional layer occurs at the TbIG/GGG interface, where a chemical difference in the Gd:Ga ratio arises as a product of the growth conditions. This effect appears in both capped and Pt-free TbIG films. Reversal of the AHE sign occurred between 145 K and room temperature. The peculiar behavior of the AHE loop around 220 K is related to the compensation point of TbIG.
Speaker: Ms Roshni Yadav

Operando investigation of a lead-acid battery with the IMBL 15m
Lead-acid batteries play a key role in the energy storage marketplace. They are often cheaper, safer and more recyclable than alternative electrochemical energy storage systems. Under traditional energy storage applications such as starting, lighting and ignition batteries, they provide a great balance of affordability, lifespan and performance to the consumer. However, as the demands placed on energy storage systems have increased over recent decades, lead-acid batteries have been shown to have a markedly shortened lifespan. Investigations have found that the cause of this failure is related to an uneven utilization of active material in the Pb electrode. However, the process whereby uneven utilization of active material leads to a significant reduction in cycle-life has yet to be determined. Typically, investigations of uneven material utilization are conducted after the fact and are destructive (EPMA, SEM, XRD etc.). Our project aimed to determine whether the IMBL could be a suitable candidate for non-destructive, operando investigations into active material utilization. To achieve this, a lead-acid cell was designed specifically for the IMBL. It was cycled whilst being simultaneously imaged at 85 keV, with an exposure time of 0.8 s at a resolution of 5.8 µm. Results show that key reaction products could be observed in situ and, furthermore, that they could be localized to certain regions within the Pb electrode. We show that the IMBL could be a powerful tool to further the current understanding of lead-acid batteries.
Speaker: Mr Chad Stone (Swinburne University of Technology)

The Death Kiss: understanding how the zombie protein, MLKL, is triggered to kill cells by necroptosis 20m
In 2012, Mixed lineage kinase domain-like (MLKL), a catalytically-dead ("zombie") cousin of conventional protein kinases, termed a pseudokinase, was implicated as the key effector in the programmed necrosis (or necroptosis) cell death pathway. This pathway has been implicated in innate immunity, the pathogenesis of inflammatory diseases, and tissue injury arising from ischemia-reperfusion. As a result, an improved fundamental knowledge of MLKL's activation mechanism is of enormous interest as we and others look to target the pathway therapeutically. Here, I will describe our recent work dissecting the chronology of events in this pathway using novel tools, biochemistry, microscopy, proteomics and structural biology methods. Our structural studies were enabled by the MX and SAXS beamlines at the Australian Synchrotron and isotopic protein labelling at the National Deuteration Facility.
Speaker: Dr James Murphy (Walter and Eliza Hall Institute of Medical Research)

Structures of biliary micelles during solubilisation of lipids mimicking the digestion products of human and bovine milk 15m
Milk is our sole source of nutrition for the first six months of life, and milk lipids carry fat-soluble nutrients through the gut as well as providing most of the energy we consume with milk. The digestion and absorption of lipids, predominantly triglycerides, and entrained nutrients is therefore important for survival and growth. Milk triglycerides are regarded as being amongst the most chemically complex mixtures; their composition is species-dependent and determines the mixture of fatty acids and monoglycerides that form during their digestion. Most lipid digestion takes place in the small intestines, where bile salts mixed with phospholipids in the intestinal fluids form a colloidal sink into which the poorly-soluble digestion products can partition and be absorbed at the intestinal walls. This work describes attempts to simulate how the structures of biliary micelles change when they absorb milk digestion products under intestinal conditions. Mixtures of fatty acids and monoglycerides were prepared to mimic the digestion products of human and bovine milk. The chemical complexity of the mixtures was varied by including between four and eight different lipid chain types in the digestion product mixtures. The effect of pH on micelle structure was also studied within the range of pH 6.4-7.7, consistent with the increase in pH along the intestinal tract. The structural differences when these complex lipid mixtures were solubilised by bile salt/phospholipid micelles were identified using the SAXS/WAXS beamline at the ANSTO Australian Synchrotron. The lipid composition was found to be a primary driver of micelle shape and size, with pH having a secondary effect in reducing aggregate formation at higher pH.
Speaker: Dr Andrew Clulow (ANSTO Australian Synchrotron)

Hepatic lipid composition in dietary models of high iron NAFLD investigated with Synchrotron Infrared and X-Ray Fluorescence microscopy 15m
Hepatocytes are essential for maintaining homeostasis of mammalian iron and lipid metabolism. Serious health consequences have been linked to dysregulation of both areas. One such consequence is non-alcoholic fatty liver disease (NAFLD). Approximately 30% of individuals with NAFLD demonstrate a moderate increase in hepatic iron; however, the mechanism and metabolic consequences remain under-investigated. We assessed the metabolic consequences using mice fed either a control or high fat (HF) diet, with or without high iron. Attenuated Total Reflection Infrared Microscopy (Macro-ATR) at the Australian Synchrotron was used to investigate lipid composition and distribution, and X-Ray Fluorescence Microscopy (XRF) at the Diamond Light Source (UK) was used to determine subcellular iron concentration and distribution. Peri-portal hepatocytes of HF-fed animals exhibited elevated lipid parameters, including ester and free fatty acid concentrations ~7x that of controls (P<0.005). The increase was seen within lipid droplets, which were primarily composed of cholesteryl esters and triglycerides. When HF livers were iron loaded, reductions in all lipid parameters were observed, with ~2.6x lower relative ester concentration (P<0.05) compared to HF only.
Iron-loaded HF peri-portal hepatocytes exhibited shorter chain lengths (P<0.005) and a shift in the olefinic peak (3011 cm-1) compared to HF (3007 cm-1) (P<0.05), suggesting the shorter chains were more polyunsaturated. Iron accumulated within mitochondria of peri-portal hepatocytes of animals fed high iron diets. Polyunsaturated lipids are strong activators of hepatic lipid breakdown, and this study suggests a role for iron in reducing the lipid burden by remodelling hepatic lipids in NAFLD. Speakers: Mr Clinton Kidman (Curtin University), Dr Cyril Mamotte (Curtin University) Investigating the Therapeutic Benefit of Spermidine in a Pre-Clinical Model of Muscular Dystrophy 15m Research into treatment for Duchenne Muscular Dystrophy (DMD) typically focuses on deterioration of muscle; however, bone health is also severely compromised. Current treatment with corticosteroids exacerbates bone loss, so novel therapies targeting both muscle and bone are needed. Studies on bone health in a pre-clinical model, mdx mice, are limited and have conflicting results. Objective of study: To characterise aspects of bone health in mdx mice and investigate whether spermidine might attenuate disease symptoms and spare bone. Bone structure and function were assessed in 16-week-old mdx mouse femurs by three-point bending, microarchitectural assessment using the Imaging and Medical Beamline (IMBL) at the Australian Synchrotron, and by histological analysis. Cortical thickness and cortical bone area fraction were lower in dystrophic mice compared to wild-type controls (WT). No differences were observed in metaphyseal trabecular bone morphometry. Three-point bending indicated that mdx femurs required less stress to reach yield point and failure but were able to sustain damage for a longer period (post-yield displacement) compared to WT mice. Despite this, mdx femurs required more energy to reach failure. Histology revealed lower osteoblast numbers in mdx mice. Spermidine treatment did not appear to compromise bone health in either WT or mdx mice, which is important as current treatments typically worsen bone quality. This study provides novel data about aspects of skeletal morphology in mdx mice at 16 weeks of age, and provides new techniques using pre-clinical models to investigate potential therapies for DMD patients that might target both muscle and bone. Speaker: Lauryn Schaddee Van Dooren (University of Melbourne) Structural insights into the ferroxidase and iron sequestration mechanisms of ferritin from Caenorhabditis elegans 15m Iron is an essential trace element that, when in excess, becomes highly toxic [1]. Intracellular iron concentration must be strictly regulated by a network of interacting mechanisms [2]. Ferritin is a ubiquitous iron-storage protein that forms a highly conserved 24-subunit spherical cage-like structure. Ferritin catalyses the oxidation of iron (II) to iron (III) and sequesters the newly oxidised iron (III) as a mineral core to prevent cellular damage [3]. In this study, we use the model organism, Caenorhabditis elegans, to investigate iron uptake, oxidation, storage and release by ferritin. C. elegans expresses two ferritin proteins, FTN-1 and FTN-2, which both exhibit ferroxidase activity [4]. FTN-2 functions at a rate significantly faster than FTN-1 despite conservation of all catalytic residues, suggesting that structural differences at a location distinct from the ferroxidase centre may influence catalytic activity.
We solved the X-ray crystal structures of FTN-1 (1.84 Å) and FTN-2 (1.47 Å), and the cryo-EM structure of FTN-2 (1.88 Å). FTN-1 and FTN-2 both adopt the conserved 24-subunit cage-like structure and bind one iron (II) in the ferroxidase centre of each chain. We postulate that iron (II) accesses the ferroxidase centre through a three-fold symmetrical pore. This pore is notably larger and more negatively charged in the FTN-2 structure and may facilitate easier access of iron (II) to the ferroxidase centre, resulting in a faster catalysis rate. These structural insights further our understanding of the mechanisms used by ferritin to regulate iron storage and the overall role of ferritin in iron homeostasis. Speaker: Tess Malcolm (Bio21 Molecular Science and Biotechnology Institute) Spectroscopic Analysis of Age-Related Changes in the Brain Lateral Ventricles During Ageing 15m Alzheimer's disease is the most common form of dementia and poses significant health and economic concerns. Currently, the disease has no cure, and it is expected that over 1 million people could be affected by 2058 in Australia alone. The content and distribution of metals such as Fe, Cu and Zn are known to change in the ageing brain; thus, an increased understanding of the mechanistic role of metal dis-homeostasis may illuminate new therapeutic strategies. The brain lateral ventricles, which play a role in controlling metal and ion transport, have shown increasing levels of copper surrounding their walls with ageing. As a redox-active metal, copper can induce oxidative stress, a process that occurs during Alzheimer's disease onset and progression. Our research group has been interested in determining whether the age-related elevation of copper surrounding the lateral ventricles is inducing oxidative stress in that region. In this study, we have utilised X-Ray Absorption Spectroscopy (XAS) at the Stanford Synchrotron Radiation Lightsource to analyse different chemical forms of sulfur and measure oxidative stress by analysis of disulfides. Additionally, we used the infrared microscopy beamline at the Australian Synchrotron to identify whether any other markers of oxidative stress were present around the ventricles. Further insights into metal dis-homeostasis and its influence on other biochemical pathways may help to reveal some of the neurochemical mechanisms involved in progression of Alzheimer's disease. In turn, this may help pave the way for potential preventative or therapeutic models. Speaker: Ms Ashley Hollings (School of Molecular and Life Sciences, Curtin University, GPO Box U1987, Bentley, Western Australia 6845, Australia. Curtin Health Innovation Research Institute, Curtin University, Bentley, Western Australia 6102, Australia.) Imaging Breast Microcalcifications Using Dark-Field Signal in Propagation-Based Phase-Contrast Tomography 15m Breast microcalcifications are an important primary radiological indicator of breast cancer. However, microcalcification classification and diagnosis can still be challenging for radiologists due to limitations of the standard 2D mammography technique, including spatial and contrast resolution. In this study, we propose an approach to improve the detection of microcalcifications in propagation-based phase-contrast X-ray tomography (PB-CT) of breast tissues. Five fresh mastectomies containing microcalcifications were scanned at the Imaging and Medical beamline of the Australian Synchrotron at different X-ray energies and radiation doses.
Both bright-field and dark-field images were extracted from the same data sets using different image processing methods [1]. A quantitative analysis was performed in terms of visibility and contrast-to-noise ratio of the microcalcifications. The results show that the visibility of the microcalcifications in the dark-field images is more than two times higher than in the bright-field images. Dark-field images have also provided more accurate information about the size and shape of the microcalcifications [2]. Therefore, dark-field PB-CT images are likely to help radiologists evaluate the probability of breast cancer more effectively. This work has been conducted in the course of developing a medical imaging facility at the Australian Synchrotron for advanced breast cancer imaging. [1] T. E. Gureyev, et al., Phys. Med. Biol. 65, 215029, 2020. [2] A. Aminzadeh et al., submitted. Speaker: Alaleh Aminzadeh Convener: Andrew Clulow (ANSTO Australian Synchrotron) From Niche to Mainstream: Ptychography Comes of Age. 20m In the space of a few short years, ptychography has moved from a niche method [1,2] to emerging as a mainstream technique for user science [3,4]. Until recently, ptychography required significant expert user experience to collect and reconstruct useable data, with a field of view often limited to a small area (such as a single cell) [5] by data collection and reconstruction limitations [6]. Now, however, ptychography data can be collected at high speed [7], complementary to modern X-ray fluorescence fly-scanning [8], with data pipelines providing results within a few hours using GPU-enabled reconstruction algorithms [9]. These advances allow ptychography to be applied to larger areas and sample replicates [10], allowing statistically significant user science to be done in a reasonable timeframe while simultaneously collecting X-ray fluorescence data [11-13]. In this presentation, I highlight key milestones on the ptychography journey as it makes its way into the mainstream, as well as looking towards the future. Michael W. M. Jones1, Grant A. van Riessen2,3, Christoph E. Schrank4, Nicholas W. Phillips5, Gerard N. Hinsley2, Martin D. de Jonge6, Cameron M. Kewish2,6 1Central Analytical Research Facility, Queensland University of Technology, Brisbane QLD 4000, Australia 2Department of Chemistry and Physics, La Trobe Institute for Molecular Science, La Trobe University, Bundoora VIC 3086, Australia 3Melbourne Centre for Nanofabrication, Clayton VIC 3168, Australia 4School of Earth and Atmospheric Sciences, Faculty of Science, Queensland University of Technology, Brisbane QLD 4000, Australia 5Paul Scherrer Institut, 5232 Villigen PSI, Switzerland 6Australian Nuclear Science and Technology Organisation, Australian Synchrotron, Clayton VIC 3168, Australia Speaker: Michael Jones (QUT) How to take a perfect image with DINGO 15m Neutron tomography is a powerful non-destructive technique used to study the internal structure of opaque objects. Neutron images are obtained by exposing an object to a uniform neutron beam. The transmitted neutrons interact with a phosphor, which converts the neutrons to visible light that is then demagnified onto a CCD camera. The modulation transfer function (MTF) is routinely used to determine the sharpness of an image, i.e. the ability of the imaging system to transfer information from an object to an image. The spatial frequency (SF) is the rate of transition between light and dark features in the image.
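As background (this definition is added here for clarity and is not part of the abstract), the MTF at a spatial frequency f is commonly defined as the ratio of the modulation (Michelson contrast) of a sinusoidal pattern in the image to that in the object:

\[
M = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}, \qquad \mathrm{MTF}(f) = \frac{M_{\mathrm{image}}(f)}{M_{\mathrm{object}}(f)} .
\]

In practice the MTF is often estimated from the Fourier transform of the line spread function measured across a sharp edge.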
For a perfect system where all of the frequency information is passed from object to image equally, the MTF will be 1 (100%) for all spatial frequencies, and all features and contrast in the object will be transferred to the image. We performed a series of measurements to optimise the time necessary to obtain high-resolution radiographs with the DINGO instrument. We determined the MTF over a range of experimental conditions to understand the various contributions of DINGO's imaging system variables to radiograph resolution. The system components varied in this study were the two beam modes, different scintillator screens, and the pixel resolution of different cameras and lenses. We also compared different exposure times of the object to the neutron beam to try to understand the minimum exposure time that will generate good-resolution radiographs. Details of the use of this method for determining the quality of a neutron tomographic imaging system will be presented, and the MTF data will be used to determine the optimal operating arrangement. Speaker: Vili Grigorova (Macquarie University) The Imaging and Medical Beamline is expanding 15m Synchrotron radiation has many advantages, but it is also flawed. And its biggest flaw happens to be its fundamental intrinsic property! The radiation is emitted in the plane of the stored beam and we are stuck with the infamous 'letterbox door' beam profile. At least when not tinkering with focused undulator beams. In clinical imaging research, this beam shape is a serious disadvantage. In fact, when compared with the field of view of commercial medical imaging devices, it is often the showstopper when engaging with a clinician to discuss medical application of the IMBL. So how will we image human patients in 2022, as part of our world-leading research project in breast CT imaging and cancer detection? Our vertical 'letter box opening' at 135 metres is 3 cm, at 35 keV, with a roll-off of 50%. This is far from ideal for imaging the breasts of a patient lying in a prone position on our robotic positioning and scanning stage. Consequently, we have designed and tested a Bragg-Bragg beam expander to be placed downstream of our double-bent-Laue primary monochromator. The net result is an 8 cm vertical beam profile at 135 metres, with minimal roll-off, to match the vertical field of view of our new EIGER2 CdTe X 3M clinical detector. This paper will present the design of our beam expander and the results of our in-air tests. This device will be installed in vacuum in the next machine shutdown. Speaker: Daniel Hausermann (Australian Synchrotron (ANSTO)) Quantifying the x-ray dark-field signal in single-grid imaging 15m X-ray imaging has progressed in recent decades to capture not only a conventional attenuation image, but also a 'phase-contrast' image that visualises those features that are difficult to see with attenuation. More recently, techniques have been developed to capture a 'dark-field' signal. The dark-field signal is generated by ultra-small-angle x-ray scattering from unresolved sample features, such as bubbles, powders or fibres, providing information about sample microstructure that is inaccessible using full-field conventional or phase-contrast x-ray imaging. Dark-field imaging can be useful in a range of fields, including medical diagnosis, materials science and airport screening. Single-grid imaging is an emerging x-ray imaging technique that only requires one optical element – a grid or, in the speckle-based variant, a piece of sandpaper.
The grid or sandpaper generates a reference pattern that is warped and blurred when the sample is introduced, revealing phase and dark-field effects, respectively. This technique is suitable for dynamic imaging since the three complementary image signals can be extracted from a single sample exposure, unlike previous methods. Until now, this technique has primarily been applied in phase-contrast imaging. In this work, we derive a method to extract and quantify a dark-field signal from single-grid imaging, relating the signal to the number of sample microstructures. We also apply our method of analysis to images captured at the Australian Synchrotron's Imaging and Medical Beamline to evaluate how our measurements align with theoretical predictions. Future directions include investigating how the sample microstructure size affects the dark-field signal strength. Speaker: Ying Ying How Total Scattering Measurements at the Australian Synchrotron Powder Diffraction Beamline: Capabilities and Limitations 15m The PD beamline at the Australian Synchrotron (ANSTO) consistently receives requests to carry out total scattering experiments for various materials including battery electrodes, piezoelectrics and coordination frameworks. In this study we describe the capabilities and limitations of carrying out total scattering experiments on the Powder Diffraction beamline. A maximum instrument momentum transfer of 19 Å-1 can be achieved. Our results detail how the pair distribution function is affected by Qmax, absorption, and counting time at the PD beamline. We also trial a variable counting time strategy using the Mythen II detector. Refined structural parameters exemplify how the PDF is affected by these parameters. Total scattering experiments can be carried out at PD, although there are limitations. These are: (1) only measurements on stable systems are possible, and non-ambient measurements require the temperature to be held during data collection, (2) it is essential to dilute highly absorbing samples (µR>1), and (3) only correlation lengths >0.35 Å may be resolved. A case study comparing the PDF atom-atom correlation lengths with EXAFS-derived radial distances of Ni and Pt nanoparticles is also presented, which shows good agreement between the two techniques. The results here can be used as a guide for researchers considering total scattering experiments at the PD beamline. Speaker: Dr Anita D'Angelo (Australian Synchrotron) Update on Polarised Neutron Capabilities at the Australian Centre for Neutron Scattering 15m The Australian Centre for Neutron Scattering offers neutron polarisation capabilities which are compatible with six different neutron scattering instruments, using a combination of polarising supermirrors and $^3$He cell spin filters. An overview of these capabilities will be given, followed by a description of some recent experiments which make use of a variety of these capabilities on instruments, including the cold triple-axis spectrometer Sika, and the small-angle neutron scattering instrument Quokka with a recently-commissioned 7 T compensated vertical magnet. Finally, current and future work to expand capabilities will be outlined, such as a new system for polarisation analysis experiments with magnetic fields controlled in 3D for the time-of-flight spectrometer Pelican, which will soon be offered to the user community, and a bespoke 0.5 T horizontal magnet system for the thermal triple-axis spectrometer Taipan.
Speaker: Andrew Manning (ANSTO) KOALA 2: Implications for magnetic structural and exotic studies 15m The KOALA single-crystal diffractometer has now been operating for more than a decade and is now nearing retirement (mid-2022). The technical improvements of the new KOALA 2 diffractometer, and the implications for conventional chemical crystallography are described in separate presentations at this meeting. In this presentation we will present the implications for less conventional studies, such as: magnetic structures; incommensurate and other complex structures; very small samples; high-pressure experiments; studies over many temperatures; various preparatory studies of inelastic and diffuse scattering. Speaker: Dr Ross Piltz (ACNS, ANSTO) Lunch 30m Update: ACNS Australian Centre for Neutron Scattering Update 20m The Australian Centre for Neutron Scattering (ACNS) utilises neutrons from Australia's multi-purpose research reactor, OPAL, to solve complex research and industrial problems for Australian and international users via merit-based access and user-pays programs. Neutron scattering techniques provide the research community and industry with unique tools to study the structure, dynamics and properties of a range of materials, helping scientists understand why materials have the properties they do, and helping tailor new materials. An update will be given on the OPAL reactor and its neutron beam facilities, the status of the neutron beam instruments and supporting capabilities, user program, and future plans. Speaker: Jamie Schulz (ANSTO) Update: NDF Deuteration at the NDF: facility overview and update on diversity of capabilities, user program and impact. 15m Deuteration can provide contrast and improved resolution to assist investigations into the relationship between molecular structure and function of molecules of both biological and synthetic origin. Molecular deuteration of organic compounds and biomolecules increases options available in characterisation and complex structure function investigations using neutron scattering and reflectometry, nuclear magnetic resonance (NMR), mass spectrometry (MS) and other techniques and also creates functional materials with superior properties in life sciences, pharmaceutical and advanced technology applications. The National Deuteration Facility (NDF) at the Australian Nuclear Science and Technology Organisation (ANSTO) has the specialised expertise and infrastructure to deliver deuteration through both biological and chemical molecular deuteration techniques to provide for a range of experimental and research applications that benefit from availability of custom deuterated molecules. The NDF has developed a suite of capabilities supporting researcher and industry access to a diversity of molecules. Capabilities include production of isotopically labelled proteins (variably deuterated, multiple-labelled - 2H, 13C, 15N) and cholesterol-d45 through bacterial recombinant expression and bio-engineered yeast growth respectively and catalysed 1H/2H exchange and chemical synthesis of a wide range of small organic molecules using tailored deuteration approaches to provide bespoke deuterated molecules generally unavailable commercially. This includes a range of deuterated lipids, unsaturated phospholipids (e.g. POPC and DOPC), surfactants, ionic liquids, fatty acids and detergents. Availability of these molecules widens the breadth of systems that can be investigated with applications across multiple research fields. 
An overview and update on the NDF will be provided, including details on the NDF User Program and modes of access, capability advancements and brief highlights of research enabled through utilisation of deuterated molecules produced by the NDF. Speaker: Karyn Wilde (National Deuteration Facility, ANSTO) In-situ X-ray diffraction for hydrogen sorption study of Mg-La alloys 15m Trace Na additions can enhance the reaction kinetics of Mg-5%La (wt.%) alloys, resulting in a potential hydrogen storage material. In this study, we used in-situ synchrotron Powder X-ray Diffraction (PXRD) to examine the hydrogen sorption behaviour of the Na-modified Mg-5%La. A setup equipped with a hydrogen gas flow cell and a hot air blower at the Powder Diffraction beamline of the Australian Synchrotron facility was used to allow PXRD data collection during hydrogen sorption reactions to study the phase evolution and the cyclability of the alloy. To shed light on the underlying processes during the reactions, in-situ desorption and absorption were performed in a hydrogen atmosphere between 30 and 480 °C and from atmospheric pressure to 2 MPa H2. Rietveld refinement was conducted using the TOPAS-Academic V6 software to calculate the weight percentage and lattice expansion of each phase in the sample. In addition, in-situ High Voltage Transmission Electron Microscopy (HVTEM) was used as a complementary technique to study the volume expansion properties during desorption as a function of temperature. Speaker: Manjin Kim Natural ageing behaviour in Al-Cu alloys containing Sc and Zr 15m The 2xxx series Al-Cu alloys have been extensively used as engineering structures and components of lightweight vehicles due to their excellent strength-to-weight ratio. Recent research has demonstrated that further substantial enhancement in the strength of Al-Cu alloys could be achieved by adding Sc and Zr, which form nano-sized Al3(Sc, Zr) dispersoids. However, further development and manufacturing of these new Sc- and Zr-containing Al-Cu alloys are limited by a lack of basic understanding of the effect of Al3(Sc, Zr) dispersoids on the microstructural evolution during room temperature storage after quenching from solution treatment (called natural ageing). In this work, therefore, we have studied the effect of Al3(Sc, Zr) dispersoids on natural ageing behaviour in an Al-4wt.%Cu-0.1wt.%Sc-0.1wt.%Zr alloy using small-angle neutron and x-ray scattering (SANS and SAXS). The hardness measurement shows that the presence of Al3(Sc, Zr) dispersoids significantly delays the natural ageing kinetics of Al-Cu alloys. SANS was used to quantify the size of the Al3(Sc, Zr) dispersoids, which is ~25 ± 3 nm. In-situ SAXS results show that the presence of Al3(Sc, Zr) dispersoids results in a significant delay in solute cluster formation during natural ageing, consistent with the suppression of the natural ageing kinetics in the Al-Cu-Sc-Zr alloys. These results were confirmed by differential scanning calorimetry (DSC) and high-resolution transmission electron microscopy (TEM). The suppression mechanism is hypothesized to come from the dispersoids and Sc solute acting as vacancy sinks, which slow down the diffusion of solutes at room temperature. Speaker: Lu Jiang (Deakin University) Magnetic Nanochain Formation Studied by Small-Angle Scattering 15m Self-assembly of magnetic nanoparticles is of interest due to the broad range of applications in material science and biomedical engineering.
Parameters that affect self-assembly in nanoparticles include particle size, the applied magnetic field profile, concentration and synthesis routines. A range of different sizes of magnetic nanoparticles between 5 and 27 nm were investigated using polarized small-angle neutron scattering (SANS) at the KWS-1 instrument operated by the Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ) in Garching, Germany, and the Quokka instrument operated by the Australian Centre for Neutron Scattering (ACNS) at ANSTO in Lucas Heights, Australia. Iron oxide nanoparticles were dispersed in toluene and measured at room temperature in applied fields between ±2.2 T. The observed self-assembly strongly depended on both nanoparticle size and applied field. For smaller particles (diameter ≤ 20 nm) there was no indication of self-assembly, while 27 nm nanoparticles assembled into linear chains even at low concentrations (0.42% v/v) and low fields (4 mT). The magnetization profile within the cores of the smaller nanoparticles could be extracted with high resolution when using a spin-polarized incident neutron beam. For the larger nanoparticles, the structure and form factors were obtained by sector analysis of the 2-D SANS patterns. The extracted structure factors suggest that the chains grow longer and straighter and align more closely with the field direction up until application of the maximum field. This is understood in terms of a minimization of the dipole energy of the nanoparticles in the presence of the applied field and neighbouring particles. Preliminary results from experiments studying self-assembly of more complex nanoparticles (including gold-iron dumbbell nanoparticles) will be discussed. Speaker: Dr Lester Barnsley (ANSTO) Neutron Reflectometry Unravels Allergen-Lung Surfactant Monolayer Interactions in the Development of Pollen-Induced Thunderstorm Asthma 15m Pollen-induced thunderstorm asthma outbreaks affect thousands of individuals globally. Australians in particular suffer from it every year. Pollens, the major culprit in thunderstorm asthma, are biological microparticles produced by flowering plant species. Pollens encounter stormy environments including lightning and humidity in thunderstorms, which results in liberation of associated allergen proteins and probable reactions with reactive oxygen and nitrogen species (RONS) from the environment, before inhalation. Since allergen proteins are much smaller in size than whole pollen, they can travel deep into the lower airways where they initially interact with the lung surfactant monolayer present within the lumen of alveoli. Although meteorological, pathological and immunological analyses support the role of pollen allergens in exacerbating asthma, the physicochemical basis of this phenomenon is underinvestigated. In this talk, we present a model system to study the interactions between an allergen protein and a lung surfactant monolayer composed of solid-supported dipalmitoylphosphatidylcholine (DPPC). We mimic the stormy environment with plasma-activated water (PAW) and employ advanced analytical tools and techniques such as quartz crystal microbalance with dissipation (QCM-D) and neutron reflectometry (NR) to investigate the effect of RONS on the allergen protein and its subsequent interactions with the DPPC monolayer. Our experimental analysis revealed the attachment of RONS to the allergens when exposed to PAW, and QCM-D showed mass adsorption profiles.
Furthermore, NR showed the monolayer insertion and aggregation propensity of the allergens, providing a deeper mechanistic insight into these interactions. The findings of this research will enable effective diagnostic strategies and therapeutics for the treatment of thunderstorm asthma. Speaker: Mr Arslan Siddique (UNSW Sydney) Investigating the role of Zn in glucose regulation using X-ray Fluorescence Microscopy and X-ray absorption near-edge structure spectroscopy 15m Zinc plays an important function in glucose regulation, particularly within pancreatic islets, the anatomical home of the glucose regulating hormones insulin and glucagon. Glucose dysregulation is a significant contributor to the epidemic of metabolic diseases, including diabetes, that affect an increasing number of people. Zn is found in very high (mM) concentrations in insulin-secreting β-cells, where it facilitates insulin synthesis and storage, and is co-secreted with insulin, subsequently acting as a signalling molecule. Zn dysregulation is often coincident with impairment of insulin secretion, but little is known about the nature of the changes. Since a subset of the pool of Zn in islets is labile, it is difficult to image in its in vivo situation using conventional techniques such as histochemistry. Not only do preparation steps such as washing displace Zn, but some forms in which it exists are not readily discernible using conventional microscopy techniques. X-ray fluorescence microscopy (XFM) and X-ray absorption near-edge structure spectroscopy (XANES) offer several advantages in that tissue preparation is minimal, facilitating the conservation of native states, and all forms of Zn are not only detectable, but are able to be discriminated by matching spectra against an existing library of Zn forms. Here we report the preliminary results from our study of Zn speciation and elemental mapping in murine islets from healthy or diabetes-prone animals in two age groups, 14 (denoted young) or 28 (old) weeks. This work uses a library of biologically relevant Zn forms created in our laboratory, and contributes to our understanding of the role of Zn in glucose regulation in health and disease, including aging. Speaker: Dr Gaewyn Ellison (Curtin Innovation Health Research Institute (CHIRI), Curtin University) Gd-TPP-DOTA reduces cell viability in cancer cells via synchrotron radiotherapy 15m High-Z elements have been proposed as radiosensitisers in X-ray photon radiotherapy due to their emission of multiple high-LET photo- and Auger electrons following X-ray irradiation. Gadolinium is a particularly attractive candidate radiosensitiser, since it can also be used as an MRI contrast agent. In this study, we report on the efficacy of Gd-triphenylphosphonium salt-DOTA (Gd(III)-TPP-DOTA) for synchrotron microbeam radiation therapy dose enhancement. The compound utilises the mitochondrial targeting moiety triphenylphosphonium (TPP) to accumulate Gd in the inner mitochondrial membrane. Experiments were conducted using the dynamic mode option at hutch 2B of the Imaging and Medical Beamline at the Australian Synchrotron. Human glioblastoma multiforme cells (T98G cell line) were cultured to 80-90% confluence in T12.5 flasks. Approximately 24 hours prior to irradiation, the cultures were either treated with a 500 μM solution of Gd(III)DOTA-TPP or a vehicle control. 
Spatial dose distributions of the synchrotron broad beam (BB) and single/multiple microbeams were measured using a micron-scale X-Tream dosimetry system and Gafchromic films in air and at 2 cm depth in solid water (the same depth as the monolayer of cells in T12.5 flasks). A total of 96 flasks were irradiated, with doses of 0, 1, 2, 3, 4, 5, 10 and 16 Gy delivered in valley (MRT) or uniformly (BB). Post-irradiation, each flask was re-seeded into 7 x 96 well-plates to perform the resazurin cell proliferation assay up to 7 days after irradiation. Our preliminary analysis indicates that for cells irradiated by 3 Gy of BB or MRT radiation, the addition of Gd(III)DOTA-TPP results in a reduction in viable cell mass by 24.25% and 25.79%, respectively, compared with untreated flasks. Speaker: Dr Ryan Middleton (ANSTO) Structural, Biochemical and Functional characterization of Salmonella BcfH: an unusual Dsb-like fimbrial protein 15m Bacteria use folding enzymes to produce functional virulence factors. These foldases include the Dsb family of proteins, which catalyze a key step in the protein-folding pathway, the introduction of disulfide bonds. The Dsb oxidative system, which includes an oxidative DsbA/DsbB pathway and an isomerase DsbC/DsbD pathway, is present in numerous bacterial species. Conventionally, Dsb proteins have specific redox functions, with monomeric and dimeric Dsbs exclusively catalyzing thiol oxidation and disulfide isomerization, respectively. This contrasts with the eukaryotic disulfide-forming machinery, where the modular thioredoxin protein PDI mediates thiol oxidation and disulfide reshuffling. In this study, we identified and structurally and biochemically characterized a novel Dsb-like protein from Salmonella enterica termed BcfH and defined its role in virulence. Encoded by a highly conserved bcf (bovine colonization factor) fimbrial operon, the Dsb-like enzyme BcfH forms a trimeric structure, exceptionally uncommon among the large and evolutionarily conserved thioredoxin superfamily. BcfH also displays very unusual catalytic redox centers, including an unwound α-helix holding the redox active site and a trans proline instead of the conserved cis proline active site loop. Remarkably, BcfH displays both thiol oxidase and disulfide isomerase activities, contributing to Salmonella fimbrial biogenesis. Typically, oligomerization of bacterial Dsb proteins modulates their redox function, with monomeric and dimeric Dsbs mediating thiol oxidation and disulfide isomerization, respectively. The present study demonstrates a further structural and functional malleability in the thioredoxin-fold protein family. The BcfH trimeric architecture and unconventional catalytic sites permit multiple redox functions, emulating in bacteria the dual oxido-reductase activity of the eukaryotic protein disulfide isomerase. Speaker: Pramod Subedi (La Trobe University) Salmonella enterica BcfH Is a Trimeric Thioredoxin-Like Bifunctional Enzyme with Both Thiol Oxidase and Disulfide Isomerase Activities Precision Measurement of the Complex Fine Structure at the Australian Synchrotron 15m Current applications of X-ray Absorption Fine Structure (XAFS) to low-absorbing samples such as ultra-thin films in semiconductor and nano-devices have been limited. This is not expected to be the case for the phase component of the fine structure, as it is generally orders of magnitude larger than the absorption component in the x-ray regime.
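As brief background (added for context; not part of the abstract), the X-ray refractive index is commonly written as

\[
n(E) = 1 - \delta(E) + i\,\beta(E),
\]

where the phase shift imparted to the transmitted wave is governed by \(\delta\) and the absorption by \(\beta\). For hard X-rays, \(\delta\) typically exceeds \(\beta\) by two to three orders of magnitude, and the two quantities are linked by the Kramers–Kronig relations; this large ratio underlies the expectation stated above.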
Here, we present details of precision measurements of both the phase and absorption components of the atomic fine structure across the K-edge of thin copper and iron foils. The experiments applied Fourier Transform Holography with an extended reference in spectroscopy mode and were conducted at the XFM and the SAXS/WAXS beamlines of the Australian Synchrotron. The results provide a critical experimental benchmark for further theoretical development and have the potential to open up the phase equivalent of XAFS-related techniques. Speaker: Tony Kirk (La Trobe University) X-ray dark-field imaging without optics 15m X-ray image contrast can be generated via three mechanisms: (i) attenuation, (ii) phase contrast and (iii) most recently, the dark-field signal, which arises due to the incoherent scattering of the incident x-ray wavefield by unresolved sub-pixel features (microstructure) present in the sample. These contrast mechanisms can be realised using emerging x-ray imaging techniques, such as analyser-based and grid-based imaging, each of which requires the use of specialised optics and carefully aligned setups. In this work, we focus on a technique which has not been used to capture quantitative dark-field contrast – propagation-based imaging. Propagation-based imaging requires no specialist optics and operates on the principle that phase variations induced in the x-ray wavefield by the sample manifest as intensity variations at the detector plane, some metres downstream, due to the self-interference of the wavefield. We describe a new approach to analysing propagation-based images, derived from the x-ray Fokker-Planck Equation, which enables dark-field images to be extracted. All that is required is two exposures, captured at two different propagation distances, which enable our algorithm to separate phase and dark-field effects to recover sample thickness and microstructure distribution. We demonstrate, using images captured at the Australian Synchrotron's Imaging and Medical Beamline, that it is possible to capture dark-field images without having to introduce specialised optics or spend extensive time on optics alignment. This new technique could be applied to study biomedical microstructures, like the alveoli in the lung, or manufactured parts, capturing porosity or carbon fibre. Speaker: Mr Thomas Leatham (Monash University) refnx - The Next Generation of Reflectometry Analysis Software 15m refnx [1] is a next-generation reflectometry analysis package, building on its predecessor, Motofit. It has undergone a large amount of collaborative development over the last five years, introducing innovative features that greatly aid the national and international neutron and X-ray reflectometry community: a Bayesian statistics core with comprehensive uncertainty analyses and model selection ("how many layers can the data justify?"); quantitative introduction of prior information into the modelling system (information known from other sources); modular construction of structural models, ranging from a basic Slab up to freeform SLD profiles and lipid membrane leaflets, with components that are easily extensible; co-refinement of multiple contrast datasets; mixed-area models; and a Python code base with analyses performed in Jupyter notebooks or a Qt GUI. Here we give a brief introduction to how these aspects advance the reflectometry technique. In addition, refnx is designed to enable reproducible research.
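To make the feature list above concrete before turning to reproducibility, here is a minimal, hypothetical sketch of a refnx analysis (not taken from the presentation; the data file name and layer parameters are invented for illustration), covering modular structure construction, prior information via parameter bounds, and Bayesian MCMC sampling:

```python
from refnx.dataset import ReflectDataset
from refnx.reflect import SLD, ReflectModel
from refnx.analysis import Objective, CurveFitter

# Load a measured reflectivity curve (hypothetical file name).
data = ReflectDataset("polymer_film_d2o.dat")

# Modular structure: air | polymer film | Si substrate, assembled from SLD objects.
air = SLD(0.0, name="air")
film = SLD(3.5, name="film")      # SLD in 1e-6 / Angstrom**2
si = SLD(2.07, name="Si")
structure = air | film(250.0, 3.0) | si(0.0, 3.0)   # thickness, roughness in Angstrom

# Prior information enters as bounds on the parameters allowed to vary.
structure[1].thick.setp(vary=True, bounds=(150.0, 350.0))
structure[1].sld.real.setp(vary=True, bounds=(2.0, 5.0))

model = ReflectModel(structure, scale=1.0, bkg=1e-6)
objective = Objective(model, data)

# Point estimate first, then MCMC sampling for posterior uncertainty analysis.
fitter = CurveFitter(objective)
fitter.fit("differential_evolution")
fitter.sample(400)       # draw posterior samples (thin/discard burn-in as needed)
print(objective)         # fitted parameter values with uncertainties
```

Co-refinement of several contrasts follows the same pattern by combining objectives (a GlobalObjective in refnx), and the identical model code can be driven from a Jupyter notebook or the Qt GUI.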
We also discuss what reproducible research means in the context of a neutron scattering study, outlining how this is achieved with refnx, and how these practices could (should) be taken up by neutron scatterers in general. [1] Nelson, Andrew RJ, and Stuart W. Prescott. "refnx: neutron and X-ray reflectometry analysis in Python." Journal of applied crystallography 52.1 (2019): 193-200. Speaker: Andrew Nelson (ANSTO) Micro-Computed Tomography (MCT) beamline at ANSTO/Australian Synchrotron: A progress report 15m The Micro-Computed Tomography (MCT) beamline is one of the first new beamlines to be constructed at the Australian Synchrotron as part of the BRIGHT program. MCT will complement the existing X-ray imaging/tomography capability provided by the Imaging and Medical Beamline (IMBL), and will target applications requiring higher (sub-micron) spatial resolution and involving smaller samples. MCT will be a bending-magnet beamline, operating in the 8 to 40 keV range, based on a double-multilayer monochromator. Filtered white and pink beams will also be available, the latter utilising a single-(vertical)bounce mirror. MCT will benefit from X-ray phase-contrast modalities (such as propagation-based, grating-based and speckle) in addition to conventional absorption contrast, and be equipped with a robotic stage for rapid sample exchange. A higher-resolution CT configuration based on the use of a Fresnel zone plate system will also be available. A number of sample environmental stages, such as for high temperature and the application of loads, are planned in collaboration with certain groups in the user community. Anticipated application areas for non-destructive 3D sample characterisation include biomedical/ health science, food, materials science, and palaeontology. This presentation will provide an update on the progress of the MCT project, including the procurement of three state-of-the-art X-ray detector systems, and the significant scientific-computing effort required to meet the demands of this high-performance imaging beamline. Speaker: Andrew Stevenson (Australian Synchrotron) Afternoon Tea 30m Elucidating the Structures and Behaviour of Therapeutic Delivery Platforms with Non-interfering Techniques 20m Preamble: Anton is the Leader of the Applied Chemistry and Translational Biomaterials Group. His research focuses on the development of innovative chemistries, delivery systems and biotechnologies to address challenges in the biomedical, mining, and environmental sectors. Abstract: Self-assembled polymeric delivery platforms based on colloidal aggregates have promise for the delivery of therapeutics and cells, and their morphology in solution strongly influences their behaviour in a biological context (e.g., cellular uptake). In turn, the composition and microstructure of the individual polymers play a defining role in their self-assembly and the morphology of the resulting colloidal aggregates. Observing the behaviour and precise morphology of these systems in solution using non-interfering techniques allows them to be studied in their native state. In this presentation, Anton will discuss the application of diffusion nuclear magnetic resonance spectroscopy and Synchrotron small-angle X-ray scattering for the elucidation of colloidal aggregate structure and morphology. 
Speaker: Anton Blencowe (University of South Australia) Quantifying the robustness of neutron reflectometry for analysing polymer brush structure 15m Surfaces covered with densely tethered polymer chains possess desirable properties and are ubiquitous in natural and human-made systems. These properties stem from the diffuse structure of these polymer brush interfaces; consequently, resolving their structure is key to designing systems with better performance. NR has been widely used for studying these systems as it is the only technique that can resolve the detailed structure of these films, the polymer volume fraction profile. However, the analysis of collected reflectometry data has significant challenges; inflexible models preclude viable structures and the uncertainty around accepted profiles (spread) is challenging to quantify. Furthermore, there is no guarantee of profile uniqueness in reflectivity analysis – multiple structures may match the data equally well (multimodality). Quantifying these uncertainties has not been attempted on brush systems, but is a vital part of validating the application of NR for structural characterisation. Historically, data analyses have used least-squares approaches, which can't satisfactorily determine profile uncertainty. Here we outline the methodology we have developed for modelling NR data. We model our brush with a freeform profile that minimises assumptions regarding polymer conformation while enforcing physically reasonable structures. We employ a Bayesian statistical framework that enables the characterisation of structural uncertainty and multimodality through Markov Chain Monte Carlo sampling. The Bayesian approach lets us introduce prior knowledge into the analysis procedure; for example, the amount of grafted polymer should remain constant under different conditions. The rigour of our approach is demonstrated via a round-trip analysis of a simulated system, as well as data collected on thermoresponsive brushes. A low level of uncertainty was observed, confirming the validity of NR for examining polymer brush systems. Speaker: Dr Andrew Nelson (ANSTO) Multi-Scale Dynamic Study on the Amphiphilic Nanostructure of Protic Ionic Liquids 15m Ionic liquids are a novel class of solvents with ultra-low vapour pressure and tunable liquid properties. Among them, protic ionic liquids (PILs) are particularly effective solvents for self-assembly of surfactants and lipids into micelles, vesicles, liquid crystals and microemulsions. This is exemplified by alkylammonium PILs, which are also cheap, easily prepared and can be readily deuterated. Over the past decade, much has been learnt about the static structure of alkylammonium PILs; however, virtually nothing is known about their dynamics, both single-ion diffusion and the collective motion of clusters. This is due to the complex and disordered nature of the liquid nanostructure, which is expected to display a range of dynamic behaviors on different time and length scales. In this study, we have examined ethanolammonium nitrate, ethylammonium nitrate and propylammonium nitrate, using a variety of dynamic techniques. We employed multi-contrast wide-angle neutron spin echo spectroscopy (WASP, ILL) to capture the nanosecond relaxations across 0.1 – 1.4 Å^-1, and pulsed-field gradient NMR to track molecular diffusion.
Combined with their known averaged liquid nanostructures, we have now characterized the static and dynamic nanostructure of three protic ionic liquids, carefully chosen to demonstrate different degrees of ordering, at multiple temperatures. This allows us to understand the structure-property relationship of alkylammonium PILs across a wide range of length and time scales, which has the potential to unlock rational design of job-specific PIL-based solvent systems. Speaker: Shurui Miao (The University of Sydney) Maximum flux: Using time-resolved neutron reflectometry to improve our understanding of surface-initiated polymerisation 15m Polymer brushes are dense arrays of surface-tethered polymers that possess desirable qualities, such as lubricity and fouling resistance, provided that their structure and chemistry are correctly tuned [1]. Surface-initiated polymerisation (SIP) is the primary method for synthesising these brushes with the physicochemical properties required to imbue surfaces with the aforementioned qualities. However, previous work [2,3] indicates that polymers synthesised by SIP deviate from polymers produced via solution polymerisation, likely due to the proximity of initiators in the tethered case. This deviation is not well understood, which impedes the structural characterisation of the resulting brushes. As structure dictates behaviour [1], understanding the nature of the brushes produced by SIP facilitates the rational design of functional brush coatings. Here we present a study of brushes synthesised via SIP of the well-characterised polymer poly(N-isopropyl acrylamide) (PNIPAM) using time-resolved neutron reflectometry (NR). First, we demonstrate that we can control the polymer initiator density and examine the relationship between molecular weight and grafting density. We then observe a series of SIP reactions from surfaces with different initiator densities in situ using time-resolved NR. To our knowledge, this is the first time that the structure of a growing polymer brush has been directly observed. The results confirm that a high initiator density leads to poor control early in the reaction, and explain several phenomena observed in previous NR experiments [4,5]. This experiment paves the way for further kinetic experiments on Platypus and will be of interest to anyone studying the dynamic assembly of interfaces over timescales of 10 minutes to several hours. 10.1021/acs.macromol.7b00450 10.1021/acs.langmuir.0c01502 Speaker: Isaac Gresham (The University of New South Wales) Elucidation of the electronic structure in lanthanoid-radical systems by inelastic neutron scattering 15m Single-molecule magnets (SMMs) are metal-organic compounds which exhibit magnetic hysteresis and slow magnetic relaxation at low temperature. They have potential applications in high-density data storage, quantum computing, and molecular spintronics. Coordination complexes of the trivalent lanthanoid (Ln(III)) ions are the current best-performing SMMs, with examples showing hysteresis above liquid nitrogen temperature [1]. The magnetic properties of Ln(III) ions stem from the crystal field (CF) splitting of the ground Russell-Saunders state. These CF states give rise to the energy barrier to reversal of magnetisation, and can be tuned by modification of the ligand environment around the Ln(III) centre.
Slow magnetic relaxation in Ln-SMMs can also be modulated by the introduction of magnetic exchange coupling with another magnetic moment, such as that of an organic radical ligand [2]. Quantifying the magnitude of magnetic exchange coupling in many Ln(III) systems is, however, difficult using conventional magnetometric techniques, due to the often large spin-orbit coupling. Inelastic neutron scattering (INS) is an ideal spectroscopic tool to measure both CF splitting and magnetic exchange coupling in Ln(III) systems [3]. We have used INS measurements to elucidate the magnetic exchange coupling and CF splitting in Ln(III)-semiquinonate complexes. Using this information we have rationalised the magnetic properties of these compounds, with the hope that a better understanding of the magnetic exchange in these systems can be used to design SMMs with improved performance. [1] Guo et al. Science 2018, 362 (6421), 1400–1403 [2] Demir et al. Coord. Chem. Rev. 2015, 289–290, 149–176 [3] Dunstan et al. Eur. J. Inorg. Chem. 2019, 8, 1090–1105 Speaker: Maja Dunstan (University of Melbourne) Characterisation of Ionic Liquids and Their Ability to Stabilise Proteins 15m Proteins are an important part of biotechnology and can be utilised for a range of applications and industries [1]. But the stability and solubility of the protein are often limiting factors, so ionic liquids (ILs) have been tested as an alternative solvent due to their wide scope and tailorable properties. They are reported to increase protein activity [2], solubility, and long-term and thermal stability. However, the relationship between the structure of an IL and how it interacts with proteins in solution is unknown. In this study, 52 ammonium-based ILs and 14 common salts were prepared with HEWL and human lysozyme. Physicochemical and thermal properties of the neat ILs were characterised, while SAXS was used to characterise protein stability. High concentrations of IL (>50 mol%) were often not conducive to the native structure of the protein, while lower concentrations (1-10 mol%) could support native protein structures with minimal to no aggregation. It was also found that additional alkyl chains on the cation and the presence of hydroxyl groups reduced lysozyme's radius of gyration, preserving its native structure. [1] Egorova, K. S.; Gordeev, E. G.; Ananikov, V. P., Biological Activity of Ionic Liquids and Their Application in Pharmaceutics and Medicine. Chemical Reviews 2017, 117 (10), 7132-7189. [2] Mann, J. P.; McCluskey, A.; Atkin, R., Activity and thermal stability of lysozyme in alkylammonium formate ionic liquids—influence of cation modification. Green Chemistry 2009, 11 (6), 785-792. Speaker: Stuart Brown (RMIT) From deep time to the present: An exploration of Aboriginal connections to South Australia's Riverland region 20m This keynote address will explore a range of cultural heritage projects relating to collaborations between ANSTO, Flinders University and the River Murray and Mallee Aboriginal Corporation. From rare artefacts to earthen cooking mounds and ancient shell middens, this presentation considers the contribution of radiocarbon dating within a broader research program that has investigated Aboriginal connections to Country from deep time to the present in South Australia's Riverland region. Speaker: Prof.
Amy Roberts (Flinders University) The structure and spectroscopy of solid propanal: A potential mineral for planetary astrobiology 15m Aldehydes are considered important species for astrobiology, acting as a primary reagent for the Strecker synthesis of amino acids in aqueous media. However, within the cold, icy surfaces of planetary bodies and interstellar dust particles, chemical reactions that lead to these biological building blocks can still unfold. Here, "non-thermal equilibrium" chemistry is driven by harsh radiation environments, which produce populations of radicals and charged species in the icy matrix. It is these short-lived intermediates that then react with ammonia and cyanides to form higher-order organics. For the possible detection of proteinogenic amino acids in space environments it is important to first locate their more abundant amino acid precursors. However, only formaldehyde and acetaldehyde have been observed by telescope and spacecraft reconnaissance to date. The search for other simple aldehydes has been hampered by a general lack of fundamental data, including crystal structure and spectroscopic signatures. In a combined neutron scattering (ANSTO Wombat Instrument) and x-ray diffraction study, we have determined the crystal structure of propanal (CH3CH2CHO) under planetary ice surface conditions for the first time. This new structure allowed for the DFT simulation of its vibrational frequencies, which was then applied to assign its far-infrared spectrum collected at the Australian Synchrotron THz Beamline. These critical structural and spectroscopic data will enable the search for this species during future surveys and spacecraft exploration of distant icy worlds in our quest to uncover the molecular origins of life. Speaker: Courtney Ennis (University of Otago) Pioneer plant driven primary mineral weathering and secondary mineral formation in Fe ore tailings 15m Eco-engineering tailings into soil-like substrates is an emerging technology to rehabilitate tailings landscapes. Pioneer plants play an important role in mineral weathering and secondary mineral formation, which are pre-requisites for aggregate formation and pedogenesis in the tailings. The present study aimed to characterise the direct role of pioneer plant roots in tailing mineral weathering and secondary mineral formation in a compartmented cultivation system [1]. It was found that root activities accelerated the weathering of Fe-bearing primary minerals (e.g., biotite) via Fe(II) oxidation coupled with Fe(III) and Si dissolution. Numerous nanosized Fe-Si rich amorphous minerals and vermiculite were neo-formed in the tailings subject to rhizosphere activities, as revealed by various micro-spectroscopic analyses. The Fe-Si rich secondary amorphous minerals may have resulted from co-precipitation of dissolved Fe(III) and Si on mineral surfaces under alkaline and circumneutral pH conditions. The roots of the grass (Gramineae) Sorghum spp. developed most extensively in the tailings, leading to more efficient mineral weathering and secondary mineral formation than the halophyte Atriplex amnicola and the legume Acacia chisholmii. Overall, the study has unravelled the pioneer plant role in tailing mineral (biotite dominant) weathering and secondary Fe-Si mineral formation. These findings also indicate that tolerant pioneer plants may act as integral components in designing the eco-engineering processes for soil formation in Fe ore tailings. [1] Wu, S., Liu, Y., ... & Huang, L. (2021).
Rhizosphere Drives Biotite-Like Mineral Weathering and Secondary Fe–Si Mineral Formation in Fe Ore Tailings. ACS Earth and Space Chemistry, 5(3), 618-631. Speaker: Dr Songlin Wu (The University of Queensland) Cuatros Amigos – the four stromatolites in a row. The first 3D image of the oldest evidence of life in the geologic record 15m The 3.48 Ga Dresser Formation, Pilbara Craton, Western Australia, provides the Earth's most convincing evidence of early life through a diverse array of biosignatures. However, identifying biosignatures in Archean rocks is difficult due to billions of years of erosion, deformation, and metamorphic alteration. Characterisation of community-accepted biosignatures also remains challenging, particularly the robustness of textural biosignatures as indicators of early life in Archean rocks. The textural biosignatures in the Dresser Formation have been identified in weathered surface outcrops. Therefore, in May 2019, fresh Dresser deposits were drilled to aid in a better understanding of these ancient biosignatures and to provide validity to a biogenic origin. Three well-preserved cores of 5-30 m thickness and 8 cm in diameter were extracted from ~70 m beneath the land surface. The cores provide excellent preservation of biosignatures, including fossilized, pyritized stromatolites. One stromatolite horizon within the core exhibits extraordinary morphological structures. Here we present preliminary results of the 3D geometry of these fossil stromatolites. 3D structures were obtained using the neutron imaging station DINGO at ANSTO. A full tomography of a first sample was scanned with 1896 projections at an angular step of 0.19° and an exposure time of 60 seconds per projection. The data were reconstructed using the filtered backprojection technique with Pydingo (a free, in-house developed Python toolbox). 3D rendering was done with VG-Studio. This horizon aids in better defining the biogenicity of these textural biosignatures. Speaker: Ms Michaela J Dobson (The University of Auckland, ANSTO) High-resolution high throughput thermal neutron tomographic imaging of fossiliferous cave breccias from Sumatra 15m We employ high-throughput thermal-neutron tomographic imaging to visualise internal diagnostic features of dense fossiliferous breccia from three Pleistocene cave localities in Sumatra, Indonesia. We demonstrate that these seemingly homogeneous breccias are an excellent source of data to aid in determining taphonomic and depositional histories of complex depositional sites such as tropical caves. X-ray Computed Tomographic (CT) imaging is gaining importance amongst palaeontologists as a non-destructive approach to studying fossil remains. Traditional methods of fossil preparation risk damage to the specimen and may destroy contextual evidence in the surrounding matrix. CT imaging can reveal the internal composition and structure of fossils contained within consolidated sediment/rock matrices prior to any destructive mechanical or chemical preparation. Neutron tomography (NT) provides an alternative contrast to X-rays and, in some circumstances, is capable of discerning denser matrices impenetrable to or yielding no contrast with CT imaging. High-throughput neutron imaging reduces neutron fluence during scanning, which means there is less residual neutron-induced radioactivation in geological samples, allowing for earlier subsequent analyses. However, this approach remains unutilised in palaeontology, archaeology or geological surveys.
Results suggest that the primary agents in the formation of the breccias and the concentration of incorporated vertebrate remains are several rapid depositional phases of water and sediment gravity flow. This study highlights the potential for future analyses of breccia deposits in palaeontological studies in caves around the world.
Speaker: Ms Holly Ellen Smith (Griffith University)

Pressure-dependent changes in Zr coordination in silicate liquid: in vs. ex situ measurements 15m
Changes in the coordination of elements in silicate melts as a function of pressure impact their geochemical behaviour and are key to understanding processes such as planetary differentiation. Questions persist as to the extent to which the coordination environment of elements in silicate melts at high pressure and temperature can be preserved in glasses recovered to ambient conditions. The only method to unambiguously measure the coordination environment of trace elements in a silicate liquid at high pressure is via in situ measurements such as X-ray absorption spectroscopy, preferably in a large-volume apparatus that can simulate the environment of the upper mantle, such as the Macquarie D-DIA apparatus. These measurements are difficult and only possible at a handful of facilities worldwide, so most experimental data are derived from ex situ measurements of recovered glasses. We made Zr K-edge XANES measurements in situ at conditions simulating the mantle, showing a pressure-dependent change consistent with an earlier ex situ study on samples recovered from piston-cylinder experiments in which glasses were annealed close to their glass transition temperature (Burnham et al. 2019). We plan further XAS experiments measuring samples recovered from our in situ experiments to determine the differences between quenched glasses, annealed glasses, and in situ measurements. We propose that changes in Zr XANES correspond to an increase in the coordination number of Zr with pressure. This could explain previously observed changes in Zr solubility at high pressure not predicted by current models, and changes in Zr olivine/melt partition coefficients at mantle pressures.
Speaker: Nicholas Farmer (Macquarie University)

Good vibrations: phonons in topological thermoelectrics 20m
Thermoelectric materials harness a temperature gradient to produce a voltage via the Seebeck effect, providing a way to harvest and recycle heat. Recently a new generation of thermoelectrics has been developed that offers unprecedented performance by leveraging topological physics. The key to their functionality is their robust, high electronic conductivity in tandem with their low thermal conductivity. The latter can be engineered by controlling the lattice vibrations or "phonons". Here I will discuss recent neutron spectroscopy experiments at the Australian Centre for Neutron Scattering, ANSTO, which offer unique insights into the differences between "good" optical and "bad" acoustic phonon vibrations in thermoelectrics. I will show how these experiments are complemented by large-scale molecular dynamics simulations on the GADI supercomputer within the National Computational Infrastructure. Time permitting, I will also briefly demonstrate how we use the Centre for Accelerator Science and neutron reflectometry to enable surface engineering in these novel crystals for microelectronic applications.
N. Islam, D. L. Cortie et al., Acta Materialia 215, 117026 (2021)
W. Zhao, Cortie, Wang et al., Physical Review B 104 (8), 085153 (2021)
D. L. Cortie et al., Applied Physics Letters 116 (19), 192410 (2020)
Speaker: Dr David Cortie

Characterization of MOSFET sensors for dosimetry in alpha particle therapy 15m
Alpha particle therapy, such as diffusing alpha-emitters radiation therapy (DaRT) and targeted alpha-particle therapy (TAT), exploits the short range and high linear energy transfer (LET) of alpha particles to destroy cancer cells locally with minimal damage to surrounding healthy cells. Dosimetry for DaRT and TAT is challenging, as their radiation sources produce mixed radiation fields of α particles, β particles, and γ rays. There is currently no dosimeter for real-time in vivo dosimetry of DaRT or TAT. Metal-oxide-semiconductor field-effect transistors (MOSFETs) have features that are ideal for this scenario. Owing to their compactness, MOSFETs can fit into fine-gauge needle applicators, such as those used to carry the radioactive seeds into the tumour. This study characterized the response of MOSFETs designed at the Centre for Medical and Radiation Physics, University of Wollongong. MOSFETs with three different gate oxide thicknesses (0.55 µm, 0.68 µm, and 1.0 µm) were irradiated with a 5.5 MeV mono-energetic helium ion beam (He2+) using the SIRIUS 6 MV tandem accelerator at the Australian Nuclear Science and Technology Organisation (ANSTO) and an Americium-241 (241Am) source. The sensitivity and dose-response linearity were assessed by analysing the spatially resolved median energy maps of each device and their corresponding voltage shift values. The results showed that the response of the MOSFET detectors was linear with alpha dose up to 25.68 Gy. It was also found that a gate bias of between 15 V and 60 V would optimize the sensitivity of the detectors to alpha particles with an energy of 5.5 MeV.
Speaker: Ms Fang-Yi Su (Centre for Medical Radiation Physics, University of Wollongong)

Magnetic Ordering in Superconducting Sandwiches 15m
Our cuprate-manganite 'superconducting sandwich' multilayers exhibit a highly unusual magnetic-field-induced insulating-to-superconducting transition (IST), contrary to the commonly held understanding that magnetic fields are detrimental to superconductivity [1, 2]. This new behaviour is a result of the specific magnetic and electronic properties of the manganite coupling with the high-Tc cuprate (YBa2Cu3O7-δ, YBCO). Due to the specific manganite composition, Nd0.65(Ca0.7Sr0.3)0.35MnO3 (NCSMO), we hypothesize the behaviour to originate from CE-type antiferromagnetic ordering as well as charge and orbital ordering [3]. The magnetic data presented here will focus on polarized neutron reflectometry (PNR) and elastic neutron scattering on a YBCO-NCSMO trilayer and superlattice. The model that best described the PNR data for the trilayer had antiparallel moments at the YBCO-NCSMO interfaces. In the superlattice, the direction of the moments at the NCSMO interfaces was found to alternate with film depth; this long-range ordering was broken below 35 K in a 1 T applied field. The stability of the AFM order in the superlattice was further supported by the robustness of the magnetic in-plane half-order elastic scattering peaks at 9 T. This evidences the interplay of magnetism and superconductivity that plays a role in realizing the IST effect in our superconducting sandwiches.
[1] B. Mallett et al. Phys. Rev. B 94, 180503(R) (2016)
[2] E. Perret et al. Comms. Phys. 45, 1-10 (2018)
[3] Y. Tokura. Rep. Prog. Phys. 69, 797-851 (2006).
Speaker: Andrew Chan (The University of Auckland)

Neutron and synchrotron characterisation techniques for hydrogen fuel cell materials 15m
Hydrogen fuel cells and other renewable energy technologies have specific materials and functional needs which can be more fully understood using neutron and synchrotron characterisation techniques. In this presentation, a material which has applications in proton exchange membranes is studied with a variety of techniques to develop a comprehensive understanding of the functional-structural relationship. The material used here is phosphotungstic acid (HPWA) stabilised in an 'inert' mesoporous silica host material. The aim of this research is to develop an understanding of the interaction between the HPWA and the silica, and whether different structures or surface chemistries have advantageous or detrimental effects. The two silica symmetries used were Ia-3d (bicontinuous cubic) and P6mm (2D hexagonal with cylindrical pores), which were vacuum impregnated with solutions of HPWA in a range of concentrations. The resulting powder samples were then analysed using small angle X-ray scattering (SAXS), inductively coupled plasma optical emission spectroscopy (ICP-OES), nitrogen gas adsorption/desorption, near-edge X-ray absorption fine structure (NEXAFS/X-ray absorption near edge structure/XANES) of the O and Si K-edges, Fourier transform infrared spectroscopy (FTIR), and Raman spectroscopy, and then formed into a disk using polyethylene as the binder for electrical impedance spectroscopy (EIS). The insights gained from this systematic study indicate that the surface chemistry of the silica host has a significant effect on the performance, uptake and interactions with the HPWA anions, where lower concentrations of HPWA result in stronger host:HPWA interactions but lower conductivity.
Speaker: Krystina Lamb (ANSTO)

Towards fast dose calculations for novel radiotherapy treatments with generative adversarial networks 15m
Existing approximations used in clinical treatment planning are either not fast or not accurate enough for some novel irradiation techniques like microbeam radiation therapy (MRT), which relies on arrays of sub-mm synchrotron-generated, polarized X-ray beams. We present studies using generative adversarial networks (GANs) to mimic full Monte Carlo simulations of radiation transport, to achieve a compromise between fast and accurate dose computation for variable phantoms and irradiation scenarios. To obtain a generalised model for dose prediction, a conditional GAN using a 3D U-Net architecture is developed. As a proof of concept, we predict the simulated dose depositions of a bone slab inside a water phantom with variable rotation angles and thicknesses. Subsequently, we demonstrate that our model is generalisable by applying it to a simplified head phantom simulation. All Monte Carlo simulations are performed with Geant4 using a phase space file obtained from a validated simulation at the Australian Synchrotron. The trained model predicts dose distributions for both the bone slab inside the water phantom and the simple head phantom with deviations of less than 1% of the maximum dose for over 94% of the simulated voxels in the beam. Dose predictions near material interfaces are accurate on a voxel-by-voxel basis, with less than 5% deviation in most cases. Dose predictions can be produced in less than a second on a desktop PC, compared to approximately 50 CPU hours needed for the corresponding Geant4 simulation.
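As a rough, hypothetical illustration of the voxel-wise evaluation quoted in the entry above (this is not the authors' code), the reported pass-rate style figure, i.e. the fraction of in-beam voxels whose deviation stays below 1% of the maximum dose, could be computed along the following lines; the array names and shapes are assumptions.

import numpy as np

def dose_pass_rate(pred, ref, beam_mask, tol_frac=0.01):
    """Fraction of in-beam voxels where |pred - ref| < tol_frac * max(ref).

    pred, ref : 3D dose arrays (e.g. GAN prediction and Monte Carlo reference)
    beam_mask : boolean 3D array selecting voxels inside the beam
    tol_frac  : tolerance as a fraction of the maximum reference dose
    """
    tol = tol_frac * ref.max()
    deviation = np.abs(pred - ref)
    return np.count_nonzero(deviation[beam_mask] < tol) / np.count_nonzero(beam_mask)

# Hypothetical usage with stand-in arrays of shape (64, 64, 64):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64, 64))
    pred = ref + rng.normal(scale=0.002, size=ref.shape)  # small perturbation of the reference
    mask = np.zeros_like(ref, dtype=bool)
    mask[16:48, 16:48, :] = True                          # toy "in-beam" region
    print(f"pass rate: {dose_pass_rate(pred, ref, mask):.3f}")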
Speaker: Florian Mentzel

Update: AINSE Welcome Address: Day 2 Opening Remarks
Awards: UAC | Research Award
Awards: Stephen Wilkins Medal

Delivery of antimicrobials to bacteria by cubosome nanocarriers 20m
Dyett, B.; Meikle, T.G.; Yu, H.; Strachan, J.B.; Lakic, B.; White, J.; Drummond, C.J. and Conn, C.E.
The increasing prevalence of antibiotic-resistant bacteria, in part due to overuse and misuse of antibiotics over the past decades, is one of the key global health challenges. Some Gram-negative strains have already been found to be resistant even to last-resort antibiotics. This is partially due to their ability to hinder the transport of antimicrobials through their outer membrane structure. One proposed strategy to combat this issue is via the use of lipid nanocarriers as drug delivery vehicles. These nanocarriers are known to interact with the outer lipid membrane via a unique fusion-type mechanism which can improve transport of antimicrobials into Gram-negative bacteria (Figure 1). In this talk I will discuss the mechanism of interaction of cubosome lipid nanocarriers with both Gram-positive and Gram-negative bacteria and discuss recent advances in the use of lipid-based nanocarriers to deliver antibiotics. Synchrotron SAXS is used to characterise the internal nanostructure of the particles before and after encapsulation of a range of antimicrobials including metal nanocrystals [1], antimicrobial peptides [2] and small molecule drugs [3]. Synchrotron CD is used to confirm retention of secondary structure following encapsulation for antimicrobial peptides [2]. The mechanism of uptake of cubosomes into both Gram-positive and Gram-negative bacteria is demonstrated using TIRF microscopy in combination with synchrotron FTIR [3,4]. Fundamental differences in the uptake mechanism between Gram-positive and Gram-negative bacteria will be described [3].
Speaker: Charlotte Conn (RMIT)

Chemical expansion and proton conductivity in vanadium-substituted variants of γ-Ba4Nb2O9 15m
Complex perovskite-derived oxides are an important emerging class of ionic conducting materials with potential applications in energy technologies including fuel cells, batteries, and separation membranes. The high-temperature phase γ-Ba4Nb2O9 is one such complex oxide which shows proton and oxide ionic conduction. Recently we have shown that two new compositional series with the previously unique γ-Ba4Nb2O9-type structure, γ-Ba4VxTa2-xO9 and γ-Ba4VxNb2-xO9 (x = 0-2/3), can form [1]. Undoped Ba4Ta2O9 forms a 6H-perovskite-type phase, but with sufficient V doping the γ-type phase is thermodynamically preferred and possibly more stable than γ-Ba4Nb2O9, forming at a 200 °C lower synthesis temperature. This is explained by the fact that Nb5+ ions in γ-Ba4Nb2O9 simultaneously occupy 4-, 5- and 6-coordinate sites in the oxide sublattice, which is less stable than allowing smaller V5+ to occupy the former and larger Ta5+ to occupy the latter. We characterised the structures of the new phases using a combination of X-ray and neutron powder diffraction. All compositions hydrate rapidly and extensively (up to 1/3 H2O per formula unit) under ambient conditions, like the parent γ-Ba4Nb2O9 phase, and show moderate but improved mixed ionic-electronic conduction. At lower temperatures, where hydration is maintained, the ionic conduction is significantly protonic. We also show that these new vanadium-containing phases have higher total conductivities than the parent γ-Ba4Nb2O9 compound.
[1] AJ Brown, B Schwaighofer, M Avdeev, B Johannessen, IR Evans and CD Ling, Chemistry of Materials, available online (2021). DOI: 10.1021/acs.chemmater.1c02340
Speaker: Mr Alex Brown (The University of Sydney)

Shape of nanopores in track-etched polycarbonate membranes 15m
Small angle X-ray scattering (SAXS) has been used over the past decade for characterizing track-etched nanopores in a variety of organic and inorganic materials. In the present study, synchrotron-based SAXS was used to study the morphology and size variation of the nanopores in polycarbonate (PC) as a function of the etching time and ion fluence. The shape of the nanopores fabricated through track-etch technology was found to be consistent with cylindrical pores with ends tapering off towards the two polymer surfaces in the last ~1.6 μm. The tapered structure of the nanopores in track-etched PC membranes was first observed more than 40 years ago, followed by many other studies suggesting that the shape of nanopores in PC membranes deviates from a perfect cylinder and that nanopores narrow towards both membrane surfaces. However, quantification of the shape of nanopores has remained elusive due to inherent difficulties in imaging the pores using microscopy techniques. This study reports on the quantitative measurement of the tapered structure of nanopores using SAXS [1]. Determination of this structure was enabled by obtaining high-quality SAXS data and the development of an appropriate form-factor model. The etch rates for both the radius at the polymer surface and the radius of the pore in bulk were calculated. Both etch rates decrease slightly with increasing fluence. This behavior is ascribed to the overlap of track halos, which are characterized by cross-linking of the polymer chains. The results enable a better understanding of track-etched membranes and facilitate improved pore design for many applications.
[1] Dutt, S. et al. J. Membr. Sci. 638, 119681 (2021)
Speaker: Shankar Dutt (Australian National University)

Application of Inelastic Neutron Scattering for Thermoelectric Materials Study 15m
Research on thermoelectric (TE) materials has been an active field for the past decade, as TE materials can potentially be used in many niche areas such as powering space probes and converting waste heat into electricity. Developments are ongoing in the search for advanced TE materials that could play a significant role in sustainable technology. One of the strategies for improving the performance of a thermoelectric material is to decrease the thermal conductivity, which is directly related to the lattice dynamics of the material. Measurement of the phonon density of states and phonon dispersion as a function of temperature can provide deep insight into the thermal conductivity in terms of, for example, anharmonic vibrations and low-energy rattling modes. PELICAN, a time-of-flight neutron spectrometer at ACNS, has been actively used for such studies. In this presentation, I will give a brief introduction and the current status of TE material research, followed by the link to lattice dynamics, and explore how inelastic neutron scattering can help in the fundamental understanding of thermoelectric properties with a couple of case studies.
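Since the entry above centres on extracting a phonon density of states from inelastic neutron scattering data, a minimal sketch of the textbook incoherent-approximation conversion of S(Q, E) (neutron energy-gain side) to a generalised phonon DOS is given below; the binning, the simple Q-average, the overall normalisation and the neglect of the Debye-Waller factor are simplifying assumptions, and this is not the PELICAN data-reduction software.

import numpy as np

K_B = 0.08617  # Boltzmann constant in meV/K

def generalised_dos(s_qe, q, energy, temperature):
    """Generalised phonon DOS from S(Q, E) in the one-phonon incoherent
    approximation (energy-gain side), up to an overall constant and
    neglecting the Debye-Waller factor:

        g(E) ~ < (E / Q^2) * S(Q, E) / n(E) >_Q,  with  n(E) = 1 / (exp(E/kT) - 1)

    s_qe        : 2D array S(Q, E), shape (len(q), len(energy))
    q, energy   : 1D arrays of momentum transfer (1/Angstrom) and |energy transfer| (meV, > 0)
    temperature : sample temperature in K
    """
    n = 1.0 / np.expm1(energy / (K_B * temperature))   # Bose occupation factor
    weight = energy / n                                 # E / n(E)
    gdos_q = s_qe * weight[np.newaxis, :] / (q[:, np.newaxis] ** 2)
    gdos = gdos_q.mean(axis=0)                          # simple average over Q
    return gdos / np.trapz(gdos, energy)                # normalise to unit area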
Speaker: Dehong Yu (Australian Nuclear Science and Technology Organisation)

Origin of vertical slab orientation in blade-coated layered hybrid perovskite films revealed with in-situ synchrotron X-ray scattering 15m
Controlling the vertical orientation of perovskite slabs in layered hybrid perovskite films is key for enabling further optimization of photovoltaic device performance. However, the mechanism explaining vertical orientation control in such films remains under debate. Here, we present an in-situ grazing-incidence wide-angle X-ray scattering (GIWAXS) study on the formation of BA2MAn-1PbnI3n+1 perovskite films during blade-coating, where BA, MA and n denote butylammonium, methylammonium and the thickness of the perovskite slabs. The evolution of the grazing-incidence transmission wide-angle X-ray scattering (GTWAXS) signal is also monitored to reveal the specific vertically oriented low-n phases formed in such films. We find that the blade-coating temperature greatly influences the crystallization dynamics of BA2MAn-1PbnI3n+1 perovskite films and the perovskite slab orientation via intermediate-phase and low-n-phase formation. For the perovskite film with a targeted dimensionality of n = 4, blade-coating films at higher temperatures suppresses the formation of the 2MAI∙3PbI2∙2DMF intermediate phase. This in turn suppresses the formation of the n = 2 phase that adopts an undesired horizontal perovskite slab orientation, instead favouring the formation of the n = 3 phase that adopts the desired vertical perovskite slab orientation. Further analysis of the microstructural evolution of films with near-perfect vertical orientation reveals that the formation mechanism proceeds through several stages: (i) sol-gel, (ii) weakly-textured 3D-like perovskite, (iii) n = 3 phase, and finally, (iv) crystallite reorientation into the near-perfect texture. The findings from this in-situ simultaneous GIWAXS and GTWAXS study provide improved understanding of the film formation mechanism for layered hybrid perovskite films with near-perfect vertical orientation.
Speaker: Mr Wen Liang Tan (Monash University Australia)

Membrane permeabilisation is mediated by distinct epitopes in mouse and human orthologs of the necroptosis effector, MLKL 15m
Necroptosis is a lytic programmed cell death pathway with origins in innate immunity that is frequently dysregulated in inflammatory diseases. The terminal effector of the pathway, MLKL, is licensed to kill following phosphorylation of its pseudokinase domain by the upstream regulator, RIPK3 kinase. Phosphorylation provokes the unleashing of MLKL's N-terminal four-helix bundle (4HB or HeLo) domain, which binds and permeabilises the plasma membrane to cause cell death. The precise mechanism by which the 4HB domain permeabilises membranes, and how the mechanism differs between species, remains unclear. Here, we identify the membrane binding epitope of mouse MLKL using NMR spectroscopy. Using liposome permeabilisation and cell death assays, we validate K69 in the α3 helix, W108 in the α4 helix, and R137/Q138 in the first brace helix as crucial residues for necroptotic signaling. This epitope differs from the phospholipid binding site reported for human MLKL, which comprises basic residues primarily located in the α1 and α2 helices. In further contrast to human and plant MLKL orthologs, in which the α3-α4 loop forms a helix, this loop is unstructured in mouse MLKL in solution.
Together, these findings illustrate the versatility of the 4HB domain fold, whose lytic function can be mediated by distinct epitopes in different orthologs.
Speaker: Chris Horne (WEHI)

Disulfide bond formation between T-cell receptor and peptide antigen lowers the threshold of T cell activation 15m
The immune system is vigilant in detecting foreign pathogens. Our cells present peptides (p), small fragments of proteins, atop Major Histocompatibility Complex (MHC) glycoproteins. These pMHC molecules are displayed on the cell's surface and monitored by T cells of the immune system that patrol the body. T cells use their specialized T cell receptors (TCRs) to recognize and bind to pMHCs, where the quality of binding influences T cell activation. Activated T cells are responsible for killing off infected cells and clearing infection. The contributions of the individual parameters that dictate activation for this cell-to-cell TCR-pMHC interaction are unclear. However, a long-standing hypothesis is that the threshold of T cell activation can be determined by the dissociation constant or binding affinity. We have engineered a disulfide bond (DSB) between two cysteine residues introduced into a TCR and peptide that are known to form a TCR-pMHC complex. The formation of the DSB was validated using biophysical assays and X-ray crystallography. This approach represents a model in which the covalently bonded TCR-pMHC do not dissociate, prolonging the confinement time of the interaction almost indefinitely. When this TCR and pMHC model was reproduced in T cells, we discovered that the DSB interaction was 10,000-fold more sensitive in activating T cells than the wild-type counterpart, without altering binding affinity. Thus, we show that confinement time plays an important role in the activation of T cells, which could be useful in designing T cell therapies or peptide vaccines.
Speaker: Dr Christopher Szeto (La Trobe Institute for Molecular Science)

A Comparison of Different Approaches to Image Quality Assessment in Phase-Contrast Mammography 15m
Propagation-based phase-contrast computed tomography (PB-CT) has the potential to improve breast cancer detection and characterisation compared to established mammography techniques. The aim of this work is to find a quantitative image quality metric which could accurately predict the subjective clinical image quality assessment of PB-CT images made by radiologists, as described in Taba et al. [1]. The experimental data analysed in this study included PB-CT scans obtained for 12 full intact mastectomy samples at the Imaging and Medical Beamline (IMBL) of the Australian Synchrotron at different monochromatic X-ray energies and clinically relevant radiation doses. Quantitative image quality metrics, including visibility, signal-to-noise ratio (SNR), and spatial resolution, were calculated for all PB-CT and conventional CT image sets using the open-source 3D Slicer (https://www.slicer.org/) software. For each metric, an objective image quality "score" was generated to match the subjective scoring provided by the radiologists. Weighting factors were then applied to the scores and a weighted contrast-to-noise-to-resolution (CNR/res) score was calculated. The unscaled contrast and spatial resolution scores were both found to have a significant correlation with the radiologists' scores, with R values of 0.9223 and 0.8360 respectively, while SNR had an insignificant correlation, with an R value of -0.6785.
The weighted CNR/res score showed a significant correlation with the radiologists' scores, with an R value of 0.9681.
[1] S. T. Taba et al., Academic Radiology 28.1 (2021): e20-e26.
Speaker: Jesse Reynolds (University of Canterbury)

Human MLKL is maintained by RIPK3 in an inactive conformation prior to disengagement and cell death by necroptosis 15m
Necroptosis is a caspase-independent form of programmed cell death that results in the compromise of plasma membranes and release of inflammatory cellular contents. Dysregulated necroptosis has been shown to play a role in a range of different human pathologies, including ischemia-reperfusion injury, inflammatory diseases, and inflammatory bowel disease. Phosphorylation of MLKL by the RIPK3 kinase leads to MLKL oligomerization, translocation to, and permeabilization of, the plasma membrane to induce necroptotic cell death. The precise choreography of MLKL activation remains incompletely understood. Here, we used monobodies, synthetic binding proteins that bind the pseudokinase domain of MLKL, to detect endogenous protein interactions within human cells. We showed that MLKL is stably bound by RIPK3 prior to their disengagement upon necroptosis induction. Crystal structures of the MLKL pseudokinase domain in complex with two different monobodies or with the RIPK3 kinase domain identified two distinct conformations of the MLKL pseudokinase domain. These structures support a model in which human RIPK3 maintains MLKL in an inactive conformation prior to the induction of necroptosis. These studies provide further evidence that MLKL undergoes a large conformational change upon activation and identify MLKL disengagement from RIPK3 as a key regulatory step in the necroptosis pathway.
Speaker: Mr Yanxiang Meng (WEHI)

The silver bullet: using silver doped lanthanum manganite to selectively target deadly brain cancer 15m
Treatment of deadly cancers that are deep-seated within sensitive healthy tissue is limited by the lack of adequate targeting strategies. More specifically, brain and central nervous system cancers can be the most aggressive, have higher mortality rates and lower accessibility to chemotherapeutic drugs. This study introduces the first in-depth analysis of silver-doped lanthanum manganite (LAGMO) nanoparticles (NPs) as a brain-cancer-selective chemotherapeutic and radiation dose enhancer. The magnetic, chemical and biological properties of LAGMO NPs at silver dopant levels of 0-10% were investigated. Magnetic and chemical phases of the LAGMO NPs were analysed with neutron diffraction using the ECHIDNA High-Resolution Powder Diffractometer. Biocompatibility and combinational treatment strategies involved in vitro biological endpoint clonogenic assays, live cell imaging and a cancer cell selectivity investigation. Neutron diffraction revealed that 10% LAGMO NPs exhibit residual ferromagnetism at 300 K, suggesting potential hyperthermia cancer treatment strategies. Biocompatibility studies of LAGMO NPs with cancerous and non-cancerous cells displayed a completely cancer-cell-selective toxic response, while non-cancerous cell growth was promoted. Clonogenic assays revealed a significant decrease in the long-term survival of cancer cells treated with NPs and radiation therapy compared to radiation alone. LAGMO NPs have the potential to significantly improve targeted cancer treatment strategies. Their unique magnetic properties introduce a potential to induce cancer cell hyperthermia alongside radiation treatment and improve clinical outcomes.
Furthermore, they promote non-cancerous cell growth while severely damaging cancer cells alongside radiation.
Khochaiche, Abass, et al. "First extensive study of silver-doped lanthanum manganite nanoparticles for inducing selective chemotherapy and radio-toxicity enhancement." Materials Science and Engineering: C 123 (2021): 111970.
Speaker: Mr Abass Khochaiche (University of Wollongong)

Determining the role of protein aggregation in COVID-19 15m
COVID-19 is primarily known as a respiratory disease caused by the virus SARS-CoV-2. However, neurological symptoms such as memory loss, sensory confusion, cognitive and psychiatric issues, severe headaches, and even stroke are reported in as many as 30% of cases and can persist even after the infection is over (so-called 'long COVID'). These neurological symptoms are thought to be caused by brain inflammation and toxicity, triggered by the virus infecting the central nervous system of COVID-19 patients; however, we still do not understand the molecular mechanisms underpinning this neurotoxicity. The neurological effects of COVID-19 share many similarities with neurodegenerative diseases such as Alzheimer's and Parkinson's, in which the presence of cytotoxic self-assembled protein aggregates, known as amyloid nanofibrils, is a common hallmark. This led us to hypothesise that self-assembled amyloid aggregates may be present in the proteome of SARS-CoV-2 and responsible for some of the neurological symptoms of COVID-19. In this work we identified several peptide sequences within the proteome of SARS-CoV-2 that have a strong tendency to spontaneously self-assemble into amyloid aggregates. We performed an extensive characterisation of the in vitro toxicity and biophysical properties of these assemblies using a variety of techniques. We used data recorded at the SAXS/WAXS beamline at the Australian Synchrotron to provide insights into the nanoscale morphology and molecular structure of these assemblies. Based on these results we introduce the idea that cytotoxic amyloid aggregates of SARS-CoV-2 proteins are causing some of the neurological symptoms commonly found in COVID-19 and contributing to long COVID.
Speaker: Nicholas Reynolds (La Trobe University)

Small Angle Neutron Scattering Capability at ANSTO 15m
The ANSTO Lucas Heights campus is home to three world-class small angle neutron scattering (SANS) instruments: Bilby, a time-of-flight SANS instrument [1]; Kookaburra, an ultra-small-angle neutron scattering instrument [2]; and Quokka, a monochromatic SANS instrument [3]. Together they cover the structure of materials from 1 nm to > 20 microns. As well as recent scientific highlights, we here outline the updates from the group since the last ANSTO user meeting, notably:
- The replacement of our lab-based small angle X-ray instrument with a state-of-the-art instrument along with a range of dedicated sample environments, currently being procured and due for installation early 2022.
- The new rheometer for in-situ measurements on the three neutron instruments.
- Our recently developed GiSANS setup, funded by the National Synchrotron Radiation Research Center.
[1] A. Sokolova, A. E. Whitten, L. de Campo, J. Christoforidis, A. Eltobaji, J. Barnes, F. Darmann and A. Berry, Performance and characteristics of the BILBY time-of-flight small-angle neutron scattering instrument, J Appl Crystallogr, 2019, 52, 1-12.
[2] Rehm, C.; de Campo, L.; Brûlé, A.; Darmann, F.; Bartsch, F.; Berry, A., Design and performance of the variable-wavelength Bonse–Hart ultra-small-angle neutron scattering diffractometer KOOKABURRA at ANSTO. J Appl Crystallogr, 2018, 51 (1), 1-8.
[3] K. Wood, J. P. Mata, C. J. Garvey, C. M. Wu, W. A. Hamilton, P. Abbeywick, [..] and E. P. Gilbert, QUOKKA, the pinhole small-angle neutron scattering instrument at the OPAL Research Reactor, Australia: design, performance, operation and scientific highlights, J Appl Crystallogr, 2018, 51, 294-314.
Speaker: Kathleen Wood (Australian Nuclear Science and Technology Organisation)

New developments in neutron imaging at DINGO 15m
The neutron radiography/tomography/imaging instrument DINGO has been operational since October 2014 to support research at ANSTO. DINGO provides a useful tool for gaining a different insight into objects. A major part of the applications from research and industrial users has demanded a high-resolution setup and fast scans on DINGO. The neutron beam size can be adjusted to the sample size, from 25 x 25 mm² to 200 x 200 mm², with a resulting pixel size from 12 µm to ~100 µm. Depending on the sample composition, a full tomography takes between 10 minutes and 36 hours. During the recent OPAL long shutdown, a new sapphire filter was installed to reduce the amount of epithermal and fast neutrons at the sample position. These high-energy neutrons contribute only noise to the image and increase the radiation levels around the CMOS camera. This update will improve the image quality as well as the reliability of the whole instrument. In addition, we have implemented a new type of neutron tomography scan to address long samples such as drill cores. These samples can now be scanned horizontally, up to 1.2 metres in length. For small core sizes we can run up to three cores in one scan, which makes DINGO a very competitive instrument for fast, high-throughput imaging. A new software package for 3D reconstruction has been developed as well. It is an open-source package based on the Python toolbox "tomopy", with a GUI custom made for DINGO to enable users to run the reconstruction in their own computing environment (see the illustrative reconstruction sketch below).
Speaker: Dr Ulf Garbe (ANSTO)

A study of the intrinsic background from the Beryllium Filter Spectrometer on Taipan 15m
The beryllium filter spectrometer on Taipan is a low-energy band-pass spectrometer that employs a number of materials to effectively scatter out neutrons of higher energies and transmit only neutrons in the energy range Ef = 1.2 ± 0.5 meV. In this study the spectrometer response is examined in order to understand and identify the inherent background from the spectrometer itself. Ambient air and nickel are used as scatterers, as the former gives a reasonable detection limit of the spectrometer and the latter gives enough scatter to observe the inelastic signal but not so much as to swamp the inherent signal produced by the spectrometer itself. The background shape is found to be hull-like, reflecting the total scattering cross-sections of the filter materials themselves and those of the copper cooling frame and the iron found in the stainless steel collimator. Furthermore, the detailed inelastic signals from the last set of beryllium blocks next to the detector bank are identifiable as low-intensity parts of the spectra.
A simple experimental method using the collected spectra is used to identify features associated with scatter from the spectrometer and those from the sample under investigation, which can then be used to effectively strip out the spectrometer profile from collected spectra. Further work to minimise the scatter generated from the spectrometer filter blocks and frames, in order to reduce the background to the ultimate minimum limit, is also discussed.
Speaker: Dr Anton Stampfl (Australian Nuclear Science and Technology Organisation)

BioSAXS: The future of solution scattering at the Australian Synchrotron 15m
BioSAXS is one of the new beamlines to be constructed at the Australian Synchrotron within the BRIGHT program. The beamline is currently under construction and is scheduled to phase into user operations in mid-late 2022. BioSAXS will be a high-flux (~5 x 10¹⁴ ph/sec) small angle X-ray scattering beamline dedicated to all sorts of solution scattering, including dispersions, gels and soft matter, covering a variety of disciplines from biology to chemistry and materials science. The high flux of the beamline will provide enhanced data quality and kinetic resolution, allowing for time-resolved studies on the millisecond timescale, as well as the measurement of weak scatterers and low concentrations that would not otherwise be possible. The in-vacuum detector system at the end station will provide quick and highly automated camera changes, a q range of ~0.0015-3 Å⁻¹ and low background in collected data. The CoFlow, a pioneering development of the Australian Synchrotron, will be the primary autoloading device for high-throughput experiments. Other sample environment options will include a stopped-flow apparatus and rheometer, temperature-controlled capillary stages, and a shear cell, as well as a versatile magnetic-array system optimized for experiments on magnetic nanoparticles used in biomedical applications. The beamline's sample platform will also accommodate the installation of user equipment. The objective of this presentation is to demonstrate BioSAXS' final design and the capabilities that will allow it to develop into a highly automated and versatile beamline that can accommodate a wide range of solution scattering experiments, complementing the existing SAXS/WAXS beamline to ensure the world-leading capabilities of the SAXS offering at the Australian Synchrotron.
Speaker: Dr Christina Kamma-Lorger (ANSTO Australian Synchrotron)

ACNS SAMPLE ENVIRONMENT UPDATE 15m
Since the last ANSTO User Meeting, the sample environment group at ACNS has supported our facility users with a range of unique developments and set-ups. We have had a change in structure, with the laboratory group forming and working alongside us. We will report on the progress of our ongoing project on a Direct Laser Melting (DLM) deposition system, co-funded by a NSW RAAP grant. Also underway are LIEF grants with equipment for use at ACNS; one includes a rheometer for use on ACNS beam instruments. This presentation will also cover our new equipment projects funded by the NCRIS RIIP scheme. This includes new cryofurnaces, a new type of furnace, a universal testing machine and other equipment. This funding will maintain and improve our existing capabilities and increase the redundancy across the SE suite to better service competing requests.
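The DINGO entry above describes an open-source reconstruction package built on the Python toolbox tomopy. A generic tomopy filtered-backprojection workflow is sketched below for orientation only; the file names, flat/dark-field handling and 360° scan geometry (for example, 1896 projections over a full rotation) are assumptions, and this is not the in-house Pydingo or GUI code.

import numpy as np
import tomopy

# Hypothetical inputs: radiographs plus flat- and dark-field images,
# stored as arrays of shape (n_projections, n_rows, n_columns).
proj = np.load("projections.npy")   # e.g. 1896 radiographs over 360 degrees
flat = np.load("flats.npy")
dark = np.load("darks.npy")

theta = tomopy.angles(proj.shape[0], 0, 360)   # projection angles in radians

proj = tomopy.normalize(proj, flat, dark)      # flat/dark-field correction
proj = tomopy.minus_log(proj)                  # transmission -> attenuation

center = tomopy.find_center(proj, theta)       # estimate the rotation axis

# Filtered backprojection; 'gridrec' is a faster Fourier-based alternative.
recon = tomopy.recon(proj, theta, center=center, algorithm="fbp")
recon = tomopy.circ_mask(recon, axis=0, ratio=0.95)

np.save("reconstruction.npy", recon)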
Speaker: Rachel White (ANSTO)

High-Resolution Macro ATR-FTIR Chemical Imaging Capability at Australian Synchrotron Infrared Microspectroscopy (IRM) Beamline 15m
This presentation aims to provide a summary of the technical aspects and applications of our synchrotron macro ATR-FTIR microspectroscopy, unique to the Infrared Microspectroscopy (IRM) beamline at ANSTO–Australian Synchrotron [1]. The device was developed by modifying the cantilever arm of a standard macro-ATR unit to accept Ge ATR elements. Coupling the synchrotron IR beam to the Ge ATR element (n = 4) reduces the beam focus size by a factor of 4 (improving lateral resolution) and the mapping step size by a factor of 4 relative to the stage step motion. As a result, macro ATR-FTIR measurements at our IRM beamline can be performed at a minimum projected aperture (sampling spot size) of 1-2 μm using a 20x objective and a minimum mapping step size of 250 nm, allowing high-resolution chemical imaging analysis with a resolution limit beyond those allowed by standard synchrotron-FTIR transmission and reflectance setups. The technique has facilitated many experiments across a diverse range of research disciplines. Here, applications of the macro ATR-FTIR technique in archaeology, electrochemistry (batteries), and the biomedical and forensic sciences will be presented. Apart from these, we will provide additional applications in the fields of food and pharmaceutical science [2-4], single-fibre analysis [5,6], and dentistry [7].
[1] J. Vongsvivut, et al., Analyst 144, 10, 3226-323 (2019). [2] A.P. Pax, et al., Food Chemistry, 291, 214-222 (2019). [3] Y.P. Timilsena, et al., Food Chemistry, 275, 457-466 (2019). [4] D.M. Silva, et al., Journal of Colloid and Interface Science, 587, 499-509 (2021). [5] S. Nunna, et al., Journal of Materials Chemistry A, 5, 7372-7382 (2017). [6] C. Haynl, et al., Scientific Reports, 10, 17624 (2020). [7] P.V. Seredin, et al., International Journal of Molecular Sciences, 22, 6510 (2021).
Speaker: Jitraporn (Pimm) Vongsvivut (Australian Synchrotron)

Lunch 1h 10m

Jaws caught on the IMBL 15m
Maturational changes in feeding behaviour among sharks are associated with increased mineralisation of the teeth and jaws, but this relationship has only been demonstrated in a few species. Large, highly mobile shark species are rarely available for detailed anatomical study, despite their importance for ecological health and widespread interest among the general population. We examined the crania, jaws, and teeth of two great white sharks (Carcharodon carcharias), a 2.3 m juvenile and a 3.2 m young adult. The CT scans used a 230 keV (mean energy) polychromatic beam from the 4 Tesla wiggler, with filtration of 6 mm Al, 6 mm Cu, 3 mm Mo and 3 mm Pb. The detector was a Teledyne-Dalsa Xineos 3030HR with 100 µm pixels, a width of 300 mm, and a 1 mm CsI converter for high efficiency at high energy. Image noise was reduced by collecting 18,000 projections per rotation to deliver an image quality good enough to segment out different tissue types. With a beam size of 300 mm x 35 mm, the shark head was covered by 'tiling' and stitching the tiles; the full-head image was made up of two columns and 21 tiles, imaging a 600 mm x 520 mm area. The total scan time was 9 hours. The heads were also imaged using conventional CT and 7 Tesla MRI for finite element modelling of the bite forces produced by the jaw musculature.
These results will be compared with measurements of the difference in mineralisation of tooth and jaw cartilage between the two specimens to assess developmental changes in tooth and jaw hardness as the animals shift their diets from largely fish-based (juvenile) to larger prey, such as seals, scavenged whales and surfers (adults).
Speaker: Dr Daniel Hausermann (Australian Synchrotron (ANSTO))

Use of high-resolution technologies to understand the broken past 20m
Our understanding of material culture and past environmental contexts has been utterly transformed over the last two decades by new and greatly improved scientific methods. Innovative investigations revolve around the refinement of methods for chronological dating, characterization and provenancing, bioarchaeology, geoarchaeology and the emerging sub-discipline of cyber-archaeology. As ever, when dealing with the past, 'meaning' remains more difficult and we will always be limited by what little we can know. With the help of multi-scalar, high-resolution techniques, there at least exists the potential for useful and even groundbreaking information to be retrieved from material culture, information whose absence might otherwise leave room for doubt. This talk will illustrate some applications of science and technology that are proving valuable in Australian archaeology, drawing from my own studies at ANSTO and the Australian Synchrotron as well as other novel multi-scientific projects. At the heart of these is an interdisciplinary approach and an aim to provide the most accurate understanding of the dynamic past from what is often a very fragmentary record.
Speaker: Dr Ingrid Ward (University of Western Australia)

Leaving a mark on forensic science: Using synchrotron microscopy and spectroscopy to explore fingermark chemistry 15m
Fingermarks are an important tool in forensic investigations; however, a large number are not successfully recovered and are never used as evidence (1). A significant challenge in their detection is the chemical variability of fingermark deposits. This research aims to answer important questions in fingermark chemistry using synchrotron-sourced analysis, including X-ray fluorescence microscopy (XFM), infrared microspectroscopy (IRM) and THz-far-infrared (Far-IR) spectroscopy, to deepen the understanding of fingermark residue and improve recovery methods. First, what is the chemical composition of a fingermark? We explored the distribution of inorganic material using XFM to discriminate between the endogenous and exogenous metals present in a natural fingermark, with multimodal studies using IRM connecting this distribution to the organic material (2,3). Further investigation of the transfer and persistence of exogenous metals demonstrated how handling different metal objects can affect fingermark chemistry, suggesting daily activities can influence the material present in a fingermark. Second, what happens to this material as the fingermarks age? The material deposited in a fingermark is not static and changes over time, with the rate of change being influenced by the environment and surface. We have directly imaged the rate of change post deposition using IRM, demonstrating the dehydration of hydrophilic material in a fingermark droplet over time. To volumetrically measure this rate of change we have measured the water evaporating off a fingermark in the gas phase using Far-IR, providing important insight into the water content of fingermark residue.
(1) S. Chadwick et al. Forensic Science International, 2018, 289, 381-389.
(2) B. N. Dorakumbura et al. Analyst, 2018, 143, 4027-4039.
(3) R. E. Boseley et al. Analytical Chemistry, 2019, 91, 10622-10630.
Speaker: Rhiannon Boseley (Curtin University)

Understanding the generation and evolution of reaction-induced porosity in the replacement of calcite by gypsum: A combined microscopy, X-ray micro-tomography, and USANS/SANS study 15m
Fluid-mediated mineral replacement reactions are common in natural systems and are essential for geological and engineering processes. In these reactions, a primary mineral is replaced by a product mineral via a mechanism called coupled dissolution-reprecipitation. This mechanism leads to the preservation of the shape of the primary mineral in the product mineral. The product mineral includes reaction-induced porosity contributing to enhanced permeability, which is crucial for the replacement reaction to progress from the surface to the core of the primary mineral grain. These reaction-induced pores are complex in size, shape and connectivity, and can evolve with time. However, the mechanisms of the creation and evolution of such pores are still poorly understood. Therefore, we investigated the replacement of calcite (CaCO3) by gypsum (CaSO4.2H2O) to understand porosity creation in the replacement stage and the evolution of such porosity after complete replacement. This replacement reaction is important for applications such as groundwater reservoir evaluation, CO2 sequestration, cultural heritage preservation, and acid mine drainage remediation. Samples collected at various reaction stages over 18 months were characterised by ultra-small-angle and small-angle neutron scattering (USANS/SANS), ultra-high-resolution electron microscopy (UHR-SEM), and X-ray micro-computed tomography (X-μCT). Results show the formation of micro-voids in the core of the gypsum grain and the generation of nanometre-sized elongated pores in the newly formed gypsum crystals. Micrometre-sized pores were mostly open, while pores smaller than 30 nm were mainly closed. After complete replacement, continued porosity coarsening occurred over the 18-month period, driven by Ostwald ripening.
Speaker: Muhammet Kartal (Murdoch University)

Scattering or spectroscopy? Both! 20m
In this presentation I will discuss the development of resonant tender X-ray diffraction to study the molecular packing of semiconducting polymers. Semiconducting polymers are being developed for application in a wide range of optoelectronic devices including solar cells, LEDs and transistors. Being polymeric materials, they offer advantages over traditional semiconductors including ease of processing and mechanical flexibility. Most semiconducting polymers are semicrystalline, with the way in which polymer chains pack strongly affecting their optoelectronic performance. Unlike small-molecule crystals, whose structure can be directly solved using crystallographic methods, semiconducting polymers are more disordered, meaning that there are not enough diffraction peaks available. To squeeze more information from the diffraction peaks that are present, we have turned to resonant diffraction: by varying the X-ray energy across an elemental absorption edge, the variations in diffraction intensity that are observed can provide additional information. Also known as anomalous diffraction, this technique has been applied in other fields, including protein crystallography.
As many semiconducting polymers utilise sulfur as heteroatoms, we have studied resonant diffraction effects at the sulfur K-edge in the tender X-ray regime. Furthermore, we have adopted a spectroscopic approach, exploiting anisotropic near-edge X-ray absorption features at the sulfur K-edge that produce highly anisotropic resonant diffraction effects. This marrying of spectroscopy and scattering enables us not only to infer information about the position of the resonant atoms within the unit cell, but also to deduce the orientation of molecular bonds and orbitals within the unit cell.
Speaker: Prof. Chris McNeill (Monash University)

Wavefield Characterisation of MHz XFEL Pulse Trains 15m
X-ray Free Electron Laser (XFEL) light sources present new opportunities in the imaging of single particles and biomolecules. The interpretation and analysis of XFEL imaging data depend critically on a fundamental understanding of the characteristics of the inherently stochastic XFEL pulses delivered to the instrument. Exploiting the unique MHz repetition rate of the European XFEL to image single particles requires an improved understanding of both the inter- and intra-train fluctuations in pulse structure and beam pointing, which are frequently implicated in the loss of information in XFEL single particle imaging (SPI) and other classes of coherent diffraction experiment. Failure to account for fluctuations of the electron bunch phase space and/or trajectory within a pulse train can result in deviations of the recorded wavefront and intensity statistics from theoretical behaviour and lead to conflation of the structure of the source and sample in single particle reconstruction. Contrary to expectations, X-ray optical data collected at the SPB-SFX instrument of the European XFEL demonstrate a sensitivity of inter- and intra-train variations in beam pointing to beam delivery parameters, including the order of a pulse within a train. These data are presented in comparison to a partially coherent wave-optical simulation of the SPB-SFX instrument, through which photon diagnostics have been designed and developed, with the goal of improving the stability and subsequent imaging quality of the user-end photon beam. We describe these preliminary results within the scope of developing a novel phase-retrieval method applicable to the study of MHz repetition rate XFEL sources, using near-field speckle-tracking measurements.
Speaker: Mr Trey Guest (La Trobe University, European XFEL)

Verification of L-alanine single-crystallinity for anisotropic synchrotron terahertz measurements 15m
One way to probe the molecular interactions of a material is by using terahertz (THz) spectroscopy, which has been used to study L-alanine in detail [1]. However, isotropic THz spectroscopy has limitations in identifying the origin of vibrational modes, since the direction of the associated dipole moment is random in an isotropic THz measurement. Therefore, there is a benefit to performing anisotropic (polarised) THz measurements. This work represents the first anisotropic measurements performed on L-alanine, the simplest chiral amino acid and one of the earliest amino acids fundamental to early life on Earth [2]. An appropriate sample for anisotropic measurements must be highly single-crystalline. This presentation describes a method to prepare and test a sample for anisotropic THz measurements. Samples have been grown at the University of Wollongong, and sample verification has been done at ACNS's Taipan triple-axis spectrometer.
Using Taipan, a narrow mosaic spread of ~0.8° was determined, and single, well-fitted Gaussian peaks were observed in both sample rotation and Q-space scans, suggesting high single-crystallinity in our L-alanine samples. Additionally, the Taipan measurements were able to verify the orientation of the L-alanine single crystals with respect to their crystallographic axes. Anisotropic THz measurements were taken on the THz – Far Infrared beamline at the Australian Synchrotron using a wire-grid polariser. Distinct absorption bands were observed for different crystal orientations, further confirming single-crystallinity and identifying the dipole moment directions for the observed modes. We thus demonstrate a method of performing anisotropic THz measurements.
[1] T. J. Sanders et al., J. Chem. Phys., 154, 244311 (2021)
[2] V. Kubyshkin and N. Budisa, Int. J. Mol. Sci., 20, 5507 (2019)
Speaker: Jackson Allen (UOW)

Awards: ANBUG | Technical Award
Awards: ANBUG | Student Award
Awards: ANBUG | Neutron Award
Awards: ANBUG | Young Scientist Award
Awards: ANBUG | Career Award
Update: Career Progress & Poster Slam
Break 30m

EMU cold-neutron backscattering spectrometer at ACNS, ANSTO 1m
EMU is the high-resolution neutron spectrometer installed at the OPAL reactor, ANSTO, which delivers 1 µeV FWHM energy transfer resolution over an accessible ±31 µeV energy transfer range. The spectral resolution is achieved by neutron backscattering from Si(111) crystals on the primary and secondary flight paths, allowing up to a 1.95 Å⁻¹ momentum transfer range (a short worked example of the backscattering condition is given below). The spectrometer is well suited for quasi-elastic and inelastic neutron scattering studies, notably in the fields of soft condensed matter (including biophysics and polymer science), chemistry and materials science, and the geosciences. Most experiments are carried out with standard cryo-furnaces (2 to 800 K temperature range). Spectrometer beam-time access is merit-based, thus welcoming experiments in other materials research areas as well, including experiments that may require other ancillary equipment, such as the existing controlled-gas delivery and, potentially, pressure or applied-field set-ups.
Speaker: Alice Klapproth (ANSTO)

Full Hemisphere Photoemission Using the Toroidal Analyser 1m
The toroidal analyser at the Australian Synchrotron is an angle-resolving photoelectron spectrometer capable of mapping the full hemisphere of emitted photoelectrons from a sample. This measurement capability is unusual amongst conventional photoelectron spectrometers and permits a number of unique techniques for the electronic and structural characterization of surfaces. This presentation will detail the operating principles of the spectrometer, with particular reference to the angular detection geometry, and will describe the three modes of full hemisphere photoemission: (i) Fermi Surface Mapping, (ii) Molecular Tomography and (iii) Photoelectron Diffraction.
Speaker: Anton Tadich

Status, statistics, and recent research highlights from Echidna 1m
The Echidna high-resolution powder diffractometer remains a reliable and productive ACNS instrument, contributing annually to about 50 published studies on a wide range of topics, from magnetic, energy and planetary materials to cultural heritage and additive manufacturing. We will discuss how Echidna has been affected by COVID-19 measures, the latest and planned developments, user programme statistics, and recent research highlights.
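For the EMU entry above, the quoted numbers follow directly from the Si(111) backscattering condition: the incident wavelength is twice the Si(111) d-spacing, which in turn fixes the neutron energy and the maximum accessible momentum transfer. A small worked example, using the standard literature d-spacing, is sketched below.

import math

d_111 = 3.1356                       # Si(111) d-spacing in Angstrom (standard literature value)
wavelength = 2.0 * d_111             # Bragg backscattering: lambda = 2 d sin(90 deg)
energy_meV = 81.804 / wavelength**2  # neutron energy in meV, E = 81.804 / lambda^2 (lambda in Angstrom)
q_max = 4.0 * math.pi / wavelength   # kinematic maximum momentum transfer at 2theta = 180 deg

print(f"lambda = {wavelength:.2f} A")    # ~6.27 A
print(f"E      = {energy_meV:.2f} meV")  # ~2.08 meV
print(f"Q_max  = {q_max:.2f} 1/A")       # ~2.0 1/A, close to the quoted 1.95 1/A accessible range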
Speakers: Max Avdeev (Australian Nuclear Science and Technology Organisation, Australian Centre for Neutron Scattering), James Hester (ANSTO), Chin-Wei Wang (NSRRC)

Scientific computing support for neutron scattering experiments at ANSTO 1m
The purpose of the scientific computing support at ANSTO is to aid in the interpretation of both structural and dynamical data from the neutron scattering instruments using atomistic modelling calculations. Most of these calculations are done with ab initio scientific software packages based on Density Functional Theory, including VASP, WIEN2K, ABINIT, SIESTA, PHONON, and QUANTUM ESPRESSO, although some are performed with packages based on classical force fields, such as LAMMPS, DL_POLY, NAMD, and GULP. Analysis of the results of these calculations exploits tools such as VMD, NMOLDYN, XCRYSDEN, and ISAACS, in addition to in-house code. Calculations and analysis are carried out locally on a scientific computing Linux cluster comprising both ACNS-dedicated cores and ANSTO-shared ones, with jobs managed by PBS. We give a brief overview of all of the above capabilities and an example of a typical calculation/analysis.
Speaker: Dr RAMZI KUTTEH (ANSTO/ACNS)

Fusion Peptide Interactions with the Lipidic Cubic Phase 1m
Despite the fact that membrane fusion is a key step in many biological processes, the underlying mechanism still remains elusive. The bicontinuous cubic phases are a perfect medium for the delivery of therapeutic proteins owing to their enhanced solubility, sustained release and reduced toxicity. It has been suggested that the fusion event of viruses is tightly regulated by specialized fusion proteins which are responsible for protein-lipid interactions or protein-protein interactions. The fusion components of enveloped viral fusion involve viral proteins that insert hydrophobic sequences into the target membrane and refold to drive merging of the lipid bilayers, a process which can be utilized to enhance drug delivery. By using a high-throughput methodology to prepare and characterize viral fusion peptide interactions based on lipid composition, our study has revealed that the N-terminal charge of the viral fusion peptide has a significant effect on the lattice parameter of the cubic phases. The induced curvature depends on peptide concentration, but the mechanism was observed to be virus dependent. We investigated the phase behaviour, which reflects the fusion function and bilayer-destabilizing effect of the peptide, upon encapsulation in bicontinuous cubic phases with and without phospholipid using synchrotron SAXS. We also used TOF-SANS and contrast-matching of the lipid membrane to investigate the phase behaviour of the mixed lipid systems. This is crucial for a better understanding of the fundamental physicochemical parameters of the lipid mesophase in response to peptide encapsulation and their dependency on the peptide structural conformation.
Speaker: Ms Izabela Milogrodzka (Monash University)

Recent highlights from the Pelican spectrometer 1m
The cold-neutron time-of-flight spectrometer Pelican has been in operation since 2014. Pelican is well suited to the measurement of quasielastic and inelastic scattering in the low-energy region; as a result, Pelican is sensitive to many phenomena including self-diffusion of molecular species, low-energy phonons, crystal field excitations and spin waves. Use of the neutron energy gain portion of the spectrum allows the detailed measurement of the phonon density of states.
The spectrometer is well equipped with a wide range of sample environments, which allows measurements in applied magnetic fields, at millikelvin temperatures and under applied pressure. In this contribution we will show results from recent publications highlighting the diverse science and applications of the Pelican spectrometer. Speakers: Richard Mole (ANSTO), Dehong Yu (Australian Nuclear Science and Technology Organisation) Kowari residual stress diffractometer 1m Kowari is a residual stress diffractometer which can be used for 'strain scanning' of engineering components as large as one ton. The integrity of engineering components often depends on strains and stresses inside the material. For example, rails can fail if stresses exceed the 'ultimate tensile stress'. Speaker: Mark Reid (ANSTO) Current Facilities on the Soft X-ray Beamline 1m The soft X-ray beamline at the Australian Synchrotron has been operating with users since 2008. The beamline offers three different systems for users: a full Ultra High Vacuum surface science system with NEXAFS and photoemission capabilities, an Angle Resolved Photoelectron Spectroscopy (ARPES) system operated with La Trobe University, and a High Throughput system exclusively for NEXAFS but with vacuum pressure restrictions relaxed from UHV to lie in the 10-7 to 10-8 mbar range. The current specific capability of all three instruments will be presented. Speaker: Dr Bruce Cowie (ANSTO) ATOS-GOM structured light 3D scanner, replacement new for old or intriguing possibilities! 1m The ATOS-GOM structured light 3D scanner is a replacement for the laser scanner previously used on the Kowari instrument. In addition to outlining its operation, this talk will address some exciting possibilities this new piece of equipment brings to the Kowari instrument. ADS-1 and ADS-2: New high-energy X-ray beamlines at the Australian Synchrotron 1m Eight new beamlines are currently under design or construction at the Australian Synchrotron as part of the BRIGHT project. Included among them are the Advanced Diffraction and Scattering beamlines, ADS-1 and ADS-2, which will offer high-energy X-rays (45-150 keV) for a variety of diffraction, imaging and tomography experiments. ADS will benefit users from the materials, minerals and engineering research communities who wish to study bulky or strongly absorbing samples and/or perform in situ measurements using complex sample environments. We present an update on the progress of the ADS project, including the latest design features of the two beamline endstations and the planned range of experimental capabilities. Speaker: Josie Auckett (Australian Synchrotron) Getting better statistics: variable count time data collection with large linear detectors. 1m X-ray diffraction data are normally collected with a fixed count time (FCT) per step. With this data, the intensities of the diffraction peaks decrease with increasing angle, primarily due to the effects of atomic scattering factors, Lorentz-polarisation, atomic displacement parameters, and absorption. In diffractometers with point or small linear detectors, these changes can be counteracted by systematically varying the counting time as a function of diffraction angle. This variable-count-time (VCT) approach has been shown to produce data of superior quality for structure determination and refinement, as all peaks have similar intensities, allowing them to contribute equally to the analysis process.
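As a rough illustration of the variable-count-time idea just described, the sketch below scales the counting time with diffraction angle so that the product of counting time and the falling peak intensity stays roughly constant. The simple Lorentz-polarisation plus Debye-Waller-style fall-off model, the parameter values and the function names are assumptions for illustration only, not the scheme or the program referred to in this abstract.

```python
import numpy as np

# Crude model of how diffracted intensity falls with angle; real corrections
# would use the actual atomic scattering factors and absorption of the sample.
def relative_falloff(two_theta_deg, b_iso=2.0, wavelength=1.54):
    theta = np.radians(two_theta_deg) / 2.0
    s = np.sin(theta) / wavelength                                        # sin(theta)/lambda
    lp = (1 + np.cos(2 * theta)**2) / (np.sin(theta)**2 * np.cos(theta))  # Lorentz-polarisation
    return lp * np.exp(-2 * b_iso * s**2)                                 # Debye-Waller-like damping

two_theta = np.linspace(10, 140, 14)
falloff = relative_falloff(two_theta)
count_time = falloff[0] / falloff        # count longer where the intensity is weaker
for tt, ct in zip(two_theta, count_time):
    print(f"2theta = {tt:5.1f} deg  ->  relative count time = {ct:6.1f}")
```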
With the advent of large linear detectors, the ability to change the counting time as a function of angle has been removed. A computer program has been written to construct a VCT diffraction pattern by the progressive summation of a series of conventional FCT diffraction patterns. This approach extends the collection of VCT data to large linear detectors, where traditional VCT is impossible. The program can also be used to simulate the construction process as an aid in experimental planning. An example application is given. Speaker: Dr Matthew Rowles (Curtin University) Enhancing synchrotron modulated Microbeam Radiation Therapy in vivo with novel high Z nanoparticles 1m With limited improvement in brain cancer patient survival in the last 30 years, the search for a treatment strategy that is targeted and effective continues. This study is harnessing the unique properties of synchrotron radiation for anti-cancer radiotherapy. The Imaging and Medical Beamline (IMBL) at the ANSTO Australian Synchrotron (AS) offers the possibility to perform pre-clinical synchrotron radiation trials using extremely high dose-rates, sparing normal tissue whilst delivering large doses to the tumour site. This study focused on patient-specific treatments combining Microbeam Radiation Therapy (MRT) with novel high Z nanoparticles (NPs), and was the largest rodent survival study utilising nanoparticle enhancement ever undertaken at AS. Thulium oxide NPs (Z=69) are a promising sensitising and imaging agent with limited cytotoxicity and proven synchrotron enhancement. 32 Fischer 344 rats were inoculated with 9L gliosarcoma in the right caudate nucleus of the brain. 11 days later, the rats were imaged with Computed Tomography (CT) to locate the tumour in relation to bony anatomy. The following day, nanoparticles were injected directly into the tumour of each rat. Using the CT scans, the rats were aligned in-beam, and a bolus was placed over the irradiation site. One radiation fraction was given to different treatment groups at valley doses of 8, 14 or 15 Gy, with a radiation field of 8 mm by 8 mm and microbeams produced using the 4T magnet and Al/Al filtration. Utilising a heavily improved oedema protocol, seizure symptoms and adverse events immediately post MRT were significantly reduced. Overall survival was found to be improved compared to rodents receiving MRT alone when the tumour-to-brain volume was considered. Speaker: Sarah Vogel (University of Wollongong) Structural basis of the Trichoplax adhaerens Scribble and Dlg interactions with the PDZ-binding motif of Vangl 1m Maintenance of multicellular tissue architecture is a conserved process which is regulated by a highly conserved set of proteins. The interacting partners of these regulators are also conserved across the animal kingdom. Scribble and Dlg are two such key polarity regulators that are involved in the establishment and maintenance of multicellular apical-basal cell polarity in epithelial cells. These are scaffolding proteins bearing multiple PDZ domains that mediate most of their interactions. Complex multicellular organisms evolved from simple primitive forms; therefore, we examined Scribble- and Dlg-mediated cell polarity in the simplest metazoan living on earth, the placozoan Trichoplax adhaerens. Despite its extreme simplicity, Trichoplax contains all polarity regulators that are fundamental to instruct the body plans in higher animals, thus making it an ideal model for studying polarity.
We now show biochemically that a key interaction for the establishment of cell polarity between Scribble and Dlg PDZ domains and Vangl in mammals is fully recapitulated in Trichoplax. We found that Scribble PDZ1, PDZ2 and PDZ3 interact with Vangl with affinities comparable to the human interaction, with a similar hierarchy in affinities. We also found that all three PDZ domains of Dlg interact with Vangl with no hierarchy in their affinities. We then show, using crystal structures of Scribble PDZ1, PDZ2 and Dlg PDZ1, PDZ2 bound to the C-terminal PDZ-binding motif of Vangl, that in addition to the binding affinities, the detailed interactions between Scribble/Vangl and Dlg/Vangl are also conserved at the atomic level between Trichoplax and human. Speaker: Mrs Janesha Maddumage (Department of Biochemistry and Genetics, La Trobe University) Platypus Neutron Reflectometer 1m PLATYPUS is the first neutron reflectometer at the Australian Centre for Neutron Scattering, with the capability to study surface and interface systems ranging from biomolecules and soft matter through to magnetic thin films [1-3]. There have been a number of significant improvements to both the instrument and the data reduction and treatment software [4] over the last two years. On the hardware front, the original detector has been replaced [5], enabling higher count rate capabilities, greater detection efficiency at shorter wavelengths and significantly lower background. The slits which define the neutron beam have been replaced with upgraded positioning mechanisms, enabling greater flexibility in experimental setup. These changes have significantly enhanced the instrument performance with improved reproducibility. [1] M. James et al., J. Neutron Research 14, 91 - 108 (2006) [2] M. James et al., Nuclear Inst. and Methods in Physics Research A, 632, 112 - 123 (2011) [3] T. Saerbeck et al., Rev. Sci. Instrum. 83, 081301 (2012) [4] A. Nelson et al., J. Appl. Crystallography, 52, 193 - 200 (2019) [5] L. Abuel et al., Journal of Neutron Research, 23(1), 53 – 67, (2021). Speaker: Dr Stephen Holt (Australian Nuclear Science and Technology Organisation) Demonstrated enantioselective adsorption with cobalt 1D coordination polymers 1m Chiral coordination polymers have recently been explored as potential stationary phases for enantioselective separations.1 However, the chiral resolution ability of non-porous coordination polymers is not often tested. In this work, two chiral 1D cobalt coordination polymers have been synthesised with an amino acid functionalised diimide ligand. The coordination polymers, which have little free pore space, have been tested for their chiral resolution abilities with 1-phenylethanol. Coordination polymer 1 shows a preference for one enantiomer over the other in both soaking experiments and preliminary 'mini columns', whereas a second structurally similar polymer shows no enantioselectivity thus far. Analyses are currently underway to further probe the chiral separation ability of the systems. 1 Turner, D. R., Chirality in Network Solids. In Chirality in Supramolecular Assemblies: Causes and Consequences, Keene, R. F., Ed. 2016 Speaker: Winnie Cao (Monash University) Complex Coacervates as encapsulation system 1m Complex coacervates are self-assembled structures formed from oppositely charged biopolymers such as proteins and polysaccharides. They can be used both as a delivery system for bioactive materials, and for improving the structural and textural functionalities of the final food products.
The functionalities of the coacervates are dependent on their microstructures, which are determined on a case-by-case basis depending on the combination of protein, polysaccharide, and bioactive. The encapsulation approach developed in this work incorporates the binding of the bioactives to proteins prior to forming complex coacervates with pectin. This was compared to the coacervate structures formed without the bioactives. Structural characterization using SANS showed that protein-bioactive complexes could effectively self-assemble with pectin to form complex coacervates, making them suitable to be considered as effective encapsulating systems that can be used in value-added products such as fat and meat analogues. Speakers: Sunandita Ghosh, Andrew Whitten (ANSTO), Jitendra Mata (ANSTO) A multi-analyser upgrading possibility for the thermal-neutron triple-axis spectrometer Taipan 1m Taipan is a high-flux thermal-neutron triple-axis spectrometer with a traditional single-detector design. Taipan has been working as the workhorse for inelastic neutron scattering experiments at ACNS for the last ten years, generating numerous beautiful scientific highlights. Following the trend in neutron instrumentation, it is interesting to consider a future upgrade of Taipan to increase its data acquisition efficiency with a multi-analyser design. In this research, the possibility of upgrading Taipan into a multi-analyser triple-axis spectrometer is discussed. A simulation of 21 analyser channels with a 2-degree gap in between is demonstrated. The simulated result shows that the data acquisition efficiency can be substantially enhanced on Taipan and that the multi-analyser design is also very suitable for magnetic diffraction measurements in the low-Q range. Speaker: Dr Guochu Deng (Australian Nuclear Science and Technology Organization) Multi-analyser upgrade of Taipan Taipan_MultiAna_V3.pdf Taipan – a versatile thermal neutron scattering instrument for materials research. 1m Located on the OPAL reactor face, Taipan is the highest flux, thermal neutron scattering instrument at ANSTO. Originally, Taipan was built as a traditional triple-axis spectrometer for inelastic neutron scattering studies with energy transfers up to 70 meV. Since its inclusion in the ANSTO user program in 2010, Taipan has undergone a number of upgrades and improvements, including new shielding, new primary optics and the installation of a Cu-monochromator extending energy transfers up to 200 meV. An additional secondary spectrometer, the Be-filter analyser, was also developed and integrated in 2015, offering a new way to measure excitations and vibrations in polycrystalline materials. This poster will present some recent highlights at Taipan – both as a TAS and as a Be-filter analyser spectrometer. Speaker: Kirrily Rule (ANSTO) Diffuse Scattering Studies from a Martensitic Fe-Pd Alloy 1m From literature reports, Fe-Pd alloys in the vicinity of Fe-30at%Pd exhibit two martensitic transformations on being cooled from just above room temperature to about 100 K. A preliminary study of a large single crystal of this composition at the KOALA beamline not only showed evidence for these transformations but also revealed most interesting satellite reflections around certain Bragg spots. The crystal was then studied further in two triple-axis experiments.
The first was at TAIPAN, specifically to study elastic scattering, and the second, at SIKA, to study quasi-elastic scattering, both in the vicinity of certain Bragg peaks and around the satellite reflections observed at KOALA. During a parallel experiment to the SIKA one, an ideally small piece of the crystal was studied at KOALA, but the interesting satellites found for the large crystal were not present. As a result, in a further experiment on the large crystal at KOALA, completed in early 2021, diffraction patterns were collected with the aim of surveying the whole of the large crystal, particularly in the vicinity of the edge from which the ideally small crystal piece had been extracted by electro-discharge machining. The results from this last experiment will be summarised and discussed in relation to the earlier triple-axis and KOALA results. Speaker: Trevor Finlayson (University of Melbourne) Spin Dynamics, Critical Scattering and Magnetoelectric Coupling Mechanism of Mn$_4$Nb$_2$O$_9$ 1m The spin dynamics of Mn$_4$Nb$_2$O$_9$ were studied by using inelastic neutron scattering. A spin-dynamic model is proposed to explain the observed spin-wave excitation spectrum. The model indicates that the exchange interactions along the chain direction are weakly ferromagnetic while the exchange interactions between neighbouring chains are strongly antiferromagnetic. Such an antiferromagnetic configuration in the hexagonal plane causes spin frustration with a spin gap of about 1.4 meV at the zone centre. The Mn$^{2+}$ ions in this material demonstrate a very weak easy-axis single-ion anisotropy. Critical scattering in the vicinity of T$_{N}$ was studied. On the basis of the magnetic structure and spin-dynamic models, the weak magnetoelectric coupling effect in Mn$_4$Nb$_2$O$_9$ is ascribed to the weak magnetostriction due to the subtle difference between Mn$^{2+}$ ions on the Mn$_{I}$ and Mn$_{II}$ sites. Speaker: Guochu Deng (Australian Nuclear Science and Technology Organization) MNO_UserMeeting2021.pdf Spin Dynamics of Mn4Nb2O9 Exploring Amine-based MOFs for Electrochemical Water Splitting 1m Electrochemical water splitting is one of the most widely studied routes to developing sustainable energy systems. Energy in the form of hydrogen has been gaining attention since it can be easily converted, stored, and transported. In order to improve efficiency, an electrocatalyst is needed to aid the slow kinetics of the oxygen evolution reaction (OER) process. Metal-organic frameworks (MOFs) or porous coordination polymers (PCPs) are generally considered to have inferior electrocatalytic performance relative to noble metal oxides; however, in this study a 2D Co-framework of 1,4,7-tris(4'-methylbiphenyl-4-carboxylic)-1,4,7-triazacyclononane deposited onto nickel foam has shown promising catalytic activity. The fabricated electrode with a loading of 0.25 mg cm-2 has shown a low overpotential of 259 mV at a current density of 20 mA cm-2 in alkaline conditions. The electrochemical stability of the electrode was evaluated and showed continuous electrolysis with no decay for several hours at room temperature. These initial results not only provide a good design for fabricating MOF-based catalysts but also open up more ideas for tuning and enhancing the electrochemical performance of amine-based MOFs.
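As a hedged aside on how an overpotential figure such as the 259 mV quoted above is typically obtained, the sketch below converts a potential measured against a reference electrode to the RHE scale, applies an iR correction and subtracts the 1.23 V OER equilibrium potential. The reference-electrode offset, pH, current and resistance values are illustrative assumptions, not the conditions used in this study.

```python
# Minimal sketch of an OER overpotential calculation (illustrative values only).
def oer_overpotential_mV(e_measured_V, e_ref_vs_she_V, pH, i_A, r_ohm):
    e_rhe = e_measured_V + e_ref_vs_she_V + 0.0592 * pH  # Nernst shift to the RHE scale
    e_rhe_corrected = e_rhe - i_A * r_ohm                # ohmic (iR) correction
    return (e_rhe_corrected - 1.23) * 1000.0             # OER equilibrium potential = 1.23 V

# example: potential read vs Ag/AgCl (~0.197 V vs SHE) in a pH ~ 14 alkaline cell
print(oer_overpotential_mV(e_measured_V=0.53, e_ref_vs_she_V=0.197, pH=14.0,
                           i_A=0.020, r_ohm=2.0))
```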
Speaker: Jade Ang (Monash University) Chain alignment and charge transport anisotropy in blade-coated N2200/PS blend films 1m Semiconducting polymers offer the potential of low-cost flexible electronics. To improve the processability and mechanical flexibility of semiconducting polymers, blending with commodity polymers is an attractive strategy. Understanding how blending affects the resulting microstructure in aligned samples produced by directional coating techniques such as blade coating is important to optimize device performance. This presentation will discuss the microstructure of blade-coated blends of the semiconducting polymer N2200 with polystyrene (PS) using a range of techniques. In particular, we have investigated the degree of alignment of chains of the semiconducting polymer N2200 at the surface and in the bulk. UV-vis spectroscopy and surface-sensitive NEXAFS spectroscopy show that blade coating induces the preferential orientation of N2200 chains parallel to the coating direction. Angle-dependent NEXAFS enables the averaged tilt angle of the planar backbone of N2200 to be determined, revealing an improved edge-on configuration at the surface with reduced N2200 content. By deconvoluting the spectra of the N2200/PS blend film, the concentration of N2200 at the surface was determined, showing its tendency to segregate at the surface. Another synchrotron-based technique, grazing-incidence wide-angle X-ray scattering (GIWAXS), was used to selectively probe the crystalline phase of N2200. The GIWAXS results confirm the directional alignment of N2200 crystallites, with the backbone stacking direction parallel to the coating direction. From the analysis of crystallite orientation (texture), a transition from preferential face-on orientation to edge-on orientation at low N2200 content was seen. Finally, charge transport anisotropy was investigated by measuring organic field-effect transistors based on blade-coated N2200/PS blend films with the conductive channel length parallel or perpendicular to the coating direction. Speaker: Ms Lin-jing Tang (Monash University) Investigation of the Diffusion of Cr2O3 into different phases of TiO2 upon Annealing 1m Chromium oxide (Cr2O3) can be used as a protective layer for photocatalysts to improve photocatalytic water splitting activity and is commonly photodeposited. However, it is not known how the conditions of Cr2O3 formation affect the formation of the protective layer and potential diffusion into the substrate onto which the Cr2O3 has been deposited. We have investigated the stability of Cr2O3 photodeposited onto the surface of different crystal phases of TiO2 with subsequent annealing at a range of temperatures up to 600 °C. X-ray photoelectron spectroscopy and synchrotron near-edge X-ray absorption fine structure were used to analyse the chemical composition of the sample, neutral impact collision ion scattering spectroscopy was used to study the concentration depth profile of the elements in the sample, and atomic force microscopy was used to investigate the morphology of the surface. Under annealing conditions, the Cr2O3 layer diffuses into the amorphous and anatase phases of TiO2 but remains at the surface of the rutile phase. This finding is attributed to differences in surface energy, with Cr2O3 being higher in surface energy than the amorphous and anatase phases of TiO2 but lower in surface energy than the rutile phase of TiO2.
Reduction of Cr2O3 to Cr metal was observed after annealing, with no evidence of the formation of more highly oxidised forms of chromium oxide such as CrO2 and CrO3. These findings are of general interest to researchers utilising a protective overlayer to augment photocatalytic water splitting. Speaker: Abdulrahman Alotabi (Flinders University) Energy Storage Rocks: Metal Carbonates as Thermochemical Energy Storage Materials 1m The intermittent nature of renewable energy is a major challenge that can be overcome via cheap and effective energy storage [1]. Thermochemical energy storage is an upcoming technology that can improve efficiency in applications such as concentrated solar power [2]. Metal carbonates have great potential as thermochemical energy storage materials, through the reversible endo/exothermic desorption/absorption of carbon dioxide (CO2) [3]. However, major challenges include the loss of cyclic capacity and slow reaction kinetics [3]. Recently, it has been established that raw unrefined dolomite, CaMg(CO3)2, performed significantly better than laboratory-synthesized dolomite due to the positive effect of chemically inert impurities present in the sample [4]. However, increasing its relatively low operational temperature (550 °C) will improve efficiency [4]. The present research explores reactive metal carbonate composites, which consist of barium carbonate destabilised using titanium(IV) oxide (TiO2) or barium silicate (BaSiO3) [5]. This reduces the operating temperature from 1400 °C to more suitable temperatures of 1100 °C and 850 °C, respectively, and improves the kinetics of CO2 release and uptake. The reactions are explored using in situ synchrotron XRD combined with a variety of other characterisation techniques. [1] T. Sweetnam and C. Spataru, in Storing Energy, edited by T.M. Letcher (Elsevier, Oxford, 2016), pp. 501–508. [2] C. Prieto, P. Cooper, A.I. Fernández, and L.F. Cabeza, Renew. Sustain. Energy Rev. 60, 909 (2016). [3] L. André, S. Abanades, and G. Flamant, Renew. Sustain. Energy Rev. 64, 703 (2016). [4] T.D. Humphries, K.T. Møller, W.D.A. Rickard, M.V. Sofianos, S. Liu, C.E. Buckley, and M. Paskevicius, J. Mater. Chem. A 7, 1206 (2019). [5] K.T. Møller, K. Williamson, C.E. Buckley, and M. Paskevicius, J. Mater. Chem. A 8, 10935 (2020). Speaker: Mr Kyran Williamson (Department of Physics and Astronomy, Curtin University) Data processing technique for the Taipan Be-filter spectrometer 1m Taipan, the highest flux thermal neutron scattering instrument at ACNS, was originally built as a traditional triple-axis spectrometer. In 2016 a beryllium filter analyser spectrometer was added for increased versatility. The Be-filter acts as a low-energy band-pass filter, ideal for investigating lattice dynamics and molecular vibrations over a wide energy range. It is particularly well suited to measuring the motion within materials containing light elements such as hydrogen. We have successfully created a robust method of treating data from the Taipan filter-analyser and present the method within this poster [1]. The data-treatment process includes correction for the non-linear energy variation of a particular monochromator, removal of higher-order wavelength contamination, and estimation of low-energy multiple scattering. The steps described here can be utilized by all users of the Australian Nuclear Science and Technology Organisation "Be-filter"—past, present, and future. [1] G.N. Iles, K.C. Rule, V.K. Peterson, A.P.J. Stampfl and M.M. Elcombe, Rev. Sci. Instrum.
92, 073304 (2021); doi: 10.1063/5.0054786 Discovering peptide inhibitors against FtsY, an antibiotic target 1m The rapid rise of antibiotic resistance has caused an urgent demand for new antibiotics. One way to address this is by manipulating essential bacterial interactions not targeted by current antibiotics. The interaction between the Signal Recognition Particle (SRP) and its receptor (FtsY) is critical for cell viability but is mediated by RNA:protein interactions in bacteria versus protein:protein interactions in eukaryotes. We have used a new technology known as RaPID (Randomised non-standard Peptide Integrated Discovery) to identify cyclic peptides that bind to FtsY. Sequence enrichment was observed after seven rounds of selection and eight representative peptides were selected for further characterisation. To determine whether the peptides can bind the intended RNA-binding interface on FtsY, nuclear magnetic resonance (NMR) spectra were collected on 2H13C15N-FtsY produced by ANSTO. The high deuteration level facilitated good-quality NMR spectra despite the large size of FtsY (35 kDa). In total, ~220 amide groups were mapped onto the "fingerprint" 15N-1H-HSQC spectrum, with >75% of backbone resonances assigned. Following peptide synthesis, we will titrate selected peptides into labelled FtsY for chemical shift perturbation experiments. This will provide binding affinity data for the different peptides and enable the mapping of binding residues onto our previously solved crystal structure. The highest affinity binders will be subjected to soaking and co-crystallisation experiments with FtsY to further characterise the mode of interaction. Taken together, the data obtained will inform the future development of cyclic peptides into FtsY inhibitors with high affinity and specificity as potential antibiotic leads. Speaker: Jennifer Zhao (University of Sydney) Zeolitic imidazolate frameworks (ZIFs) structure and properties correlation to nucleic acid delivery 1m In regenerative medicine, (intra)cellular delivery of genetic material can be used to introduce functional copies of a gene that is defective and responsible for disease development. To avoid nuclease- and lysosome-mediated degradation of the gene, drug delivery systems/carriers need to be developed. Recently, non-viral delivery systems such as microinjection or various chemical approaches (e.g. liposomes, polymers, lipids) have been developed, owing to their economical synthesis, biocompatibility and ability to transfer a variety of genetic materials and gene editing tools.1 The zeolitic imidazolate framework (ZIF) is a well-studied non-viral polymeric delivery system in which coordination between Zn(II) and imidazolate forms a highly organised framework in aqueous solution. ZIFs offer advantageous physicochemical properties for bio-delivery applications and have been shown to encapsulate a wide range of biomolecules, including nucleic acids, via biomimetic mineralisation. Such ZIF-based delivery systems provide protection of the gene cargo and were shown to result in endocytosis-mediated cellular uptake. Further, ZIFs degrade in the acidic microenvironments of cancer cells, releasing their cargo at the target site.2,3 Both cellular uptake and release of ZIF-encapsulated biomolecules are determined by the framework structure and its crystal phase. In our work, a series of ZIF preparation methods are studied for the encapsulation of a circular plasmid.
The resulting ZIF structures are characterised via FTIR, SEM and synchrotron PXRD. The aim of this project is to establish structure–property relationships for gene loading efficiency, cellular uptake and cargo release profiles. 1. Sung et al. 2019, Biomater Res, 23(1), 1-7, doi: 10.1186/s40824-019-0156-z. 2. Poddar et al. 2020, Small, 15(36), 1902268, doi: 10.1002/smll.201902268. 3. Poddar et al. 2021, Chem Com, 56(98), 15406-15409, doi: 10.1039/d0cc06241c. Speaker: Shakil Ahmed Polash (PhD candidate, School of Science, RMIT University, Melbourne, Victoria 3000, Australia) Inelastic Neutron Scattering Reveals Intense Ferromagnetic Fluctuations Preceding Magnetoelastic First-Order Transitions in LaFe13−xSix 1m First-order magnetic transitions are of both fundamental and technological interest. Of particular interest are giant magnetocaloric effects, which are attributed to first-order magnetic transitions and have attracted great attention for solid-state refrigeration applications. Here, we present a systematic study, with inelastic and quasielastic neutron scattering, of the lattice and spin dynamics in intermetallic LaFe11.6Si1.4 and LaFe11.2Si1.8, which represent one of the classic giant magnetocaloric systems and undergo first-order and second-order magnetic transitions, respectively. While the two samples show a similar spin-phonon coupling effect, LaFe11.6Si1.4 exhibits a much stronger magnetic diffuse scattering in the paramagnetic state preceding its first-order magnetic transition, correlating closely with picosecond ferromagnetic fluctuations. These dynamic insights suggest that the spin dynamics dominate the magnetoelastic transition and that ferromagnetic fluctuations may be universally relevant for magnetocaloric materials [1]. [1] Zhao Zhang, et al. PHYSICAL REVIEW MATERIALS 5, L071401 (2021). Characterisation of an Antimony-based Catalyst for Acid Water Oxidation Catalysis – Insights through X-ray Absorption Spectroscopy and the challenges of multi-metal systems 1m Electrochemical water splitting with a proton-exchange membrane electrolyte provides many advantages for the energy-efficient production of high-purity dihydrogen in a sustainable manner, but the current technology relies on high loadings of expensive and scarce iridium at the anodes, which are also often insufficiently stable in operation. A common strategy to achieve stability is to synthesise composite oxides composed of multiple components, for example [M]SbOx, [M]PbOx, [M]BiOx. Yet, these materials pose a challenge in that it is not well understood how the mixed metal acts to stabilise the material under acidic conditions. This work presents an efficient ruthenium antimony oxide (RuSbOx) electrocatalyst synthesised as a thin film on fluorine-doped tin oxide (FTO). Comprehensive physical characterisation by X-ray absorption spectroscopy (XAS) and transmission electron microscopy (TEM) reveals important insights into the structure and mechanism of the examined materials while simultaneously highlighting how structural effects, such as disorder, may impact the observation and interpretation of EXAFS data. Speaker: Brittany Kerr (Swinburne University of Technology) Biocompatible ionic liquids as designer solvents for the formation of non-lamellar lyotropic liquid crystalline nanoparticles as drug delivery vehicles 1m Ionic liquids (ILs) have emerged as a remarkable class of green solvents with unique characteristics and feasible task-specific tailoring of their properties.
The application of ILs has extended to facilitating amphiphile self-assembly. ILs not only support the self-assembly of amphiphiles but can also be used as designer solvents (1). Lipid amphiphiles can assemble into a wide range of lyotropic liquid crystalline mesophases possessing unique, highly ordered multidimensional structures. The bulk phases can be further broken up into nanoparticle dispersions (LCNPs), for example cubosomes and hexosomes, which are characterised by their high surface-to-volume ratio. These particles are receiving growing interest due to their great potential as drug delivery vehicles for both hydrophilic and hydrophobic drugs (2). Our recent small-angle X-ray scattering (SAXS) results revealed a wide range of LCNPs, such as cubosomes and hexosomes, obtained in various biocompatible IL–water solvents. A strong correlation exists between the pH of the solutions and the adopted phases. 1. Zhai, J.; Sarkar, S.; Tran, N.; Pandiancherri, S.; Greaves, T. L.; Drummond, C. J., Tuning Nanostructured Lyotropic Liquid Crystalline Mesophases in Lipid Nanoparticles with Protic Ionic Liquids. The Journal of Physical Chemistry Letters 2021, 12 (1), 399-404. 2. Zhai, J.; Fong, C.; Tran, N.; Drummond, C. J., Non-Lamellar Lyotropic Liquid Crystalline Lipid Nanoparticles for the Next Generation of Nanomedicine. ACS Nano 2019, 13 (6), 6178-6206. Speaker: Mr Mohamad El Mohamad (RMIT University) Understanding the structural basis of TIR-domain assembly formation in TRAM- and TRIF-dependent TLR signalling 1m Toll-like receptors (TLRs) detect pathogens and endogenous danger, initiating immune responses that lead to the production of pro-inflammatory cytokines. At the same time, TLR-mediated inflammation is associated with a number of pathological states, including infectious, autoimmune, inflammatory, cardiovascular and cancer-related disorders. This dual role of the pathways has attracted widespread interest from the pharmaceutical industry. Cytoplasmic signalling by TLRs starts with their TIR (Toll/interleukin-1 receptor) domain interacting with the TIR domain-containing adaptor proteins MyD88, MAL, TRIF and TRAM. Combinatorial recruitment of these adaptors via TIR:TIR domain interactions orchestrates downstream signalling pathways, leading to induction of pro-inflammatory genes. Although many constituents of the TLR pathways have been identified, the available information on their coordinated interactions is limited. Such information is crucial for a mechanistic understanding of TLR signalling, the development of therapeutic strategies, and understanding of the molecular basis of the consequences of adaptor polymorphic variants for human disease. We have discovered that TIR domains can form large assemblies. We hypothesized that TIR domain signalling occurs through a mechanism involving higher-order assembly formation. In this study we aim to determine the molecular architecture of higher-order assemblies formed by TIR domains, with a focus on TRAM-TRIF assemblies in the TLR4 and TLR3 pathways. Speaker: Ms Mengqi Pan Structural basis of coronavirus E protein interactions with human PALS1 PDZ domain 1m SARS-CoV-2 infection leads to coronavirus disease 2019 (COVID-19), which is associated with severe and life-threatening pneumonia and respiratory failure. However, the molecular basis of these symptoms remains unclear.
The SARS-CoV-1 E protein interferes with control of cell polarity and cell-cell junction integrity in human epithelial cells by binding to the PALS1 PDZ domain, a key component of the Crumbs polarity complex. We show that the C-terminal PDZ-binding motifs of the SARS-CoV-1 and SARS-CoV-2 E proteins bind the PALS1 PDZ domain with 29.6 and 22.8 μM affinity, respectively, whereas the related sequence from MERS-CoV did not bind. We then determined crystal structures of the PALS1 PDZ domain bound to both the SARS-CoV-1 and SARS-CoV-2 E protein PDZ-binding motifs. Our findings establish the structural basis for SARS-CoV-1/2 mediated subversion of Crumbs polarity signalling and serve as a platform for the development of small molecule inhibitors to suppress SARS-CoV-1/2 mediated disruption of polarity signalling in epithelial cells. Speaker: Mrs Airah Javorsky (La Trobe University) Crystal Structures of Protic Ionic Liquids and their hydrates 1m Protic Ionic Liquids (PILs) are a class of tailorable solvents made up of fused salts with melting points below 100 °C, which are formed through a Brønsted acid-base reaction involving proton exchange [1]. These solvents have applications as lubricants and electrolytes, among many other uses [2]. Although they are quite similar to molten salts, their crystal structures have not been explored in depth, with only ethylammonium nitrate (EAN) having a reported crystal structure [3, 4]. Ten alkylammonium-based protic ionic liquids at both neat (<1 wt% water) and 90 mol% PIL, 10 mol% water concentrations were selected. Diffraction patterns were collected at the Australian Synchrotron, ANSTO, while attempting to crystallise the samples by cooling to 120 K. Five samples crystallised (three neat, two dilute), after which the temperature of the system was increased at a rate of 6 K/min to room temperature. From these patterns we have identified a number of crystal phases, determining their stability ranges and lattice constant variation from 120 K to room temperature. [1] Hallett, J.P. and Welton, T. (2011). Chemical Reviews. 111, 3508–3576. [2] Greaves, T.L. and Drummond, C.J. (2008). Chemical Reviews. 108, 206–237. [3] Abe, H. (2020). Journal of Molecular Liquids. 6. [4] Henderson, W.A., et al. (2012). Physical Chemistry Chemical Physics. 14, 16041. Speaker: Michael Hassett (RMIT University) Ocean acidification alters the nutritional value of Antarctic diatoms 1m The cold waters of the Southern Ocean (SO) are acknowledged as a major hotspot for atmospheric CO2 uptake and are anticipated to be among the first regions to be affected by ocean acidification (OA). Primary production in the SO is dominated by diatom-rich phytoplankton assemblages, whose individual physiologies and community composition are strongly shaped by the environment, yet knowledge of how diatoms allocate cellular energy in response to OA is limited. Using synchrotron-based FTIR microspectroscopy at the Australian Synchrotron, we analysed the macromolecular content of selected individual diatom taxa from a natural Antarctic phytoplankton community exposed to a gradient of fCO2 levels (288 – 1263 µatm). We found strong species-specific differences in macromolecular partitioning under OA. Larger taxa showed preferential energy allocation towards proteins, while smaller taxa increased both lipid and protein stores. Our study also revealed an OA-induced community shift towards smaller taxa and lower silicification rates at high fCO2.
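As a hedged illustration of the kind of band-area analysis that underlies macromolecular quantification from single-cell FTIR spectra, the sketch below integrates assumed lipid, protein and carbohydrate band windows of a synthetic spectrum. The band limits are common literature assignments and the spectrum is fabricated for demonstration, so neither reflects the exact processing used in this study.

```python
import numpy as np

# Assumed band windows (cm-1): lipid ester carbonyl ~1745, amide I ~1655,
# carbohydrate ~1030. These are illustrative literature assignments only.
BANDS = {"lipid": (1725, 1765), "protein": (1600, 1700), "carbohydrate": (980, 1080)}

def band_areas(wavenumber, absorbance):
    """Integrate absorbance over each assumed band window."""
    areas = {}
    for name, (lo, hi) in BANDS.items():
        mask = (wavenumber >= lo) & (wavenumber <= hi)
        areas[name] = np.trapz(absorbance[mask], wavenumber[mask])
    return areas

# toy spectrum: three Gaussian bands on a flat baseline (synthetic, not data)
wn = np.linspace(900, 1800, 901)
ab = (0.4 * np.exp(-(wn - 1745)**2 / 200) + 1.0 * np.exp(-(wn - 1655)**2 / 800)
      + 0.6 * np.exp(-(wn - 1030)**2 / 1200) + 0.02)
areas = band_areas(wn, ab)
print({k: round(v, 1) for k, v in areas.items()},
      "lipid/protein ratio:", round(areas["lipid"] / areas["protein"], 2))
```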
If these changes are representative of future Antarctic diatom physiology, we may expect a shift away from lipid-rich large diatoms towards a community dominated by smaller, less silicified taxa, but with higher lipid and protein stores than their present-day contemporaries, a response that could have cascading effects on food web dynamics in the Antarctic marine ecosystem. Speaker: Ms Rebecca Duncan (University of Technology Sydney and UNIS Svalbard) Synchrotron Light for Exploring Arsenic Environments in Arsenian Pyrite 1m Pyrite containing substituted arsenic, known as arsenian pyrite, is often accompanied by concentrations of valuable metals such as gold in some deposits. In such mineralogical occurrences, a concentration-driven substitution of As encapsulating 'pure' pyrite is typical. Although knowledge of the As substitution environment in pyrite is important in determining surface characteristics and interactions in chemical processes such as oxidation, these environments vary widely in nature and there is a paucity of information in the existing literature. The current study employed synchrotron X-ray photoelectron spectroscopy (SXPS), using tunable excitation energy, to study vacuum-fractured surfaces of arsenian pyrite. SXPS As 3d spectra of arsenian pyrite suggest the existence of an As-As dimer, characterised by a bulk binding energy 0.6 eV lower than that of the As-S dimer of arsenopyrite. Possible As cluster formation was also proposed. The high-binding-energy contributions at excitation energies of 100 and 210 eV were resolved into two surface components that may have formed from possible surface reconstruction or polymerisation. Speaker: Philip Forson (University of South Australia, Future Industries Institute-STEM) Acidophilic iron- and sulfur-oxidizing bacteria driven primary mineral weathering and secondary mineral formation in Fe ore tailings 1m Direct phytostabilisation of Fe ore tailings is typically unfeasible due to their harsh environment, which includes strongly alkaline pH conditions, deficient available nutrients and organic matter, and poor physical structure, hindering microbial and plant colonisation. Eco-engineering Fe ore tailings into a soil-like substrate (or technosol) is an emerging technology to rehabilitate tailings landscapes sustainably, involving a suite of abiotic and biotic inputs (organic matter, functional microorganisms and pioneer plants). However, the extreme alkalinity and the lack of secondary Fe-rich minerals are critical barriers to transforming Fe ore tailings into soil. In a microcosm experiment with elemental sulfur (S0) amendment, Acidithiobacillus ferrooxidans demonstrated the capacity to generate acid that neutralised the alkaline tailings and accelerated primary mineral weathering, i.e., technosol formation [1]. The effects of biological S0 oxidation on the weathering of alkaline Fe ore tailings were examined using several high-resolution micro-spectroscopic techniques, including synchrotron-based X-ray absorption fine structure spectroscopy (XAFS) and electron microscopy. It was found that: 1) the A. ferrooxidans inoculum together with S0 amendment facilitated fast neutralisation of the alkaline Fe ore tailings; 2) A. ferrooxidans activities induced Fe-bearing primary mineral (e.g., biotite) weathering and nano-sized secondary mineral (e.g., ferrihydrite and jarosite) formation; and 3) the association between bacterial cells and tailing minerals was facilitated by extracellular polymeric substances (EPS). The behaviour and biogeochemical functionality of A.
ferrooxidans in the tailings provide a fundamental basis for developing bacteria-based technologies towards eco-engineering tailings into a soil-like substrate for sustainable mine site rehabilitation. Speaker: Mr QING YI (The University of Queensland) AUM2021_QingYI_Poster-20210926.pdf Comparison between calculated texture-derived velocities and laboratory measurements conducted on samples from a gold-hosting structure. 1m Most lode gold deposits worldwide are associated with structures such as shear zones. Thanks to their capacity to couple resolution and depth of investigation, seismic methods can identify these indirect indicators of mineralization and help extend gold exploration targets to greater depths. Rocks from shear zones are usually seismically anisotropic. Seismic anisotropy is generally related to the intrinsic texture of the rock and the presence of cracks at depth. Determining seismic anisotropy in relation to the texture of the rock, and its evolution with depth (pressure), is therefore necessary to help interpret exploratory seismic surveys. We report here the results of such a correlation conducted in the laboratory with rock samples extracted from the Thunderbox Gold Mine in Western Australia. Four samples — including two from the shear zone — were selected to assess the pressure and directional dependency of the P-wave velocities. In addition, an independent texture analysis was carried out on the two samples from the shear zone using quantitative neutron diffraction. We then computed the texture-derived velocities using the mineralogy and texture of the samples as inputs. The good agreement between the calculated texture-derived velocities and the experimental measurements shows that the texture of the shear zone samples is the main source of seismic anisotropy. This study seeks to improve the understanding of the seismic response across mineral deposits that are structurally controlled by shear zones. Speaker: Mr Andre Eduardo Calazans Matos de Souza (Curtin University) Towards real-time analysis of liquid jet alignment in SFX 1m Serial femtosecond crystallography (SFX) enables atomic-scale imaging of protein structures via X-ray diffraction measurements from large numbers of small crystals intersecting intense X-ray Free Electron Laser (XFEL) pulses. Sample injection typically involves continuous delivery of crystals to the pulsed XFEL beam via a liquid jet. Due to movement of the jet, which is often focused to further reduce its diameter using a gas dynamic virtual nozzle (GDVN), the jet position is often adjusted multiple times during the experiment. This can result in loss of beamtime and significant manual intervention. Here we present a novel approach to the problem of liquid jet misalignment in SFX based on machine vision. We demonstrate automatic identification and classification of when there is overlap ('hit') and when there is no overlap ('miss') between the XFEL beam and the jet. Our algorithm takes as its input optical images from the 'side microscope' located inside the X-ray hutch. This algorithm will be incorporated into the control system at the SFX/SPB beamline at the European XFEL, where it will be used for in-situ 'alignment correction' via a continuous feedback loop with the stepper motors controlling the location of the nozzle within the chamber. Full automation of this process will result in a larger volume of useful data being collected.
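To make the hit/miss idea concrete, the following minimal sketch classifies a single side-microscope frame by testing whether a bright, jet-like column of pixels passes through an assumed beam position, and reports a pixel offset that a feedback loop could hand to the nozzle motors. The thresholds, the synthetic frame and the function names are illustrative assumptions, not the actual machine-vision model or the beamline control-system interface.

```python
import numpy as np

def classify_frame(frame, beam_xy, halfwidth=3, brightness_threshold=0.5):
    """Label a frame 'hit' if bright (jet) pixels overlap the assumed beam
    position, and return the horizontal offset of the brightest column so a
    feedback loop could re-centre the nozzle. Purely illustrative."""
    x, y = beam_xy
    window = frame[y, x - halfwidth: x + halfwidth + 1]  # pixels around the beam spot
    label = "hit" if window.max() > brightness_threshold else "miss"
    jet_x = int(np.argmax(frame.mean(axis=0)))           # brightest column ~ jet centre
    return label, jet_x - x                              # signed offset in pixels

# toy frame: a vertical "jet" of bright pixels at x ~ 49 on a dark background
frame = np.zeros((100, 100))
frame[:, 48:51] = 1.0
print(classify_frame(frame, beam_xy=(49, 50)))  # ('hit', -1)
print(classify_frame(frame, beam_xy=(60, 50)))  # ('miss', -12)
```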
By increasing the efficiency and reducing the per-experiment operational cost of SFX at the European XFEL, a higher volume of experiments can be performed. In addition, via analysis of the feedback metrology, we anticipate that optimised nozzle designs and jetting conditions could be achieved, further benefitting the end user. Speaker: Mr Jaydeep Patel (La Trobe University) Combating "fishy" seafood using nuclear techniques 1m Food provenance is a global concern due to rising instances of food fraud, costing the global food industry over 50 billion USD per annum and leaving consumers with lower-quality produce. Seafood is a high-value food product, and the Australian seafood industry is expected to be worth 4 billion AUD by 2023. Most Australian seafood is exported, and complex supply chains can leave it susceptible to seafood fraud. Accurate and reliable methods of determining seafood provenance are necessary to deter fraud in the supply chain. While conventional techniques can be used for determining seafood provenance, there is no single method that accurately determines both the geographic and production origin of seafood. This is where nuclear techniques can play a vital role: the ANSTO-led seafood provenance consortium has partnered with university, industry, and government agencies to develop a method for determining seafood provenance using iso-elemental fingerprinting. This work also highlights the utilisation of ANSTO's multi-platform analysis capabilities, including X-ray fluorescence through Itrax, accelerator-based ion beam analysis and stable isotope analysis, which allow provenance to be determined with >80% accuracy when combined with machine learning-based models. This research is expected to provide the industry and regulatory bodies with an effective way of determining seafood provenance. Furthermore, the iso-elemental fingerprints are unique to each grower and have the potential to be used as a tool to protect their brands. It also ensures that the Australian export industry is protected and allows consumers to make informed decisions when purchasing seafood. Speaker: Mr Karthik Gopi (UNSW) Effect of different cladding alloys and grinding on residual stress in laser clad light rail components using neutron diffraction 1m One of the greatest challenges threatening Australia's railway infrastructure is the rapid rate of rail degradation. Wear and rolling contact fatigue occur due to increasing speeds and tonnages of rolling stock, requiring significant funding by the Australian government to maintain the network. Light rail is particularly susceptible to degradation due to the low-carbon steel used in tram switch blades. Laser cladding is a repair strategy which applies a metallic deposit by melting a cladding powder with the substrate using a high-energy laser. This process forms a metallurgically bonded layer whilst generating a heat-affected zone (HAZ) containing a redistribution of residual stress due to phase changes and solidification shrinkage from the thermal inputs. During operation, cyclic wheel-rail contact stress is superimposed on the residual stress, leading to fatigue. The ability to accurately measure residual stresses non-destructively, made possible using neutron diffraction, is critical in experimentally obtaining stress data for fatigue assessment. Laser cladding has been carried out on ex-service switch blades using a martensitic stainless steel and two Stellite alloys.
A standard grinding procedure has been performed to replicate the stress conditions experienced in-field after cladding repairs. Strain measurements were undertaken on the Kowari strain scanner at ANSTO to determine the tri-axial stress across the cladding, HAZ and substrate. The locations of the fusion boundary and HAZ have been identified through correlation of the stress, microstrain and full width at half maximum profiles. These findings accompany extensive evaluation of microstructure and mechanical properties to optimise laser cladding repairs in light rail components. Speaker: Prof. Ralph Abrahams (Monash University) Data Constrained Modelling with multi-energy X-ray computed microtomography to evaluate the porosity of plasma sprayed ceramic coatings 1m Coatings of the materials zirconium boride (ZrB2) and hydroxyapatite (HAp) underwent X-ray micro-computed tomography (X-ray μ-CT) scanning at the Australian Synchrotron. The Data Constrained Modelling (DCM) approach was used to reconstruct 3D models and assess porosity and void distributions. The results from the 3D analysis were compared to a 2D porosity and void distribution assessment, determined from image analysis of the coatings. It was found that the 3D and 2D porosity quantifications were in moderate to good agreement. The 3D porosity determined from the ZrB2-1 model, 24.7%, was within the range determined from 2D analysis, 22.1 ± 2.6%. In contrast, the 3D porosity determined from the HAp-1 model, 22.8%, was marginally greater than the determined 2D porosity, 19.8 ± 2.1%. However, a comparison of the 2D and 3D void distributions revealed that a 2D assessment poorly predicts the 3D microstructure of coatings and cannot be used to infer properties strongly dependent on the 3D void network. Furthermore, the 3D analysis demonstrated the deficiencies in typical CT segmentation methods applied to data with a moderate CT resolution size of 5.4 μm. The DCM methodology can quantify fine-structure details below the resolution of the performed CT and thus assess the multi-scale porosity and void networks within atmospheric plasma spray (APS) deposited coatings. The superior DCM approach enabled the quantification of pores below the CT resolution limit and revealed that approximately 91.5% and 81.0% of the ZrB2-1 and HAp-1 models, respectively, would not have been accurately modelled using typical CT segmentation methods. Speaker: Mr Bruno Kahl (Swinburne University of Technology) Microbeam radiation therapy in a heart beat 1m Non-small-cell lung carcinomas are highly radioresistant and so are of potential interest for treatment with Microbeam Radiation Therapy. In the thoracic cavity, the therapeutic dose is limited largely by the heart, one of the most important organs at risk. We developed an ex vivo protocol to study the acute response of the cardiac impulse conduction system to microbeam radiotherapy with high peak doses, combining physiology measurements in the Langendorff model of the isolated beating heart with world-leading real-time small-volume dosimetry. The study was performed in Hutch 2B of the Imaging and Medical Beam Line (IMBL) of the Australian Synchrotron. The acute physiological response of the heart was measured for 60 minutes post-irradiation.
With no arrhythmia or ventricular pressure drop observed, the results place the upper limit for normal functioning of the heart between 400 and 4,000 Gy. Speaker: Jason Paino (UOW) Using low energy ion beams to pattern the surface of novel semiconductors 1m A wide range of ion energies (keV to MeV), ion species, and ion fluences is achievable by ion beam implantation, which allows the fabrication of highly customised patterned subsurface structures in materials. This advanced material processing technology allows tuning of specific magnetic and electronic properties with the aim of achieving a wide range of functionalities in electronics. Magnetic ion implantation has been actively used for functionalising semiconductor materials in recent years in an attempt to fabricate magnetic semiconductors for spintronic applications [1, 2]. Ion beam patterning, like electron-beam lithography, is able to fabricate customised geometries on the surface of a semiconductor to create a functionalised region with desired electronic and magnetic properties. By using the low-energy ion beam implanter at the Centre for Accelerator Science (CAS) at the Australian Nuclear Science and Technology Organisation (ANSTO), we demonstrate that the current method has potential application in the integrated circuit processing industry, with the ability to "write" very small features down to a few tens of nanometres. Speaker: Abuduliken Bake (University of Wollongong) Characterising the temperature dependent spectra of polyethylene for terahertz optics 1m Polyethylene is a highly transparent material in the terahertz (THz) region (1-200 cm$^{-1}$). This makes it ideal for lenses and windows, especially for cryostats. It is also often used as a binding medium in sample pellets to dilute trace amounts of optically thick samples. One caveat for this extremely useful material, however, is an absorption at 73 cm$^{-1}$, often overlooked when utilising polyethylene for terahertz optics. This mode was first studied during the 1960s [1,2] but has been mentioned only sparsely in the scientific literature since, most recently in 2019, where it was described as "elusive" [3]. To determine the effects of this absorption on terahertz optics we have quantified the intensity and frequency of this mode from 6 to 300 K for different sample thicknesses on the THz beamline at the Australian Synchrotron. We have observed a large redshift of 6.7 cm$^{-1}$ (79.9-73.2 cm$^{-1}$) with heating over this temperature range, as well as significant reductions in the peak intensity. These results indicate that for thin samples (<2 mm) of polyethylene this mode is negligible at room temperature; however, at cryogenic temperatures this mode causes a notable drop in transmission, even for samples as thin as 0.5 mm. This warrants caution, especially when selecting cryostat windows and observing weak features near this mode. [1] R. V. McKnight et al., "Far-infrared spectrum of polyethylene, and quartz-crystal plates", J. Opt. Soc. Am., 54(1), 132-133, 1964. [2] S. Krimm et al., "Assignment of the 71-cm-1 band in polyethylene", J. Chem. Phys., 42(11), 4059-4060, 1965. [3] K. Zhou et al., "Transmittance of high-density polyethylene from 0.1 THz to 15 THz", Proc. SPIE 11196, Infrared, Millimeter-Wave, and Terahertz Technologies VI, 2019. Speaker: Thomas Sanders (University of Wollongong) Inelastic Neutron Scattering of Liquid Metal Gallium 1m Liquid metals (LMs) – metals that are liquid near room temperature – have fascinated scientists for centuries.
In the last few decades, in particular, the extent of their peculiar properties has been highlighted. Properties such as low melting point, high flexibility and stretchability, excellent thermal and electrical conductivities, and biocompatibility have led to LMs finding a wide variety of applications. While LMs have proven to be an exceptionally useful class of materials, their unique properties also speak to various fundamental physical phenomena. In particular, the (hydro)dynamics of LMs is of interest as they have a uniquely challenging nature: they possess the complex nature of regular fluids as well as a "sea" of electrons, giving rise to unique hydrodynamic effects. Inelastic neutron scattering (INS) is a particularly well-suited technique to investigate such effects as it probes the microscopic hydrodynamic origins in the nanometre-terahertz regime. In this presentation, we shall report our preliminary investigations on Ga across the phase transition from solid to liquid as a function of temperature using inelastic neutron scattering. Analysis of the energy dependence of the phonon density of states in the low-energy region reveals a transition from an E^2 dependence in the solid state to a more or less linear relationship corresponding to the liquid state. The dynamic changes will be further discussed in the context of the atomic diffusive properties of the system through analysis of the quasielastic neutron scattering in combination with molecular dynamics simulations. Speaker: Caleb Stamper (University of Wollongong) Understanding and controlling the formation of photonic crystals from polydisperse colloidal systems 1m The fundamentals of crystallisation and glass formation are not yet fully understood. Colloidal suspensions have been shown to be promising model systems for understanding these processes. As colloidal motion is Brownian, rather than ballistic, kinetics and dynamics can be studied in real time. It is well documented that colloidal suspensions can "successfully crystallise" when the particles in the system have sufficiently low polydispersity.[1,2] This means that the particles must have a similar average size and shape. If a system is highly polydisperse, this will hinder the solidification process. In this work we will explore colloidal nanodiamonds. Nanodiamonds are a topic of interest in many material studies due to their wide variety and unique mechanical and optical properties.[3,4] Detonation nanodiamonds (DNDs) are of particular interest due to their unique fabrication process. Due to the detonation synthesis method, the particles are small (several nm) and faceted, but in solution self-assemble into highly irregular fractal shapes.[5] Despite this high polydispersity, when centrifuged, these types of DNDs can yield incredibly ordered structures and form iridescent photonic crystals – this is highly surprising given the highly irregular structures of these materials. These photonic crystals were first discovered by Grichko et al.;[5] however, the mechanisms behind these highly ordered structures are still unknown. With a combination of lab techniques and beam time allocations at the Australian Synchrotron, ANSTO and potentially overseas neutron facilities, we will systematically investigate these nanodiamond photonic crystals and examine their structure and formation kinetics.
Speaker: Katherine Chea (RMIT University) Radiation monitor for astronaut safety and prediction of electronic failure in the space mission 1m Astronauts travelling through space are at risk of exposure to radiation arising from Galactic Cosmic Rays (GCRs) and solar particle events (SPEs), which have a significant radiobiological effect. GCRs are mostly made up of protons, with a small proportion being high atomic number energetic particles, which are difficult to shield; SPEs occasionally eject large numbers of protons on top of a steady stream of photons and electrons. The composition of GCRs and SPEs creates a complex radiation field which is difficult to characterize in real time due to the large variety of ions and radiation types. The ability to measure, with high efficiency and accuracy, the dose equivalent received by astronauts in real time is of great importance, as the risk of excessive exposure can then be monitored and minimized. A novel large area microdosimeter, named the Octobox, has been developed at the Centre for Medical Radiation Physics, University of Wollongong, for monitoring the dose equivalent and radiobiological risk to astronauts in a mixed radiation field environment typical of that encountered in space. The Octobox's response to 290 MeV/u 12C, 230 and 490 MeV/u 28Si, and 400 MeV/u 20Ne ions was studied at the Heavy Ion Medical Accelerator in Chiba (HIMAC), Japan. Both experimental and GEANT4 simulation data show that the Octobox is suitable for mixed radiation field monitoring in space applications, with real-time readout of dose equivalent values aiding in the protection of astronauts on space missions. Speaker: Vladimir Pan (University of Wollongong) Canine osteosarcoma positioning and dosimetry study 1m Appendicular osteosarcoma is a highly destructive malignant primary bone tumour occurring in both canine and human patients. Clinically, amputation is the most common outcome; however, synchrotron-generated radiotherapy may provide a preferable alternative. Building from a body of knowledge acquired in small animal models, client-owned dogs with spontaneously developing tumours would be an excellent translational model to assess novel radiation therapies, moving toward the ultimate goal of human patients. This work presents a positioning and dosimetry study using a canine cadaver as a proof-of-concept for veterinary trials at the Imaging and Medical Beamline of the Australian Synchrotron. This included x-ray imaging, alignment to the treatment beam, simulation and prescription of a therapeutic dose, and finally the delivery of said prescription. Lithium Lanthanide Halides: A New Family of Solid Electrolytes 1m The growing need for safe and reliable energy storage has brought the search for stable, high performance solid electrolytes to the forefront of battery materials research. Recently, it has been shown that lithium lanthanide halides (Li$_3$MX$_6$, M = La-Lu, X = Cl, Br, I) with high ionic conductivities can be synthesized through mechanochemical and water mediated routes, creating renewed interest in this family of compounds. However, Li$_3$MX$_6$ compounds have only been synthesised with the late lanthanides (Eu-Lu), apart from the isolated case of samarium bromide, and of these compounds, only Li$_3$MX$_6$ (X = Cl, Br, I) and Li$_3$YbCl$_6$ have had their crystal structures reported. This leaves a large gap in the literature that is yet to be explored.
The family of Li$_3$MX$_6$ compounds shares properties that make them highly appealing for use in all-solid-state batteries. Their structural properties, namely disordered lithium sites and soft anion lattices, allow Li$_3$MX$_6$ compounds to have excellent ionic conductivities of ~1 mS/cm, comparable to garnet Li$_7$La$_3$Zr$_2$O$_{12}$, one of the most promising solid electrolytes for lithium batteries. Additionally, halides have a favourable decomposition against lithium metal electrodes, forming ionically conductive and electronically insulating LiX interphase materials. These interphase materials are stable during cycling and impede any further electrolyte decomposition, allowing for excellent cyclic stability. These properties, along with the large gaps that are yet to be explored, make research into Li$_3$MX$_6$ compounds imperative for the continued development of solid-state electrolytes for all-solid-state batteries. Speaker: Michael Brennan (University of Sydney) Wombat – the high intensity diffractometer at OPAL 1m Wombat is a high intensity neutron diffractometer located in the OPAL Neutron Guide Hall. It is primarily used as a high-speed powder diffractometer, but has also expanded into texture characterisation and single-crystal measurement, particularly diffuse scattering. The high performance comes from the combination of the best area detector ever constructed for neutron diffraction with the largest beam guide yet put into any research reactor and a correspondingly large crystal monochromator, all combined with the centre's polarisation capability to provide an instrument which is unique within the Southern Hemisphere. Wombat has been used to explore a broad range of materials, including: novel hydrogen-storage materials, negative-thermal-expansion materials, methane-ice clathrates, piezoelectrics, high performance battery anodes and cathodes, high strength alloys, multiferroics, superconductors and novel magnetic materials. Our poster will highlight both the capacity of the instrument and some recent results. Speaker: Helen Maynard-Casely (Australian Nuclear Science and Technology Organisation) Investigating negative thermal expansion in aliphatic metal-organic frameworks 1m Negative thermal expansion (NTE) involves the unconventional behaviour of material contraction upon heating and has been observed in some metal-organic frameworks (MOFs). Investigations into the mechanism governing NTE are highly important for practical applications where it is undesirable for materials to expand upon heating. Previous investigations focused on aromatic and single-component frameworks; our goal is to expand into the realm of aliphatic linkers such as cubane-1,4-dicarboxylate (1,4-cdc) and bicyclo[1.1.1]pentane-1,3-dicarboxylate (1,3-pdc), which may introduce previously unencountered dynamic motions.[1] The single-component aliphatic MOFs 3DL-MOF-1 ([Zn4O(1,3-pdc)3]) and CUB-5 ([Zn4O(1,4-cdc)3]) were explored using powder diffraction (PD) techniques.[2] The aliphatic MOFs demonstrated enhanced NTE in comparison to their aromatic MOF-5 analogue. Host-guest effects on NTE behaviour[3] were explored using neutron PD at the ACNS by charging 3DL-MOF-1 with CO2 guest molecules. Successful NTE quenching was achieved at higher CO2 loading.
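As a point of reference for the NTE discussion above, the thermal expansion coefficient is typically extracted from the temperature dependence of lattice parameters refined against the powder diffraction data. The sketch below shows only that arithmetic, using hypothetical lattice parameters rather than values refined for 3DL-MOF-1 or CUB-5; a negative coefficient corresponds to NTE.

```python
# Minimal sketch: a linear thermal expansion coefficient alpha_a = (1/a) da/dT
# estimated from lattice parameters at several temperatures. The (T, a) values
# are hypothetical placeholders, not refined values from this study.
import numpy as np

T = np.array([100.0, 150.0, 200.0, 250.0, 300.0])       # temperature (K)
a = np.array([25.712, 25.705, 25.698, 25.690, 25.681])  # lattice parameter (angstrom)

slope, intercept = np.polyfit(T, a, 1)   # da/dT from a linear fit
alpha_a = slope / a.mean()               # normalised coefficient (1/K)

print(f"da/dT   = {slope:.3e} A/K")
print(f"alpha_a = {alpha_a * 1e6:.1f} x 10^-6 K^-1  (negative => NTE)")
```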
To extend our understanding of aliphatic influences on NTE behaviour, we study a series of moisture-stable multicomponent frameworks.[4] Using synchrotron PD and single crystal X-ray diffraction we investigate the NTE behaviour of quaternary MOFs (three linkers and one node) by varying the aliphatic linker in each system. We hope to identify the key characteristics of aliphatic linkers that dictate NTE behaviour. [1] J. Perego et al., Nature Chemistry 2020, 12, 845. [2] L. K. Macreadie et al., ACS Applied Materials & Interfaces 2021, 13, 30885. [3] J. E. Auckett et al., Nature Communications 2018, 9, 1. [4] L. K. Macreadie et al., Angewandte Chemie International Edition 2020, 59, 6090. Speaker: Ms Celia Chen (The University of Sydney) Chiral CPs formed using chiral heterotopic ligands 1m The use of heterotopic ligands in the synthesis of coordination polymers (CPs) using pyridyl groups and carboxylates has been extensive. However, the use of chiral heterotopic ligands bearing pyridyl and phthalimide groups has not been explored. Chiral coordination polymers have been formed using leucine, phenylalanine and cysteine substituted pyridylphthalimide cores (L3pyph, M3pyph and L4pyph), where the pyridyl groups have been substituted in the 3 and 4 positions. One-dimensional coordination polymers have been formed using L3pyph and P3pyph, where there are π-π interactions between parallel 1D chains. However, the use of more exotic amino acids such as cysteine, C3pyph, has allowed a 2D coordination polymer to be formed through the formation of disulphide bonds between 1D chains. A twofold interpenetrated 2D coordination polymer has been formed using L4pyph and P4pyph. This illustrates that substitution of the pyridyl group in the 3 versus the 4 position has a major influence on the coordination polymers formed. Speaker: Mr Nicholas Kyratzis (Monash University) The N-methyl-D-aspartate receptor ligand binding domain and the interactivity with ion-channel control 1m Encephalopathies are a group of brain dysfunctions which lead to cognitive, sensory, and motor impairments. Recent developments in the field have led to the identification of several mutations within the N-methyl-D-aspartate receptor as one of the possible culprits for this group of conditions. However, understanding of the underlying changes to the receptor due to these mutations has been elusive to date. We aimed to determine the effects of one of the first mutations identified within the N-methyl-D-aspartate receptor GluN1 ligand binding domain, Ser688Tyr. This mutation was identified and associated with early onset encephalopathy. We performed molecular docking, randomly seeded molecular dynamics simulations, and binding free energy calculations to determine the behaviour of the two main co-agonists, glycine and D-serine, and their effects on ion channel function. We determined that the Ser688Tyr mutation leads to instability of both ligands within the ligand binding site due to changes within the ligand binding domain associated with the mutation. The associated binding free energy for both ligands also increased significantly in the mutated receptor. These results reinforce previously observed in vitro electrophysiology data and provide additional information on ligand behaviour. Upcoming studies involve the use of crystallography and neutron scattering to determine the effects of this mutation on ion-channel function.
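To put the increased binding free energies reported above into context, a calculated change in binding free energy maps onto a fold-change in binding affinity through K = exp(−ΔG/RT). The sketch below uses hypothetical ΔG values purely to illustrate that conversion; it does not reproduce the values computed in this study.

```python
# Minimal sketch: converting a change in binding free energy into a fold-change
# in affinity via K = exp(-dG / RT). The dG values are hypothetical placeholders.
import math

R = 8.314462618e-3   # gas constant, kJ/(mol K)
T = 298.15           # temperature, K

dG_wt = -40.0        # kJ/mol, hypothetical wild-type binding free energy
dG_mut = -32.0       # kJ/mol, hypothetical Ser688Tyr binding free energy

ddG = dG_mut - dG_wt                    # positive => weaker binding
fold_weaker = math.exp(ddG / (R * T))   # K_wt / K_mut

print(f"ddG = {ddG:+.1f} kJ/mol  ->  binding roughly {fold_weaker:.0f}x weaker")
```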
This study provides valuable insight into the consequences of mutations within the N-methyl-D-aspartate receptor GluN1 ligand binding domain. Speaker: Zheng Chen (The University of Sydney) Chiral Detection with Fluorescent Coordination Polymers 1m Chirality is an intrinsic property of life on Earth. Biological systems have evolved alongside chiral molecules like proteins, DNA, and sugars, which all exist as pairs of nonsuperimposable mirror images. These mirror images, called enantiomers, are chemically indistinguishable, except when they interact with other chiral systems. Chiral drugs like ibuprofen differ in their effectiveness based on the chiral purity of the dose, and artificial sweeteners like aspartame can taste bitter if unwanted enantiomers are not filtered out prior to consumption, owing to the human body's inherent chirality. The need to differentiate and separate the enantiomers of chiral compounds has led to the development of chiral sensors: molecular systems that can identify the enantiomeric purity of a sample. Coordination polymers (CPs) and metal-organic frameworks (MOFs) are ideally suited to chiral sensing. These crystalline frameworks consist of extended structures of organic linkers bridging metal centres, and are both easily tuned and potentially porous, enabling the incorporation of small guest compounds into their internal voids. When the parent framework is chiral,1 one enantiomer of a chiral guest molecule will have a stronger interaction with the structure than its opposite. This dichotomy in binding strength can be paired with methods like circular dichroism (CD) spectroscopy and fluorescence techniques to assess a sample's enantiomeric composition.2 This poster presentation will describe the synthesis of a chiral, fluorescent CP and its crystal structure, recorded on the MX1 beamline at the Australian Synchrotron. The ability of this framework to differentiate the enantiomers of chiral guest compounds through fluorescence quenching measurements will also be highlighted. Speaker: Shannon Thoonen (The University of Melbourne) The investigation of structural and electronic configurations of noble-metal free nanocomposite and electrocatalytic oxides for acidic water electrolysis 1m The development of in situ XAS for water electrolysis applications, such as sustainable hydrogen production, is integral to the accurate characterisation of state-of-the-art electrocatalytic materials. As this field continues to uncover a breadth of earth-abundant and high performance electrocatalysts, an understanding of their operando structures and electronic states is required not only to understand the true nature of these electromaterials, but also to precisely benchmark emerging catalysts and catalytic mechanisms against already industrially dominant electrocatalysts. Due to its intrinsically conductive nature, and the purity of hydrogen that is produced at industrially relevant current densities, acidic water electrolysis presents one of the most capable modes of producing hydrogen sustainably at the terawatt scale. It is from these perspectives that cost-effective, acid-stable and highly active catalytic materials must be developed and characterised in order to make this technology increasingly feasible for deployment at the global scale.
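For orientation on the scale of the challenge described above, the thermodynamic floor for electrolytic hydrogen production follows directly from the standard reversible cell voltage. The sketch below works through that textbook conversion; it uses standard-state values only and ignores the overpotentials and ohmic losses that real acidic electrolysers add on top.

```python
# Minimal sketch: the thermodynamic minimum electrical energy for water
# electrolysis, from the standard reversible cell voltage. Standard-state,
# idealised values; real cells operate well above this floor.
F = 96485.0        # Faraday constant, C/mol
E_rev = 1.229      # standard reversible cell voltage, V
n = 2              # electrons transferred per H2 molecule
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

energy_per_mol = n * F * E_rev                      # J per mol H2 (~237 kJ/mol)
energy_per_kg_kwh = energy_per_mol / M_H2 / 3.6e6   # kWh per kg H2

print(f"minimum electrical energy: {energy_per_mol / 1000:.0f} kJ/mol H2")
print(f"                         = {energy_per_kg_kwh:.1f} kWh/kg H2")
```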
Operando XAS has been instrumental in our recent developments towards two intrinsically stable electrocatalysts that are based on cobalt-iron-lead and silver-bismuth mixed oxides, and in the now refined understanding of "catalyst-in-matrix" mechanisms of operation. During our XAS work at ANSTO we have been successful in collecting high quality XANES and EXAFS data on the two acid-stable materials described, providing structural and electronic information that would not have been uncovered under ex situ XAS experimental designs. From the detailed results obtained, we are now refining our in situ XAS technique for a breadth of acid-stable materials that have been developed within our team at Monash University, and believe that this will benefit the field by providing precise benchmarking metrics for cost-effective electrocatalysis. Speaker: Darcy Simondson-Tammer (Monash University) Investigating the dielectric properties of the cornea and tympanic membrane using Synchrotron ATR and transmission at THz frequencies 1m High GHz and THz frequencies are becoming important in the communication, security, and industrial fields. With the increasing use of THz technology, the skin, cornea, and tympanic membrane will be subjected to increased incidental and purposeful THz radiation. There is an urgent need to adequately characterise the way the cornea and tympanic membrane interact with THz radiation to understand potential THz exposure hazards and refine dosimetry guidelines. Understanding the complex permittivity of the cornea and tympanic membrane at THz frequencies may lead to the development of THz-based diagnostic and therapeutic techniques based on the differences in permittivity at THz frequencies between normal and pathological states. THz radiation is highly absorbed by liquid water. The cornea and tympanic membrane are "high bulk water content" tissues (over 70% water). We have devised innovative approaches to interrogate biological samples with attenuated total reflection (ATR) apparatus at THz frequencies at the THz/Far-Infrared beamline at the Australian Synchrotron. One new method extends the capabilities of the ATR apparatus to a partial reflection/partial transmission mode (APR). A second method was to vary the temperature of biological tissues whilst continually scanning the sample. The combination of these methods enabled a very accurate determination of the temperature-dependent variation of the refractive index. The last technique extends the useful range of the apparatus to exploring samples with refractive index beyond the maximum possible with attenuated total reflection, bringing water-based biological samples within the capacity of the diamond crystal ATR at the Australian Synchrotron. Speaker: Ms Negin Foroughimehr (Swinburne University of Technology) Ruthenium-Based Pyrochlore Oxides for Improved Electrocatalysis 1m Energy security during the transition to a low-carbon economy is one of society's grand challenges. One possible method of developing carbon-neutral energy generation is through the combustion of hydrogen and oxygen gas. However, these gases must be able to be sustainably sourced using low-emission technologies. One such method is using electrocatalysts – catalysts capable of splitting water into hydrogen and oxygen gas in the presence of electricity. Currently, industry-standard electrocatalysts contain a high noble metal content, such as ruthenium and iridium. These metals are extremely expensive and their performance can degrade over time.
Recently, pyrochlore oxides have emerged as promising alternatives due to their low noble metal content, extreme stability, and high oxygen evolution activity in acidic environments. Despite this, debate currently exists in the literature as to which specific structural properties of these materials lead to their superior electrocatalytic performance. This work presents full structural models of various ruthenium pyrochlore oxides of the form (Y$_{2-x}M_{x}$)Ru$_{2}$O$_{7-d}$ ($M$ = Mn-Zn) based on various diffraction and spectroscopic studies. X-ray and neutron diffraction, as well as X-ray absorption spectroscopy, have been used to determine the short-range local and long-range average structures of these electrocatalysts. Cyclic voltammetry measurements have further shown that these materials exhibit significant oxygen evolution reaction activity compared to industry-standard ruthenium oxide, despite containing substantially less ruthenium. This has allowed us to establish structure-functionality relationships for these electrocatalysts, further developing and improving them for overall water splitting reactions. Speaker: Bryce Mullens (University of Sydney) SNAKE VENOM-CONTROLLED 3D FIBRIN ARCHITECTURE REVEALED BY SANS/USANS DICTATES FIBROBLAST DIFFERENTIATION 1m Fibrin is the founding matrix after injury, delivering the key biophysical cues to promote wound healing in a timely and coordinated manner. The effect of the fibrin architecture on wound healing has not been studied due to a lack of control over the enzyme-catalyzed polymerization of the fibrin network in vitro. Here, we establish a new defined snake venom-controlled fibrin system with precisely and independently controlled architectural and mechanical properties. By utilising combined small-angle neutron scattering (SANS) and ultra-small angle neutron scattering (USANS) techniques, we characterize the full-scale architectural properties of the new system, from the internal structure of the individual fibres to the structure of the fibrin networks, and compare them with super-resolution optical methods. This very precise set of neutron scattering data confirms our full control over the network's architectural features, which serves as a foundation for the application of this defined system. The subsequent cell differentiation studies reveal that fibrin architecture has prevailing control over fibroblast spreading phenotypes and long-term myofibroblast differentiation. These findings implicate matrix architecture as a key activator of fibroblast differentiation and provide new biophysical strategies in the design of biomaterials to promote scarless wound healing. Speaker: Mr Zhao Wang (Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Australia) Physical insights into self-assembly of enzymatic protein particles using Small-Angle X-ray Scattering (SAXS) 1m Assembled protein particles are emerging as advanced protein biomaterials with significant impact in areas of vaccine development, biocatalysis, drug delivery and biosensing. To date, assembled protein particles primarily serve as scaffolds to tether functional entities for various applications. Since they lack inherent functional properties, subsequent functionalisation of protein particles is essential.
In this work, we present a simple approach using self-assembling peptides to form particles of a protein of interest in the presence of stimuli.[1] We demonstrate this with a model protein-peptide module using the enzyme bovine carbonic anhydrase (BCA) fused with a self-assembling peptide (P114) via a GS-linker and expressed in E. coli. BCA-P114 self-assembles into particles in response to two different stimuli, i.e., pH and magnesium ions. Through dynamic light scattering we showed that BCA-P114 particles form spontaneously, and that particle size can be controlled by the extent of the stimulus.[2] Using SAXS (SAXS/WAXS beamline at the Australian Synchrotron), we studied the self-assembly kinetics and the timescales of BCA-P114 particle formation using magnesium ions as the stimulus. The SAXS analysis of particle formation kinetics showed that particle formation occurs within 10 seconds of exposure to magnesium ions. Furthermore, the structure and function of the BCA-P114 particles were confirmed by transmission electron microscopy and enzyme assays, respectively. Our self-assembling strategy provides a platform for the spontaneous formation and customisation of particles of a desired functional protein.[3] This platform technology will open up new opportunities to adapt functional proteins into particles for use as advanced biomaterials. Speakers: Dr Bhuvana Shanbhag (Monash University), Dr Tayyaba Younas (Monash University) A precisely piezo-controlled macro-ATR for characterizing the dynamic behaviour of the electrolyte/electrode interface 1m In all battery systems, the electrolyte plays a vital role in determining the stability of the electrodes, as well as the safety of the battery in use. A good solid electrolyte interphase (SEI) protective layer forms during the first cycling of the battery, rather than continually accumulating on the electrode surface, and is not soluble in the electrolyte, making its properties highly dependent on its chemical structure. Therefore, further development of safe battery technology strongly requires a better understanding of the chemistry and formation mechanism of the SEI, which remain largely unknown due to the SEI's complex structure and a lack of reliable in situ experimental techniques. Based on the above, a novel piezo-controlled macro-ATR (with the probed thickness precisely controlled to within 100 nm above the electrode surface) has been successfully developed for battery research at the IRM beamline at the Australian Synchrotron. This innovation enables probing of the real-time reactions inside a battery at the microscale, with accurately controlled movement of the detection point relative to the electrode surface. The observed changes in functional groups and their distribution will complement the ex situ results to provide a better understanding of the mechanisms occurring in different electrolytes at different stages, which will subsequently be correlated with their stability and charging performance. Such knowledge will be critical for optimizing the ingredients of non-flammable electrolytes to support further development of more stable and high-performance batteries and to enable scientific design principles for non-flammable electrolytes. Speaker: Ms Sailin Liu (University of Wollongong) Analysis of Thermoresponsive Dextrans via Small-Angle X-ray Scattering 1m Thermoresponsive polymers have gained significant interest over recent years due to their potential use in a wide range of applications, including drug delivery, cell therapies, pharmaceuticals, tissue engineering, and mineral processing [1].
Of particular interest are thermoresponsive polysaccharides, which are generally biocompatible and biodegradable, unlike their synthetic counterparts. This is particularly important when considering biomedical applications, such as drug delivery, as biodegradability allows for the clearance of the drug delivery system from the body and can help to facilitate drug release. We have developed a novel family of thermoresponsive polysaccharides with tunable transition temperatures via functionalisation of non-thermoresponsive dextran with a series of alkylamides [2]. By altering the composition and degree of substitution of the alkylamide groups on the dextran backbone, the temperature at which the phase transition occurs can be tuned. Upon heating, solutions of thermoresponsive dextrans undergo a reversible phase transition to afford colloidal suspensions. The nature of the solution-to-colloid transition was investigated by UV-visible spectrophotometry to determine the transition temperature and hysteretic effects, and via dynamic light scattering to determine changes in particle size and dispersity. To further interrogate the phase transitions and conformational changes occurring upon heating and cooling, synchrotron small-angle X-ray scattering (SAXS) was conducted as a function of temperature. Taken together, these results provide a fundamental platform to further study the behaviour of these novel thermoresponsive dextrans when applied to specific applications, such as drug delivery or mineral processing. 1. Graham, S, et al., 2019, Carbohydrate Polymers, 207, p.143-159. 2. Otto, S, et al., 2021, Carbohydrate Polymers, 254, p.117280. Speaker: Sarah Otto (University of South Australia) Exploring the Surface of Vanadium Phosphate Cathode Materials 1m In this study, we used a combination of synchrotron soft X-ray absorption spectroscopy (XAS), lab-scale experimental techniques and first principles computation to critically examine and validate the surface and bulk electronic structure of prototypical vanadium (III) phosphate intercalation cathode materials, Na3V2(PO4)3, Li3V2(PO4)3 and K3V3(PO4)4·H2O. Using a combination of XPS, Raman, UV-Vis-NIR, UPS and DFT calculations, a full picture of each AVP's electronic structure was developed and validated using both experimental and calculated electronic structure and density of states data. From our synchrotron data, XAS fluorescence yield and electron yield measurements reveal substantial variation in surface-to-bulk atomic structure, vanadium oxidation states and density of oxygen hole states across all AVP samples. We attribute this variation to an intrinsic alkali metal surface depletion layer identified across these alkali metal vanadium (III) phosphates. We propose that an alkali-depleted surface provides a beneficial interface with the bulk structure(s) that raises the Fermi level and improves charge transfer kinetics at the surface of this family of materials. This surface depletion phenomenon has been previously reported in other prominent transition metal phosphate intercalation cathodes, such as LiFePO4, and its presence here suggests wider ubiquity amongst alkali transition metal phosphate materials.
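The surface-versus-bulk contrast between electron yield and fluorescence yield XAS invoked above comes down to the very different effective sampling depths of the two detection modes. The sketch below illustrates the idea with a simple exponential-attenuation weighting; the 5 nm and 100 nm sampling depths are assumed order-of-magnitude values typical of soft X-ray measurements, not values determined in this work.

```python
# Minimal sketch: why total electron yield (TEY) is surface sensitive while
# fluorescence yield (FY) is bulk sensitive. Signal from depth z is weighted by
# exp(-z/L); the sampling depths L below are assumed typical values only.
import numpy as np

L_tey_nm = 5.0      # assumed effective electron escape depth
L_fy_nm = 100.0     # assumed effective fluorescence sampling depth

def fraction_from_top(t_nm, L_nm):
    """Fraction of the detected signal originating within the top t_nm."""
    return 1.0 - np.exp(-t_nm / L_nm)

for t in (2.0, 5.0, 20.0):
    tey = 100 * fraction_from_top(t, L_tey_nm)
    fy = 100 * fraction_from_top(t, L_fy_nm)
    print(f"top {t:4.0f} nm:  TEY {tey:5.1f} %   FY {fy:5.1f} %")
```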
Speaker: Mr Tristram Jenkins (Queensland University of Technology) Synthesis and structural characterisation of novel perovskite-type Na-ion conductors 1m The development of new solid electrolytes is becoming increasingly important, e.g., for rechargeable batteries for electric vehicles, where current liquid organic electrolytes cause major safety concerns. Some ABO3 perovskite metal oxides have shown excellent lithium and sodium ion conductivity owing to their chemical and structural flexibility. This has led to the development of several perovskite-type solid electrolytes such as Li3xLa2/3-xTiO3 (LLTO) and Na1/2-xLa1/2-xSr2xZrO3 (NLSZ), which have shown high ionic conductivities [1-3]. Starting from the x = 1/6 member of NLSZ, a new series of sodium perovskite-type solid electrolytes with the formula Na1/3La1/3-x/3Sr1/3Zr1-xNbxO3 (0 ≤ x ≤ 0.8) (NLSZN) was synthesised. Structural characterisation was carried out using a combination of synchrotron and neutron powder diffraction data, which revealed both first- and second-order phase transitions as a function of temperature. For some samples the symmetry appeared higher in the synchrotron data than in the neutron data, owing to the higher relative sensitivity of neutron data to scattering from oxygen atoms in the structure [4]. As observed for other defect perovskites, there is a tendency towards higher symmetry with increasing A-site vacancy concentration [5]. [1] Y. Inaguma, L.Q. Chen, M. Itoh, T. Nakamura, T. Uchida, H. Ikuta, M. Wakihara, Solid State Commun. 86, 689–693 (1993). [2] Y.Z. Zhao, Z.Y. Liu, J.X. Xu, T.F. Zhang, F. Zhang, X.G. Zhang, J. Alloy. Compd. 783, 219–225 (2019). [3] F. Z. T. Yang, V. K. Peterson, S. Schmid, J. Alloy. Compd. 863, 158500 (2021). [4] S. Schmid, R. L. Withers, J. Solid State Chem. 191, 63–70 (2012). [5] T. A. Whittle, W. R. Brant, J. R. Hester, Q. Gu, S. Schmid, Dalton Trans. 46, 7253–7260 (2017). Speaker: Frederick Yang (University of Sydney) Self-Assembly of Carbon Dioxide Nonionic Surfactants in Ionic Liquids 1m The diverse and tuneable intermolecular interactions present in ionic liquids (ILs) make them excellent media for surfactant self-assembly. Previous studies of polyoxyethylene alkyl ether nonionic surfactants in ethylammonium nitrate and propylammonium nitrate have shown that these solvents can support the full range of amphiphilic self-assembly behaviour of nonionic surfactants for various applications. However, the head group of these nonionic surfactants, ethylene oxide (EO), is a petrochemical product, prompting us to seek bio-renewable substitutes, amongst which carbon dioxide stands out. Recent studies of nonionic surfactants incorporating CO2 (partly substituted for EO) have shown they are promising surface-active molecules. Small angle neutron scattering (QUOKKA, ANSTO) showed that a single CO2 unit per surfactant can have an enormous impact on the phase behaviour of dodecyl surfactants in water. The formation of gel-like liquid crystalline phases was completely suppressed through reduced hydration of the headgroups. This study is directed at understanding the self-assembly behaviour of CO2 nonionic surfactants in ILs. We have examined the structure of surfactant-IL solutions using small angle neutron scattering as a function of surfactant concentration, solvent composition and temperature. Results show that, unlike in water, solvation of the nonionic headgroup is mostly unaffected by the incorporation of CO2 units in pure ILs. However, this can be easily regulated through water dilution or by mixing ILs.
This demonstrates that the composition of the surfactant headgroup and the solvent can be used as tools to engineer solvent-headgroup interactions in formulating non-aqueous soft matter. Scaling behaviour of the skyrmion lattices in Cu2OSeO3 single crystals from small angle neutron scattering 1m Skyrmions are topologically protected spin vortices on the nanometre scale that behave like particles. In chiral crystals, competing magnetic interactions may induce 2D skyrmion lattices [1-2]. In the multiferroic insulator Cu$_2$OSeO$_3$, the skyrmion lattice responds to electric/magnetic fields, suggesting applications in data storage [3]. These applications crucially depend on the stability conditions of the skyrmion phase. Notably, Cu$_2$OSeO$_3$ is the only material in which the appearance of two different skyrmion phases has been reported in its phase diagram. However, the quantum mechanisms of these phases and their thermodynamic connection are still under debate [4-6]. Hence, we used Small Angle Neutron Scattering and Lorentz Transmission Electron Microscopy to study the skyrmion stabilisation in single crystals of Cu$_2$OSeO$_3$ [7]. In this work, we report the field, temperature, and sample alignment dependence of the scaling behaviour of skyrmions as an order parameter for the emergence of the two skyrmion phases. [1] S. Muehlbauer et al., Science 323, 915 (2009) [2] S. Seki, X. Z. Yu, S. Ishiwata, and Y. Tokura, Science 336, 198 (2012) [3] A. Fert, N. Reyren, V. Cros, Nat. Rev. Mater. 2, 17031 (2017) [4] A. Chacon, L. Heinen et al., Nat. Phys. 14, 936-941 (2018) [5] F. Qian, L. J. Bannenberg et al., Sci. Adv. 4, eaat7323 (2018) [6] L. J. Bannenberg, H. Wilhelm et al., npj Quantum Mater. 4, 11 (2019) [7] M.-G. Han, et al., Sci. Adv. 6, eaax2138 (2020) Speaker: Mr Jorge Arturo Sauceda Flores (School of Physics, University of New South Wales, Sydney 2052, Australia) Structural basis of higher-order assembly formation in the Toll-like receptor 1, 2 and 6 signaling pathway 1m Innate immunity represents a typical and widely distributed form of immunity. Innate immune responses are the first line of defense against pathogens and help destroy invaders in vertebrate animals, invertebrates, and plants. The innate immune system recognizes microorganisms via pattern-recognition receptors (PRRs). The family of Toll-like receptors (TLRs) is a distinct group of PRRs. They detect microbial components known as pathogen-associated molecular patterns (PAMPs) and activate downstream transcription factors such as nuclear factor-κB (NF-κB), resulting in a pro-inflammatory response. Ten TLRs have been identified in the human TLR family. In humans, TLR2 can form heterodimers with TLR1 and TLR6 when binding different types of ligands. The cytoplasmic Toll/interleukin-1 receptor (TIR) domain is found in all TLRs and is responsible for transmitting extracellular signals to intracellular cytoplasmic TIR domain-containing adaptor proteins through TIR:TIR domain interactions, thus initiating downstream signaling. Two TIR domain-containing adaptor proteins, myeloid differentiation primary response 88 (MyD88) and MyD88 adaptor-like (MAL), mediate downstream signaling in the TLR2-TLR1/6 signaling pathway. It has been previously demonstrated that higher-order assembly formation occurs in the TLR4 signaling pathway. This mechanism, known as signaling by cooperative assembly formation (SCAF), may occur in all TLR signal transduction. To date, the transduction mechanisms of TLR2-TLR1/6 signaling are still unclear.
This research aims to determine the structural basis of higher-order assemblies formed by TIR domains, with a focus on assemblies in the TLR2-TLR1/6 signaling pathway. Speaker: Yan Li (The University of Queensland) Cholesterol catabolism: An exploitable weakness in mycobacterial infections? 1m Following the development of modern antibiotics and the net improvement of health care systems globally, tuberculosis (TB), a contagious and pathogenic bacterial infection caused by Mycobacterium tuberculosis, has been largely eliminated from developed countries. Despite this improvement, TB remained a top 10 cause of death globally in 2020, which, when combined with the rise of multi-drug resistant tuberculosis (MDR-TB), represents an urgent global health concern. Other pathogenic mycobacteria, including Mycobacterium ulcerans, the causative agent of Buruli ulcer, and Mycobacterium abscessus, a bacterium that affects cystic fibrosis patients, are also emerging public health threats. Mycobacteria are unique in their ability to metabolise host cell cholesterol, and this pathway has become a target for new antibiotic treatments against drug-resistant infections. The cytochrome P450 enzymes of the CYP125, CYP142 and CYP124 families initiate cholesterol metabolism. There are different numbers of cholesterol-metabolising P450s in each Mycobacterium species. For example, Mycobacterium ulcerans and Mycobacterium tuberculosis have one each of the CYP125, CYP142 and CYP124 enzymes, while Mycobacterium abscessus has four different CYP125 enzymes and no copies of CYP142 or CYP124. The reasons for the different P450 profiles between mycobacteria remain unknown, as does a mechanistic understanding of P450-mediated cholesterol oxidation. This project aims to understand the structural, evolutionary and mechanistic differences between enzymes of these three families. Also, screening of these enzymes as targets for a new class of cholesterol-based, anti-tubercular inhibitors will be undertaken. Speaker: Mr Daniel Doherty (The University of Adelaide) X-ray structure of a transmembrane domain from an ABC-transporter dependent system from Neisseria meningitidis in a non-biological state 1m Molecular replacement (MR) is the most commonly used method in crystallography to solve the phase problem required to obtain the three-dimensional structure of a protein. Traditionally MR uses a search model from a previously determined protein structure. One of the requirements for success by MR is that the amino acid sequences of the search model and the unknown structure should have at least 35% identity. When this is not possible, an ab initio model can be generated using the sequence of the unsolved protein. In this project, we used the algorithm tr-Rosetta from the Rosetta server to obtain ab initio models for use in MR. CtrC is part of an ABC transporter dependent complex in Neisseria meningitidis, important for capsule polysaccharide transport. It constitutes the transmembrane domain and associates with a separate nucleotide binding domain, CtrD, making a heterotetramer. CtrC has been crystallised using the lipidic cubic phase (LCP) method. After data collection using the MX2 beamline at the Australian Synchrotron, the structure has been solved at 2.87 Å by MR using an ab initio derived search model. The structure of CtrC shows a monomeric arrangement in the crystal lattice, unusual for an ABC transporter. A single molecule of the monoolein lipid used in the LCP matrix was found bound within the protein structure.
We hypothesize that the presence of the monoolein ligand, and possibly the absence of CtrD, abrogates the ability of CtrC to form the expected dimeric structure. Speaker: Lorelei Masselot-Joubert (University of Western Australia) Size, shape and colloidal stability of fluorescent nanodiamonds in aqueous suspension 1m Fluorescent nanodiamonds (FNDs) containing negatively charged nitrogen-vacancy (NV–) centres have outstanding optical, photostability and spin properties, which make them promising candidates as nanoscale sensors and for quantum computing and bioimaging in biological media. The location of the NV centres relative to the surface of the particles is essential for these applications – if the NV centres are buried too deeply, this will lead to lower brightness3. To optimize these properties, the particles must either be small or must have at least one dimension which is thin (e.g. plate-shaped particles). The size and shape are therefore vital parameters to be investigated. Our collaborators4 examined the size effect on the optical properties of a wide range of FND particles; however, their 3D structure and colloidal stability have not been widely studied and are not well understood. Here, we systematically investigate the 3D shape of FNDs in water for a range of sizes and assess the colloidal stability of these particles using dynamic light scattering, depolarised dynamic light scattering and synchrotron-based small-angle X-ray scattering (SAXS). Initial SAXS results suggest an interesting relation between the reported shape and DLS size of FND particles and their emitted fluorescence. Speaker: Mr Samir Eldemrdash (School of Science, RMIT University, Melbourne, Victoria 3001, Australia) Breaking boundaries, or is it? Physical disruption at the nano- and micro scales for an in situ flow setup 1m For various soft and hard matter systems, reduction in sample particle size could be an effective method for producing homogeneous samples, eliminating trapped air bubbles, facilitating sample preparation (e.g., gel loading), or meeting the requirements of a specific sample environment. Due to the experimental constraints of small-angle scattering, such as the limited width (1 or 2 mm) of the sample cell, time-dependent characterisation of larger samples in real time is often not possible. Physical disruption of samples into smaller sized particles at a macro scale would allow simultaneous characterisation of a variety of systems, for example by facilitating the flow of polymers, gels, aggregates, and minerals. While physical crushing or blending may appear to be a straightforward solution to the problem, a lack of knowledge about the effect on the nano- and microstructure precludes its widespread adoption. In this study, a yoghurt-like transglutaminase-induced acid gel (TG) was blended as a method of disruption, allowing the gel particles to flow freely in the newly developed recirculating flow set-up designed for the in situ analysis of gel evolution over time. The study has demonstrated that mechanical disruption to form TG particle distributions within the 5-6 µm to ~3.5 mm size range had no effect on the micro- and nanostructure of the gel. This work could benefit several studies, including the dynamics of hydrogel swelling, characterisation of particles in motion, digestion or changes in structure when exposed to different environmental conditions, as well as the implementation of newly developed setups in several neutron scattering studies.
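The sample-size constraints discussed above are ultimately set by the reciprocal relationship between scattering vector and real-space length scale, d ≈ 2π/Q. The sketch below makes that relationship concrete for a few representative Q values and particle sizes; the numbers are illustrative only and simply show why micrometre-scale features fall to USANS rather than conventional SAXS/SANS.

```python
# Minimal sketch: the approximate real-space length scale probed at scattering
# vector Q, d ~ 2*pi/Q, and the Q needed to resolve a feature of a given size.
# Values are illustrative only.
import numpy as np

def length_nm(Q_inv_A):
    """Real-space length (nm) corresponding to Q (1/angstrom)."""
    return 2.0 * np.pi / Q_inv_A / 10.0

def Q_for_length(d_nm):
    """Q (1/angstrom) needed to probe a feature of size d (nm)."""
    return 2.0 * np.pi / (d_nm * 10.0)

for Q in (1.0, 0.1, 0.001):                  # typical SAXS/SANS limits
    print(f"Q = {Q:7.3f} 1/A  ->  d ~ {length_nm(Q):9.1f} nm")

for d_um in (0.5, 5.0):                      # micrometre-scale gel particles
    print(f"d = {d_um:4.1f} um    ->  Q ~ {Q_for_length(d_um * 1000):.1e} 1/A")
```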
Speaker: Meltem Bayrak Small Angle Neutron Scattering instrument Bilby: capabilities to study mainstream and complex systems 1m For more than ten years ANSTO has successfully operated the Small Angle Neutron Scattering (SANS) instrument Quokka[1], and in 2016 it commenced user operation of the second SANS instrument, Bilby[2]. The ultra-small-angle scattering instrument Kookaburra[3] completes the set of SANS instruments at ANSTO. Bilby exploits neutron time-of-flight (ToF) to extend the simultaneously measurable Q-range over and above what is possible on a conventional reactor-based monochromatic SANS instrument. In ToF mode, choppers are used to create neutron pulses comprising wavelengths between 2 and 20 Å with variable wavelength resolution (~3%–30%). In addition, Bilby can operate in monochromatic mode using a velocity selector. Two arrays of position-sensitive detectors, in combination with the wide wavelength range, provide the capability to collect scattering data over a wide simultaneous angular range without changing the experimental set-up (the accessible Q range on the instrument is 0.001-1.8 Å-1). Additionally, a range of sample environments is available, allowing sample conditions to be changed in situ, which is invaluable for the study of a wide variety of samples ranging from colloids and hierarchical materials to metals. Here we present some recent examples. K. Wood et al, QUOKKA, the pinhole small-angle neutron scattering instrument at the OPAL Research Reactor, Australia: design, performance, operation and scientific highlights. J. Appl. Crystallogr. 51 (2018) 294-314. A. Sokolova et al, Performance and characteristics of the BILBY time-of-flight small-angle neutron scattering instrument. J. Appl. Crystallogr. 52 (2019) 1-12. C. Rehm et al, Design and performance of the variable-wavelength Bonse-Hart ultra-small-angle neutron scattering diffractometer KOOKABURRA at ANSTO. J. Appl. Crystallogr. 51 (2018) 1-8. Speakers: Dr Anna Sokolova, Andrew Whitten (ANSTO), Liliana de Campo (ANSTO), Chun-Ming Wu (NSRRC) Continuous chemical redistribution following amorphous-to-crystalline structural ordering in a Zr-Cu-Al bulk metallic glass 1m Bulk metallic glasses (BMGs) are thermodynamically metastable. As such, crystallization occurs when a BMG is thermally annealed at a temperature above the glass transition temperature. While extensive studies have been performed on the crystallization kinetics of BMGs, most of them have focused on the amorphous-to-crystalline structural ordering, and little attention has been paid to the chemical distribution and its relationship with the structural ordering during the crystallization process. In this paper, a new approach, with simultaneous differential scanning calorimetry (DSC) and small angle neutron scattering (SANS) measurements, was applied to study in situ the crystallization of a Zr45.5Cu45.5Al9 BMG upon isothermal annealing at a temperature in the supercooled liquid region. Quantitative analysis of the DSC and SANS data showed that the structural evolution during isothermal annealing could be classified into three stages: (I) incubation; (II) amorphous-to-crystalline structural ordering; (III) continuous chemical redistribution. This finding was validated by composition analysis with atom probe tomography (APT), which further identified a transition region formed by expelling Al into the matrix.
The transition region, with a composition of (Cu,Al)50Zr50, served as an intermediate step facilitating the formation of a thermodynamically stable crystalline phase with a composition of (Cu,Al)10Zr7. Speaker: Xuelian Wu Working Mechanisms of Conversion-Type Metaphosphate Electrodes for Lithium/Sodium-Ion Batteries 1m The development of novel high-performance electrodes is crucial for the next generation of lithium/sodium-ion batteries (LIBs/SIBs) that can charge rapidly while maintaining high lithium/sodium storage capacity. One of the major research directions to achieve improved energy/power densities of LIBs/SIBs has, thus far, focused on electrode materials that can store Li+/Na+ through conversion reactions. Our group has discovered and systematically studied a new family of conversion-type electrode materials, the transition metal metaphosphates [M(PO3)n (M = Mn, Fe, Co, Ni and Cu; n = 1, 2, 3)]. Unlike traditional conversion-type monoanionic compounds such as oxides, nitrides and fluorides, which rely on nanomaterials engineering, these metaphosphates can achieve full capacities and fast Li+/Na+ diffusion kinetics from micro-sized samples synthesised by conventional solid-state methods. We studied their conversion reactions using a combination of in situ X-ray powder diffraction (XRPD), in/ex situ X-ray absorption near-edge spectroscopy (XANES), and ex situ high resolution transmission electron microscopy (HRTEM). During the initial discharge, these compounds convert into amorphous ceramic composites with high electrochemical activities, in which fine transition metal nanograins are embedded in a glassy LiPO3 matrix. Glassy LiPO3 is an excellent Li+ conductor due to the low ionicity of PO3-, and it can buffer the volume change of the electrode to maintain its integrity, thus leading to much better electrochemical reversibility and cycling stability than monoanionic compounds. In the following first charge, the electrode converts back to a metaphosphate in terms of its composition but does not recrystallise. In subsequent cycles, the metaphosphate electrodes in an amorphous form continue to react with Li+/Na+ reversibly. Speaker: Dr Qingbo Xia (The University of Sydney) A photon counting detector for x-ray imaging: advantages and challenges 1m X-ray sensitive area detectors composed of arrays of photon counting elements have been under development for decades. The difficult and expensive technological development of integrating readout electronic chips with a converter has been substantially supported by areas of science other than synchrotron radiation research. For instance, such innovation is vital in large scale high energy physics detectors. Synchrotron radiation research has benefited from this technology being spun out into the market. IMBL has purchased a photon counting array detector, the Eiger2, from the Swiss company Dectris. It will be used in our human radiography programme. An NHMRC grant was awarded to pursue the use of computed tomography in mammography (breast imaging) using the IMBL. Funds were provided for a 3 mega-pixel array detector, with 75 micron pitch pixels. Similar devices have been used in SR x-ray scattering stations for some time, but have not yet found extensive use in radiography. The exquisite sensitivity is a great advantage for imaging live subjects, keeping the required dose to a minimum. However, these detectors do have field coverage limitations, which are being addressed as part of the human imaging project.
In all photon counting detectors currently on the market, the active area is not continuous. The boundaries between IC chips and multi-chip modules create gaps. For diffraction these missing pixels may be less important, since reflections are often duplicated, or radial integration reduces their effect. In imaging, however, every pixel carries potentially important clinical information. Some initial data from the IMBL Eiger2 is presented, along with ideas for ameliorating the effect of the missing pixels on the radiological information. Speaker: Chris Hall (Australian Synchrotron) Kookaburra, the ultra-small-angle neutron scattering instrument at ANSTO: design and recent applications 1m The double-crystal ultra-small-angle neutron scattering (USANS) diffractometer KOOKABURRA at ANSTO was made available for user experiments in 2014. KOOKABURRA allows the characterisation of microstructures covering length scales in the range of 0.1–20 µm. Use of the first- and second-order reflections coming off a doubly curved highly oriented mosaic pyrolytic graphite pre-monochromator at a fixed Bragg angle, in conjunction with two interchangeable pairs of Si(111) and Si(311) quintuple-reflection channel-cut crystals, permits operation of the instrument at two individual wavelengths, 4.74 and 2.37 Å (see https://www.ansto.gov.au/our-facilities/australian-centre-for-neutron-scattering/neutron-scattering-instruments/kookaburra for more details). This unique feature among reactor-based USANS instruments allows optimal accommodation of a broad range of samples, both weakly and strongly scattering, in one sample setup [1,2]. The versatility and capabilities of KOOKABURRA have already resulted in a number of research papers, including studies on hard matter systems like rocks and coal [3,4], as well as soft matter systems like hydrogels or milk [5,6]. This clearly demonstrates that this instrument has a major impact in the field of large-scale structure determination. Some of the recent examples will be presented here. [1] Rehm, C. et al, J. Appl. Cryst., 2013, 46, 1699-1704. [2] Rehm, C. et al, J. Appl. Cryst., 2018, 51, 1-8. [3] Blach, T. et al, Journal of Coal Geology, 2018, 186, 135-144. [4] Sakurovs, R. et al, Energy & Fuels, 2017, 31(1), 231-238. [5] Whittaker, J. et al, Int. J. Biol. Macromol., 2018, 114, 998-1007. [6] Li, Z. et al, Food Hydrocolloid, 2018, 79, 170-178. Speaker: Jitendra Mata (ANSTO) Quokka, the Pinhole Small-Angle Neutron Scattering Instrument at ANSTO 1m Quokka was the first SANS instrument to be in operation at the Australian research reactor, OPAL [1]. It is a 40 m pinhole instrument operating with a neutron velocity selector, an adjustable collimation system providing source-sample distances of up to 20 m, and a two-dimensional 1 m² position-sensitive state-of-the-art detector, capable of measuring neutrons scattered from the sample over a secondary flight path of up to 20 m. Also offering incident beam polarization and analysis capability as well as lens focusing optics, Quokka has been designed as a general-purpose SANS instrument with a large sample area, capable of accommodating a variety of sample environments.
Some of these sample environments are: a rapid heat-quench cell, enabling a sample to be studied in situ following a thermal shock (-120°C to 220°C); the neutron Rapid Visco Analyser (nRVA), which enables SANS to be measured simultaneously with viscosity via an RVA – an instrument widely used within the food industry; in situ differential scanning calorimetry (DSC); a stopped-flow cell; and RheoSANS. In early 2021 Quokka achieved the milestone of 200 peer-reviewed publications in a variety of research fields. Here we cover some of the research highlights along with Quokka's performance and operation. [1] K. Wood, J. P. Mata, C. J. Garvey, C. M. Wu, W. A. Hamilton, [..] and E. P. Gilbert, QUOKKA, the pinhole small-angle neutron scattering instrument at the OPAL Research Reactor, Australia: design, performance, operation and scientific highlights, J Appl Crystallogr, 2018, 51, 294-314. Novel techniques with ATR apparatus at THz frequencies 1m A new method is presented which extends the capabilities of attenuated total reflection (ATR) apparatus to a partial reflection/partial transmission mode, which also delivers the complex dielectric values of samples. The technique involves placing a mirror at a known distance from the sample/crystal interface to reflect the transmitted portion of the incident signal back to the detector. The attenuation of this reflected signal depends on the absorption coefficient of the sample. The method is well suited to biological samples in the 1.0 THz to 2.0 THz frequency range, using a diamond crystal ATR. Biological data in the 2.0 THz range are poorly represented in the literature, since most THz data on biological tissues have 1.2 to 1.5 THz as the upper limit. A demonstration of the technique was performed using water and a water-based gel at the Australian Synchrotron FIR/THz beamline. At frequencies of 3.0 to 5.0 THz, a paradoxical region was noted where the total reflectance of the signal reflected at the initial crystal/sample interface plus the signal reflected from the mirror was less than that reflected at the initial crystal/sample interface alone. The destructive interference occurs where the effective path length of the transmitted signal through the sample is in the region of 1.3 λ to 1.7 λ. The significance and potential uses of this region are still being investigated. Since many cancers have higher water content than normal tissue, the extension of the ATR apparatus capacity promises to establish a new diagnostic modality. Speaker: Zoltan Vilagosh (Swinburne University of Technology) Structural characterization of SARS-CoV-2 spike-derived peptides presented by the Human Leukocyte Antigen A*29:02 1m The rapid emergence of SARS-CoV-2 out of Wuhan, China, in late 2019 has resulted in the current outbreak that has crippled social and economic development worldwide. With over four million deaths, significant efforts are being made to generate a viable treatment option. It has been well established that T lymphocytes destroy infected cells. These T cells also produce long-lasting immunity through the proliferation of memory cells which recognize future viral invasion. Activation of T lymphocytes is achieved through the recognition of human leukocyte antigen (HLA) surface receptors on infected cells. These HLA molecules present viral peptides to T cells, which are able to recognize and activate against these antigens.
However, due to the highly polymorphic nature of HLA molecules, it remains unclear how the different peptides bound by the vast number of HLA molecules affect the stimulation of the adaptive immune response. The focus of this project is on a single HLA, HLA-A*29:02, found in approximately 3% of the world's population. We wish to structurally analyse various peptides derived from the SARS-CoV-2 spike protein and presented by HLA-A*29:02, and to determine how COVID-19 variants and their mutations might differ in their presentation to T cells. Through the use of X-ray crystallography, we will gain deeper insights into how these peptides are presented. This will further our understanding of how our own immune system responds to these antigens and may also help in designing long-lasting therapies such as peptide vaccines. Speaker: Lawton Murdolo (La Trobe University) Synthesis and characterization of K2YbF5 upconversion nanoparticles 1m Many avenues exist for synthesizing upconversion nanoparticles (UCNPs), such as hydrothermal and solvothermal methods, solid-state reactions, and thermal decomposition, amongst others. Here we compare three hydro-solvothermal synthesis processes for producing K2YbF5:Er and K2YbF5:Tm UCNPs, each having a different order of addition of reagents. The first method (A) adds together potassium hydroxide, oleic acid, and ethanol, followed by the lanthanide ions, and finally potassium fluoride. The second method (B) mixes the lanthanide ions, oleic acid, and ethanol first, followed by potassium hydroxide, and finally potassium fluoride. The third method (C) is similar to the second one, except that potassium hydroxide and potassium fluoride are mixed together first before being introduced into the system. The resulting nanoparticles were characterized via scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), photoluminescence (PL) spectroscopy, and near-edge X-ray absorption fine structure (NEXAFS) spectroscopy on the Australian Synchrotron Soft X-ray beamline. SEM images reveal that all particles are crystalline, with shapes ranging from microrods to hexagonal. EDS confirmed the presence of dopant ions only for particles produced via method A, while NEXAFS spectra confirmed the presence of dopant ions in all doped crystals, with their expected NEXAFS structure confirming the oxidation state of the ion within the nanocrystal. Thus there is evidence of dopant ions incorporated within the crystal; however, more quantitative techniques must be applied to properly ascertain the doping concentration and the quantum efficiency of the upconversion processes occurring within the synthesized particles. Speaker: John Arnold Ambay (University of Technology Sydney) Effects of Mn and Co Ion Implantation on Pseudocapacitive Performance of Ceria-Nanostructures on Ni-Foam 1m Metal oxides have shown incredible potential as electrode materials for pseudocapacitive applications due to their high capacitance, good conductivity, electrochemical reversibility, and long cyclability. Through the engineering and manipulation of defect types and their concentrations, it is possible to enhance the kinetics of charge transfer and the charge-discharge process to optimize redox and intercalation capacitances. Ion implantation is an advanced technique to uniformly introduce a desired concentration of dopants into nanostructures.
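A rough feel for what an implantation fluence means in terms of dopant concentration can be had by spreading the implanted ions over an assumed implantation depth. The sketch below does this for the 1 × 10^15 ions/cm^2 fluence quoted later in this abstract; the 100 nm depth and the use of the bulk CeO2 fluorite lattice parameter are assumptions made purely for illustration.

```python
# Minimal sketch: converting an ion-implantation fluence into an approximate
# dopant concentration. The 100 nm implantation depth and the bulk CeO2 lattice
# parameter are assumptions for illustration only.
fluence = 1e15          # ions/cm^2 (value quoted in this abstract)
depth_cm = 100e-7       # assumed implantation depth of 100 nm

dopant_per_cm3 = fluence / depth_cm

# Ce site density in fluorite CeO2: 4 Ce atoms per cubic unit cell, a ~ 5.41 A.
a_cm = 5.41e-8
ce_per_cm3 = 4.0 / a_cm**3

print(f"dopant concentration ~ {dopant_per_cm3:.1e} ions/cm^3")
print(f"dopant / Ce ratio    ~ {100 * dopant_per_cm3 / ce_per_cm3:.2f} at.%")
```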
The present work explores the pseudocapacitive performance of nanostructured cerium oxide (ceria, CeO2-x) films on nickel foam electrodes (synthesized using electrodeposition), followed by implantation individually with Mn and Co ions. The implanted samples were annealed in a nitrogen atmosphere to promote the diffusion and incorporation of implanted dopants in the ceria lattice and to modify the nanostructural features. The films were characterised using SEM, EDS, Raman spectroscopy, and XPS analyses to determine the roles of mineralogy, composition, surface chemistry, and nanostructure in the performance. The pseudocapacitive performance was determined using cyclic voltammetry (CV), charge-discharge, electrochemical impedance spectroscopy (EIS), and stability tests. After preliminary CV testing, the Co-implanted samples (1 × 10¹⁵ ions/cm²) annealed at 300 °C for 3 h in a nitrogen atmosphere showed an improvement in specific capacitance (495 F/g) compared to the non-implanted ceria samples (427 F/g).

Speaker: Mr Ewing Y. Chen (UNSW)

The Nanoprobe beamline at the Australian Synchrotron: towards day #1, July 2024 1m

A hard x-ray Nanoprobe beamline is under construction at the Australian Synchrotron, aiming to accept first users for operation in July 2024. In this presentation we will outline the science case for the Nanoprobe along with the anticipated performance parameters and show examples of measurements that will be enabled by the facility. In particular, core methods supported by the Nanoprobe include: trace elemental mapping and spectroscopy at the 60-300 nm length scale using x-ray fluorescence; absorption, differential phase contrast and ptychography using x-ray transmission; and SAXS/WAXS and micro-diffraction. A number of substantial challenges must be overcome in order to reach the ultimate resolution, and these will be described along with the optical and operational design of the beamline. Cryogenic capabilities may present too great a challenge for the first-generation implementation but are keenly desired and firmly on the instrument development curve. The Nanoprobe endstation instrument will be located within a purpose-built satellite building around 100 m from the source location. Although still deep in the design phase, we will outline the building design and welcome comment from potential future users of the beamline, particularly with regard to the instrument capabilities and the experimental support required from the ancillary services within the building and the larger Australian Synchrotron facility.

Speaker: Martin de Jonge (ANSTO)

Investigation of Residual Stress and Mechanical Properties of Steelwork After Laser Cleaning 1m

Surface preparation of steelwork for structural repainting is often conducted by sandblasting, in which abrasives (sands) are blasted onto the painted surface at high speed, removing the old paint and rust/dirt by the impact of the blast. This conventional method can cause irreversible damage to the underlying substrate, deteriorating its mechanical and fatigue performance. Laser cleaning has attracted attention as an alternative to conventional cleaning methods as an environmentally friendly and economical technology that removes paint and corrosion efficiently while inducing minimal damage to the surface of the material. This research investigates the mechanical properties and residual stresses of laser-cleaned steel samples from the Sydney Harbour Bridge.
Laser cleaning using a nanosecond laser was performed on structural steel plates removed from the Sydney Harbour Bridge. The plates were then tested at the Australian Nuclear Science and Technology Organisation (ANSTO) for residual stress measurement and at the University of Sydney for microstructure characterisation and microhardness testing. Results of the residual stress measurements indicated that the residual stress profile changes at the surface after cleaning. This study enhances the understanding of changes in residual stress and mechanical properties at the surface of steel subjected to laser cleaning.

Speaker: Yutaka Tsumura (Sydney University)

Medium Energy Spectroscopy (MEX) – The spectroMEX High Resolution Crystal Spectrometer 1m

The MEX1 beamline high-resolution crystal spectrometer, spectroMEX, is a Johann-type, point-to-point focusing crystal spectrometer employing five spherically bent crystals on a 0.5 m diameter Rowland circle. The primary application of spectroMEX is high energy resolution fluorescence-detected (HERFD) XANES, wherein fluorescence XANES is collected with an energy resolution of the order of the core-hole lifetime broadening. HERFD XANES spectra contain additional spectral information when compared to conventional fluorescence or transmission XANES. spectroMEX also facilitates the collection of high-quality x-ray emission spectroscopy data, including the weak but chemically sensitive valence-to-core emission lines (vtc-XES). This talk will describe the spectroMEX spectrometer design and progress to date, and present examples of the new spectroscopic techniques available to synchrotron users employing spectroMEX at the MEX1 beamline.

Speaker: Jeremy Wykes (Australian Synchrotron)

Stability and Applications of Model Membranes 1m

Biological cell membranes are a critical component of all living organisms. The cell membrane is a semi-permeable lipid bilayer controlling the movement of ions and other molecules from one side of the cell to the other, and is primarily made up of amphiphilic lipid molecules. Our research group has previously developed a model system whereby a lipid bilayer is tethered to a solid supporting structure. The resulting tethered-bilayer lipid membranes (tBLMs) are highly stable in aqueous solution, and the tethering region provides a reservoir under the bilayer to allow protein incorporation and minimise bilayer/substrate interactions. In the presence of an aqueous solution, tBLMs have been shown to be stable for periods as long as multiple weeks with only minor degradation. This project is focused on understanding the effects that drying out a model membrane can have on its structure. This work is important for better understanding the water retention properties of tBLMs, in order to determine their suitability for use in biosensing, where they may not be able to be completely submerged in solution, and whether additional protective coatings may be necessary to improve retention. Similar work has already been performed on other model systems such as black lipid membranes, but only tentatively in the field of tBLMs. Electrochemical impedance spectroscopy (EIS) has been used to model changes in membrane structure through the rehydration process as well as the resulting functionality, with approved neutron reflectometry measurements to be performed in the future to determine more layer-specific effects.
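As a minimal illustration of the kind of equivalent-circuit picture often used when interpreting membrane EIS data (a generic sketch, not the authors' actual model; all component values are invented for the example), a solution resistance in series with a parallel membrane resistance and capacitance gives the following impedance spectrum:

import numpy as np

# Generic series R_s + (R_m || C_m) membrane model; values are illustrative only.
R_s = 100.0       # solution resistance (ohm)
R_m = 1.0e6       # membrane resistance (ohm)
C_m = 1.0e-6      # membrane capacitance (farad)

freqs = np.logspace(-1, 5, 7)                       # 0.1 Hz to 100 kHz
omega = 2 * np.pi * freqs
Z_mem = R_m / (1 + 1j * omega * R_m * C_m)          # parallel R||C element
Z_tot = R_s + Z_mem

# A drop in R_m or a rise in C_m after a drying/rehydration cycle would shift
# the low-frequency plateau and the roll-off of |Z|.
for f, Z in zip(freqs, Z_tot):
    print(f"{f:10.1f} Hz   |Z| = {abs(Z):12.1f} ohm   phase = {np.angle(Z, deg=True):7.1f} deg")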
Speaker: Alex Ashenden (Flinders University)

A high-temperature furnace for MEX 1m

The Medium Energy X-ray Absorption Spectroscopy (MEX) Beamline at the Australian Synchrotron is currently being commissioned and is due to start running user experiments in the second half of 2022. The facility will provide a series of specialised sample environments for users to conduct in situ measurements of important scientific processes. One of these sample environments will be a high-temperature furnace, which will provide users with world-class experimental conditions and bring MEX in line with the capabilities of other synchrotron facilities. Based on the requirements specified by users in a 2020 survey of the Australian Synchrotron user community, the furnace will be designed to heat samples to approximately 500-1500 °C, and will be compatible with a range of gases, including He, N2, CO2, O2, CO, and Ar. The high-temperature, controlled-atmosphere experimental conditions that such a furnace will provide are useful in Earth science for examining processes occurring in silicate melts, emulating conditions in the Earth's crust. Some of the processes occurring at crustal conditions can only be observed in situ, rather than in the quenched products of experiments. The furnace will also be useful in materials science and chemistry for examining the behaviour of metals at high temperatures in a controlled atmosphere.

Speaker: Emily Finch (Australian Synchrotron)

Do reduced aggregation and crystallinity really help to improve the photovoltaic performance of terpolymer acceptors in all-polymer solar cells? 1m

Terpolymerization is a widely used method to control the crystallinity of semiconducting polymers and has been exploited to improve the photovoltaic performance of all-polymer solar cells (all-PSCs). Applying this strategy to the well-studied n-type polymer acceptor PNDI2OD-T2, different amounts of 3-n-octylthiophene (OT) are used to partially replace the bithiophene (T2) unit, resulting in three newly synthesized terpolymer acceptors PNDI-OTx, where x = 5%, 10%, or 15%. Another copolymer, namely PNDI2OD-C8T2, consisting of naphthalene diimide (NDI) copolymerised with 3-n-octylbithiophene (C8T2), is also synthesized for comparison. The experimental X-ray characterizations suggest that the molecular orientation of the π-conjugated backbone in PNDI-OTx is slightly impacted and the thin-film crystallinity is systematically tuned by varying x, as evidenced by near-edge X-ray absorption fine structure (NEXAFS) and grazing incidence wide angle X-ray scattering (GIWAXS) measurements, respectively. However, the photovoltaic performance of all-PSCs based on J71:PNDI-OTx and J71:PNDI2OD-C8T2 blends is much lower than that of the reference J71:PNDI2OD-T2 system. Extensive morphological studies suggest that the reduced crystallinity is likely to have little influence on the vertical phase separation and crystallinity of the resulting blends, as revealed by peak fits from NEXAFS and GIWAXS experiments. However, the reduced crystallinity is detrimental to the morphology of the blend films, with coarser phase separation found in J71:PNDI-OTx and J71:PNDI2OD-C8T2 blends compared to J71:PNDI2OD-T2 blends, as confirmed by resonant soft X-ray scattering. The results here challenge the common view that reduced crystallinity is the key parameter in controlling the morphology for enabling high-performing all-PSCs.
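As a generic reminder of how GIWAXS peak positions and widths translate into real-space packing information (standard relations, not results from the study above; the peak values below are invented for the example):

import numpy as np

# Generic GIWAXS peak-to-packing conversion; peak position and width are
# illustrative values, not data from the abstract above.
q_peak = 1.6      # peak position (1/angstrom), e.g. a pi-stacking peak
fwhm_q = 0.08     # peak full width at half maximum (1/angstrom)
K = 0.9           # Scherrer shape factor (dimensionless, approximate)

d_spacing = 2 * np.pi / q_peak              # real-space repeat distance (angstrom)
coherence_length = 2 * np.pi * K / fwhm_q   # Scherrer-type coherence length (angstrom)

print(f"d-spacing        ~ {d_spacing:.2f} angstrom")
print(f"coherence length ~ {coherence_length:.0f} angstrom")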
Speaker: Doan Vu (Monash University)

Current and future capabilities of the IRM beamline at the Australian Synchrotron, and guidance on applying for use of the facility. 1m

Infrared (IR) spectroscopy provides information on the chemical composition of materials, based on the absorption of infrared light by the vibrating bonds within molecular groups. IR microspectroscopy, using synchrotron light as the infrared source, enables this analysis to be performed on samples as small as 1-2 $\mu$m in size, with a sensitivity not possible in the laboratory. ANSTO's synchrotron infrared microspectroscopy (IRM) beamline is equipped with a suite of accessories to enable the study of a diverse range of materials. This includes a sample heating and cooling stage, micro-compression cells for improved IR light transmission of dense materials, a liquid flow cell for the study of living organisms in a natural environment, and grazing-angle optics for the analysis of thin-film coatings on surfaces. The IRM beamline also has several attenuated total internal reflection (ATR) accessories that have been used for the study of challenging materials such as biofilms, carbon fibre, leaf surfaces and battery materials, where a thin section of the sample cannot be prepared. More recent developments on the IRM beamline include the use of polarisation optics to determine molecular orientation in materials and operation with a far-IR detector to extend the spectral range to 260 cm⁻¹. Future plans for the IRM beamline include the motorisation of additional functions to assist with mail-in experiments and, in the longer term, the addition of nano-IR capability to the experimental endstation. Scientists interested in accessing the IRM beamline are encouraged to contact the IRM beamline team to discuss their research proposals.

Speaker: Dr Mark Tobin (ANSTO)

KOALA 2: making a good instrument better! 1m

At the time that the KOALA Laue single-crystal neutron diffractometer came into service at ANSTO, a review of VIVALDI, the progenitor instrument at the ILL, led to its deletion from their User program. Against this background, we were seeing a dearth of single-crystal neutron studies published in the literature. To our joy, in use we found the instrument to be readily applicable to the problems which our future users had identified in the planning workshops for the first suite of neutron beam instruments at ANSTO. A User base has been built which has resulted in a steady flow of rapidly cited publications across a wide range of journals, focussed on reaching the optimum scientific audience. KOALA is a copy of the ILL instrument VIVALDI purchased from the same vendor, and sits outside the standardisation of construction which has underpinned the reliability of the ANSTO neutron beam instrument suite. After ten years of use, spare parts became a significant issue, and a review of the control systems revealed that the cost of refitting the existing instrument approached the cost of building a replacement instrument. The decision to build KOALA 2 has provided opportunities to optimise the initial design with significant operational enhancements. COVID meant that the instrument implementation was initially on track for mid-2022; component issues now mean this will be late 2022, and we will continue to operate KOALA 1 until KOALA 2 is ready to install. As time permits, we will outline the range of science available and the enhancements KOALA 2.0 will bring.
Speakers: Alison Edwards (ACNS, ANSTO), Ross Piltz (ACNS, ANSTO)

High crystallinity nitrogen doping of ALaTiO4 and A2La2Ti3O10 (A = Na+, K+) photocatalysts 1m

Global warming is a current hot topic due to its potential for irreversible environmental damage. The Paris Agreement set the ambition of limiting the temperature rise to below 1.5 °C above pre-industrial levels. Therefore, alternative fuel sources are needed to replace fossil fuels, with hydrogen gas one popular choice due to its high energy density per unit weight and the technologies utilising hydrogen that have already been developed. Hydrogen can be generated renewably by sunlight-driven, photocatalytic water-splitting. Metal oxides, including those with Ruddlesden-Popper-type structures, are being studied as potential photocatalysts. KLaTiO4 is an n=1 Ruddlesden-Popper-type layered perovskite. KLaTiO4 can be used as a Hydrogen Evolution Catalyst (HEC), producing 9.540 μmol of H2 gas per hour from 20 mg of catalyst when using methanol as a sacrificial electron donor and a platinum co-catalyst. The main issue with KLaTiO4 is its high bandgap (4.09 eV), which means it is incapable of absorbing visible light. The two main factors important for the synthesis of ALaTiO4 and A2La2Ti3O10 (A = Na+, K+) are discussed: the volatility of alkali metal ions at elevated temperatures, and the sintering temperature. Multiple samples of NaLaTiO4 or Na2La2Ti3O10 were made using traditional solid-state synthesis methods at temperatures between 750 °C and 950 °C. The bandgap was tuned by doping nitrogen into the structure of ALaTiO4 during the synthesis process, as opposed to replacing oxygen atoms with nitrogen by post-treatment of ALaTiO4. This was achieved by replacing a portion of the TiO2 reagent with TiN, and the sample was synthesised as normal. The resultant ALaTiO4-xNx samples retained good crystallinity and had a reduced bandgap, but at the cost of a reduced hydrogen evolution rate.

Speaker: Mr Junwei Li (The University of Sydney)

Completing the library of amino-acid neutron structures 1m

Accurate neutron structures of the 20 naturally occurring amino acids that are the building blocks of proteins are key to investigations of polymorphism, condensed-phase NMR analysis and periodic density-functional-theory calculations, as restraints in X-ray protein refinements, and as initial structures in the computer modelling of proteins. The first 16 members of the family were determined in the 1970s by groups at Brookhaven National Laboratory and the Indian Atomic Energy Laboratory, but the last four proved to be elusive due to the lack of single crystals large enough for the monochromatic neutron diffractometers of the time. State-of-the-art reactor-based neutron Laue diffractometers, such as Koala on OPAL, allow high-precision structural investigations of single crystals with volumes around 0.1 mm³. This opens the door to completing the library of high-precision amino-acid neutron structures. Here we describe variable-temperature studies of three naturally occurring amino acids using Koala: L-leucine [1], which is one of the four missing members, and the two polymorphs of L-histidine. The data on the orthorhombic form of L-histidine greatly improve on the precision of a previous monochromatic neutron study. The second, monoclinic, form has been studied with neutrons for the first time [2].
Both studies were complemented by interaction-energy calculations using the Pixel method and, for L-histidine, Hirshfeld Atom Refinement against X-ray data at the same temperatures. The resulting neutron structures yield geometric parameters with sufficient precision and accuracy for inclusion in restraint libraries of macromolecular structure refinements. The search continues for neutron-quality crystals of L-isoleucine, L-methionine and L-tryptophan.

[1] J. Binns et al., Acta Cryst. B72 (2016) 885. [2] G. Novelli et al., Acta Cryst. B, in press.

Speaker: Prof. Garry McIntyre (Australian Nuclear Science and Technology Organisation)

Friday, 26 November

Accelerating Australia: Perspectives on future particle accelerators and their applications 30m

There are over 50,000 particle accelerators in the world, used for everything from treating cancer to finding out the secrets of the Universe. Australia has a long history in this area and excels in accelerator-based science: nowhere is this clearer than in the science carried out at our world-class infrastructure. That said, we have barely scratched the surface of what might be possible with beams of ions or electrons. Potential uses of particle beams are growing every day – from mining to archaeology to high-tech factories – enabled by breakthroughs in accelerator science and technology. In light of this, a new research group in accelerator physics was created at the University of Melbourne in 2019, which collaborates with ANSTO in a number of areas: from compact X-band electron accelerators to next-generation particle therapy. In this talk I will give an overview of some of the vast array of applications of accelerators and introduce the cutting-edge accelerator development research now happening in Australia. With strong collaboration between end-users and accelerator experts, together we can create a step-change in Australia's capacity to deliver real-world impact using particle beams.

Speaker: Suzie Sheehy (University of Melbourne)

Synchrotron CT dosimetry at the IMBL for low wiggler magnetic field strength and spatial modulation with bow tie filters 15m

Synchrotron CT dose reduction was investigated for the IMBL wiggler source operated at lower magnetic field strength and for beam modulation with spatial filters placed upstream from the sample. Beam quality at 25-30 keV for 1.4-3.0 T was assessed using transmission measurements with copper to quantify the influence of third-harmonic radiation. The low-energy operational limit is 24-28 keV for 0.1-1% transmission by added filters, a 2 mm path length through silicon and 25 m of air. The upper limit is near 80 keV for a wiggler field of 1.4 T, approximately 100 keV for 2.0 T, and extends beyond 100 keV for 3.0-4.2 T. The harmonic radiation contribution is reduced for lower field strengths. Measured dose rates suggest the influence of harmonics is insignificant above approximately 26 keV at 1.4 T and above 33 keV at 2.0 T. Relative to 3 T operation, the mean dose rate in air is reduced to approximately 12% at 2 T and 4% at 1.4 T. Spatial filters were constructed from blocks of perspex with circular voids of diameter matching the CT dosimetry test objects. A calibrated ion chamber integrated absorbed dose to the phantom during 360° rotation. CT dose indices (CTDI) were measured at 25-100 keV for 3.0 T only, at the centre and periphery, for 35-160 mm diameter perspex phantoms.
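For context, centre and periphery CTDI measurements are commonly combined into a weighted CTDI as one third centre plus two thirds periphery (the standard definition, not necessarily the exact analysis used in this work); a minimal sketch with illustrative numbers:

def weighted_ctdi(ctdi_centre, ctdi_periphery):
    """Standard weighted CTDI: one third centre plus two thirds periphery."""
    return ctdi_centre / 3.0 + 2.0 * ctdi_periphery / 3.0

# Illustrative values only (mGy), not measurements from this work.
print(weighted_ctdi(ctdi_centre=10.0, ctdi_periphery=12.0))  # -> 11.33... mGy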
Beam shaping filters offer protection to the sample by reducing the peripheral and volumetric CTDI by about 10% for small objects and 20-30% for the larger samples.

Speaker: Dr Stewart Midgley (Canberra Hospital)

Magnetically-guided particle delivery to airway surfaces for cystic fibrosis gene therapy: Synchrotron-based visualisation and optimisation for improved in vivo lentiviral gene transfer 15m

Gene vectors to treat cystic fibrosis lung disease should be targeted to the conducting airways, as peripheral lung transduction does not offer therapeutic benefit. Viral transduction efficiency is directly related to the vector residence time. However, delivered fluids such as gene vectors naturally spread to the alveoli during inspiration. Extending gene vector residence time within the conducting airways is important, but hard to achieve. Gene-vector-conjugated magnetic particles that can be guided to the conducting airway surfaces could improve targeting. Due to the challenges of in vivo visualisation, the behaviour of small magnetic particles on the airway surface in the presence of an applied magnetic field is poorly understood. The aim of this study was to use synchrotron imaging to visualise the in vivo motion of a range of magnetic particles in the live rat trachea to examine the dynamics and patterns of individual and bulk particle behaviour in vivo. Synchrotron X-ray imaging revealed the behaviour of magnetic particles in stationary and moving magnetic fields, both in vitro and in vivo. Particles could not be dragged along the live airway surface with the magnet, but during delivery, deposition was focussed within the field of view where the magnetic field was the strongest. These results show that magnetic particles and magnetic fields may be a valuable approach for improving gene vector targeting to the conducting airways in vivo.

Speaker: Martin Donnelley (University of Adelaide)

Biochemical Interaction of Few-Layer Black Phosphorus with Microbial Cells Using Synchrotron Macro-ATR-FTIR 15m

In the fight against drug-resistant pathogenic bacterial and fungal cells, low-dimensional materials have been shown to be a promising alternative treatment. Specifically, few-layer black phosphorus (BP) has demonstrated its effectiveness against a wide range of pathogenic bacteria and fungal cells. In this work, the complex biochemical interaction of BP with a series of microbial cells is investigated to provide a greater understanding of the antimicrobial mechanism. Synchrotron macro-attenuated total reflection–Fourier transform infrared (ATR-FTIR) spectroscopy is used to elucidate the chemical changes occurring outside and within the cells of interest after exposure to BP nanoflakes. The ATR-FTIR data, coupled with advanced, high-resolution microscopy, reveal noticeable differences in the polysaccharide and nucleic acid spectral maps, along with changes in amide protein structure, when compared to untreated cells. This study provides greater insight into the biochemical interaction of BP nanoflakes with microbial cells, allowing for a better understanding of the antimicrobial mechanism of action.

Speaker: Zoe Shaw (School of Engineering, RMIT University)

Using X-ray crystallography to understand bushfire-induced seed germination 15m

Passing the site of a bushfire a couple of weeks after it has burnt itself out, you may notice a mass seed germination event taking place, allowing the bush to completely come back to life.
This fascinating phenomenon occurs due to compounds in bushfire smoke called karrikins, which act as triggers for seed germination. Although we know this process occurs, we don't understand how karrikins interact with seeds or seedlings, and what the little molecular machines – known as proteins – inside individual cells do to allow a seed to germinate. X-ray crystallography is a technique where the atomic structure of a crystal can be determined via its diffraction pattern when placed in the beam of an X-ray source. By crystallising the proteins involved in karrikin signalling and shooting them at the MX beamlines at the Australian Synchrotron, we are able to determine their structure and hence their function, allowing us to piece together a complete picture of how karrikins work. Overall, by understanding processes that control a plant's growth and development, we have new avenues to explore in terms of finding sustainable agricultural techniques and effective methods of conservation and restoration.

Speaker: Sabrina Davies (The University of Western Australia)

Regional lung volume measures in small animal models from single projection X-ray images 15m

Regional lung volume is a key parameter in assessing lung function and health. Computed Tomography (CT) is considered the gold standard for measuring lung volume; however, it requires a relatively high radiation dose and typically has lower spatial and temporal resolution than X-ray projection imaging. In this work, we investigate whether regional lung volumes can be determined using 2D X-ray projections. The idea is that as the lung inflates with air, the attenuating tissue is displaced, leading to a localised increase in X-ray intensity. We imaged 13 New Zealand white rabbit kittens using high-resolution X-ray imaging and CT at the IMBL at various airway pressures. From the 2D projections, we converted changes in regional X-ray intensity through the lungs to changes in lung air volume using the Beer-Lambert law, under the assumption that the lungs of the animal were composed of a single material (water). We measured the true air volumes from CT data for comparison. We found that relative changes in regional lung air volume derived from the 2D x-ray projections showed a coefficient of determination ($\mathrm{R}^2$) of 0.97 with CT data. This technique, therefore, provides a high-speed, low-dose method for measuring regional changes in lung volume that we are now using for studying lung aeration at birth in preclinical animal models.

Speaker: Dylan O'Connell (Monash University)

Sub-cellular scale mapping of deuterated compounds by nanoSIMS 15m

High-resolution imaging mass spectrometry by nanoSIMS (nanoscale secondary ion mass spectrometry) is a valuable method to observe deuterium accumulation in any number of sample types. NanoSIMS analysis is a high-resolution isotope and elemental imaging technique for solid sample surfaces that allows for spatial resolution as low as 50 nm and has high sensitivity, which makes it an ideal method for observing deuterium accumulation in sub-cellular features of any number of sample types. The nanoSIMS method allows for simultaneous analysis of up to seven ion species, meaning there is capacity to pair deuterium analysis with other elemental or isotopic inquiry. In this presentation, the fundamentals of nanoSIMS analysis are explained with emphasis on application to deuterium observation.
The Microscopy Australia-supported nanoSIMS facility at The University of Western Australia has recently begun collaborating with users that have sourced deuterated compounds from the ANSTO-based National Deuteration Facility, and these examples will be discussed in detail.

Speaker: Dr Jeremy Bougoure (University of Western Australia)

Molecular binding and exchange between model membranes and biologically relevant lipid assemblies 20m

Model cellular membranes are often used to understand the interactions with biomolecules and nanoparticles[1], but the effects of such interactions go beyond molecular binding and include processes such as biomembrane restructuring and molecular exchange that may lead to changes in the structure and composition of the interacting nanoparticles. Here I will present our most recent work aiming at increasing the understanding of the role of biomembrane structure and composition in the function of lipoproteins. Lipoproteins are nanoemulsion-like particles composed of fats and proteins (apolipoproteins).[2] The complexity of lipoproteins is great, with different amounts and types of fats and proteins. We use lipoproteins from healthy human adults and look systematically at their capacity to exchange lipids as a function of membrane composition. We find that membrane charge, the level of unsaturation in the acyl tails and the presence of cholesterol all regulate lipoprotein function[3]–[5]. We also show significant differences in the exchange capacity of synthetic lipoproteins reconstituted with a single apolipoprotein type[6]. Further, we show that incubation with SARS-CoV-2 Spike proteins affects the exchange capacity of lipoproteins[7], which may be linked to the altered cholesterol metabolism in COVID-19 patients. Finally, apolipoproteins also exchange, and we demonstrate that their binding to lipid-based nanoparticles (LNPs) affects the structure and composition of these particles[8]. The extent to which this component redistribution takes place may be correlated with the LNPs' capacity for protein expression and thus their therapeutic efficiency. All these experiments are possible thanks to neutron scattering combined with deuteration, since this is an ideal approach to study the structure and dynamics of multicomponent systems where different parts of the system can be highlighted individually[8]–[11].

Speaker: Prof. Marité Cárdenas (Malmö University and Nanyang Technological University)

High viscosity injector effects on the phase behaviour of lipidic cubic phase 15m

In serial crystallography of membrane protein crystals, high-viscosity flow injectors deliver micron-sized crystals to the x-ray beam. The protein crystals are often injected embedded in the lipidic cubic phase (LCP) medium, monoolein (MO), in which they were grown. The self-assembled structure of this medium is easily impacted by the operation of the injector, e.g. the pressure and gas flow surrounding the sample injection. However, it is not yet well understood how the continuous injection impacts the phase of the monoolein and how this influences the sample stream stability. In the present work, we report on observations of the structure of MO/water and MO/buffer mixtures during continuous flow injection at atmospheric pressure and in vacuum. These observations include x-ray diffraction data taken at the Australian Synchrotron (AS) and the Linac Coherent Light Source (LCLS), as well as optical polarisation measurements. We observe the coexistence of a cubic phase and a lamellar phase within the sample stream.
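As a generic illustration of how cubic-phase lattice parameters are obtained from scattering peak positions, using q_hkl = 2π√(h² + k² + l²)/a (standard indexing, not data from this work; the peak positions below are invented for the example):

import numpy as np

# Generic cubic-phase indexing: q_hkl = 2*pi*sqrt(h^2 + k^2 + l^2) / a.
# Peak positions below are illustrative only, not data from this work.
q_peaks = np.array([0.110, 0.135, 0.156])   # measured peak positions (1/angstrom)
hkl_sq = np.array([2, 3, 4])                # (h^2+k^2+l^2) for the first Pn3m (diamond cubic) reflections

# Estimate of the lattice parameter a from each indexed peak, then averaged.
a_estimates = 2 * np.pi * np.sqrt(hkl_sq) / q_peaks
print("lattice parameter a ~ %.1f angstrom" % a_estimates.mean())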
The lattice parameters are stable over typical changes in reservoir pressure that occur during injector operation, while the degree to which the lamellar phase is formed is found to depend strongly on the co-flowing gas used to stabilise the lipid stream. We further observe sharp transitions between diamond cubic and gyroid cubic phases that do not correlate with changes in pressure applied to the reservoir. In vacuum, we observe the coexistence of the gyroid cubic, the diamond cubic and the lamellar phase simultaneously. The existence of the LCP and lamellar phases at the experimental temperature (26 °C) and the pressure ranges within the reservoir is unexpected, and we investigate this observation using optical imaging.

Speaker: Daniel Wells (La Trobe University)

Self-assembly of surfactants in protic ionic liquids 15m

Protic ionic liquids (PILs) are the largest known solvent class capable of promoting surfactant self-assembly. However, PILs are increasingly used as mixtures with molecular solvents, such as water, to reduce their cost, viscosity and melting point, and the self-assembly-promoting properties of these mixtures are largely unknown. Here we investigated the critical micelle concentration (CMC) of ionic and non-ionic amphiphiles in two ionic liquids, ethylammonium nitrate (EAN) and ethanolammonium nitrate (EtAN), to gain insight into the role of solvent species and the effect of solvent ionicity on the self-assembly process. The amphiphiles used were the cationic cetyltrimethylammonium bromide (CTAB), the anionic sodium octanoate sulfate (SOS), and the non-ionic surfactant tetraethylene glycol monododecyl ether (C12E4). Surface tensiometry was used to obtain the CMCs and free energy parameters of micelle formation, and small-angle x-ray scattering (SAXS) was used to characterise the micelle shape and size. For CTAB, the observed trend in the CMC indicated that at low concentrations of the PIL, the ionic liquids acted as free ions, decreasing the CMC due to charge screening effects. This effect was not observed in C12E4 due to its neutral overall charge. Micelle formation of the anionic amphiphile was found to be more complex in ionic liquids than initially hypothesised. It was discovered that EtAN, the less cationic ionic liquid, was able to facilitate self-assembly of SOS, whereas in EAN mixtures micelles could not be confirmed. The findings from this study give insight into how solvent interactions are modified in going from solvents rich in water to those rich in a PIL.

Speaker: Sachini Kadaoluwa Pathirannahalage (RMIT University)

Deuterated Phospholipids to Study the Structure, Function and Dynamics of Membrane Proteins Using Neutron Scattering 15m

Contrast matching and contrast variation in neutron scattering provide unparalleled power for understanding the structure, function, and dynamics of a selected component in a multicomponent system. A sophisticated contrast study often requires the availability of deuterated molecules in which deuterium atoms are introduced in a predictable and controlled fashion to replace protons. This can be achieved by direct deuteration of precursors followed by custom chemical synthesis, for which expertise and capabilities have been developed at the National Deuteration Facility (NDF), ANSTO. In this paper we will discuss recent high-impact research output using deuterated phospholipids produced by the NDF at ANSTO.
We will describe the synthesis and applications of selectively deuterated or perdeuterated unsaturated phospholipids to contrast-match out the whole lipid bilayer or nanodiscs within a multicomponent system. Further, we also describe their role in investigations of membrane lipoprotein (ApoE) exchange in relation to lipid unsaturation,[1] the effect of membrane composition,[2] and the conformational analysis of a Mg2+ channel by neutron scattering techniques.[2, 3]

1. Waldie, S., et al., Lipoprotein ability to exchange and remove lipids from model membranes as a function of fatty acid saturation and presence of cholesterol. Biochimica et Biophysica Acta (BBA) - Molecular and Cell Biology of Lipids, 2020. 1865(10): p. 158769. 2. Waldie, S., et al., ApoE and ApoE Nascent-Like HDL Particles at Model Cellular Membranes: Effect of Protein Isoform and Membrane Composition. Frontiers in Chemistry, 2021. 9(249). 3. Johansen, N.T., et al., Mg2+-dependent conformational equilibria in CorA: an integrated view on transport regulation. bioRxiv, 2021: p. 2021.08.20.457080.

Speaker: Rao Yepuri (Australian Nuclear Science and Technology Organisation)

Investigating the interactions of monoolein liquid crystals with human microbiomes 15m

Lipid-based liquid crystals are biocompatible nanomaterials offering selective and 'smart' drug-release properties, and are an emerging technology in the research and development pipeline. Over the last decade, research on these nanomaterials has focused on their behaviour in response to physicochemical phenomena and after loading with pharmaceutical cargo. Over the next decade, research aims to address our lack of understanding about how these prospective drug carriers are influenced by physiological environments. This study explored members of the human microbiome as one such physiological influence. Bacterial species which inhabit popular sites of drug administration were mixed with monoolein cubosomes and bulk cubic phase gels. The effects on liquid crystal structure and drug release profile were examined using benchtop and synchrotron SAXS, cross-polarized light microscopy, and fluorescence measurements. Particle mixing with bacterial cell membrane components induced a transformation to a hexagonal structure, consistent with the transfer of bacterial phospholipids to the matrix. Similarly, exposure to the representative skin bacterium S. aureus induced the transformation to a hexagonal structure after 8 hours. S. aureus exposure also reduced the rate of hydrophilic dye release from bulk monoolein cubic phase over a similar timeframe. This transformation was consistent with an increase in oleic acid content arising from lipolysis of monoolein by lipase. This research demonstrates the influence that bacteria can have on the structure and drug-release properties of monoolein liquid-crystalline drug-delivery systems. It is hoped that these findings will inform future research throughout the development of these prospective drug-carrier nanomaterials for healthcare applications and commercially viable products.

Speaker: Mr Jonathan Caukwell (The University of Newcastle)

How Do Ion Specific Effects Operate in Ionic Liquids?
15m

Recent work has found that the identity of a surfactant's counter-ion can affect the critical micelle concentration, and the size and shape of the resultant micelles, in ionic liquid (IL) and choline-based deep eutectic solvents.[1,2] This indicates the presence of ion-specific effects for micellisation in these neoteric solvents despite their high ionic strength.[3] This project examines this phenomenon further by investigating a range of choline salts (chloride, bromide, and nitrate) in different nitrate-based ILs (ethylammonium, propylammonium, and ethanolammonium nitrate) via measurements taken on the Small Angle Neutron Diffractometer for Amorphous and Liquid Samples (SANDALS) beamline at ISIS. These results bring new insight into how ion-specific effects can exist in high-ionic-strength neoteric solvents, and the parameters involved in controlling this surprising phenomenon.

(1) Dolan, A.; Atkin, R.; Warr, G. G. The Origin of Surfactant Amphiphilicity and Self-Assembly in Protic Ionic Liquids. Chemical Science 2015, 6 (11), 6189–6198. https://doi.org/10.1039/C5SC01202C. (2) Sanchez-Fernandez, A.; Hammond, O. S.; Edler, K. J.; Arnold, T.; Doutch, J.; Dalgliesh, R. M.; Li, P.; Ma, K.; Jackson, A. J. Counterion Binding Alters Surfactant Self-Assembly in Deep Eutectic Solvents. Physical Chemistry Chemical Physics 2018, 20 (20), 13952–13961. https://doi.org/10.1039/C8CP01008K. (3) Warr, G. G.; Atkin, R. Solvophobicity and Amphiphilic Self-Assembly in Neoteric and Nanostructured Solvents. Current Opinion in Colloid & Interface Science 2020, 45, 83–96. https://doi.org/10.1016/j.cocis.2019.12.009.

Speaker: Dr Joshua Marlow (University of Sydney)

The new external ion beam capability for testing of electronics suitable for harsh space radiation environments 20m

In 2019, the Australian Space Agency made its debut on the international space exploration scene. Securing the future of Australia's space sector is the core of Advancing Space: Australian Civil Space Strategy 2019-2028. This Government plan reminds us that space-based technology and services not only serve space missions, but benefit all Australians daily, for example in weather forecasting, GPS, internet access, online banking, emergency response tracking of bushfires, monitoring of farming crops, etc. To further increase capability, the Space Infrastructure Fund (SIF) investment was issued to target 7 space infrastructure projects that involve several industries, organisations, universities and laboratories all around the country. Mission control and tracking facilities, robotics & automation, AI command and control, space data analysis facilities, space manufacturing capabilities, and space payload qualification facilities are the topics under study. ANSTO, together with 5 other fund recipients, engaged its resources in the last-mentioned project (space payload qualification facilities), with the aim of establishing the National Space Qualification Network (NSQN). In particular, the three ANSTO facilities, the Centre for Accelerator Science (CAS), the Australian Synchrotron and the Gamma Technology Research Irradiator (GATRI), will focus on enhancing and improving their capabilities for space radiation damage testing of electronics used in space and on ensuring they meet international standards in this area.
Space technology can be affected by cosmic radiation when a Single Event Upset (SEU) occurs, temporarily or permanently knocking out instrumentation that is paramount for the successful accomplishment of a mission, a test, or simply the usual functionality of a service. We need a deep understanding of the cause and frequency of these events in order to reduce the risk of component failure and consequently optimise the electronics. Tests must be performed in ground-based facilities before commercialisation of any device. ANSTO facilities use accelerators to perform radiation tests with different beams (gamma-rays, x-rays, protons and heavy ions) to eventually provide international standards of Total Ionising Dose (TID) radiation testing for products, so that they can enter global supply chains faster. Because of the limitations encountered while performing tests in vacuum, at the CAS facility the High Energy Heavy Ion Microprobe (HIM) of the 10 MV ANTARES accelerator has recently been upgraded with an external chamber for testing standard electronic chips in an ambient, in-air environment. Advantages of an ex-vacuum microprobe include: ease of handling the sample, with no limits on the dimensions of the sample itself; no charging effects; more effective target heat dissipation; no sampling required; time saved pumping the chamber up and down; and the possibility of irradiating living systems without compromising them.

Stefania Peracchi1,*, David Cohen1, Zeljko Pastuovic1, Nikolas Paneras1, David Button1, Chris Hall2, Justin Davies3, Michael Mann1, David Cookson4, Michael Hotchkis1, Ceri Brenner1. 1 Centre for Accelerator Science, Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW 2234, Australia; 2 IMBL, Australian Synchrotron, Clayton, VIC 3168, Australia; 3 Gamma Technology Research Irradiator, Lucas Heights, NSW 2234, Australia; 4 NSTLI Industry and Stakeholder Engagement, Lucas Heights, NSW 2234, Australia

Speaker: Dr Stefania Peracchi (ANSTO)

Tomographic X-ray phase and attenuation extraction for a sample composed of unknown materials 15m

Propagation-based phase-contrast X-ray imaging (PB-PCXI) is a technique suitable for imaging weakly attenuating objects, e.g. biological samples, as it utilizes both attenuation and refraction effects. Such effects are material dependent and are described by the X-ray complex refractive index n = 1 - δ + iβ, where β and δ describe attenuation and refraction, respectively. Phase retrieval algorithms are typically applied to PB-PCXI images to recover lost phase information. A single-material reconstruction, based on the transport-of-intensity equation, has been published by Paganin et al. [1] and has proven useful in diverse fields. This approach has been extended to consider multi-material objects [2] and partially coherent X-ray sources [3]. The described phase-retrieval algorithms can successfully recover the projected-phase information of an object; however, they require a priori knowledge of the sample materials. We present an algorithm capable of extracting the β and δ functions for a sample that is composed of unknown materials. The essence of the approach is based on curve-fitting an error function to each interface between distinct materials in a computed tomographic reconstruction [4], where the fit parameters are then used to calculate δ and β for composite materials. This approach requires no a priori sample information, making it broadly applicable, particularly in cases where the exact sample composition is unknown.
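As a rough sketch of the kind of interface fit described above (not the authors' implementation; the profile and parameter names are invented for the example), an error-function model can be fitted to a line profile across a reconstructed interface with standard curve-fitting tools:

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Error-function model of a blurred step between two materials in a CT slice:
# 'lo' and 'hi' are the reconstructed values either side of the interface,
# 'x0' is the interface position and 'sigma' the blurring width.
def interface_model(x, lo, hi, x0, sigma):
    return lo + 0.5 * (hi - lo) * (1 + erf((x - x0) / (np.sqrt(2) * sigma)))

# Synthetic line profile across an interface (illustrative only).
x = np.linspace(-20, 20, 81)                      # pixel positions
profile = interface_model(x, 0.2, 1.0, 1.5, 3.0)  # "measured" values
profile += np.random.normal(0, 0.02, x.size)      # add noise

popt, _ = curve_fit(interface_model, x, profile, p0=[0, 1, 0, 1])
lo, hi, x0, sigma = popt
print(f"fitted step: {lo:.2f} -> {hi:.2f} at x0 = {x0:.2f}, width sigma = {sigma:.2f}")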
We have applied this method to a breast-tissue sample, where the δ for composite materials was calculated to 0.6%-2.5% accuracy compared to theoretical values.

[1] D. M. Paganin et al., J. Microsc. 206, 33 (2002). [2] M. A. Beltran et al., Opt. Express 18, 6423 (2010). [3] M. A. Beltran et al., J. Opt. 20, 055605 (2018). [4] D. A. Thompson et al., J. Synchrotron Radiat. 26, 825-838 (2019).

Speaker: Samantha Alloo

High speed free-run ptychography at the Australian Synchrotron 15m

The Australian Synchrotron X-ray Fluorescence Microscopy (XFM) beamline has recently implemented fast-scanning ptychography, a scanning X-ray diffraction microscopy method. Ptychography creates super-resolution images from transmitted microdiffraction patterns acquired as the sample is scanned through the beam. High-speed detectors and high-performance computers are required to iteratively reconstruct these complex images. The experimental methods and reconstruction algorithms have evolved significantly over the last decade and a half into a mature and user-friendly imaging method complementary to XFM. Here we present the implementation of high-speed ptychography at the XFM beamline, which includes a free-run data collection mode where detector dead time is eliminated and the scan time is optimized. We show that free-run data collection is viable for fast and high-quality ptychography by demonstrating extremely high data rate acquisition covering areas up to 352,000 µm² at up to 140 µm²/s, with 18× spatial resolution enhancement compared to the beam size. With these improvements, ptychography at velocities up to 250 µm/s is approaching speeds compatible with fast-scanning X-ray fluorescence microscopy. The combination of these methods provides morphological context for elemental and chemical information, enabling unique scientific outcomes.

Speaker: Cameron Kewish (Australian Synchrotron)

Medium Energy Spectroscopy (MEX) - Sample environments and supporting infrastructure 15m

The Medium Energy Spectroscopy (MEX) beamline aims to facilitate a wide variety of ex situ and in situ experimental work from a variety of research areas. As such, we will provide a number of sample environments as a standard set-up, in addition to ancillary equipment that can be used with custom or BYO sample environments. Sample environments will likely include: a room-temperature cell, an electrochemical flow cell, a micro-fluidic cell, a flammable gas cell, a furnace with gas environments, and a battery testing cell. In addition, supporting infrastructure and ancillary equipment will likely include: flammable and toxic gas handling (flow and pressure control), gas and vapour ventilation, an electrochemical testing station (Autolab or similar), and fluid (gas or vapour) syringe pumps with pressure monitoring. Most, if not all, of the sample environments and supporting infrastructure will be controlled with the beamline systems, enabling integration and triggering for maximum achievable automation of experiments.

Medium Energy Spectroscopy (MEX) - Opportunities for Microspectroscopy 15m

The medium energy range offers unique opportunities for synchrotron-based X-ray absorption spectroscopy across the sciences. In particular, the K-absorption edges of alkali and alkaline earth elements, e.g. K and Ca, s-group elements, e.g. S, P and Se, along with d-block elements, e.g. Mn, Fe and Cu, all fall within this energy range, as do various L- and M-edges of heavier elements, e.g. Pb and U.
The nascent Medium Energy X-ray Spectroscopy (MEX) beamlines will access these edges and offer unique opportunities to study the local structure, speciation, and chemistry of compounds and systems critical to biological, environmental, geological and industrial processes. Typically, characterisation of specific metal-ligand species requires isolation of the complex, necessitating disruption of native systems despite the attendant risk of redistribution and loss of chemical context. Despite the confounding potential of typical preparation methodologies, the tools available to coordination chemistry in situ have remained limited. The continuing synergy between synchrotron-based X-ray fluorescence microscopy (XFM) and X-ray absorption near edge structure (XANES) spectroscopy represents a powerful new analytical approach for studying chemistry in context. Using illustrative examples and highlighting particular techniques, this presentation will introduce one of MEX's major end stations, the scanning X-ray fluorescence microprobe ($\mu$MEX). To be installed on the MEX1 beamline, operating between 2 and 13.6 keV and focusing X-rays into a spot less than 5 microns in diameter, $\mu$MEX will offer unique opportunities for synchrotron-based X-ray microspectroscopy. To date, the scarcity of such optimised facilities leaves many exciting scientific questions to be explored, though such measurements also involve unique experimental challenges.

Speakers: Dr Simon James (ANSTO Australian Synchrotron), Mr Simon Pocock (ANSTO)

Hot Commissioning and First User Experiments on the Spatz Neutron Reflectometer 15m

The Spatz neutron beam instrument is the latest to be installed and commissioned in the Neutron Guide Hall at the 20 MW OPAL Research Reactor. Spatz is a time-of-flight neutron reflectometer used for studying nanoscale structures at surfaces and interfaces, and utilises a vertical sample geometry / horizontal scattering geometry. The instrument is situated at the end position of the CG2B neutron guide and views the cold-neutron source (CNS). The disc chopper cascade that pulses the neutron beam to produce the time-of-flight is very configurable, providing a wavelength resolution between 1 and 12%. The detector is a two-dimensional helium-3 detector capable of measuring both specular and off-specular reflectivity. The sample stage can support a range of different sample environments, including multiple solid-liquid cells, an atmospheric chamber with temperature control, the ATR-FT-IR spectrometer for simultaneous infra-red spectroscopy and neutron reflectometry measurements, electrochemical cells, etc. The geometry of the instrument and the sample environments available mean that Spatz is well suited to studying phenomena at the gas-solid and solid-liquid interfaces. The Spatz instrument has been fully commissioned with neutrons and the results of the commissioning are presented. This includes measurements using the 'Bragg mirror' consisting of 25 bilayers of nickel and titanium, different solid substrates of silicon, quartz and sapphire, spin-coated polymer samples, and films under liquid. Reflectivity down to 10⁻⁷ can be achieved within 1 hour of measuring time with good counting statistics in most cases. Early user experiments cover a range of science, including investigating the thermal stability of organic solar cell materials and proteins interacting with biomimetic phospholipid cell membranes.
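As a generic reminder of how a time-of-flight reflectometer converts neutron arrival time into wavelength and momentum transfer (standard relations; the flight path, time and angle below are illustrative values, not Spatz parameters):

import numpy as np

# Standard time-of-flight conversions for a neutron reflectometer.
# Flight path, arrival time and incident angle below are illustrative only.
h = 6.62607015e-34        # Planck constant (J s)
m_n = 1.67492749804e-27   # neutron mass (kg)

L = 8.0                   # total flight path, chopper to detector (m)
t = 10.0e-3               # neutron arrival time (s)
theta_deg = 0.8           # incident/reflection angle (degrees)

wavelength = h * t / (m_n * L)                 # de Broglie wavelength (m)
wavelength_angstrom = wavelength * 1e10
q = 4 * np.pi * np.sin(np.radians(theta_deg)) / wavelength_angstrom  # momentum transfer (1/angstrom)

print(f"wavelength ~ {wavelength_angstrom:.2f} angstrom, Q ~ {q:.4f} 1/angstrom")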
Speaker: Anton Le Brun (ANSTO)

Update: CAS

Update: AS

15 Years of Brilliance at the Australian Synchrotron 20m

The Australian Synchrotron achieved first light in August 2006, and since then has operated 10 individual beamlines as part of Australia's largest standalone scientific research facility. In a typical year, the facility conducts up to 1000 individual experiments and hosts more than 5000 User visits. Although modest compared to international synchrotron facilities in terms of the size of our storage ring, our staffing levels and the number of beamlines, the Australian Synchrotron's research community is one of the most productive in the world, generating more than 650 peer-reviewed journal articles in 2020. Over the past 15 years the Australian Synchrotron has supported the development of the scientific careers of thousands of students, researchers, and our staff. This presentation celebrates some of the highlights in beamline and technique development, as well as showcasing trends in high-impact research outcomes from the Australian Synchrotron.

Speaker: Michael James (ANSTO)

Studying Polysaccharides in Solution with SAXS and Molecular Dynamics 20m

Polysaccharides are semi-flexible polymers composed of sugar residues with a myriad of important functions in vivo, including structural support, energy storage and immunogenicity. The local conformation of such chains is a crucial factor governing their interactions. Traditionally this conformation has only been directly accessible in the solid state, using crystallographic techniques such as fibre diffraction. However, improvements in the quality of synchrotron-based X-ray scattering data mean that conformation-dependent features can now be measured in solution. In tandem, scattering predictions based on structures initiated from existing fibre X-ray diffraction data, and then re-animated using molecular dynamics, can now be performed. Our group has recently measured the detailed small-angle X-ray scattering from a variety of anionic oligo- and poly-saccharides in solution. This talk will specifically present data obtained from experiments carried out on homogalacturonan, alginate and carrageenan and discuss their comparison with predictions based on our molecular dynamics simulations. The remarkable agreement found provides unequivocal evidence for the validity of our real-space atomistic models of the solution-state structures. This technique is expected to be universally applicable for polysaccharides that consist of comparatively stiff glycosidic linkages, and to have extensive relevance for a number of biological macromolecules, including glycosylated proteins.

Speaker: Prof. Martin (Bill) Williams (Massey University, MacDiarmid Institute, NZ)

SPACE RADIATION AND INDIVIDUAL RADIOSENSITIVITY - ANSTO CAS & HUMAN HEALTH IN-AIR BEAM EXPERIMENTS 15m

Radiation exposure is a major limiting factor for long-duration manned space flights. Radiation protection standards are based on the assumption that individuals are equally resistant to ionizing radiation. However, for over a century there has been evidence that humans do not respond equally to radiation. In particular, studies of secondary effects post-radiotherapy have shown great variability among individuals. More specifically, large discrepancies among astronauts after the same flight were observed.
Recently, from a collection of hundreds of fibroblast cell lines derived from patients suffering from genetic disease or post-radiotherapy radiosensitivity, we have shown that a delay in the nucleoshuttling of the ATM protein may cause a lack of double strand break (DSB) recognition, incomplete DSB repair and radiosensitivity. Interestingly, the model of ATM nucleoshuttling was shown to be relevant not only for low-dose and repeated exposures, but also for high-LET particles, which renders this model compatible with space radiation exposure scenarios. Lastly, this model could lead to a novel approach for radiation protection, consisting of interventions to accelerate ATM nucleoshuttling. Such an approach may help in developing efficient countermeasures that could assist with manned space flights. In 2019-2021, teams from ANSTO CAS and Human Health have been collaborating to adapt the ANTARES beamline for in-air irradiation of living matter and to study the effects of secondary radiation produced by the interaction of cosmic and galactic rays with spacecraft shielding. DNA repair and mitochondrial activity processes will be studied.

Speaker: Dr Melanie Lydia Ferlazzo (ANSTO)

Antimicrobial and Anti-Inflammatory Gallium Implanted 'Trojan Horse' Surfaces for Implantable Devices 15m

A rapidly aging population, a high incidence of osteoporosis and trauma-related fractures, and better health care access explain the rapid surge in the utilisation of orthopaedic implantable devices. Unfortunately, many implants fail without strategies that synergistically prevent infections and enhance the implant's integration with host tissues. Here, we propose a solution that builds on our pioneering work on gallium (Ga)-enhanced biomaterials, which show exceptional antimicrobial activity, and combine it with defensin (De, hBD-1), which has potent anti-microbial activity in vivo as part of the innate immune system. Our aim was to simultaneously impart antimicrobial activity and anti-inflammatory properties to polymer-based implantable devices through the modification of the surfaces with Ga ions and immobilisation of De. Poly-lactic acid (PLA) films were modified by Ga implantation using the Surface Engineering Beamline of the 6 MV SIRIUS tandem accelerator at ANSTO Australia, and subsequently functionalised with De. Ga ion implantation increased the surface roughness and stiffness of treated PLA surfaces and led to a reduction in foreign body giant cell formation and expression of the pro-inflammatory cytokine IL-1β. Ga implantation and defensin immobilisation both independently and synergistically introduced antimicrobial activity to the surfaces, significantly reducing total live biomass. We demonstrated, for the first time, that the antimicrobial effects of De were enhanced by its surface immobilisation. Cumulatively, the Ga-De surfaces were able to kill bacteria and reduce inflammation in comparison to the untreated control. These innovative surfaces have the potential to prevent biofilm formation without inducing cellular toxicity or inflammation, which is essential in enhancing the integration of implantable devices with host tissues and hence ensuring their longevity.
Speaker: Shiva Kamini Divakarla (The University of Sydney, Sydney Nano Institute, Faculty of Medicine and Health, Sydney Pharmacy School, Sydney, NSW 2006, Australia) Structural insights into the unique modes of relaxin-binding and tethered-agonist mediated activation of RXFP1 and RXFP2 15m Our poor understanding of the mechanism by which the peptide hormone H2 relaxin activates its G protein-coupled receptor, RXFP1, and the related receptor RXFP2 has hindered progress in its therapeutic development. Both receptors possess unique ectodomains that comprise an N-terminal LDLa module joined by a linker to a Leucine Rich Repeat (LRR) domain. Truncation of the N-terminal LDLa module abolishes signalling for both receptors, suggesting that the LDLa module is essential for activation; it is postulated as a tethered agonist, induced to undergo a conformational change upon H2 relaxin binding. Here, we use Small Angle X-ray Scattering (SAXS), NMR spectroscopy and cell-based receptor signalling assays to show that it is not the LDLa module, but rather a conserved motif (GDxxGWxxxF), immediately C-terminal to the LDLa, that is the essential tethered agonist. Importantly, this motif associates with the LDLa module of both RXFP1 and RXFP2, but in different manners, suggesting distinct mechanisms of activation. For RXFP1, the motif is flexible, weakly associates with the LDLa module, and requires H2 relaxin binding to stabilize an active-state conformation. Conversely, the motif in RXFP2 does not possess the same flexibility as it does in RXFP1, and appears to be more structured and closely associated with the LDLa module, forming an essential binding interface for H2 relaxin. H2 relaxin binding to RXFP2 requires both the LDLa module and the motif, in contrast to RXFP1, and the tethered-agonist activity of the motif is not driven by an induced conformational change in RXFP2, again in contrast to RXFP1. These results highlight distinct differences in the relaxin-mediated activation mechanisms of RXFP1 and RXFP2, which will aid drug development targeting these receptors. Speaker: Dr Ashish Sethi (University of Melbourne) Synchrotron infrared characterisation of SARS-CoV-2 virions for a new COVID-19 saliva test 15m In response to the COVID-19 pandemic, the Biospectroscopy group within the Monash School of Chemistry has become part of a research working group headed by Prof. Dale Godfrey and Prof. Damian Purcell at the Doherty Institute to develop a new IR diagnostic for the detection of COVID-19. An infrared-based test would be reagent-less, able to test hundreds of thousands of samples on the same instrument, and highly sensitive and inexpensive, producing results in minutes. This is especially pertinent given the worldwide shortage of conventional testing kits and the long delays in getting results, which, in the case of virulent variants such as Delta, are costing lives. The talk will focus on new developments in the arena of point-of-care COVID testing, highlighting rapid diagnostic tests and our new infrared-based saliva screening test. We have modified a portable infrared spectrometer with a purpose-built transflection accessory for rapid point-of-care detection of COVID-19 markers in saliva. Initially, purified virion particles were characterized with Raman spectroscopy, synchrotron infrared (IR) and AFM-IR.
A data set comprising 171 transflection infrared spectra from 29 patients testing positive for SARS-CoV-2 by RT-qPCR and 28 testing negative was modeled using Monte Carlo Double Cross Validation with 50 randomized test and model sets. The testing sensitivity was 93% (27/29), with a specificity of 82% (23/28), for a set that included positive samples at the limit of detection for RT-qPCR. This high-throughput infrared COVID-19 test is rapid, inexpensive and portable, utilizes sample self-collection, thus minimizing the risk to healthcare workers, and is ideally suited to mass or personalised screening in public and private settings. Speaker: Prof. Bayden Wood (Centre for Biospectroscopy, School of Chemistry, Monash University) The recent progress of polarized neutron scattering techniques at SIKA 15m SIKA, the cold-neutron triple-axis spectrometer, is on the CG4 beam port at the OPAL reactor, ACNS, ANSTO. We have reported the capabilities and status of SIKA at the last several user meetings. In this meeting, we discuss the recent development of polarized neutron scattering experiments on SIKA. A 3He polarization analysis system is available for SIKA. We have performed several user experiments and commissioning experiments in the last two years. We would like to present some results by introducing the techniques we are trying to implement. In addition, we discuss our plan for polarized neutron scattering experiments on SIKA. Speaker: Shinichiro Yano (NSRRC) MyD88 TIR domain higher-order assembly interactions revealed by serial femtosecond crystallography 15m Serial Synchrotron Crystallography (SSX) is rapidly emerging as a promising technique for collecting data for time-resolved structural studies or for performing room-temperature micro-crystallography measurements using micro-focused beamlines. When performed using ultra-bright X-ray Free Electron Laser (XFEL) sources, serial crystallography typically involves a process known as 'diffract-and-destroy', where each crystal is measured just once before it is destroyed by the intense XFEL pulse. It is the small and intense beam focus of XFELs that makes it possible to determine structures from nanocrystals where conventional crystallography techniques fail. Only through thorough synchrotron investigation can we achieve successful XFEL beamtime proposals. Here we describe the important role the MX2 beamline at the Australian Synchrotron played in the successful XFEL proposal that resulted in the structures of the Myeloid differentiation primary response gene 88 (MyD88) and MyD88 adaptor-like/TIRAP (MAL) Toll-like receptor (TLR) adaptor proteins, which play an important role in inflammatory disease. The data generated at the Linac Coherent Light Source provided structural and mechanistic insight into TLR signal transduction [1]. Clabbers, M., Holmes, S. et al. MyD88 TIR domain higher-order assembly interactions revealed by microcrystal electron diffraction and serial femtosecond crystallography. Nature Communications, 12 (1), 1-14, 2021. Speakers: Connie Darmanin (La Trobe), Dr Mark Hunter (Linac Coherent Light Source, SLAC National Accelerator Laboratory, Menlo Park, California, USA) The High Performance Macromolecular Crystallography (MX3) Beamline 15m The MX3 beamline will extend the capabilities of the existing suite of MX beamlines at the Australian Synchrotron. It will allow collection on crystals that are too small or too weakly diffracting for the current beamlines.
A high level of automation will transform membrane protein microcrystal collection and high-throughput projects such as drug and fragment screening. Sample positioning will be provided via an MD3-UP goniometer, and an ISARA robot will allow 6-second sample exchange. Serial crystallography capability will be provided using in-tray screening and collection, and fixed-target silicon chip scanning stages. A dedicated cluster will provide real-time data processing, and automated data collection will be standard. This will include automated location of crystals from a rastered volume, subsequent data collection on each crystal, and automated merging of the data from multiple crystals. Some outstanding questions for the user community relate to time-resolved crystallography and injector experiment capabilities; options will be presented and discussed. Speaker: Daniel Eriksson (Australian Synchrotron) SAXS investigation of protic ionic liquid-water mixtures, and their application to protein crystallisation 15m Protic ionic liquids (PILs) are cost-efficient "designer" solvents which can be tailored to have properties suitable for a broad range of applications. PILs are also being combined with molecular solvents to enable more control over the solvent environment, driven by a need to reduce their cost and viscosity. This also leads to greater biocompatibility. In this presentation I will discuss our ongoing work into designing PIL solvents for proteins, with a focus on lysozyme as a model protein [1]. We have recently been using SAXS to explore the effect of PILs on lysozyme from dilute to neat IL concentrations in water. This naturally leads to a discussion of the difficulties in obtaining SAXS data of proteins in viscous media, and of analysing the data where the solvent is also nanostructured. However, despite these challenges, we are beginning to develop design rules which can be used to select ILs for specific applications. One application that we are developing PIL solvents for is protein crystallisation. We have used MX1&2 data to solve lysozyme crystal structures with 7 PILs present. Preliminary results will be presented where we have used SAXS to monitor the initial stages of lysozyme crystallisation in PIL-water solutions, using ethylammonium nitrate as the PIL. [1] Qi, H.; Smith, K. M.; Darmanin, C.; Ryan, T. M.; Drummond, C. J.; Greaves, T. L., Lysozyme conformational changes with ionic liquids: spectroscopic, small angle x-ray scattering and crystallographic study. Journal of Colloid and Interface Science 2021, 585, 433-443. Speaker: Tamar Greaves (RMIT University) Stimuli Responsive Switchable Chemical Sensors 15m The development of real-time, highly sensitive chemical sensors for the detection of very low analyte concentrations is of significant interest and importance for monitoring levels of harmful chemicals in the environment. The unique properties of the rare-earth metals enable sharp and narrow luminescent signals to be obtained. The incorporation of rare-earth ions into sensor systems offers significant advantages for enhancing the sensor response, allowing greater discrimination between chemical analytes. Coordination polymers (CPs) and Metal-Organic Frameworks (MOFs) are crystalline materials containing inorganic nodes bridged by multidentate ligands. The high porosity and tunability of CPs enable the systematic modification of pore chemistry and size. Tailored pore environments can be designed, making these materials well-suited to act as chemical sensors.
Rare-earth coordination polymers remain less explored than transition metal coordination polymers due to their higher coordination numbers and unpredictable coordination environments. Reports of rare-earth coordination polymers containing a redox-active ligand are still relatively scarce in the literature despite the potential they present for enhanced chemical sensing and the development of magnetic and switchable materials. This presentation will discuss the synthesis and properties of an isostructural series of rare-earth coordination polymers containing a redox-active viologen ligand. The viologen moiety is able to undergo a reversible one-electron reduction upon exposure to a light or electrochemical stimulus. The electrochemical, photochromic and sensing abilities of the materials will be discussed, and their potential for application in the development of chemical sensors highlighted. Speaker: Carol Hua (University of Melbourne) Influencing lipid hydrolysis by minute molecular changes 15m Designer lipid colloids are being increasingly studied for the delivery of drugs and nutrients. These nanoparticles can have different internal nanostructures and different lipidic compositions. Cyclopropanated derivatives of commonly used monoacylglycerols show substantial differences in self-assembled structures and in the formation of nanostructured nanoparticles. Most remarkably, small differences in the hydrophobic tail affect the packing of the lipids sufficiently to alter the availability of the lipid headgroups for hydrolysis by interfacial enzymes. We employed small angle X-ray scattering and acid/base titration at the Australian Synchrotron SAXS/WAXS beamline to monitor the nanostructural changes during hydrolysis and the digestion rate. These fundamental characteristics are of interest for the smart design of lipidic nanoparticles for drug or nutrient delivery. Salvati Manni L. et al. (2021) J. Colloid Interface Sci. 588, 767-775 Speaker: Livia Salvati Manni (University of Sydney) Insight into the Variations of ABO4 Structures: Combined Experimental and Computational Studies 15m The development of carbon-neutral energy generation is critical to combating climate change. One such technology is the development of next-generation ion conductors for solid-oxide fuel cells (SOFCs). SOFCs offer a more efficient method of extracting energy from hydrogen or hydrocarbon fuels than current combustion engines due to their one-step chemical process. However, a bottleneck to the large-scale uptake of SOFCs is the poor performance of the conducting electrolytes that separate the anode from the cathode. Various $AB\text{O}_{4}$ structures have recently been proposed as solid electrolyte candidates in SOFCs, with increased high-temperature ionic conductivity being measured in chemically doped LaNbO$_{4}$. However, the various phase transitions of these materials within the operational temperature range of SOFCs make them non-ideal. To understand the effects of chemical doping on the structure and electrochemical properties, several complex $AB\text{O}_{4}$ structures have been investigated. In this work, we present the solid-solution series $Ln$(Nb$_{1-x}$Ta$_{x}$)O$_{4}$ (Ln = La-Lu). Using a combination of synchrotron X-ray and neutron powder diffraction methods, these studies have revealed several anomalies across the series. The structures appear to be sensitive to the size of the Ln cation and their synthesis conditions, with a difference in ionic conduction performance being observed.
These experimental data have been further reinforced by ground-state energy calculations performed using density functional theory, an approach that has not previously been applied to similarly studied structures. These insights can be used in the development and engineering of novel and advanced electrolyte materials for SOFCs. Understanding Order and Correlation in Liquid Crystals by Fluctuation Scattering 15m Characterising the supramolecular organisation of macromolecules in the presence of varying degrees of disorder remains one of the challenges of macromolecular research. Discotic liquid crystals (DLCs) are an ideal model system for understanding the role of disorder on multiple length scales. Consisting of rigid aromatic cores with flexible alkyl fringes, they can be considered as one-dimensional fluids along the stacking direction, and they have attracted attention as molecular wires in organic electronic components and photovoltaic devices. With its roots in single-particle imaging, fluctuation X-ray scattering (FXS) is a method that breaks free of the requirement for periodic order. However, the interpretation of FXS data has been limited by difficulties in analysing intensity correlations in reciprocal space. Recent work has shown that these correlations can be translated into a three- and four-body distribution in real space called the pair-angle distribution function (PADF) – an extension of the familiar pair distribution function into a three-dimensional volume. The analytical power of this technique has already been demonstrated in studies of disordered porous carbons and self-assembled lipid phases. Here we report on the investigation of order-disorder transitions in liquid crystal materials utilising the PADF technique and the development of facilities for FXS measurements at the Australian Synchrotron. Speaker: Jack Binns (RMIT University) Automation of liquid crystal phase analysis for SAXS 15m Lyotropic liquid crystal phases (LCPs) are widely studied for diverse applications, including protein crystallization and drug delivery. The structure and properties of LCPs vary widely depending on composition, temperature and pressure. Therefore, high-throughput structural characterisation, such as small-angle X-ray scattering (SAXS), is important to cover meaningfully large compositional spaces. Currently there are well-established automated methods for high-throughput LCP synthesis, and for high-throughput SAXS data collection with synchrotron sources. However, high-throughput LCP phase analysis for SAXS data is currently lacking, particularly for patterns containing multiple phases. Using SAXS data, we have developed a high-throughput LCP phase identification procedure (a schematic example of such ratio-based phase assignment is sketched below). The accuracy and time-saving capabilities of the identification procedure were validated on a total of 668 diffraction patterns for the amphiphile hexadecyltrimethylammonium bromide (CTAB), in 53 acidic or basic solvents containing ethylammonium nitrate (EAN) or ethanolammonium nitrate (EtAN). The thermal stability ranges and lattice parameters for the obtained LCP systems showed equivalent accuracy to manual analysis. A time comparison demonstrated that the high-throughput phase identification procedure was over 20 times faster than manual analysis.
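The ratio-based logic behind such automated phase assignment can be illustrated with a short, hypothetical Python sketch; this is not the authors' procedure, and the function name, the tolerance and the nearest-signature matching are all assumptions. The characteristic Bragg-peak spacing ratios themselves (1 : 2 : 3 for lamellar, 1 : √3 : 2 for hexagonal, √2 : √3 : √4 : √6 for the Pn3m diamond cubic and √2 : √4 : √6 : √8 for the Im3m primitive cubic phase) are standard for lyotropic liquid crystal phases.

import numpy as np

# Characteristic q_i / q_1 spacing ratios for common lyotropic liquid crystal phases.
SIGNATURES = {
    "lamellar":               [1.0, 2.0, 3.0],
    "hexagonal":              [1.0, np.sqrt(3.0), 2.0],
    "diamond cubic (Pn3m)":   [np.sqrt(2.0), np.sqrt(3.0), 2.0, np.sqrt(6.0)],
    "primitive cubic (Im3m)": [np.sqrt(2.0), 2.0, np.sqrt(6.0), np.sqrt(8.0)],
}

def assign_phase(q_peaks, tol=0.02):
    # Sort the measured peak positions and normalise to the first peak.
    q = np.sort(np.asarray(q_peaks, dtype=float))
    ratios = q / q[0]
    best_phase, best_err = "unassigned", np.inf
    for phase, signature in SIGNATURES.items():
        signature = np.asarray(signature) / signature[0]
        n = min(len(ratios), len(signature))
        err = np.mean(np.abs(ratios[:n] - signature[:n]))
        if err < best_err:
            best_phase, best_err = phase, err
    return (best_phase, best_err) if best_err < tol else ("unassigned", best_err)

# Example: three peaks spaced 1 : sqrt(3) : 2 point to a hexagonal phase.
print(assign_phase([0.100, 0.173, 0.200]))

A lattice parameter then follows from the indexed peaks (for example $a = 4\pi/(\sqrt{3}\,q_1)$ for the hexagonal phase), which is the kind of quantity compared against manual analysis above.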
We then applied the high-throughput identification procedure to 332 diffraction patterns of sodium dodecyl sulfate (SDS) in the same EAN- and EtAN-based solvents to produce previously unreported phase diagrams that exhibit phase transitions between hexagonal, lamellar, primitive cubic and diamond cubic LCPs. The accuracy and significant time decrease of the high-throughput identification procedure validate a new, unrestricted analytical method for the description of LCP phase transitions. Speaker: Stefan Paporakis (RMIT) New insight into corrosion mechanisms of nuclear fuel cladding using synchrotron X-rays 20m In water-cooled nuclear reactors, zirconium alloys have been the material of choice to encapsulate the fuel due to a combination of low neutron cross-section, excellent corrosion performance and good mechanical properties. However, fuel cladding performance, or our ability to predict its performance, remains the limiting factor in the effort to push for increased fuel burnup, i.e. the energy extracted from a fuel assembly before it is removed from the core. Aqueous corrosion, and the associated hydrogen pickup, remains one of the limiting factors in taking nuclear fuel assemblies to higher fuel burnup. Even a slight variation in alloy chemistry is known to greatly affect the corrosion performance of a Zr alloy. Michael will discuss the application of synchrotron X-ray diffraction and scattering techniques together with other advanced characterisation techniques to provide new understanding of the integrity, and therefore the passivation capability, of the oxide that forms during aqueous corrosion. Speaker: Prof. Michael Preuss (Monash University) Thermal evolution in metals as revealed by in-situ neutron diffraction 15m The thermal evolution in metals plays a critically important role in thermo-mechanical processing. Lattice expansion not only reveals conventional thermal expansion but moreover gives insight into order parameters, changes of chemical composition and pressure. Peak widths reveal microstructural changes, as well as texture evolution, while primary extinction can be used to study defect mechanisms. Quantifying anisotropic and phase-related expansion mismatch allows the design of alloys with better mechanical properties. Here I give an overview with selected examples on bulk zirconium and aluminium alloys. Focus will be given to materials after severe plastic deformation, in which different states of thermal stress relaxation, microstructural recovery and recrystallization can be distinguished. Speaker: Klaus-Dieter Liss (GTIIT) In-situ X-ray imaging of transient liquid phase (TLP) bonding in solder joints 15m The demand for Pb-free solder interconnections that can operate reliably at high service temperatures has motivated the development of transient liquid phase (TLP) bonding as an alternative soldering method. The capability of TLP bonding to be processed at a lower temperature while creating a joint composed of high-melting-temperature intermetallic compounds (IMCs) makes it a promising method. Sn/Cu-based systems are commonly used in electronic packaging due to their low melting point and cost benefits. However, the slow kinetics of the IMC growth and the uncontrolled formation of porosity in Sn/Cu-based systems remain challenging issues in practical applications. The addition of Ni to the Cu substrate can minimize the time required for TLP bonding.
In this study, the rapid growth of (Cu,Ni)6Sn5 during TLP bonding of Cu-Ni/Sn-0.7Cu/Cu-Ni joints was observed in real time using synchrotron X-ray microradiography at the BL20XU beamline of the SPring-8 synchrotron, Japan. The joints were constructed to be approximately 100 μm thick and clamped between silica slides to facilitate X-ray transmission, and were held isothermally at 240 °C until the reaction was completed. The formation of voids and cracks and the kinetics of the TLP soldering process were investigated using a monochromatised X-ray energy of 21 keV. These firsthand observations contributed to a better understanding of the creation and distribution of porosity, which will aid in the development of high-reliability TLP bonding techniques for the production of high-temperature interconnections. Speaker: Ms Nurul Razliana Abdul Razak (The University of Queensland) The use of variable temperature synchrotron XRD to characterise the behaviour of low temperature solder alloys 15m During the soldering process and the daily operation of electronic devices, solder alloys frequently experience temperature variations. The mismatch in volume expansion between the solder alloys and the interconnected components can result in stresses which lead to failure. In a solder alloy system with high solubility of one element in another, the effects of thermal expansion and temperature-dependent solubility limits are both important contributing factors to the thermally induced volume changes. In this study, Sn-57wt%Bi and Sn-37wt%Bi alloys, which are promising materials for low-temperature solders, were investigated by in-situ heating synchrotron powder X-ray diffraction (PXRD) to reveal the changes of the lattice parameters of Sn and Bi. The lattice parameters were derived by Rietveld refinement of the PXRD patterns using TOPAS Academic V6 and subsequently analysed with the Coefficient of Thermal Expansion Analysis Suite (CTEAS) package, which uses a tensor method to obtain the coefficient of thermal expansion (CTE) (see the sketch below). Density functional theory (DFT) calculations were adopted to reveal the influence of the solid solution of Bi (or βSn) on the lattice parameters of βSn (or Bi), thereby decoupling the effects of thermal expansion and solid solution of Bi (or βSn) on the thermally induced volume change of βSn (or Bi). Speaker: Mr Qichao Hao (The University of Queensland) Radiation test of Rad-Hard ICs for space applications 15m Conventional integrated circuits (ICs) are highly sensitive to radiation effects and can operate only in environments with a very low level of radiation. High-radiation environments such as space need custom-designed ICs with dedicated radiation-hardened architectures. Our research is focused on the development and testing of radiation-hardened ICs in nanoscale and ultra-low-power semiconductor technologies for high-radiation environments such as space and particle physics experiments. The University of Melbourne and ANSTO developed a strategic collaboration to enable ANSTO's heavy-ion microprobe beamline for radiation testing of custom-designed ICs for space applications. In our presentation, we provide an overview of the outcomes of our collaboration and our roadmap for further developments in the future.
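As flagged in the preceding variable-temperature XRD abstract, the lattice-parameter-to-CTE step can be illustrated with a deliberately simplified sketch. This is not the CTEAS tensor workflow: it only fits a single hypothetical lattice parameter against temperature and reports an average linear expansion coefficient, and all numbers below are invented for illustration.

import numpy as np

# Hypothetical Rietveld-refined lattice parameter a(T) for one axis (angstrom versus kelvin).
T = np.array([300.0, 350.0, 400.0, 450.0])
a = np.array([5.830, 5.836, 5.842, 5.849])

slope, intercept = np.polyfit(T, a, 1)   # linear fit a(T) ~ intercept + slope * T
a_ref = intercept + slope * T[0]         # reference length at the first temperature
alpha = slope / a_ref                    # average linear CTE over this range, in 1/K
print(f"average linear CTE ~ {alpha:.2e} per K")

The full tensor treatment applies the same idea to all independent lattice parameters simultaneously, so that anisotropic expansion (for example along the a and c axes of βSn) is captured; separating out the solid-solution contribution is the role of the DFT calculations described in the abstract.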
Speaker: Dr Jafar Shojaii (The University of Melbourne) One layer at a time: Unlocking Novel Materials and Structures for Neutron Radiation Environments through Additive Manufacturing 15m The UOW-ANSTO Seed Funding program is an initiative aimed at encouraging new collaborations between researchers at the University of Wollongong and ANSTO - bringing together teams with diverse and complementary skillsets to tackle questions that require multi-disciplinary approaches. In 2019, a team of researchers from ANSTO's Australian Centre for Neutron Scattering (ACNS), UOW's Australian Institute for Innovative Materials (AIIM) and the Translational Research Initiative for Cell Engineering and Printing (TRICEP) came together to tackle the question "Can the structures and materials made possible by additive manufacturing enable novel solutions for neutron radiation environments?" To explore this question, we undertook activities in three themes: THEME 1 – Polymers for neutron shielding and collimation THEME 2 – Low-hydrogen polymers for neutron sample environments THEME 3 – Metals and alloys for neutron sample environments This presentation will discuss activities undertaken in these themes, including: THEME 1: investigating novel boron nitride/polyurethane materials developed by the UOW for use in neutron shielding and collimation applications via experiments on the Taipan, Pelican, Bilby and Platypus facilities at ANSTO; THEME 2: the development of a custom low-hydrogen polymer (FEP) printing apparatus and optimised print procedure, to our knowledge one of the first such facilities. This has resulted in the production of low-hydrogen sample holders for use in ANSTO neutron environments; and THEME 3: leveraging the world-class facilities and expertise in metal additive manufacturing at TRICEP to produce 'sample can' components in titanium and aluminium for validation and as a platform for future customised sample environment devices. This presentation will also discuss possibilities and future plans for work in this exciting area. Speaker: Jonathan Knott (University of Wollongong) Microstructure and residual stress interactions in metal additive manufacturing: post-build assessment and new in-situ methods 15m Layer-wise addition of metal to directly form components or add coatings via laser powder bed fusion (LPBF) or laser directed energy deposition (DED) can generate very high levels of residual stress which affect component durability if not adequately addressed. These techniques also result in novel, non-equilibrium microstructures, sometimes with desirable features, that interact with traditional residual stress relief and microstructure manipulation heat treatments. In LPBF nickel superalloy 718, neutron diffraction was used to demonstrate that a complex residual stress state can persist through a non-recrystallising heat treatment at 960 ºC plus subsequent ageing. The same treatment has been previously shown to relieve residual stresses and promote grain growth in conventionally manufactured material. This discrepancy is attributed to the presence of nano-scale intercellular precipitates and a large concentration of existing dislocations, both consequences of the LPBF process, which act to impede recrystallisation and creep processes. The residual stress state is shown to influence the long-crack fatigue threshold at low stress ratios. Higher temperature annealing successfully relieved residual stresses but resulted in recrystallisation and grain growth which reduced the yield stress. 
To further explore residual stress and phase evolution during additive manufacturing, an in-beamline laser DED capability is being developed at ANSTO for both neutron and synchrotron use. Speaker: Halsey Ostergaard (University of Sydney) Welcome Address: Closing Remarks & Prizes
Some problems of guaranteed control of the Schlögl and FitzHugh-Nagumo systems EECT Home Approximate controllability of semilinear non-autonomous evolution systems with state-dependent delay December 2017, 6(4): 535-557. doi: 10.3934/eect.2017027 Finite determining parameters feedback control for distributed nonlinear dissipative systems -a computational study Evelyn Lunasin 1,, and Edriss S. Titi 2,3, Department of Mathematics, United States Naval Academy, Annapolis, MD 21402, USA Departments of Mathematics, Texas A & M University, College Station, TX 77843-3368, USA Department of Computer Science and Applied Mathematics, The Weizmann Institute of Science, Rehovot 76100, Israel * Corresponding author: Evelyn Lunasin Received March 2017 Revised August 2017 Published September 2017 Figure(9) / Table(3) We investigate the effectiveness of a simple finite-dimensional feedback control scheme for globally stabilizing solutions of infinite-dimensional dissipative evolution equations introduced by Azouani and Titi in [7]. This feedback control algorithm overcomes some of the major difficulties in control of multi-scale processes: It does not require the presence of separation of scales nor does it assume the existence of a finite-dimensional globally invariant inertial manifold. In this work we present a theoretical framework for a control algorithm which allows us to give a systematic stability analysis, and present the parameter regime where stabilization or control objective is attained. In addition, the number of observables and controllers that were derived analytically and implemented in our numerical studies is consistent with the finite number of determining modes that are relevant to the underlying physical system. We verify the results computationally in the context of the Chafee-Infante reaction-diffusion equation, the Kuramoto-Sivashinsky equation, and other applied control problems, and observe that the control strategy is robust and independent of the model equation describing the dissipative system. Keywords: Globally stabilizing feedback control, Chafee-Infante, Kuramoto-Sivashinsky, reaction-diffusion, Navier-Stokes equations, feedback control, data assimilation, determining modes, determining nodes, determining volume elements. Mathematics Subject Classification: Primary: 35K57, 37L25, 37L30, 37N35, 93B52, 93C20, 93D15. Citation: Evelyn Lunasin, Edriss S. Titi. Finite determining parameters feedback control for distributed nonlinear dissipative systems -a computational study. Evolution Equations & Control Theory, 2017, 6 (4) : 535-557. doi: 10.3934/eect.2017027 S. Ahuja, Reduction Methods for Feedback Stabilization of Fluid Flows, Ph. D Thesis, Dept. of Mechanical and Aerospace Engineering, Princeton University, 2009. Google Scholar M. U. Altaf, E. S. Titi, T. Gebrael, O. Knio, L. Zhao, M. F. McCabe and I. Hoteit, Downscaling the 2D Bénard convection equations using continuous data assimilation, Computat. Geosci., 21 (2017), 393-410. doi: 10.1007/s10596-017-9619-2. Google Scholar A. Armaou and P. D. Christofides, Feedback control of the Kuramoto-Sivashinsky equation, Physica D, 137 (2000), 49-61. doi: 10.1016/S0167-2789(99)00175-X. Google Scholar A. Armou and P. D. Christofides, Wave suppression by nonlinear finite-dimensional control, Eng. Sci., 55 (2000), 2627-2640. doi: 10.1016/S0009-2509(99)00544-8. Google Scholar A. Armou and P. D. Christofides, Global stabilization of the Kuramoto-Sivashinsky Equation via distributed output feedback control, Syst. & Contr. 
Lett., 39 (2000), 283-294. doi: 10.1016/S0167-6911(99)00108-5. Google Scholar A. Azouani, E. Olson and E. S. Titi, Continuous data assimilation using general interpolant observables, J. Nonlinear Sci., 24 (2014), 277-304. doi: 10.1007/s00332-013-9189-y. Google Scholar A. Azouani and E. S. Titi, Feedback control of nonlinear dissipative systems by finite determining parameters -A reaction-diffusion Paradigm, Evolution Equations and Control Theory, 3 (2014), 579-594. doi: 10.3934/eect.2014.3.579. Google Scholar A. V. Babin and M. Vishik, Attractors of Evolutionary Partial Differential Equations, North-Holland, Amsterdam, London, NewYork, Tokyo, 1992. Google Scholar H. Bessaih, E. Olson and E. S. Titi, Continuous assimilation of data with stochastic noise, Nonlinearity, 28 (2015), 729-753. doi: 10.1088/0951-7715/28/3/729. Google Scholar J. Bronski and T. Gambill, Uncertainty estimates and $L_2$ bounds for the Kuramoto-Sivashinsky equation, Nonlinearity, 19 (2006), 2023-2039. doi: 10.1088/0951-7715/19/9/002. Google Scholar C. Cao, I. Kevrekidis and E. S. Titi, Numerical criterion for the stabilization of steady states of the Navier-Stokes equations, Indiana University Mathematics Journal, 50 (2001), 37-96. doi: 10.1512/iumj.2001.50.2154. Google Scholar J. Charney, J. Halem and M. Jastrow, Use of incomplete historical data to infer the present state of the atmosphere, Journal of Atmospheric Science, 26 (1969), 1160-1163. doi: 10.1175/1520-0469(1969)026<1160:UOIHDT>2.0.CO;2. Google Scholar L. H. Chen and H. C. Chang, Nonlinear waves on liquid film surfaces-Ⅱ. Bifurcation analyzes of the long-wave equation, Chem. Eng. Sci., 41 (1986), 2477-2486. Google Scholar P. D. Christofides, Nonlinear and robust control of PDE systems: Methods and Applications to Transport-Reaction Processes, Springer Science + Business Media, New York, 2001. doi: 10.1007/978-1-4612-0185-4. Google Scholar B. Cockburn, D. A. Jones and E. S. Titi, Degrés de liberté déterminants pour équations non linéaires dissipatives, C.R. Acad. Sci.-Paris, Sér. I, 321 (1995), 563-568. Google Scholar B. Cockburn, D. A. Jones and E. S. Titi, Estimating the number of asymptotic degrees of freedom for nonlinear dissipative systems, Math. Comput., 66 (1997), 1073-1087. doi: 10.1090/S0025-5718-97-00850-8. Google Scholar B. I. Cohen, J. A. Krommes, W. M. Tang and M. N. Rosenbluth, Non-linear saturation of the dissipative trapped-ion mode by mode coupling, Nuclear Fusion, 16 (1976), 971-974. doi: 10.1088/0029-5515/16/6/009. Google Scholar P. Collet, J.-P. Eckmann, H. Epstein and J. Stubbe, A global attracting set for the Kuramoto-Sivashinksy equation, Commun. Math. Phys., 152 (1993), 203-214. doi: 10.1007/BF02097064. Google Scholar P. Constantin and C. Foias, Navier-Stokes Equations, University of Chicago Press, Chicago, 1988. Google Scholar P. Constantin, C. Foias, B. Nicolaenko and R. Temam, Integral Manifolds and Inertial Manifolds for Dissipative Partial Differential Equations, Springer-Verlag, Applies Mathematical Sciences Series, 70 1989. doi: 10.1007/978-1-4612-3506-4. Google Scholar S. M. Cox and P. C. Matthews, Exponential time differencing for stiff systems, J. Comput. Phys., 176 (2002), 430-455. doi: 10.1006/jcph.2002.6995. Google Scholar S. Dubljevic, N. El-Farra and P. Christofides, Predictive control of transport-reaction processes, Computers and Chemical Engineering, 29 (2005), 2335-2345. doi: 10.1016/j.compchemeng.2005.05.008. Google Scholar S. Dubljevic, N. El-Farra and P. 
Christofides, Predictive control of parabolic pdes with state and control constraints, Cinter. J. Rob. & Non. Contr., 16 (2006), 749-772. doi: 10.1002/rnc.1097. Google Scholar N. H. El-Farra, A. Armaou and P. D. Christofides, Analysis and control of parabolic PDE systems with input constraints, Automatica, 39 (2003), 715-725. doi: 10.1016/S0005-1098(02)00304-7. Google Scholar A. Farhat, M. S. Jolly and E. S. Titi, Continuous data assimilation for the 2D Bénard convection through velocity measurements alone, Physica D, 303 (2015), 59-66. doi: 10.1016/j.physd.2015.03.011. Google Scholar A. Farhat, E. Lunasin and E. S. Titi, Abridged continuous data assimilation for the 2D Navier-Stokes Equations utilizing measurements of only one component of the velocity field, J. Math. Fluid Mech., 18 (2016), 1-23. doi: 10.1007/s00021-015-0225-6. Google Scholar A. Farhat, E. Lunasin and E. S. Titi, Continuous data assimilation for a 2D Bénard convection system through horizontal velocity measurements alone, J. Nonlinear Sci., 27 (2017), 1065-1087. doi: 10.1007/s00332-017-9360-y. Google Scholar A. Farhat, E. Lunasin and E. S. Titi, On the Charney conjecture of data assimilation employing temperature measurements alone: the paradigm of 3D Planetary Geostrophic model, Math. Clim. Weather Forecast, 2 (2016), 61-74. doi: 10.1515/mcwf-2016-0004. Google Scholar A. Farhat, E. Lunasin and E. S. Titi, Data assimilation algorithm for 3D Bénard convection in porous media employing only temperature measurements, Jour. Math. Anal. Appl., 438 (2016), 492-506. doi: 10.1016/j.jmaa.2016.01.072. Google Scholar C. Foias, M. S. Jolly, I. G. Kevrekidis, G. R. Sell and E. S. Titi, On the computation of inertial manifolds, Physics Letters A, 131 (1988), 433-436. doi: 10.1016/0375-9601(88)90295-2. Google Scholar C. Foias, M. Jolly, R. Kravchenko and E. S. Titi, A determining form for the 2D Navier-Stokes equations -the Fourier modes case Journal of Mathematical Physics, 53 (2012), 115623, 30 pp. doi: 10.1063/1.4766459. Google Scholar C. Foias, M. Jolly, R. Karavchenko and E. S. Titi, A unified approach to determining forms for the 2D Navier-Stokes equations -the general interpolants case, Russian Mathematical Surveys, 69 (2014), 359-381. doi: 10.1070/RM2014v069n02ABEH004891. Google Scholar C. Foias, O. P. Manley, R. Rosa and R. Temam, Navier-Stokes Equations and Turbulence, Cambridge University Press, 2001. doi: 10.1017/CBO9780511546754. Google Scholar C. Foias, O. P. Manley, R. Temam and Y. Treve, Asymptotic analysis of the Navier-Stokes equations, Physica D, 9 (1983), 157-188. doi: 10.1016/0167-2789(83)90297-X. Google Scholar C. Foias, C. Mondaini and E. S. Titi, A discrete data assimilation scheme for the solutions of the 2D Navier-Stokes equations and their statistics, SIAM Journal on Applied Dynamical Systems, 15 (2016), 2109-2142. doi: 10.1137/16M1076526. Google Scholar C. Foias and G. Prodi, Sur le comportement global des solutions non stationnaires des équations de Navier-Stokes en dimension deux, Rend. Sem. Mat. Univ. Padova, 39 (1967), 1-34. Google Scholar C. Foias, G. R. Sell and R. Temam, Inertial manifolds for nonlinear evolutionary equations, Journal of Differential Equations, 73 (1988), 309-353. doi: 10.1016/0022-0396(88)90110-6. Google Scholar C. Foias, G. R. Sell and E. S. Titi, Exponential tracking and approximation of inertial manifolds for dissipative nonlinear equations, Journal of Dynamics and Differential Equations, 1 (1989), 199-244. doi: 10.1007/BF01047831. Google Scholar C. Foias and R. 
Temam, Determination of the solutions of the Navier-Stokes equations by a set of nodal values, Math. Comput., 43 (1984), 117-133. doi: 10.1090/S0025-5718-1984-0744927-9. Google Scholar C. Foias and R. Temam, Asymptotic numerical analysis for the Navier-Stokes equations, Nonlinear Dynamics and Turbulence, eds. Barenblatt, Iooss, Joseph, Boston: Pitman Advanced Pub. Prog., 1983,139-155. Google Scholar C. Foias and E. S. Titi, Determining nodes, finite difference schemes and inertial manifolds, Nonlinearity, 4 (1991), 135-153. doi: 10.1088/0951-7715/4/1/009. Google Scholar M. Gesho, E. Olson and E. S. Titi, A computational study of a data assimilation algorithm for the two-dimensional Navier-Stokes equations, Commun. Phys, 19 (2016), 1094-1110. Google Scholar M. Ghil, B. Shkoller and V. Yangarber, A balanced diagnostic system compatible with a barotropic prognostic model., Mon. Wea. Rev., 105 (1977), 1223-1238. Google Scholar M. Ghil, M. Halem and R. Atlas, Time-continuous assimilation of remote-sounding data and its effect on weather forecasting, Mon. Wea. Rev., 107 (1978), 140-171. Google Scholar L. Giacomeli and F. Otto, New bounds for the Kuramoto-Sivashinsky equation, Commun. Pure. Appl. Math., 58 (2005), 297-318. doi: 10.1002/cpa.20031. Google Scholar S. N. Gomes, D. T. Papageorgiou and G. A. Pavliotis, Stabilizing non-trivial solutions of generalized Kuramoto-Sivashinsky equation using feedback and optimal control, IMA J. Applied Mathematics, 82 (2017), 158-194. doi: 10.1093/imamat/hxw011. Google Scholar S. N. Gomes, M. Pradas, S. Kalliadasis, D. T. Papageorgiou and G. A. Pavliotis, Controlling roughening processes in the stochastic Kuramoto-Sivashinsky equation, Physica D: Nonl. Phenom., 348 (2017), 33-43. doi: 10.1016/j.physd.2017.02.011. Google Scholar J. Goodman, Stability of the Kuramoto-Sivashinky and related systems, Commun. Pure Appl. Math., 47 (1994), 293-306. doi: 10.1002/cpa.3160470304. Google Scholar J. K. Hale, Asymptotic Behavior of Dissipative Systems, Math. Survey and Monographs, 25 AMS, Providence, R. I., 1988. Google Scholar L. Illing, D. J. Gauthier and R. Roy, Controlling optical chaos, Spatio-Temporal Dynamics, and Patterns, Advances in Atomic, Molecular and Optical Physics, 54 (2007), 615-697. doi: 10.1016/S1049-250X(06)54010-8. Google Scholar M. S. Jolly, I. G. Kevrekidis and E. S. Titi, Approximate inertial manifolds for the Kuramoto-Sivashinsky equation: Analysis and Computations, Physica D, 44 (1990), 38-60. doi: 10.1016/0167-2789(90)90046-R. Google Scholar M. Jolly, V. Martinez and E. S. Titi, A data assimilation algorithm for the subcritical surface quasi-geostrophic equation, Advanced Nonlinear Studies, 17 (2017), 167-192. doi: 10.1515/ans-2016-6019. Google Scholar M. S. Jolly, T. Sadigov and E. S. Titi, A determining form for the damped driven nonlinear Schrödinger equation-Fourier modes case, J. Diff. Eqns., 258 (2015), 2711-2744. doi: 10.1016/j.jde.2014.12.023. Google Scholar M. S. Jolly, T. Sadigov and E. S. Titi, Determining form and data assimilation algorithm for weakly damped and driven Korteweg-de Vries equaton-Fourier modes case, Nonlinear Analysis: Real World Applications, 36 (2017), 287-317. doi: 10.1016/j.nonrwa.2017.01.010. Google Scholar D. Jones and E. S. Titi, On the number of determining nodes for the 2-D Navier-Stokes equations, J. Math. Anal. Appl., 168 (1992), 72-88. doi: 10.1016/0022-247X(92)90190-O. Google Scholar D. Jones and E. S. 
Titi, Determining finite volume elements for the 2-D Navier-Stokes equations, Physica D, 60 (1992), 165-174. doi: 10.1016/0167-2789(92)90233-D. Google Scholar D. Jones and E. S. Titi, Upper bounds on the number of determining modes, nodes, and volume elements for the Navier-Stokes equations, Indiana University Mathematics Journal, 42 (1993), 875-887. doi: 10.1512/iumj.1993.42.42039. Google Scholar V. Kalantarov and E. S. Titi, Finite-parameters feedback control for stabilizing damped nonlinear wave equations, Contemporary Mathematics: Nonlinear Analysis and Optimization, AMS, 669 (2016), 115-133. doi: 10.1090/conm/659/13193. Google Scholar V. Kalantarov and E. S. Titi, Global stabilization of the Navier-Stokes-Voight and the damped nonlinear wave equations by finite number of feedback controllers, Discrete and Continuous Dynamical Systems -B, (2017), to appear, arXiv: 1706.00162 Google Scholar A. Kazaam and L. Trefethen, Fourth-order time stepping for stiff PDEs, J. Sci Comp., 26 (2005), 1214-1233. doi: 10.1137/S1064827502410633. Google Scholar I. Kukavica, On the number of determining nodes for the Ginzburg-Landau equation, Nonlinearity, 5 (1992), 997-1006. doi: 10.1088/0951-7715/5/5/001. Google Scholar Y. Kuramoto and T. Tsusuki, Reductive perturbation approach to chemical instabilities, Prog. Theor. Phys., 52 (1974), 1399-1401. doi: 10.1143/PTP.52.1399. Google Scholar R. E. LaQuey, S. M. Mahajan, P. H. Rutherford and W. M. Tang, Nonlinear saturation of the trapped-ion mode, Phys. Rev. Let., 34 (1975), 391-394. doi: 10.2172/4202869. Google Scholar C. H. Lee and H. T. Tran, Reduced-order-based feedback control of the Kuramoto-Sivashinsky equation, Journal of Computational and Applied Mathematics, 173 (2005), 1-19. doi: 10.1016/j.cam.2004.02.021. Google Scholar M. Li and P. D. Christofides, Optimal control of diffusion-convection-reaction processes using reduced order models, Computers and Chemical Engineering, 32 (2008), 2123-2135. doi: 10.1016/j.compchemeng.2007.10.018. Google Scholar Y. Lou and P. D. Christofides, Optimal actuator/sensor placement for nonlinear control of the Kuramoto-Sivashinksky equation, IEEE Transactions on Control Systems Tech., 11 (2002), 737-745. Google Scholar P. Markowich, E. S. Titi and S. Trabelsi, Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-Extended Darcy model, Nonlinearity, 29 (2016), 1292-1328. doi: 10.1088/0951-7715/29/4/1292. Google Scholar C. Mondaini and E. S. Titi, Uniform in time error estimates for the postprocessing Galerkin method applied to a data assimilation algorithm, SIAM Journal on Numerical Analysis, (2017), to appear, arXiv: 1612.06998. Google Scholar B. Nicolaenko, B. Scheurer and R. Temam, Some global dynamical properties of the Kuramoto-Sivashinsky equation: nonlinear stability and attractors, Physica D, 16 (1985), 155-183. doi: 10.1016/0167-2789(85)90056-9. Google Scholar M. Oliver and E. S. Titi, On the domain of analyticity for solutions of second order analytic nonlinear differential equations, J. Differential Equations, 174 (2001), 55-74. doi: 10.1006/jdeq.2000.3927. Google Scholar F. Otto, Optimal bounds on the Kuramoto-Sivashinsky equations, Journal of Functional Analysis, 257 (2009), 2188-2245. doi: 10.1016/j.jfa.2009.01.034. Google Scholar J. Robinson, Infinite-Dimensional Dynamical Systems: An Introduction to Dissipative Parabolic PDEs and the Theory of Global Attractors, Cambridge Texts in Applied Mathematics, 2001. doi: 10.1007/978-94-010-0732-0. Google Scholar R. 
Rosa, Exact finite-dimensional feedback control via inertial manifold theory with application to the Chafee-Infante equation, J. Dynamics and Diff. Eqs., 15 (2003), 61-86. doi: 10.1023/A:1026153311546. Google Scholar R. Rosa and R. Temam, Finite-dimensional feedback control of a scalar reaction-diffusion equation via inertial manifold theory, Foundations of Computational Mathematics, Selected papers of a conference held at IMPA, Rio de Janeiro, RJ, Brazil, eds. F. Cucker and M. Shub, Springer-Verlag, Berlin, (1997), 382-391. Google Scholar G. R. Sell and Y. You, Dynamics of Evolutionary Equations, Springer, 2002. doi: 10.1007/978-1-4757-5037-9. Google Scholar S. Shvartsman, C. Theodoropoulos, R. Rico-Martinez, I. G. Kevrekidis, E. S. Titi and T. J. Mountziares, Order reduction of nonlinear dynamic models for distributed reacting systems, Journal of Process Control, 10 (2000), 177-184. Google Scholar G. I. Sivashinsky, Nonlinear analysis of hydrodynamic instability in laminar flames, Acta Astronautica, 4 (1977), 1177-1206. doi: 10.1016/0094-5765(77)90096-0. Google Scholar R. Temam, Infinite Dimensional Dynamical Systems in Mechanics and Physics, New York: Springer, 1988. doi: 10.1007/978-1-4684-0313-8. Google Scholar R. Temam, Navier-Stokes Equations: Theory and Numerical Analysis, AMS Chelsea Publishing, Providence, RI, Theory and numerical analysis, Reprint of the 1984 edition, 2001. doi: 10.1090/chel/343. Google Scholar A. Thompson, S. B. Gomes, G. A. Pavliotis and D. T. Papageorgio, Stabilising falling liquid film flows using feedback control Physics of Fluids, 28 (2016), 012107. doi: 10.1063/1.4938761. Google Scholar
Figure 1. (a) Closed-loop profile showing stability of the $u(x,t)=0$ steady state solution. (b) Top-view
Figure 2. (a) Open-loop profile showing stability of the $u(x,t)=0$ steady state solution when $\nu = 1.1 > 1$ (b) Profile of $u(x, t=200)$
Figure 3. (a) Open-loop profile showing instability of the $u(x,t)=0$ steady state solution when $\nu = 4/15 < 1$. (b) Top view profile of $u(x, t)$
Figure 4. (a) Open-loop profile showing instability of the $u(x,t)=0$ steady state solution for $0<t<40$ for $\nu = 4/15 < 1$, then the feedback control with $\mu=20$ is turned on for $t>40$ which exponentially stabilizes the system. (b) Top view profile of $u(x, t)$
Figure 5. (a) Closed-loop profile showing fast stabilization of the $u(x,t)=0$ steady state solution for $\nu = 4/20 < 1$, and with $\mu=20$. (b) Top view profile of $u(x, t)$
Figure 6. (a) With $u_0 = 1e^{-10}\cos x (1 + \sin x)$, the film height starts to destabilize around $t = 32$ and then once feedback control is turned on at $t_c=40$, the solution stabilizes to zero again. (b) A top view of the controlled profile
Figure 7. (a) Open-loop profile showing instability of the $u(x,t)=0$ steady state solution. (b) Top-view of $u(x,t)$
Figure 8. (a) Closed-loop profile showing stabilization to $u(x,t)=0$ steady state solution. (b) Top-view
Figure 9. (a) Closed-loop profile showing eventual stability. (b) Top-view
Table 1. Model parameters and type of interpolant operator for the controlled and uncontrolled 1D Chafee-Infante equations
Figure | # Actuators | $\mu$ | $\nu$ | $\alpha$ | Interpolant operator
1 | 10 | 300 | 1 | 100 | finite volume elements
Table 2. Model parameters and type of interpolant operator for the uncontrolled and controlled 1D Kuramoto-Sivashinsky equations
Figure | # Actuators | $\mu$ | $\nu$ | $t_c$ | Interpolant operator
2 | 0 | 0 | 1.1 | 0 |
3 | 0 | 0 | 4/15 | 0 |
4 | 4 | 20 | 4/15 | 0 | Fourier modes
5 | 4 | 20 | 4/20 | 0 | finite volume
6 | 4 | 20 | 4/20 | 40 | nodal values
Table 3. Model parameters and type of interpolant operator for the uncontrolled and controlled catalytic rod problem
Figure | # Actuators | $\mu$ | $\nu$ | $\beta_T$ | $\beta_U$ | $\gamma$ | Interpolant operator
7 | 0 | 0 | 1 | 50 | 2.0 | 4.0 |
8 | 1 | 30 | 1 | 50 | 2.0 | 4.0 | finite volume
9 | 1 | 30 | 1 | varying | 2.0 | 4.0 | finite volume
similar to Fig 8 | 1 | 30 | 1 | 50 | 2.0 | 4.0 | nodal values
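To make the feedback scheme summarised in the abstract above concrete, here is a rough numerical sketch, not the authors' code, of the one-dimensional Kuramoto-Sivashinsky equation $u_t + u u_x + u_{xx} + \nu u_{xxxx} = -\mu P_N u$ on a $2\pi$-periodic domain, where the control acts only through the lowest $N$ Fourier modes in the spirit of the finite-determining-parameters approach. The gain $\mu = 20$, the coefficient $\nu = 4/15$ and the $N = 4$ controlled modes loosely echo Table 2; the domain length, the integrating-factor Euler time stepping and the initial condition are simplifying assumptions.

import numpy as np

M = 256                                   # grid points on [0, 2*pi)
x = 2.0 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)          # integer wavenumbers
nu, mu, N = 4.0 / 15.0, 20.0, 4           # "viscosity", feedback gain, controlled modes
dt, steps = 1e-3, 20000

u = 0.1 * np.cos(x) * (1.0 + np.sin(x))   # illustrative initial condition
v = np.fft.fft(u)

L = k**2 - nu * k**4                      # Fourier symbol of -(d_xx + nu * d_xxxx)
E = np.exp(dt * L)                        # exact integrating factor for the linear part
low = np.abs(k) <= N                      # projector P_N onto the controlled low modes

for _ in range(steps):
    u = np.real(np.fft.ifft(v))
    nonlin = -0.5j * k * np.fft.fft(u * u)    # -u u_x = -(u^2 / 2)_x in Fourier space
    control = -mu * np.where(low, v, 0.0)     # feedback term -mu * P_N u
    v = E * (v + dt * (nonlin + control))     # integrating-factor Euler step
    v[np.abs(k) > M // 3] = 0.0               # 2/3-rule de-aliasing

print("max |u| after control:", np.abs(np.real(np.fft.ifft(v))).max())

With these settings only the $k = \pm 1$ mode of the uncontrolled equation is linearly unstable, and the low-mode feedback damps it, so $\max|u|$ decays towards zero, which is the qualitative behaviour shown in the closed-loop figures above.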
Reference recommendation for Projective representation, group cohomology, Schur's multiplier and central extension
Recently I read chapter 2 of Weinberg's QFT vol. 1. I learned that in QM we need to study the projective representations of the symmetry group instead of ordinary representations. It says that a Lie group can have nontrivial projective representations if the group is not simply connected or if its Lie algebra admits nontrivial central charges (central extensions). So for a simple Lie group, the projective representations are just representations of the universal covering group. But the book only discusses Lie groups, so what about the projective representations of discrete groups, such as finite groups or infinite discrete groups? I have heard this is related to group cohomology, Schur's multiplier and group extensions. So can anyone recommend textbooks, monographs, reviews or papers that cover any of the following topics I'm interested in:
How to construct all inequivalent irreducible projective representations of a Lie group and Lie algebra?
How to construct all inequivalent irreducible projective representations of a discrete group?
How are these related to central extensions of groups and Lie algebras?
How to construct all central extensions of a group or Lie algebra?
How is a projective representation related to group cohomology?
How to compute group cohomology?
Are there handbooks or lists of the group cohomology of common groups like $S_n$, point groups, space groups, braid groups, simple Lie groups and so on?
group-cohomology mathematical-physics lie-algebra asked May 18, 2017 in Mathematics by fff123123 (30 points) [ no revision ]
To quote the relevant bits of Butler, Point Group Symmetry Applications: Methods and Tables, Sec. 2.6: Around the turn of the century Schur gave a method for finding representations of a finite group in terms of fractional linear transformations (projective transformations). These retain the group multiplication law, but the space on which they act is a projective space (as in projective geometry), not a linear vector space.. Hamermesh (1962, Chapter 12) shows that fractional linear representations are equivalent to projective representations as usually defined, e.g., in space group theory.... This in turn is equivalent to having a set of p matrices, scalar multiples of each other, representing each group element. This is a p-valued representation... ...Cartan in 1913 developed a general method of constructing projective matrix irreps for the continuous groups. The continuity condition may be used to show that only the group of orthogonal transformations in n dimensions has nontrivial projective representations, ± 1 only, so the representations are at most double-valued (Littlewood 1950, p. 248).
Then quoting the beginning of Hamermesh, Group Theory and its Application to Physical Problems, Ch. 12:

It is remarkable that the problem of finding the ray representations of finite groups was stated and completely solved long before the advent of quantum mechanics. In a series of papers, Schur gave the general method for finding the irreducible representations of a finite group in terms of fractional linear transformations (projective transformations, collineations).

This merges nicely with Weinberg's exposition.
answered Jun 13, 2017 by bolbteppa (120 points) [ revision history ]
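To make the connection between projective representations and group cohomology concrete, here is a small numerical sketch (not part of the original answer; it assumes only NumPy). The Pauli matrices furnish a projective representation of $\mathbb{Z}_2\times\mathbb{Z}_2$, and the factor system $\omega(g,h)$ defined by $D(g)D(h)=\omega(g,h)D(gh)$ is a 2-cocycle; the code extracts $\omega$ and verifies the cocycle identity $\omega(g,h)\,\omega(gh,k)=\omega(g,hk)\,\omega(h,k)$.

```python
import itertools
import numpy as np

# Projective representation of Z2 x Z2 by Pauli matrices: D(a, b) = X^a Z^b.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

group = list(itertools.product((0, 1), repeat=2))          # elements (a, b)
D = {(a, b): np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
     for a, b in group}

def mult(g, h):
    """Group law of Z2 x Z2 (componentwise addition mod 2)."""
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def omega(g, h):
    """Factor system: D(g) D(h) = omega(g, h) D(g h)."""
    lhs = D[g] @ D[h]
    ref = D[mult(g, h)]
    # The two matrices differ only by a scalar; read it off from a nonzero entry.
    idx = np.argmax(np.abs(ref))
    return lhs.flat[idx] / ref.flat[idx]

# Check the 2-cocycle identity omega(g,h) omega(gh,k) = omega(g,hk) omega(h,k).
for g, h, k in itertools.product(group, repeat=3):
    lhs = omega(g, h) * omega(mult(g, h), k)
    rhs = omega(g, mult(h, k)) * omega(h, k)
    assert np.isclose(lhs, rhs)

# X and Z commute as group elements but anticommute as matrices:
print(omega((1, 0), (0, 1)), omega((0, 1), (1, 0)))   # the two values differ by a sign
```

The ratio $\omega(g,h)/\omega(h,g)$ is unchanged by any rephasing $D(g)\to c(g)D(g)$, so the $\pm 1$ printed at the end shows that this factor system cannot be trivialized: it represents the nontrivial class in $H^2(\mathbb{Z}_2\times\mathbb{Z}_2, U(1))\cong\mathbb{Z}_2$, the Schur multiplier of the group.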
February 2014, 34(2): 843-867. doi: 10.3934/dcds.2014.34.843
Well-posedness, blow-up phenomena and global existence for the generalized $b$-equation with higher-order nonlinearities and weak dissipation
Shouming Zhou, Chunlai Mu and Liangchen Wang, College of Mathematics and Statistics, Chongqing University, Chongqing 401331
Received November 2012; Revised April 2013; Published August 2013

This paper deals with the Cauchy problem for a weakly dissipative shallow water equation with higher-order nonlinearities, $y_{t}+u^{m+1}y_{x}+bu^{m}u_{x}y+\lambda y=0$, where $\lambda$ and $b$ are constants, $m\in\mathbb{N}$, and $y:=(1-\partial_x^2)u$; this family includes the famous $b$-equation and the Novikov equation as special cases. Local well-posedness of solutions for the Cauchy problem in the Besov space $B^s_{p,r}$ with $1\leq p,r \leq +\infty$ and $s>\max\{1+\frac{1}{p},\frac{3}{2}\}$ is obtained. Under some assumptions, the existence and uniqueness of global solutions to the equation are shown, conditions that lead to the development of singularities in finite time are obtained, and the propagation behavior of compactly supported solutions is established. Finally, weak solutions and analytic solutions of the equation are considered.

Keywords: Novikov equation, global existence, blow-up, b-equation, well-posedness.
Mathematics Subject Classification: Primary: 35G25, 35L05; Secondary: 35Q5.
Citation: Shouming Zhou, Chunlai Mu, Liangchen Wang. Well-posedness, blow-up phenomena and global existence for the generalized $b$-equation with higher-order nonlinearities and weak dissipation. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2): 843-867. doi: 10.3934/dcds.2014.34.843
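For orientation, assuming the standard forms of the $b$-family and Novikov equations (the identification is not spelled out in the abstract itself), the special cases correspond to the following parameter choices. For $m=0$,
$$y_{t}+u\,y_{x}+b\,u_{x}\,y+\lambda y=0,$$
the weakly dissipative $b$-equation (with $\lambda=0$ this is the $b$-family, in which $b=2$ gives the Camassa–Holm equation and $b=3$ the Degasperis–Procesi equation), while for $m=1$, $b=3$,
$$y_{t}+u^{2}y_{x}+3u\,u_{x}\,y+\lambda y=0,$$
the weakly dissipative Novikov equation (with $\lambda=0$, Novikov's equation).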
Dielectric and surface properties of wood modified with NaCl aqueous solutions and treated with FE-DBD atmospheric plasma

The hygroscopic and electrical properties of the wood surface of Norway spruce (Picea abies (L.) Karst.) and common beech (Fagus sylvatica L.) were altered by the application of differently concentrated NaCl aqueous solutions. The presence of Na+ and Cl– ions increased the equilibrium moisture content of both woods in environments with a relative humidity from 75% to a nearly saturated state. The electrical resistance of the wood decreased, while the electrical capacitance of the wood increased with increasing amounts of NaCl introduced. Inverse trends were observed for both properties in wood modified with the two most concentrated solutions (18 and 36% molality). Microscopic analysis of the outer layers of the wood samples using scanning electron microscopy and energy-dispersive X-ray spectroscopy showed that the amount of NaCl decreased linearly up to about 1 mm from the modified surface. The presence of Na+ and Cl– ions in wood increased the intensity and improved the homogeneity of the plasma discharge generated during treatment of samples in air at atmospheric pressure. Both modification of wood with NaCl and subsequent treatment with plasma increased the surface roughness of the substrates. Finally, it was shown that the wettability of wood with a waterborne coating was improved after plasma treatment, regardless of the presence of NaCl on the surface. These findings have good potential not only for the study of surface treatment processes of wood with plasma discharges, but also for other technical applications of lignocellulosic materials.

Knowledge of the dielectric properties of wood is essential for its efficient use in many engineering applications (Sahin and Ay 2004; Şahin Kol 2009), for instance in processes that apply electrical energy to the heating, drying, and gluing of wood. Dielectric properties of wood are also important for diagnostic purposes, for example when measuring the moisture content (MC) and thickness of timber, detecting defects, decay, discolorations, and sapwood area, pasteurizing wood to eradicate exotic pest infestations in lumber, checking strength characteristics, or nondestructively estimating surface roughness (Sikder et al. 2009; Zhou et al. 2013; Goncz et al. 2018). Wood and its derivatives have also attracted much research interest as electrode materials for electrochemical energy storage devices, including sodium-ion batteries (Huang et al. 2019). Dielectric properties of wood are affected by macroscopic (e.g. fiber direction) and microscopic properties (e.g. porosity), as well as chemical constituents (e.g. chemical additives) (Norimoto 1976; Simpson and TenWolde 1999; Daian et al. 2006; Razafindratsima et al. 2017).
The applied voltage, the orientation and frequency of the electric field with respect to the wood structure, the temperature, the air humidity, and the frequency used during measurement also play an important role (Torgovnikov 1992; Olmi et al. 2010; Daian et al. 2005; Bogosanovic et al. 2010; Brischke and Lampen 2014). The conductivity and dielectric properties of wood increase with an increasing amount of water in the wood (Kabir et al. 1998; Sahin and Ay 2004; Şahin Kol 2009; Konopka et al. 2018), which varies greatly especially below the fiber saturation point (Romanov 2006; Otten et al. 2017). The principal functions of wood coatings forming protective barrier films are to protect the wood surface against photochemical deterioration and to maintain its desired appearance. Because of the heterogeneous structure of the wood surface, coating adhesion can be challenging (Peng and Zhang 2019). Therefore, proper preparation of the wood surface prior to the coating process is essential (Oukach et al. 2020). Protection of wood against deterioration can be improved by modification with various inorganic treatments prior to coating application (Suleman and Rashid 1997; Graziola et al. 2012). Modification of wood with sodium chloride (NaCl) improves its resistance to insects and fungi and provides surface protection as well (Williams and Feist 1985). The wide availability, low cost, and simple preparation and application of NaCl aqueous solutions therefore give them good potential for further research. Depending on its concentration and properties, a salt solution significantly influences the MC of wood in the upper hygroscopic region (relative humidity above 75%), where the salt starts to absorb airborne water (Hertel 1997; Pařil and Dejmal 2014; Konopka et al. 2018; Pouzet et al. 2019). During modification of wood with NaCl, the solution penetrates the wood very well, and NaCl can crystallize in the wood after drying, while the adsorbed and free water present in the wood acts as a solvent for the NaCl (Lesar et al. 2009). Wood containing water-soluble salts or other electrolytic substances is more electrically conductive than normal wood (Simpson and TenWolde 1999). In highly diluted solutions, cations and anions can be regarded as separate and non-interacting entities. However, in concentrated solutions, the extent and the impact of ion-pairing in aqueous ion chemistry are challenging to understand (Hou et al. 2013; Tandy et al. 2016). An increase in the concentration of NaCl in aqueous solutions leads to a decrease in pH value and therefore a higher availability of H+ ions (Lima et al. 2017). The density of the samples increases with an increasing concentration of NaCl, and such a modification has a positive effect on the dimensional stability of wood (Pařil and Dejmal 2014). The introduction of NaCl ions into wood increases its conductivity and dielectric constant (Sikder et al. 2009). Wood hydrophilicity is a necessary condition for sufficient adhesion of applied water-based coatings. Among methods for surface activation of lignocellulosic materials by electrical discharge to enhance their wettability, plasma treatment (PT) is one of the most sophisticated techniques (Král et al. 2015; Novák et al. 2018a; Žigon et al. 2018). Cold atmospheric plasma sources are most suitable for the treatment of wood, due to their high productivity, minimal environmental impact, and cost-efficiency (Novák et al. 2018b; Jnido et al. 2019). Plasmas can be defined as completely or partially ionized gases that exhibit collective behavior.
The exposure of a substrate to PT causes physical–chemical transformations on the surface of the treated material. These include bombardment by energetic species (electrons, ions, free radicals and photons) present in the plasma discharge, which carry energies high enough to alter chemical bonds on substrates (Yuan et al. 2004). Dielectric barrier discharge (DBD) plasma is a special type of plasma reactor which can be used for the treatment of wood surfaces (Žigon et al. 2018). Here, the substrate is placed between two high-voltage electrodes, of which at least one is covered with a dielectric barrier. The appearance (i.e. the distribution of charges on the electrodes, the streamer distribution, or discharge self-organization) and other properties of the plasma depend strongly on the properties of and conditions in the plasma reactor (Conrads and Schmidt 2000; Rehn and Viöl 2003), as well as on the dielectric properties of the wooden substrate (Levasseur et al. 2014; De Cademartori et al. 2015). In previous studies (Žigon and Dahle 2019; Žigon et al. 2019a, b), it was noticed that during the PT process of wood in air at atmospheric pressure, plasma streamers are more frequently present in the regions of latewood. Similar observations were reported by Levasseur et al. (2014), where PT of wood was performed in a hydrogen atmosphere.

The objective of this research was to improve the effect of PT on a wood surface pre-modified with NaCl. It was hypothesized that the combined effect of both treatments could additionally improve the wettability of wood with a coating. As schematically presented in Fig. 1, the idea was to improve the electrical conductivity of wood by incorporating additional ions into the wood structure, and consequently to influence the PT process of the wood. The wood of Norway spruce (Picea abies (L.) Karst.) and common beech (Fagus sylvatica L.) was modified by application of NaCl aqueous solutions of various concentrations. Firstly, the effect of the presence of NaCl in wood on its sorption properties was evaluated from the dry to the saturated state. The electrical properties of the modified wood were determined via measurements of its resistance and capacitance. The presence of the introduced NaCl along the depth of both woods was studied by scanning electron microscopy and energy-dispersive X-ray spectroscopy analysis. The electrical properties of the wood were also evaluated indirectly via the discharge appearance during the PT process. This included the study of discharge intensity and homogeneity, as well as the properties of the emitted light by optical emission spectroscopy. The effects of the NaCl presence and of the plasma treatment on the wood surface morphology were studied by confocal laser scanning microscopy. Finally, the possible enhancement of the wettability of wood with a surface-protective water-based coating was evaluated with contact angle measurements.

Schematic presentation of the study objective: modification of the wood surface with Na+ and Cl– ions, obtained from NaCl aqueous solutions, and improvement of the treatment process of wood with floating electrode dielectric barrier discharge (FE-DBD) atmospheric plasma

Depending on the part of the experimental work, wood of Norway spruce or common beech free of macroscopic defects such as knots and splits was used. All further analyses in this study were performed on the planed samples' surfaces with radial orientation of the wood fibers.
Prior to the start of the experiments, the material was conditioned in a chamber at a temperature of 20 °C and a relative humidity (RH) of 65%. The samples reached a certain equilibrium MC (12.1% for spruce wood and 10.8% for beech wood) and nominal density (561 kg m⁻³ for spruce wood and 713 kg m⁻³ for beech wood), both determined by gravimetry.

Preparation and application of NaCl aqueous solutions on the surfaces of the samples
Solutions of NaCl (purity ≥ 99.5%, Honeywell, Charlotte, North Carolina, USA) in deionized water at different mixing ratios (Table 1) were prepared and properly mixed until complete dissolution of the solute. The mass fraction w was calculated as follows:
$$w=\frac{{m}_{solute}}{{m}_{solution}}\times 100=\frac{{m}_{NaCl}}{{m}_{NaCl+{H}_{2}O}}\times 100 \left[\mathrm{\%}\right] \quad (1)$$
while the mixing ratio of NaCl in deionized water was calculated as follows:
$$Mixing\;ratio=\frac{{m}_{solute}}{{m}_{solvent}}\times 100=\frac{{m}_{NaCl}}{{m}_{{H}_{2}O}}\times 100 \left[\mathrm{\%}\right] \quad (2)$$
In all analyses of this study, the aqueous solutions of NaCl were applied to the samples by dipping each sample into the solution for 3 s, which assured complete coverage of the samples with the solution.

Determination of wood moisture content (MC) and sorption properties
Due to the hygroscopic properties of NaCl, its addition to wood is expected to increase the wood's MC at a particular RH. Five samples of each type of material, of dimensions (10 × 10 × 3) mm³, were stored in climate chambers at a temperature of 20 °C and different RHs, provided by saturated salt solutions: LiCl—11.3%, MgCl2—33.0%, Mg(NO3)2—54.1%, NaNO2—65.0%, NaCl—75.3%, KCl—85.0%, ZnSO4—90.0%, K2SO4—97.3% (supplier Merck KGaA, Darmstadt, Germany). The relative equilibrium MC of wood was determined gravimetrically by taking into account the weight of the moist sample and the weight of the oven-dry sample (SIST EN 13183-1 2003). Samples were shifted from lower to higher RH for adsorption, or in the reverse direction for desorption, after the change between two successive weight measurements did not exceed 0.1%.

Electrical resistance measurements
For the determination of the electrical resistivity characteristics, ten replicates per series of both wood species with dimensions of (60 × 30 × 15) mm³ were prepared. Two steel nails, serving as measuring electrodes, were pressed into a surface with radial orientation of the wood texture of each specimen. To avoid crack formation and to perform measurements on the same annual ring, the distance between the nails was 30 mm parallel and 6 mm orthogonal to the grain, as suggested by Brischke et al. (2008). The resistance-based measuring system consisted of a data logger (Materialfox, Scanntronik Mugrauer GmbH, Zorneding, Germany) with an effective range from 2 × 10⁴ to 5 × 10⁸ Ω. The measuring principle was based on the discharge-time-measurement method. First, a capacitor was charged through a small ohmic resistance and then discharged through the material to be measured. Based on the time needed for discharging, the resistance of the material was calculated.

Electrical capacitance measurements
Electrical capacitance measurements were taken at 23 °C and an RH of 50%, with parallel steel plate electrodes connected to an LCR instrument (LCR-9063, Voltcraft, Conrad Electronic SE, Wernberg-Köblitz, Germany).
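As a quick cross-check of the two concentration measures above (a minimal sketch, not part of the original methods; the function names and example values are illustrative), Eqs. (1) and (2) are related by $w = r/(1+r)$, so mixing ratios of 7.2, 18 and 36% correspond to mass fractions of about 6.72, 15.25 and 26.47%, matching the series labels used later in the Results; a 1.8% mixing ratio would likewise give the 1.77% mass fraction mentioned there (an inference, since Table 1 is not reproduced here). The gravimetric MC definition (SIST EN 13183-1) and a generic RC relation for the discharge-time principle are included for completeness; the latter is an idealization, not the logger's documented internals.

```python
import math

def mass_fraction(mixing_ratio_percent: float) -> float:
    """Convert the mixing ratio of Eq. (2), 100 * m_NaCl / m_H2O,
    into the mass fraction w of Eq. (1), 100 * m_NaCl / (m_NaCl + m_H2O)."""
    r = mixing_ratio_percent / 100.0
    return 100.0 * r / (1.0 + r)

def moisture_content(m_moist: float, m_oven_dry: float) -> float:
    """Gravimetric wood moisture content in percent (SIST EN 13183-1)."""
    return 100.0 * (m_moist - m_oven_dry) / m_oven_dry

def resistance_from_discharge_time(t_s: float, c_farad: float,
                                   v0: float, v_threshold: float) -> float:
    """Idealized discharge-time method: a capacitor C charged to v0 discharges
    through the sample until v_threshold, so R = t / (C * ln(v0 / v_threshold)).
    This RC relation is an assumed illustration of the principle only."""
    return t_s / (c_farad * math.log(v0 / v_threshold))

if __name__ == "__main__":
    for r in (1.8, 7.2, 18.0, 36.0):
        print(f"mixing ratio {r:5.1f}%  ->  mass fraction {mass_fraction(r):5.2f}%")
    print(f"MC example: {moisture_content(11.2, 10.0):.1f}%")   # 12.0 %
    print(f"R example: {resistance_from_discharge_time(2.3, 1e-7, 5.0, 0.5):.2e} Ohm")
```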
After insertion of the sample, (50 × 50 × 5) mm³, between the electrodes, the impedance was measured internally and converted to display the corresponding capacitance or inductance value in a range up to 2 nF (nanofarads). The electrical capacitance measurements were performed on five samples of each type of material.

Scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) spectroscopy analysis
To determine the penetration depth and the presence of NaCl, cross-sections of the wood were studied with a scanning electron microscope FEI Quanta 250 (FEI, Hillsboro, Oregon, USA) with an integrated EDX system (AMETEK Inc., Berwyn, Pennsylvania, USA). Because it offered the most appropriate relation between solubility and concentration, only the samples modified with the 15.25% NaCl aqueous solution ("18") were analyzed. The surfaces to be observed were cut on the microtome Leica SM2010R (Leica, Wetzlar, Germany). The micrographs were taken at 100× and 1000× magnifications in a low vacuum (50 Pa), at an accelerating voltage of 10.0 kV, a spot size of 3.0 nm, and a beam transition time of 45 μs. Signals were detected and collected with a Large Field Detector (LFD), and with a Circular Backscatter Detector (CBS) for elemental analysis with EDX. Elements at the selected spots were identified via the TEAM™ EDS Analysis System (EDAX, AMETEK Inc., Berwyn, Pennsylvania, USA), including Na+ (by X-ray energy Kα at 1.04 keV) and Cl– (Kα at 2.62 keV) (Barhoumi et al. 2007; Pivovarova and Andrews 2013). The EDX analysis was performed on four spots of two different samples of a particular type of wood.

Plasma treatment (PT) process of the wood surfaces
The samples were treated with a device with an FE-DBD non-thermal plasma that generates plasma in air at atmospheric pressure (Žigon et al. 2019a). The parameters of the alternating high voltage (frequency 5 kHz, 15 kV peak voltage) were regulated via a high-voltage generator. Plasma was ignited between the surface of the treated workpiece (moving speed 3 mm s⁻¹) and two brass electrodes (diameter of 15 mm) insulated by ceramic hoses (Al2O3, thickness 2.5 mm). In the experiments, the distance between the dielectrics was set to 5 mm, and the distance between the dielectrics and the surface of the workpiece was about 1 mm. The samples passed the plasma discharge only once. The PT process was performed in a room at a temperature of 23 °C and an RH of 30%.

Appearance study and optical diagnosis of the discharges
The discharge appearance during PT of the samples was observed with the aim of studying the influence of the Na+ and Cl– ions added to the wood on the intensity of the discharge and the distribution of plasma streamers. Photographs of the discharges during the treatment of the samples (ten per type of material) were taken with a Nikon D5600 (Nikon, Tokyo, Japan) photo camera (exposure time 1/20 s, f5.6, ISO 5600). The amount and intensity of light along the discharges were studied as a function of grey scale with the Fiji software (ImageJ 1.46d, Madison, Wisconsin, USA), as presented in Fig. 2.

Principle of the visual appearance study of the discharge, including the plasma streamer distribution, between the sample surface and the insulated electrode

Optical emission spectroscopy (OES) is a very popular tool for the diagnosis of reactive plasmas, since it can be performed without physical contact with the plasma. Unique emissions of interest from a plasma originate from the emitted photons and electronically excited states of the active plasma species (molecules, atoms and ions).
The intensity of the optical emission is determined by both the density of the plasma species involved and the electron energy distribution function (Coburn and Chen 1980; Hou and Jones 2000). The optical spectra emitted during treatment of unmodified and modified beech wood samples (5 per type of material) were measured with an Avantes AvaSpec-3648 (Avantes BV, Apeldoorn, the Netherlands) optical spectrometer with a 3648-pixel CCD detector array and a 75 cm focal length. The gap distance between the treated sample surfaces and the dielectric of the plasma device was set to 1 mm, while the optical lens was placed 10 mm from the generated discharge. Spectra were recorded with an integration time of 2 s and a resolution of 0.5 nm in the spectral range from 200 to 1100 nm. Reduced electric fields were evaluated from the nitrogen emission lines $\mathrm{N}_2^+\,(B^2\Sigma_u^+ \to X^2\Sigma_g^+,\,(0,0))$ at 391.4 nm and $\mathrm{N}_2\,(C^3\Pi_u \to B^3\Pi_g,\,(2,5))$ at 394.3 nm according to Paris and colleagues (Paris et al. 2005, 2006; Pancheshnyi 2006; Kuchenbecker et al. 2009). Electron energies were calculated from the reduced electric fields using the BOLSIG+ software, version 03/2016 (Hagelaar and Pitchford 2005), with cross sections from the LXcat database (Pitchford 2013).

Analysis of the morphology of the surfaces
The microstructure of the freshly prepared samples' surfaces was studied before and after the application of aqueous NaCl solutions, and after additional PT. Prior to the morphological analyses, the surfaces were evened with the sliding microtome Leica SM2010R. For precise monitoring of the changes, the same area of each individual sample was observed each time with the confocal laser scanning microscope LEXT OLS5000 (Olympus, Tokyo, Japan), with a laser light source wavelength of 405 nm and a maximum lateral resolution of 0.12 µm. The mapping images of the areas on the samples' surfaces were taken at 5-fold magnification (scanned area of about 2560 µm × 2560 µm). To study the influence of the NaCl crystals in the aqueous solutions on the surface topography, the solutions were also applied to a glass plate presenting an ideally flat surface, dried in an oven at 102.6 °C for 24 h, and later analyzed. The software OLS50-S-AA (Olympus, Tokyo, Japan) was used to produce topographical images and to calculate the roughness parameter Sa (the arithmetic mean of the deviations from the mean sample surface).

Coating contact angle (CA) measurements
Droplets of a water-based commercial coating with a surface tension of 30.1 mN m⁻¹ (Belinka Interier, Belinka Belles, d.o.o., Ljubljana, Slovenia) were applied to and monitored on the sample surfaces with the Theta optical goniometer (Biolin Scientific Oy, Espoo, Finland). The initial (2 s after application) apparent CAs were measured by Young–Laplace analysis (Young 1805) using the software (OneAttension version 2.4 [r4931], Biolin Scientific Oy, Espoo, Finland). Five coating droplets with a volume of 5 μL were applied at different places on the radial surface of the samples (three replicates per type of material). In the case of plasma-treated samples, the CA measurements were performed immediately after the treatment process to avoid the effects of ageing.

Wood sorption hysteresis
The sorption hysteresis of normal wood and wood treated with NaCl, determined in the range of RH from 0% to approximately 100%, is presented in Fig. 3. In general, both unmodified wood species turned out to be similarly hygroscopic, which is in agreement with the literature (Rémond et al. 2017).
Addition of NaCl aqueous solutions of different concentrations did not affect the wood MC up to 75% RH. However, at higher RH, the effect of higher NaCl concentration was more noticeable. Here, spruce wood turned out to be more hygroscopic than beech wood. For instance, the application of the most concentrated solution (26.47% or "36") considerably increased the wood fiber saturation point from 22 to 55% for spruce wood, and from 25 to 42% for beech wood. A similar observation of a critical 75% RH on the sorption curve of wood impregnated with NaCl was reported by Lesar et al. (2009). The authors of that study attributed this observation to the lowered saturation pressure of the chemical present in wood.

Sorption hysteresis of untreated and NaCl treated spruce and beech wood in the range of RH from 0% to approximately 100%. In the right column, the MC values reached at the fiber saturation point of a particular sample series are listed

Electrical resistance of spruce and beech wood surfaces decreased with the amount of introduced NaCl (Fig. 4), indicated also by the increase in the samples' masses. Solutions with concentrations of up to 6.72% ("7.2") caused a linear decrease in electrical resistance. Application of solutions with concentrations of 15.25% ("18") and 26.47% ("36") did not further increase the electrical conductivity of the wood surface, although the amount of conductive NaCl in the wood increased by 4.5 or 6.0%, respectively.

Electrical resistance of wood, depending on the concentration of NaCl aqueous solution and corresponding wood mass gain

Electrical capacitance
It is known that hardwoods have higher relative permittivity than softwoods (Pentoś et al. 2017). The introduction of NaCl into wood surfaces had a reciprocal influence on the samples' electrical capacitance compared with their electrical resistance, as the capacitance increased with the amount of introduced NaCl (Fig. 5). Solutions with concentrations of up to 6.72% ("7.2") caused an increase in the electrical capacitance, but the solution with a concentration of 26.47% ("36") no longer contributed to higher capacitance, although the amount of NaCl in the wood was even higher. It is assumed that the reason for this is the larger NaCl crystals remaining on the surface (pictures shown in Sect. "3.6"), causing air gaps with low capacitance between the steel plates of the LCR meter and the wood sample.

Electrical capacitance of wood, depending on the concentration of applied NaCl aqueous solution and corresponding wood mass gain

SEM and EDX investigation of wood
SEM micrographs of tangential surfaces of spruce and beech wood, with the 15.25% NaCl aqueous solution ("18") applied to the radial surfaces, are shown at 100× and 1000× magnifications in Fig. 6. Larger NaCl crystals present in the wood are indicated.

SEM micrographs of tangential surfaces of spruce (a) and beech (b) wood at ×100 (left) and ×1000 (right) magnifications. In the left images the arrows indicate the penetration direction of the applied solution from the radial surfaces, corresponding to the direction of the EDX analysis for indication of the NaCl penetration depth. NaCl crystals present in the wood structure are indicated by white arrows in the right images

The obtained EDX spectra with concentrations of the Na+ and Cl– elements along the depth of the samples are shown in Figs. 7 and 8. In the case of both wood species, the intensity of Na+ and Cl– was highest in the first 20 µm of depth from the modified surface.
Here, the detected intensity of both elements was higher in spruce wood. Further, the detected amount of NaCl decreased linearly with depth. Such a trend in the tangential direction was found for both wood species up to approximately 1000 µm from the modified surface.

Intensity of the detected Na+ and Cl– over the tangential distance under the modified surface in spruce wood. Inlay in the right top corner shows EDX spectra with indicated Na+ and Cl– peaks

Intensity of the detected Na+ and Cl– over the tangential distance under the modified surface in beech wood. Inlay in the right top corner shows EDX spectra with indicated Na+ and Cl– peaks

Appearance and optical properties of the discharges
In general, the appearance of streamers corresponds well to the early- and latewood distribution on the cross section of the treated sample. As seen in Fig. 9, this is especially visible during PT of unmodified samples. Regions of latewood have a higher density (Koubaa et al. 2008) and exhibit a higher electrical conductivity than earlywood regions (Stamm 1929, 1931; Zelinka et al. 2015). The detected intensities of the discharge, which are correlated with the power transfer within the corresponding microdischarges, are more pronounced in latewood regions, which exhibit higher average grey values. The addition of Na+ and Cl– ions to wood influenced the discharge appearance (plasma streamer distribution, discharge density and homogeneity) during PT, and this effect was more pronounced on spruce than on beech samples. The measured grey values increased with NaCl solution concentration up to 6.72% ("7.2") or 15.25% ("18"), respectively. For wood modified with the 26.47% ("36") solution, the presence of larger crystals on the surfaces, with higher conductivity than the surrounding modified surface, again negatively affected the homogeneity of the discharges.

Grey value of the discharges and appearance (photographs on the right) of plasma discharges generated between the insulated electrodes and unmodified or modified wood surfaces

To verify what species were present in the discharge zone, the OES technique was used. In Fig. 10, the light spectra obtained during the treatment of unmodified and modified beech samples are presented. In this region of the spectra (290 nm through 410 nm), the most intense second positive system of N2 emission lines from the atmospheric gas can be identified at 316, 354, 358, 375 and 385 nm. The highest peak, at 337.1 nm, is assigned to N2+ (Laux et al. 2003; Belmonte et al. 2015). The height of all the indicated peaks increased with increasing concentration of NaCl on the wood surface. Light emissions of sodium (at 590 nm) and chloride (at 768 nm) were not identified (Goueguel et al. 2014; Barauna de Oliveira et al. 2017).

Optical emission spectra of plasma discharges detected during the treatment of unmodified and modified beech samples

In Fig. 11, the average electron energies are compared with the intensity of the strongest nitrogen emission line (at 337.1 nm) and the overall sum of the emitted light intensity. Both the sum emission intensity and the calculated mean electron energy show a strong peak at 7.2 g dl⁻¹ NaCl, and this is also well represented in all ratios of the emission lines at other significant emission intensities. In contrast, the intensity of the line at 337.1 nm rises continuously with increasing NaCl concentration. The likely reason for this is the strong localization of filaments at NaCl crystals on the wood surfaces for the two highest NaCl concentrations.
On the one hand, these led to particularly bright filaments; on the other hand, the power transfer to other parts of the surface was much reduced. Hence, the average electron energies were reduced as well. Thus, the point at approximately 7 g dl⁻¹ NaCl seems to represent the optimal point of increased conductivity and dielectric permittivity of the wood substrate, as indicated by the resistance and capacitance measurements. The same concentration seems to be optimal for NaCl crystal appearance and for the localization of plasma streamers at the crystals' sharp edges during the PT process.

Comparison of electron energies evaluated from optical emission spectra with the light intensity of the 337.1 nm line and the sum light emission during the treatment of unmodified and modified beech samples

Morphology of modified and treated surfaces
The increase in the arithmetic mean roughness of the glass surface with increasing concentration of the applied NaCl aqueous solutions was also reflected in the surface roughness of the modified wood (Fig. 12). Additional treatment with plasma slightly increased the Sa of the surfaces; however, the accuracy of the roughness measurement on the same spots before and after PT, as well as the accuracy of the microscopic technique used, may also play a role here. Selected topographical maps of unmodified and treated wood surfaces, and of wood surfaces modified with the 26.47% ("36") NaCl solution and treated with plasma, are depicted at the bottom of Fig. 12. Here, larger crystals of NaCl are noticeable. Different colors correspond to different surface heights.

Absolute changes of roughness parameter Sa, measured at fivefold magnification: wood roughness after modification with NaCl solutions and additional PT (columns), and roughness of the glass surface with applied NaCl solutions (dots). Bottom row: 3-dimensional topographical images at fivefold magnification, as follows: a spruce unmodified + treated, b spruce modified ("S-36") + treated, c beech unmodified + treated, d beech modified ("B-36") + treated

Coating contact angles
The CAs of coating droplets, detected 2 s after application on untreated and plasma-treated wood samples, are presented in Fig. 13. In general, the wettability with the coating was improved after PT of the wood. The detected initial coating CAs were comparable on unmodified wood surfaces and on wood surfaces modified with NaCl at lower concentrations (1.77–6.72%). Higher concentrations of NaCl (15.25 and 26.47%) on the wood surfaces increased the coating CAs. The reason for this is most probably the increased surface roughness (presence of crystals), which prevented the coating droplets from spreading over the surface of the substrates. Considering the span of the error bars for each column, no large differences were observed between spruce and beech wood.

Initial (2 s after application) contact angles of coating droplets applied on unmodified and modified untreated (UT) and plasma-treated (PT) samples
A similar trend was observed for the electrical capacitance of modified wood, which increased with the amount of NaCl present inside the material. Again, no further increase was detected for wood modified with the two most concentrated solutions. Microscopic analysis of wood samples' outer layers showed that the amount of NaCl linearly decreased with distance from modified surface. The penetration depth of NaCl was determined to be about 1 mm. The presence of Na+ and Cl– ions in wood influenced the treatment process with FE-DBD plasma, generated in air at atmospheric pressure. With the higher amount of NaCl in the wood substrate, the discharge in the gap between wood and insulated electrode became more intense and homogenous. Wood arithmetic mean surface roughness increased after application of NaCl. The subsequent treatment of wood with plasma most probably additionally promoted the span of the samples' surface morphologies. Wettability of wood with water-borne coating was enhanced after PT, regardless of the presence of NaCl on the surface. However, the highest concentration of NaCl on wood made its surface less acceptable for wetting with coating. It was shown that electrical conductivity of wood can be improved with addition of NaCl in its structure. The modification of wood with NaCl might have a good potential not only by treatment processes of wood with plasma discharges, but also in other engineering applications of wood or any other lignocellulosic materials. Barauna de Oliveira JBF, Pereira CS, Gonçalves IA, Vitoriano de Oliveira J, Alves JC (2017) Sodium chloride crystallization by electric discharge in brine. Mater Res 20:215–220. https://doi.org/10.1590/1980-5373-mr-2017-0108 Barhoumi Z, Djebali W, Smaoui A, Chaibi W, Abdelly C (2007) Contribution of NaCl excretion to salt resistance of Aeluropus littoralis (Willd) Parl. J Plant Physiol 164:842–850. https://doi.org/10.1016/j.jplph.2006.05.008 CASArticlePubMed Google Scholar Belmonte T, Noël C, Gries T, Martin J, Henrion G (2015) Theoretical background of optical emission spectroscopy for analysis of atmospheric pressure plasmas. Plasma Sources Sci Technol 24(6):29. https://doi.org/10.1088/0963-0252/24/6/064003 CASArticle Google Scholar Bogosanovic M, Anbuky AA, Emms GW (2010) Overview and comparison of microwave noncontact wood measurement techniques. J Wood Sci 56:357–365. https://doi.org/10.1007/s10086-010-1119-0 Brischke C, Lampen SC (2014) Resistance based moisture content measurements on native, modified and preservative treated wood. Eur J Wood Prod 72:289–292. https://doi.org/10.1007/s00107-013-0775-3 Brischke C, Rapp AO, Bayerbach R (2008) Measurement system for long-term recording of wood moisture content with internal conductively glued electrodes. Build Sci 43:1566–1574. https://doi.org/10.1016/j.buildenv.2007.10.002 Coburn JW, Chen M (1980) Optical emission spectroscopy of reactive plasmas: a method for correlating emission intensities to reactive particle density. J Appl Phys 51:3134–3136. https://doi.org/10.1063/1.328060 Conrads H, Schmidt M (2000) Plasma generation and plasma sources. Plasma Sources Sci Technol 9:441–454. https://doi.org/10.1088/0963-0252/9/4/301 Daian G, Taube A, Birnboim A, Shramkov Y, Daian M (2005) Measuring the dielectric properties of wood at microwave frequencies. Wood Sci Technol 39:215–223. 
Daian G, Taube A, Birnboim A, Daian M, Shramkov Y (2006) Modeling the dielectric properties of wood. Wood Sci Technol 40:237–246. https://doi.org/10.1007/s00226-005-0060-7
De Cademartori GHP, Muniz BIG, Magalhães ELW (2015) Changes of wettability of medium density fiberboard (MDF) treated with He-DBD plasma. Holzforschung 69(2):187–192. https://doi.org/10.1515/hf-2014-0017
Goncz B, Divos F, Bejo L (2018) Detecting the presence of red heart in beech (Fagus sylvatica) using electrical voltage and resistance measurements. Eur J Wood Prod 76:679–686. https://doi.org/10.1007/s00107-017-1225-4
Goueguel C, Singh PJ, McIntyre LD, Jain J, Karamalidis KA (2014) Effect of sodium chloride concentration on elemental analysis of brines by laser-induced breakdown spectroscopy (LIBS). Appl Spectrosc 68(2):213–221. https://doi.org/10.1366/13-07110
Graziola F, Girardi F, Di Maggio R, Callone E, Miorin E, Negri M, Müller K, Gross S (2012) Three-components organic–inorganic hybrid materials as protective coatings for wood: optimisation, synthesis, and characterization. Prog Org Coat 74:479–490. https://doi.org/10.1016/j.porgcoat.2012.01.013
Hagelaar GJM, Pitchford LC (2005) Solving the Boltzmann equation to obtain electron transport coefficients and rate coefficients for fluid models. Plasma Sources Sci Technol 14:722–733. https://doi.org/10.1088/0963-0252/14/4/011
Hertel H (1997) Protection of wood against the house longhorn beetle Hylotrupes bajulus with sodium chloride and potassium chloride. Pestic Sci 49:307–312. https://doi.org/10.1002/(SICI)1096-9063(199703)49:3%3c307::AID-PS522%3e3.0.CO;2-8
Hou X, Jones BT (2000) Inductively coupled plasma/optical emission spectrometry. In: Meyers RA (ed) Encyclopedia of analytical chemistry. Wiley, Chichester, pp 9468–9485
Hou CH, Huang CY, Hu CY (2013) Application of capacitive deionization technology to the removal of sodium chloride from aqueous solutions. Int J Environ Sci Technol 10:753–760. https://doi.org/10.1007/s13762-013-0232-1
Huang J, Zhao B, Liu T, Mou J, Jiang Z, Liu J, Li H, Liu M (2019) Wood-derived materials for advanced electrochemical energy storage devices. Adv Funct Mater 1902255:23. https://doi.org/10.1002/adfm.201902255
Jnido G, Ohms G, Viöl W (2019) Deposition of TiO2 thin films on wood substrate by an air atmospheric pressure plasma jet. Coatings 9(7):441. https://doi.org/10.3390/coatings9070441
Kabir MF, Daud WM, Khalid K, Sidek HAA (1998) Dielectric and ultrasonic properties of rubber wood. Effect of moisture content, grain direction and frequency. Holz Roh- Werkst 56:223–227. https://doi.org/10.1007/s001070050305
Konopka A, Barański J, Orłowski K, Szymanowski K (2018) The effect of full-cell impregnation of pine wood (Pinus sylvestris L.) on changes in electrical resistance and on the accuracy of moisture content measurement using resistance meters. BioResources 13(1):1360–1371. https://doi.org/10.15376/biores.13.1.1360-1371
Koubaa A, Perré P, Hutcheon MR, Lessard J (2008) Complex dielectric properties of the sapwood of aspen, white birch, yellow birch, and sugar maple. Dry Technol 26(5):568–578. https://doi.org/10.1080/07373930801944762
Král P, Ráhel J, Stupavská M, Šrajer J, Klímek P, Mishra KP, Wimmer R (2015) XPS depth profile of plasma-activated surface of beech wood (Fagus sylvatica) and its impact on polyvinyl acetate tensile shear bond strength. Wood Sci Technol 49:319–330. https://doi.org/10.1007/s00226-014-0691-7
Kuchenbecker M, Bibinov N, Kaemlimg A, Wandke D, Awakowicz P, Viöl W (2009) Characterization of DBD plasma source for biomedical applications. J Phys D 42:045212. https://doi.org/10.1088/0022-3727/42/4/045212
Laux CO, Spence TG, Kruger CH, Zare RN (2003) Optical diagnostics of atmospheric pressure air plasmas. Plasma Sources Sci Technol 12:125–138. https://doi.org/10.1088/0963-0252/12/2/301
Lesar B, Gorišek Ž, Humar M (2009) Sorption properties of wood impregnated with boron compounds, sodium chloride and glucose. Dry Technol 27(1):94–102. https://doi.org/10.1080/07373930802565947
Levasseur O, Bouarouri A, Naudé N, Clergereaux R, Gherardi N, Stafford L (2014) Organization of dielectric barrier discharges in the presence of structurally inhomogeneous wood substrates. IEEE Trans Plasma Sci 42(10):2366–2367. https://doi.org/10.1109/TPS.2014.2321518
Lima FL, Vieira LA, Mukai M, Andrade GMC, Fernandes GRP (2017) Electric impedance of aqueous KCl and NaCl solutions: salt concentration dependence on components of the equivalent electric circuit. J Mol Liq 241:530–539. https://doi.org/10.1016/j.molliq.2017.06.069
Norimoto M (1976) Dielectric properties of wood. Wood Res Bull Wood Research Inst Kyoto Univ 59(60):106–152
Novák I, Sedlačik J, Kleinová A, Matyašovský J, Jurkovič P (2018a) Oak wood pre-treated by cold plasma. Ann WULS SGGW For Wood Technol 104:167–168
Novák I, Sedlačik J, Kleinová A, Matyašovský J, Jurkovič P (2018b) Discharge plasma treatment of wood surfaces. Ann WULS SGGW For Wood Technol 104:169–173
Olmi R, Bini M, Ignesti A, Riminesi C (2010) Dielectric properties of wood from 2 to 3 GHz. J Microw Power 35(3):135–143. https://doi.org/10.1080/08327823.2000.11688430
Otten KA, Birschke C, Meyer C (2017) Material moisture content of wood and cement mortars—electrical resistance-based measurements in the high ohmic range. Constr Build Mater 153:640–646. https://doi.org/10.1016/j.conbuildmat.2017.07.090
Oukach S, Hamdi H, El Ganaoui M, Pateyron B (2020) Protective plasma sprayed coating for thermo-sensitive substrates. In: MATEC web of conferences, vol 307, p 01039. https://doi.org/10.1007/s11666-019-00857-1
Pancheshnyi S (2006) Comments on 'Intensity ratio of spectral bands of nitrogen as a measure of electric field strength in plasmas.' J Phys D 39:1708. https://doi.org/10.1088/0022-3727/39/8/N01
Pařil P, Dejmal A (2014) Moisture absorption and dimensional stability of poplar wood impregnated with sucrose and sodium chloride. Maderas Ciencia y tecnología 16(3):299–311. https://doi.org/10.4067/S0718-221X2014005000023
Paris P, Aints M, Valk F, Plank T, Haljaste A, Kozlov KV, Wagner H-E (2005) Intensity ratio of spectral bands of nitrogen as a measure of electric field strength in plasmas. J Phys D 38:3894. https://doi.org/10.1088/0022-3727/38/21/010
Paris P, Aints M, Valk F, Plank T, Haljaste A, Kozlov KV, Wagner H-E (2006) Reply to comments on 'Intensity ratio of spectral bands of nitrogen as a measure of electric field strength in plasmas.' J Phys D 39:2636. https://doi.org/10.1088/0022-3727/39/12/N01
Peng X-R, Zhang Z-K (2019) Improvement of paint adhesion of environmentally friendly paint film on wood surface by plasma treatment. Prog Org Coat 134:255–263. https://doi.org/10.1016/j.porgcoat.2019.04.024
Pentoś K, Łuczycka D, Wysoczański T (2017) Dielectric properties of selected wood species in Poland. Wood Res 62(5):727–736
Pitchford CL (2013) GEC plasma data exchange project. J Phys D 46:330301. https://doi.org/10.1088/0022-3727/46/33/330301
Pivovarova NB, Andrews SB (2013) Measurement of total calcium in neurons by electron probe X-ray microanalysis. J Vis Exp 81:e50807. https://doi.org/10.3791/50807
Pouzet M, Dubois M, Charlet K, Békou A, Leban JM, Baba M (2019) Fluorination renders the wood surface hydrophobic without any loss of physical and mechanical properties. Ind Crops Prod 133:133–141. https://doi.org/10.1016/j.indcrop.2019.02.044
Razafindratsima S, Sbartaḯ ZM, Demontoux F (2017) Permittivity measurement of wood material over a wide range of moisture content. Wood Sci Technol 51:1421–1431. https://doi.org/10.1007/s00226-017-0935-4
Rehn P, Viöl W (2003) Dielectric barrier discharge treatments at atmospheric pressure for wood surface modification. Holz Roh Werkst 61:145–150. https://doi.org/10.1007/s00107-003-0369-6
Rémond R, Cipriano ABL, Almeida G (2017) Moisture transport and sorption in beech and spruce barks. Holzforschung 72(2):105–111. https://doi.org/10.1515/hf-2017-0066
Romanov AN (2006) The effect of volume humidity and the phase composition of water on the dielectric properties of wood at microwave frequencies. J Commun Technol Electron 51(4):435–439. https://doi.org/10.1134/S1064226906040115
Sahin H, Ay N (2004) Dielectric properties of hardwood species at microwave frequencies. J Wood Sci 50:375–380. https://doi.org/10.1007/s10086-003-0575-1
Şahin Kol H (2009) Thermal and dielectric properties of pine wood in the transverse direction. BioResources 4(4):1663–1669
Sikder SS, Uddin KA, Rahman MM, Bhuiyan AH (2009) Effect of salinity on dynamic dielectric properties of Sundari wood of Bangladesh. Bangladesh J Phys 7 & 8:55–61
Simpson W, TenWolde A (1999) Physical properties and moisture relations of wood. In: Wood handbook: wood as an engineering material. USDA Forest Service, Forest Products Laboratory, General Technical Report FPL-GTR-113, Madison, pp 3.1–3.24
SIST EN 13183-1 (2003) Round and sawn timber—method of measurement of moisture content—Part 1: method for determining moisture content of a piece of sawn timber (oven dry method). Standard, Slovenian Institute for Standardization
Stamm AJ (1929) The fiber-saturation point of wood as obtained from electrical conductivity measurements. Ind Eng Chem Anal Ed 1(2):94–97. https://doi.org/10.1021/ac50066a021
Stamm AJ (1931) An electrical conductivity method for determining the effective capillary dimensions of wood. J Phys Chem 36(1):312–325. https://doi.org/10.1021/j150331a021
Suleman YH, Rashid SH (1997) Chemical treatment to improve wood finishing. Wood Fiber Sci 31(3):300–305
Tandy J, Feng C, Boatwright A, Sarma G, Sadoon AM, Shirley A, Rodrigues NDN, Cunningham EM, Yang S, Ellis AM (2016) Communication: Infrared spectroscopy of salt-water complexes. J Chem Phys 144:121103. https://doi.org/10.1063/1.4945342
Torgovnikov G (1992) Dielectric properties of wood and wood-based materials. Springer, Berlin, p 196
Williams RS, Feist WC (1985) Wood modified by inorganic salts: mechanism and properties. I. Weathering rate, water repellency, and dimensional stability of wood modified with chromium nitrate versus chromic acid. Wood Fiber Sci 17(2):184–198
Young T (1805) An essay on the cohesion of fluids. Philos Trans R Soc 95:65–87. https://doi.org/10.1098/rstl.1805.0005
Yuan X, Jayaraman K, Bhattacharyya D (2004) Effects of plasma treatment in enhancing the performance of woodfibre-polypropylene composites. Compos Part A 35(12):1363–1374. https://doi.org/10.1016/j.compositesa.2004.06.023
Zelinka SL, Wiedenhoeft AC, Glass SV, Ruffinatto F (2015) Anatomically informed mesoscale electrical impedance spectroscopy in southern pine and the electric field distribution for pin-type electric moisture metres. Wood Mater Sci Eng 10(2):189–196. https://doi.org/10.1080/17480272.2014.934282
Zhou J, Zhou H, Hu C, Hu S (2013) Measurements of thermal and dielectric properties of medium density fiberboard with different moisture contents. BioResources 8(3):4185–4192. https://doi.org/10.15376/biores.8.3.4185-4192
Žigon J, Dahle S (2019) Improvement of plasma treatment efficiency of wood and coating process by sodium chloride aqueous solutions. Proligno 15(4):260–267
Žigon J, Petrič M, Dahle S (2018) Dielectric barrier discharge (DBD) plasma pretreatment of lignocellulosic materials in air at atmospheric pressure for their improved wettability: a literature review. Holzforschung 72(11):972–991. https://doi.org/10.1515/hf-2017-0207
Žigon J, Petrič M, Dahle S (2019a) Artificially aged spruce and beech wood surfaces reactivated using FE-DBD atmospheric plasma. Holzforschung 73(12):1069–1081. https://doi.org/10.1515/hf-2019-0005
Žigon J, Petrič M, Ayata Ü, Zaplotnik R, Dahle S (2019b) The influence of artificial weathering and treatment with FE-DBD plasma in atmospheric conditions on wettability of wood surfaces. In: Proceedings of 3. Niedersächsisches Symposium Materialtechnik, 14–15 February 2019, Clausthal-Zellerfeld, Shaker, p 16
The authors acknowledge the financial support of the Slovenian Research Agency (research program funding No. P4-0015, "Wood and lignocellulosic composites"). This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 745936. The help of Rok Zaplotnik from the Jožef Stefan Institute, Ljubljana, Slovenia, and Julia Mrotzek from the University of Applied Sciences and Arts, Göttingen, Germany, with the OES measurements is also acknowledged.
Department of Wood Science and Technology, Biotechnical Faculty, University of Ljubljana, Jamnikarjeva 101, 1000 Ljubljana, Slovenia: Jure Žigon, Marko Petrič & Sebastian Dahle
Correspondence to Jure Žigon. On behalf of all authors, the corresponding author states that no conflict of interest exists. The original online version of this article was revised due to a retrospective Open Access order.
The hygroscopic and electrical properties of the wood surface of Norway spruce (Picea abies (L.) Karst.) and common beech (Fagus sylvatica L.) were altered by the application of differently concentrated NaCl aqueous solutions. The presence of Na+ and Cl– ions increased the equilibrium moisture content of both woods in environments ranging from 75 % relative humidity to a nearly saturated state.
The electrical resistance of the wood decreased, while the electrical capacitance of the wood increased with increasing amounts of NaCl introduced. Inverse trends were observed for both properties in wood modified with the two most concentrated solutions (18 and 36 % molality). Microscopic analysis of the outer layers of the wood samples using scanning electron microscopy and energy-dispersive X-ray spectroscopy showed that the amount of NaCl decreased linearly up to about 1 mm from the modified surface. The presence of Na+ and Cl– ions in the wood increased the intensity and improved the homogeneity of the plasma discharge generated during treatment of the samples in air at atmospheric pressure. Both the modification of wood with NaCl and the subsequent treatment with plasma increased the surface roughness of the substrates. Finally, it was shown that the wettability of wood with a waterborne coating was improved after plasma treatment, regardless of the presence of NaCl on the surface. These findings have good potential not only for the study of surface treatment processes of wood with plasma discharges, but also for other technical applications of lignocellulosic materials.
The objective of this research was to improve the effect of PT on wood surfaces pre-modified with NaCl. It was hypothesized that the combined effect of both treatments could further improve the wettability of wood with a coating. As schematically presented in Fig. 1, the idea was to improve the electrical conductivity of wood by incorporating additional ions into the wood structure and, consequently, to influence the PT process. The wood of Norway spruce (Picea abies (L.) Karst.) and common beech (Fagus sylvatica L.) was modified by the application of NaCl aqueous solutions of various concentrations. First, the effect of the presence of NaCl in the wood on its sorption properties was evaluated from the dry to the saturated state. The electrical properties of the modified wood were determined via measurements of its resistance and capacitance. The distribution of the introduced NaCl with depth in the wood was studied by scanning electron microscopy and energy-dispersive X-ray spectroscopy analysis. The electrical properties of the wood were also evaluated indirectly via the appearance of the discharge during the PT process; this included the study of discharge intensity and homogeneity, as well as of the properties of the emitted light by optical emission spectroscopy. The effects of the presence of NaCl and of the plasma treatment on the wood surface morphology were studied by confocal laser scanning microscopy. Finally, the possible enhancement of the wettability of wood with a protective water-based surface coating was evaluated with contact angle measurements.
Solutions of NaCl (purity ≥ 99.5 %, Honeywell, Charlotte, North Carolina, USA) in deionized water were prepared at five different mixing ratios (Table 1) and stirred until the solute had completely dissolved. The mass fraction w of NaCl in each solution was calculated as follows:
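A minimal sketch of the standard mass-fraction definition assumed here, with w expressed from the mass of NaCl and the mass of deionized water weighed in for each mixing ratio:

$$ w = \frac{m_{\mathrm{NaCl}}}{m_{\mathrm{NaCl}} + m_{\mathrm{H_2O}}} \times 100\,\% $$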
Optical emission spectroscopy (OES) is a very popular tool for the diagnosis of reactive plasmas, since it can be performed without physical contact with the plasma. The emissions of interest originate from photons emitted by electronically excited states of the active plasma species (molecules, atoms and ions). The intensity of the optical emission is determined by both the density of the plasma species involved and the electron energy distribution function (Coburn and Chen 1980; Hou and Jones 2000). The optical spectra emitted during treatment of unmodified and modified beech wood samples (5 per type of material) were measured with an Avantes AvaSpec-3648 optical spectrometer (Avantes BV, Apeldoorn, the Netherlands) with a 3648-pixel CCD detector array and 75 cm focal length. The gap distance between the treated sample surfaces and the dielectric of the plasma device was set to 1 mm, while the optical lens was placed 10 mm from the generated discharge. Spectra were recorded with an integration time of 2 s and a resolution of 0.5 nm in the spectral range from 200 to 1100 nm.
It is known that hardwoods have a higher relative permittivity than softwoods (Pentoś et al. 2017). The introduction of NaCl into the wood surfaces had the opposite influence on the samples' electrical capacitance to that on their electrical resistance, as the capacitance increased with the amount of introduced NaCl (Fig. 5). Solutions with concentrations of up to 6.72 % ("7.2") caused an increase in the electrical capacitance, but the solution with a concentration of 26.47 % ("36") no longer contributed to a higher capacitance, although the amount of NaCl in the wood was even higher. It is assumed that the reason for this is the larger NaCl crystals remaining on the surface (pictures shown in Sect. "3.6"), which cause air gaps with low capacitance between the steel plates of the LCR meter and the wood sample.
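The air-gap explanation can be illustrated with a standard series-capacitor relation (a textbook sketch, not taken from the measurements themselves): treating the air gap and the NaCl-modified wood between the LCR-meter plates as two capacitors in series,

$$ \frac{1}{C_{\mathrm{meas}}} = \frac{1}{C_{\mathrm{air}}} + \frac{1}{C_{\mathrm{wood}}}, \qquad C_i = \frac{\varepsilon_0\,\varepsilon_{r,i}\,A}{d_i}, $$

so even a thin gap with $\varepsilon_{r,\mathrm{air}} \approx 1$ gives a small $C_{\mathrm{air}}$ that limits the measured capacitance, regardless of how much NaCl is present in the wood beneath.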
Efficacy and speed of kill of a topically applied formulation of dinotefuran-permethrin-pyriproxyfen against weekly tick infestations with Rhipicephalus sanguineus (sensu lato) on dogs
Jeffrey Blair, Josephus J. Fourie, Marie Varloud & Ivan G. Horak
Rhipicephalus sanguineus (sensu lato) is a vector of canine babesiosis, anaplasmosis and ehrlichiosis. In order to reduce the chance of transmission of these diseases, an ectoparasiticide should rapidly repel or kill new infestations with this tick. The primary objective of the present study was to evaluate the treatment and preventive acaricidal efficacy of Vectra® 3D (54.45 mg/ml of dinotefuran, 396.88 mg/ml of permethrin and 4.84 mg/ml of pyriproxyfen) against R. sanguineus (s.l.), measured at 2, 8 and 48 h after treatment and after weekly re-infestation.
Twenty-four dogs were each infested with 50 adult R. sanguineus (s.l.) on Day -7 and allocated to three groups (n = 8) based on tick counts: an untreated control group (Group 1) and two groups (Groups 2 and 3) treated with Vectra® 3D. The dogs in each group were infested with 50 ticks on Day -2. Vectra® 3D was administered topically to the dogs on Day 0. Ticks were counted in situ at 2 and 8 h after treatment on dogs in Groups 1 and 3. Group 3 was then withdrawn from the study, and ticks were counted and removed from the dogs in Groups 1 and 2, 48 h after treatment. On Days 7, 14, 21, 28, 35 and 42, the dogs in Groups 1 and 2 were re-infested with 50 ticks, which were then counted in situ at 2 and 8 h, and counted and removed at 48 h after re-infestation.
Ticks from the initial infestation were visually unaffected by 2 and 8 h after treatment. However, by 2 h after weekly re-infestation the arithmetic mean (AM) efficacy of Vectra® 3D from Days 7 through 28 ranged from 61.1 to 78.8 %, falling to 60.1 and 47.4 % on Days 35 and 42, respectively. By 8 h after weekly re-infestation, the AM efficacy ranged from 89.1 to 97.4 %, falling to 81.4 and 69.8 % on Days 35 and 42, respectively. The AM efficacy 48 h after treatment (against the initial infestation) was 22.9 %, but after weekly re-infestation the efficacy at 48 h ranged from 89.1 to 100.0 %, falling to 86.0 and 81.1 % on Days 35 and 42, respectively.
Vectra® 3D demonstrated significant efficacy against new infestations of adult R. sanguineus (s.l.) ticks within 2 h of infestation compared with the untreated control group, and achieved over 89.1 % efficacy within 8 h of infestation for up to 4 weeks after administration. These results indicate that Vectra® 3D has a rapid and significant efficacy against new infestations of adult R. sanguineus (s.l.) ticks and should therefore be considered as part of a strategy against important vector-borne diseases in dogs.
The topical formulation (Vectra® 3D, DPP) used in this study is a combination of 54.45 mg/ml of dinotefuran, 396.88 mg/ml of permethrin and 4.84 mg/ml of pyriproxyfen with a broad spectrum of activity against external parasites of dogs. Permethrin, the primary acaricidal component of this formulation, is a photostable synthetic pyrethroid with a relatively long residual activity that prevents the closure of the sodium channels, leaving the nerve cell membrane in a permanent state of depolarization [1]. It is this mode of action that results in the sudden "knock down" effect on pests and especially the "hot foot" reaction of ticks coming into contact with treated dogs. Moreover, permethrin is also an arthropod repellent [2].
Dinotefuran is a fast-acting furanicotinyl insecticide belonging to the most recent generation of neonicotinoids [3], and pyriproxyfen is an insect growth regulator that targets and disrupts the reproductive and endocrine systems of insects [4].
Rhipicephalus sanguineus (sensu lato) is a three-host tick species, and with few exceptions its larvae, nymphs and adults feed almost exclusively on domestic dogs [5–8]. It is the most widespread tick in the world [6]. Female ticks may deposit eggs under a dog's bedding or in nearby sheltered spots, or they may crawl up surrounding structures and lay eggs in cracks and crevices in these structures, which may also be used by the larvae and nymphs [6, 9]. Dogs that are caged, chained or kennelled may become particularly heavily infested [8, 10], and all stages of development can be present simultaneously on the same dog [10, 11].
Ticks are vectors of many bacterial and protozoal diseases in dogs. R. sanguineus (s.l.) has been confirmed or implicated as the vector of the bacterial agents Ehrlichia canis, Anaplasma platys, Rickettsia rickettsii and Rickettsia conorii, and of the protozoal organisms Babesia vogeli and Hepatozoon canis [12]. The two most important diseases in dogs caused by organisms transmitted by R. sanguineus (s.l.) are canine monocytic ehrlichiosis caused by E. canis and canine babesiosis caused by B. vogeli [7]. Although data on the minimal time required for transmission of these pathogens are scarce, it is accepted that the times required to transmit these two diseases are very different. Transmission of protozoan parasites such as Babesia generally requires at least 24 to 48 h after tick attachment, in order for their sporoblasts to mature into sporozoites in the salivary glands of the tick [13, 14]. In contrast, bacterial pathogens such as E. canis are transmitted by R. sanguineus (s.l.) much more quickly, within a few hours after attachment [15]. Consequently, in order to significantly reduce the risk of tick-borne pathogens, a product must demonstrate a rapid onset of acaricidal and/or repellent activity, preferably within a few hours.
The primary objective of the present study was to evaluate the curative and preventive acaricidal efficacy of a DPP combination against R. sanguineus (s.l.), measured at 2, 8 and 48 h after treatment and after weekly re-infestation.
The study was a parallel-group, blinded, randomized, single-centre, controlled efficacy study. It was conducted by an independent contract laboratory facility in South Africa in accordance with the International Cooperation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH) guideline 9 entitled 'Good Clinical Practice'. All procedures were in compliance with the South African Animal Welfare Act Regulations 'The care and use of animals for scientific purposes', and the protocol was approved by the local animal ethics committee.
The 24 dogs enrolled in the investigation were mongrels of both sexes, older than six months, and weighed between 10.4 and 22.8 kg. All dogs were dewormed prior to the start of the study and were acclimatized to the kennel environment for seven days before treatment. The animals were housed individually for the duration of the study in an indoor/outdoor run that conformed to accepted animal welfare guidelines, and no physical contact between dogs was possible. They were fed once a day according to the food manufacturer's recommendations, and water was available ad libitum.
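As a back-of-the-envelope check that is not part of the published analysis, the nominal per-dog dose can be related to the label concentrations and this body-weight range; the 3.6 ml pipette volume used below is the one reported in the treatment description that follows, and the per-kilogram figures are simple arithmetic rather than values taken from the paper.

```python
# Hypothetical dose check: relate the stated concentrations (mg/ml) of the DPP
# formulation to the 3.6 ml pipette volume and the 10.4-22.8 kg weight range.
CONCENTRATIONS_MG_PER_ML = {
    "dinotefuran": 54.45,
    "permethrin": 396.88,
    "pyriproxyfen": 4.84,
}
PIPETTE_VOLUME_ML = 3.6          # volume applied per dog in this study
WEIGHT_RANGE_KG = (10.4, 22.8)   # lightest and heaviest dog enrolled

for active, conc in CONCENTRATIONS_MG_PER_ML.items():
    total_mg = conc * PIPETTE_VOLUME_ML
    low = total_mg / WEIGHT_RANGE_KG[1]   # heaviest dog -> lowest mg/kg
    high = total_mg / WEIGHT_RANGE_KG[0]  # lightest dog -> highest mg/kg
    print(f"{active}: {total_mg:.0f} mg per dog, {low:.1f}-{high:.1f} mg/kg")
```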
The study design is summarized in Table 1. A laboratory-bred strain (U.S. origin) of R. sanguineus (s.l.) was used throughout the investigation. Ticks used for all infestations were unfed, at least one week old and had a balanced sex ratio (50 % female : 50 % male). Seven days before treatment all the dogs were infested with 50 adult R. sanguineus. Forty-eight hours after infestation the ticks were counted and removed, and the dogs were ranked within sex in descending order of individual pre-treatment tick counts and subsequently blocked into eight blocks of three animals each. From each block, dogs were randomly allocated to three groups of eight, and the groups were coded to blind the investigators performing the post-treatment assessments. All dogs were infested on Day -2, and dogs in Groups 2 and 3 were treated on Day 0 while dogs in Group 1 served as untreated controls. In situ counts were performed on dogs in Groups 1 and 3 on the day of treatment, and thereafter the dogs in Group 3 were withdrawn from the study. The dogs in Group 3 were included in the study because of the possibility of unintentional manual removal of the DPP formulation, while the product was still drying, during the in situ tick counts performed at 2 h after administration. Dogs in Groups 1 and 2 continued in the study, with infestations performed at weekly intervals from Day 7 through Day 42. Ticks on the dogs in Groups 1 and 2 were counted in situ at 2 h and 8 h after each weekly re-infestation, and counted and removed 48 h after treatment on Day 0 and after each weekly re-infestation from Day 7 to Day 42 (Table 1). Ticks that were removed 48 h after treatment or re-infestation were categorized according to their attachment, engorgement and viability status at the time of removal, according to the parameters listed in Table 2 [16].
Table 1 Design of a study to determine the efficacy and speed of kill of DPP against adult R. sanguineus (s.l.)
Table 2 Status of adult R. sanguineus (s.l.) removed from dogs 48 h after treatment with DPP on Day 0 and after weekly re-infestation from Day 7 to Day 42
Treatment was administered by parting the hair and applying the appropriate volume (3.6 ml) of DPP directly onto the skin in a continuous line from the base of the tail along the middle of the back to between the shoulder blades, according to the label instructions. The time at which treatment was administered to each animal and the time at which it was infested with ticks were recorded. This was done to ensure that the in situ counting of ticks at 2 h (± 5 min) or 8 h (± 30 min) after treatment or re-infestation, and the counting and removal of ticks at 48 h (± 2 h) after treatment or re-infestation, were accomplished as close as possible to the specified target times. During in situ counts, ticks were found by direct observation following parting of the hair and by palpation. During removal counts the same procedure was followed, but ticks were removed upon counting and the dogs were also combed to ensure that all ticks had been counted and removed.
The primary assessment criterion was the number of ticks counted on the control and treated groups of dogs at the various assessment times and days, with efficacy calculations based on geometric (GM) and arithmetic (AM) means. Geometric means were calculated using the tick count data + 1, and 1 was subsequently subtracted from the result to obtain a meaningful mean value for each group. Efficacy of the DPP formulation against adult R. sanguineus (s.l.) at 2, 8 and 48 h after treatment or infestation was calculated as follows:
$$ \mathrm{Efficacy}\ (\%) = 100 \times \frac{M_{\mathrm{c}} - M_{\mathrm{t}}}{M_{\mathrm{c}}} $$
Mc = mean number of live ticks (categories 1, 2, 3 and 6) on dogs in the untreated control group (Group 1) at a specific time point.
Mt = mean number of live ticks (categories 1, 2, 3 and 6) on dogs in the treated groups (Groups 2 and 3) at a specific time point.
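As an illustration of how these summary statistics can be reproduced from raw counts, the following is a minimal Python sketch; the count arrays are invented placeholders, and the +1/−1 geometric-mean convention and the non-parametric group comparison described in the statistics paragraph that follows are implemented as stated, not taken from the authors' actual analysis code.

```python
import numpy as np
from scipy import stats

def geometric_mean(counts):
    """Geometric mean computed on counts + 1, with 1 subtracted afterwards,
    as described for the tick-count data (this also handles zero counts)."""
    counts = np.asarray(counts, dtype=float)
    return np.exp(np.mean(np.log(counts + 1.0))) - 1.0

def efficacy(control_counts, treated_counts, mean_fn):
    """Percent efficacy = 100 * (Mc - Mt) / Mc for a chosen mean (AM or GM)."""
    m_c = mean_fn(control_counts)
    m_t = mean_fn(treated_counts)
    return 100.0 * (m_c - m_t) / m_c

# Hypothetical live-tick counts for the 8 control and 8 treated dogs at one time point
control = [28, 31, 24, 27, 30, 25, 29, 26]
treated = [2, 0, 1, 3, 0, 2, 1, 0]

print("AM efficacy: %.1f %%" % efficacy(control, treated, np.mean))
print("GM efficacy: %.1f %%" % efficacy(control, treated, geometric_mean))

# Non-parametric comparison of untransformed counts (two-sided Mann-Whitney U test)
u_stat, p_value = stats.mannwhitneyu(control, treated, alternative="two-sided")
print("Mann-Whitney U = %.1f, p = %.4f" % (u_stat, p_value))
```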
Comparisons of tick counts between groups were conducted using a one-way ANOVA with an administration effect (P < 0.05). In addition, the groups were compared by a non-parametric analysis using the Mann-Whitney test on untransformed tick counts. Ticks in category 6 were included in the theoretical calculation because, if found, such ticks would have succeeded in engorging before they were killed; in this study, however, no ticks were classified as category 6.
The GM number of ticks on the eight dogs in the untreated control group varied between 24.0 and 31.5 in the counts conducted 48 h after treatment or weekly re-infestation, demonstrating that an adequate level of infestation was achieved in the control group. The efficacy of a single topical application of DPP against adult R. sanguineus (s.l.) at 2, 8 and 48 h after treatment or weekly re-infestation is summarized in Fig. 1. Efficacy against a well-established infestation was not demonstrated at 2, 8 or 48 h after treatment in this study. However, after weekly re-infestation on Days 7, 14, 21, 28, 35 and 42, the efficacy calculated with GM for the 2 h counts was 68.9, 63.0, 72.2, 81.7, 65.8 and 51.8 %, respectively (Table 3). By 8 h after infestation, efficacy had increased to 92.3, 92.2, 94.7, 98.1, 91.2 and 75.0 % on Days 7, 14, 21, 28, 35 and 42, respectively (Table 4). At 48 h after infestation on Days 7, 14, 21, 28, 35 and 42, the calculated efficacies were 93.5, 99.2, 99.1, 100.0, 90.4 and 87.8 %, respectively (Table 5). There was a significant difference between the control and the treated groups in all counts conducted after Day 7 (P < 0.005).
Fig. 1 Summary of calculated geometric mean efficacy of DPP against adult R. sanguineus (s.l.)
Table 3 Mean tick counts and percent efficacy of DPP at 2 hours after treatment or infestation against adult R. sanguineus (s.l.)
Table 5 Mean tick counts and percent efficacy of DPP at 48 hours after treatment or infestation against adult R. sanguineus (s.l.)
One of the most important aspects of a rapid speed of kill for an acaricide is the prevention of tick-transmitted diseases. In order to achieve this goal, an acaricide should prevent the attachment of ticks or rapidly kill them as soon as they access the dog. Consistent with a previous experiment [9], therapeutic efficacy against an existing infestation was not demonstrated in this study. However, high levels of preventive efficacy were quickly achieved (Fig. 1; Table 3). By 2 h after infestation there were significantly fewer ticks on treated dogs than on the controls. The discrepancy between the therapeutic and preventive efficacy can be explained by the time required for the formulation to spread over the body of the dog, and also by the fact that permethrin has both a direct killing effect and an important repellent activity that prevents ticks from attaching to the dog and starting to feed [2, 9].
By Day 7 after treatment the active ingredients had spread throughout the hair-coat and over the skin, and between 63.0 and 81.7 % of ticks were killed within 2 h of being released onto dogs from Day 7 through Day 35, dropping to 51.8 % on Day 42 (it should be noted that the DPP formulation is labelled for monthly re-application). The rapid acaricidal efficacy observed at 2 h increased to > 90 % by 8 h after each infestation from Day 7 through Day 35. The acaricidal efficacy recorded 48 h after infestation was in concordance with previous measurements performed against adult R. sanguineus (s.l.) ticks with the same product for 1 month after treatment [9]. In the present experiment, residual efficacy was assessed for 6 weeks and was above 90 % (GM) for 5 weeks. The earliest speed of kill data against R. sanguineus (s.l.) reported after administration of a permethrin-based combination product to dogs were recorded 3 h after weekly infestation, and efficacy varied from 69.9 to 88.1 % between Days 7 and 28 after treatment [17].
The acaricidal efficacy at 2 h and the strong prevention of re-infestation by 8 h are important findings, in that it has been demonstrated that E. canis can be transmitted as early as 3 h after exposure to infected R. sanguineus ticks [15]. Afoxolaner, a systemically active isoxazoline recommended for monthly oral treatment, has been shown to have a therapeutic acaricidal efficacy of 93 % by 12 h after treatment. However, preventive efficacies were below 77 % at 12 h against weekly infestations with I. ricinus from Day 7 through Day 28 [18], and between 0 and 14 % at 8 h against weekly infestations with R. sanguineus from Day 7 through Day 35 after treatment [19]. Fluralaner, also a systemically active isoxazoline, recommended for oral administration once every 3 months, demonstrated a therapeutic efficacy of 97.9 % against infestations with I. ricinus by 8 h after treatment, and had a preventive efficacy of 96.8 % by 8 h over the first 4 weeks after treatment. However, efficacy declined to 83.5 and 45.8 % at weeks 8 and 12, respectively [20]. In a comparative efficacy study of topical DPP, oral fluralaner and oral afoxolaner that measured their preventive acaricidal efficacy against R. sanguineus at 12 h, the topical formulation demonstrated 77–98 % efficacy, compared to 21–49 % for afoxolaner and 58–89 % for fluralaner, over a period of one month [21]. In another study, in which acaricidal efficacy was measured at 3 h after infestation with R. sanguineus, the efficacy of afoxolaner varied between zero and 26 % and that of fluralaner between 13 and 53 %, and the tick counts of neither group were significantly different from those of the negative control group at this time interval [22].
The rapid preventive acaricidal efficacy of the topical formulation of DPP against new infestations of R. sanguineus could potentially reduce the likelihood of the transmission of E. canis to dogs by infected ticks. Moreover, the sustained rapid preventive acaricidal efficacy would also reduce the risk of dogs becoming infected with other tick-borne diseases associated with R. sanguineus, such as babesiosis. Regular monthly administration of DPP would also prevent re-infestation by ticks and prevent the development of the local foci of tremendous numbers of ticks so often associated with R. sanguineus (s.l.).
At localities where levels of infestation are particularly severe, in addition to treating the dog with DPP, the environment should be thoroughly searched for ticks, which can then be eradicated by the application of a suitable acaricide formulated for this purpose. It is wise to remember that there are always many more free-living ticks within the dog's environment than on the dog itself [6, 7].
This study demonstrated the high speed of kill of DPP against new infestations of R. sanguineus (s.l.) ticks on dogs. Topical treatment with the formulation reached or exceeded 90 % efficacy within 8 h of infestation, and a significant number of newly acquired ticks were killed within 2 h of infestation for up to 5 weeks after administration. Monthly administration of this formulation can be considered a reliable tool for protection against ticks, and likely also against the diseases they can transmit to dogs.
1. Clark JM, Symington SB. Advances in the mode of action of pyrethroids. Top Curr Chem. 2012;314:49–72.
2. Beugnet F, Franc M. Insecticide and acaricide molecules and/or combinations to prevent pet infestation by ectoparasites. Trends Parasitol. 2012;28:267–79.
3. Le Questel JY, Graton J, Cerón-Carrasco JP, Jacquemin D, Planchat A, Thany SH. New insights on the molecular features and electrophysiological properties of dinotefuran, imidacloprid and acetamiprid neonicotinoid insecticides. Bioorg Med Chem. 2011;19:7623–34.
4. Palma KG, Meola SM, Meola RW. Mode of action of pyriproxyfen and methoprene on eggs of Ctenocephalides felis (Siphonaptera: Pulicidae). J Med Entomol. 1993;30:421–6.
5. Jacobs PA, Fourie LJ, Horak IG. A laboratory comparison of the life cycles of the dog ticks Haemaphysalis leachi and Rhipicephalus sanguineus. Onderstepoort J Vet Res. 2004;71:15–28.
6. Dantas-Torres F. Biology and ecology of the brown dog tick, Rhipicephalus sanguineus. Parasit Vectors. 2010;3:26.
7. Gray J, Dantas-Torres F, Estrada-Pena A, Levin M. Systematics and ecology of the brown dog tick, Rhipicephalus sanguineus. Ticks Tick Borne Dis. 2013;4:171–80.
8. Bryson NR, Horak IG, Hohn EW, Louw JP. Ectoparasites of dogs belonging to people in resource-poor communities in North West Province, South Africa. J S Afr Vet Assoc. 2000;71:175–9.
9. Varloud M, Fourie JJ. One-month comparative efficacy of three topical ectoparasiticides against adult brown dog ticks (Rhipicephalus sanguineus sensu lato) on mixed-bred dogs in controlled environment. Parasitol Res. 2015;114:1711–9.
10. Horak IG. Parasites of domestic and wild animals in South Africa. XIV. The seasonal prevalence of Rhipicephalus sanguineus and Ctenocephalides spp. on kenneled dogs in Pretoria North. Onderstepoort J Vet Res. 1982;49:63–8.
11. Silveira JA, Passos LM, Ribeiro MF. Population dynamics of Rhipicephalus sanguineus (Latrielle, 1806) in Belo Horizonte, Minas Gerais state, Brazil. Vet Parasitol. 2009;161:270–5.
12. Chomel B. Tick-borne infections in dogs-an emerging infectious threat. Vet Parasitol. 2011;179:294–301.
13. Taenzler J, Liebenberg J, Roepke RK, Heckeroth AR. Prevention of transmission of Babesia canis by Dermacentor reticulatus ticks to dogs treated orally with fluralaner chewable tablets (Bravecto). Parasit Vectors. 2015;8:305.
14. Piesman J, Spielman A. Human babesiosis on Nantucket Island: prevalence of Babesia microti in ticks. Am J Trop Med Hyg. 1980;29:742–6.
15. Fourie JJ, Stanneck D, Luus HG, Beugnet F, Wijnveld M, Jongejan F. Transmission of Ehrlichia canis by Rhipicephalus sanguineus ticks feeding on dogs and on artificial membranes. Vet Parasitol. 2013;197:595–603.
Marchiondo AA, Holdsworth PA, Green P, Blagburn BL, Jacobs DE. World Association for the Advancement of Veterinary Parasitology (W.A.A.V.P.) guidelines for evaluating the efficacy of parasiticides for the treatment, prevention and control of flea and tick infestation on dogs and cats. Vet Parasitol. 2007;145:332–44. Dryden MW, Payne PA, Smith V, Hostetler J. Evaluation of an imidacloprid (8.8 % w/w)--permethrin (44.0 % w/w) topical spot-on and a fipronil (9.8 % w/w)--(S)-methoprene (8.8 % w/w) topical spot-on to repel, prevent attachment, and kill adult Rhipicephalus sanguineus and Dermacentor variabilis ticks on dogs. Vet Ther. 2006;7:187–98. Halos L, Lebon W, Chalvet-Monfray K, Larsen D, Beugnet F. Immediate efficacy and persistent speed of kill of a novel oral formulation of afoxolaner (NexGardTM) against induced infestations with Ixodes ricinus ticks. Parasit Vectors. 2014;7:452. Six RH, Young DR, Holzmer SJ, Mahabir SP. Comparative speed of kill of sarolaner (Simparica™) and afoxolaner (NexGard®) against induced infestations of Rhipicephalus sanguineus s.l. on dogs. Parasit Vectors. 2016;9:91. Wengenmayer C, Williams H, Zschiesche E, Moritz A, Langenstein J, Roepke RK, Heckeroth AR. The speed of kill of fluralaner (Bravecto) against Ixodes ricinus ticks on dogs. Parasit Vectors. 2014;7:525. Varloud M, Leibenberg J, Fourie J. Comparative speed of kill between a topical administration of dinotefuran-permethrinpyriproxyfen and an oral administration of afoxolaner or fluralaner to dogs weekly infested for one month with Rhipicephalus sanguineus ticks. In: 60th Annual Meeting of the American Association of Veterinary Parasitologists, vol. 60. 2015. p. 67. Ohmes C, Hostetler J, Davis W, Settje T, Everett WR. Comparative efficacy of imidacloprid/permethrin/pyriproxyfen (K9 Advantix® II), afoxolaner (NexGard®), and fluralaner (Bravecto®) against tick (Rhipicephalus sanguineus and Amblyomma americanum) infestations on dogs. In: 60th Annual Meeting of the American Association of Veterinary Parasitologists, vol. 60. 2015. p. 68–9. The authors are sincerely grateful to all monitors, investigators and the staff of the study location who took part in the study and ensured that high GCP and GLP standards were adhered to. Ceva Santé Animale, 10 Avenue de la Ballastière, 33500, Libourne, France Jeffrey Blair & Marie Varloud ClinVet International, P.O. Box 11186, Universitas, 9321, South Africa Josephus J. Fourie & Ivan G. Horak Department of Veterinary Tropical Diseases, Faculty of Veterinary Science, University of Pretoria, Onderstepoort, 0110, South Africa Ivan G. Horak Search for Jeffrey Blair in: Search for Josephus J. Fourie in: Search for Marie Varloud in: Search for Ivan G. Horak in: Correspondence to Marie Varloud. This clinical study was completely funded by Ceva Santé Animale, of which JB and MV are employees. ClinVet is an independent Contract Development Organisation, contracted to manage the conduct of the study. JJF is employed by ClinVet. IH is a long-term contract employee of ClinVet and an Emeritus Professor at the University of Pretoria. All authors voluntarily publish this article and have no personal interest in this study other than publishing the scientific findings that they have been involved in via planning, setting-up, monitoring and conducting the investigation and analysing the results. JB, MV and JJF were responsible for the study design and protocols and JJF carried out the study. 
IGH compiled and was responsible for the first draft of the manuscript, which was then revised by all authors. All authors read and approved the final manuscript. Blair, J., Fourie, J.J., Varloud, M. et al. Efficacy and speed of kill of a topically applied formulation of dinotefuran-permethrin-pyriproxyfen against weekly tick infestations with Rhipicephalus sanguineus (sensu lato) on dogs. Parasites Vectors 9, 283 (2016) doi:10.1186/s13071-016-1561-y Accepted: 03 May 2016 Acaricidal efficacy
Efficient numerical method for a model arising in biological stoichiometry of tumour dynamics DCDS-S Home Numerical analysis and pattern formation process for space-fractional superdiffusive systems June 2019, 12(3): 567-590. doi: 10.3934/dcdss.2019037 High-order solvers for space-fractional differential equations with Riesz derivative Kolade M. Owolabi , and Abdon Atangana Institute for Groundwater Studies, Faculty of Natural and Agricultural Sciences, University of the Free State, Bloemfontein 9300, South Africa * Corresponding author: [email protected] (K.M. Owolabi) Received January 2017 Revised September 2017 Published September 2018 Fund Project: The research contained in this report is supported by South African National Research Foundation. This paper proposes the computational approach for fractional-in-space reaction-diffusion equation, which is obtained by replacing the space second-order derivative in classical reaction-diffusion equation with the Riesz fractional derivative of order $ α $ in $ (0, 2] $. The proposed numerical scheme for space fractional reaction-diffusion equations is based on the finite difference and Fourier spectral approximation methods. The paper utilizes a range of higher-order time stepping solvers which exhibit third-order accuracy in the time domain and spectral accuracy in the spatial domain to solve some fractional-in-space reaction-diffusion equations. The numerical experiment shows that the third-order ETD3RK scheme outshines its third-order counterparts, taking into account the computational time and accuracy. Applicability of the proposed methods is further tested with a higher dimensional system. Numerical simulation results show that pattern formation process in the classical sense is the same as in fractional scenarios. Keywords: ETD method, finite difference, implicit-explicit, fractional nonlinear PDEs, numerical simulations, Riesz derivative. Mathematics Subject Classification: Primary: 34A34, 35A05, 35K57; Secondary: 65L05, 65M06, 93C10. Citation: Kolade M. Owolabi, Abdon Atangana. High-order solvers for space-fractional differential equations with Riesz derivative. Discrete & Continuous Dynamical Systems - S, 2019, 12 (3) : 567-590. doi: 10.3934/dcdss.2019037 F. B. Adda, The differentiability in the fractional calculus, Nonlinear Analysis, 47 (2001), 5423-5428. doi: 10.1016/S0362-546X(01)00646-0. Google Scholar G. Akrivis, M. Crouzeix and C. Makridakis, Implicit xplicit multistep methods for quasilinear parabolic equations, Numerische Mathematik, 82 (1999), 521-541. doi: 10.1007/s002110050429. Google Scholar O. J. J. Algahtani, Comparing the Atangana-Baleanu and Caputo-Fabrizio derivative with fractional order: Allen Cahn model, Chaos Solitons and Fractals, 89 (2016), 552-559. doi: 10.1016/j.chaos.2016.03.026. Google Scholar B. S. T. Alkahtani, Chua's circuit model with Atangana-Baleanu derivative with fractional order, Chaos, Solitons and Fractals, 89 (2016), 547-551. Google Scholar B. S. T. Alkahtani and A. Atangana, Controlling the wave movement on the surface of shallow water with the Caputo-Fabrizio derivative with fractional order, Chaos Soliton and Fractals, 89 (2016), 539-546. doi: 10.1016/j.chaos.2016.03.012. Google Scholar L. J. S. Allen, An Introduction to Mathematical Biology, Pearson Education, Inc., New Jersey, 2007. Google Scholar E. O. Asante-Asamani, A. Q. M. Khaliq and B. A. 
Wade, A real distinct poles Exponential Time Differencing scheme for reaction diffusion systems, Journal of Computational and Applied Mathematics, 299 (2016), 24-34. doi: 10.1016/j.cam.2015.09.017. Google Scholar U. M. Ascher, S. J. Ruth and R. J. Spiteri, Implicit-explicit Runge-Kutta methods for time-dependent partial differential equations, Applied Numerical Mathematics, 25 (1997), 151-167. doi: 10.1016/S0168-9274(97)00056-1. Google Scholar U. M. Ascher, S. J. Ruth and B. T. R. Wetton, Implicit-explicit methods for time-dependent partial differential equations, SIAM Journal on Numerical Analysis, 32 (1995), 797-823. doi: 10.1137/0732037. Google Scholar A. Atangana and R. T. Alqahtani, Numerical approximation of the space-time Caputo-Fabrizio fractional derivative and application to groundwater pollution equation, Advances in Difference Equations, 2016 (2016), 1-13. doi: 10.1186/s13662-016-0871-x. Google Scholar A. Atangana and D. Baleanu, New fractional derivatives with nonlocal and non-singular kernel: Theory and application to heat transfer model, Thermal Science, 20 (2016), 763-769. doi: 10.2298/TSCI160111018A. Google Scholar A. Atangana and B. S. T. Alkahtani, New model of groundwater owing within a confine aquifer: Application of Caputo-Fabrizio derivative, Arabian Journal of Geosciences, 9 (2016), 3647-3654. Google Scholar A. Atangana and I. Koca, Chaos in a simple nonlinear system with Atangana-Baleanu derivatives with fractional order, Chaos, Solitons and Fractals, 89 (2016), 447-454. doi: 10.1016/j.chaos.2016.02.012. Google Scholar D. Baleanu, R. Caponetto and J. T. Machado, Challenges in fractional dynamics and control theory, Journal of Vibration and Control, 22 (2016), 2151-2152. doi: 10.1177/1077546315609262. Google Scholar D. Baleanu, K. Diethelm and E. Scalas, Fractional Calculus: Models and Numerical Methods, Series on Complexity, Nonlinearity and Chaos, World Scientific, 2012. doi: 10.1142/9789814355216. Google Scholar D. A. Benson, S. Wheatcraft and M. M. Meerschaert, pplication of a fractional advection-dispersion equation, Water Resources Research, 36 (2000), 1403-1412. Google Scholar H. P. Bhatt and A. Q. M. Khaliq, The locally extrapolated exponential time differencing LOD scheme for multidimensional reaction-diffusion systems, Journal of Computational and Applied Mathematics, 285 (2015), 256-278. doi: 10.1016/j.cam.2015.02.017. Google Scholar A. H. Bhrawy, M. A. Zaky and R. A. Van Gorder, A space-time Legendre spectral tau method for the two-sided space-time Caputo fractional diffusion-wave equation, Numerical Algorithms, 71 (2016), 151-180. doi: 10.1007/s11075-015-9990-9. Google Scholar A. H. Bhrawy and M. A. Abdelkawy, A fully spectral collocation approximation for multi-dimensional fractional Schrödinger equations, Journal of Computational Physics, 294 (2015), 462-483. doi: 10.1016/j.jcp.2015.03.063. Google Scholar A. H. Bhrawy, A Jacobi spectral collocation method for solving multi-dimensional nonlinear fractional sub-diffusion equations, Numerical Algorithms, 73 (2016), 91-113. doi: 10.1007/s11075-015-0087-2. Google Scholar N. F. Britton, Reaction-diffusion Equations and their Applications to Biology, Academic Press, London, 1986. Google Scholar A. Bueno-Orovio, D. Kay and K. Burrage, Fourier spectral methods for fractional-in-space reaction-diffusion equations, BIT Numerical mathematics, 54 (2014), 937-954. doi: 10.1007/s10543-014-0484-2. Google Scholar M. P. Calvo, J. de Frutos and J. 
Novo, Linearly implicit Runge-Kutta methods for advection-reaction-diffusion equations, Applied Numerical Mathematics, 37 (2001), 535-549. doi: 10.1016/S0168-9274(00)00061-1. Google Scholar M. Caputo and M. Fabrizio, Applications of new time and spatial fractional derivatives with exponential kernels, Progress in Fractional Differentiation and Applications, 2 (2016), 1-11. Google Scholar S. M. Cox and P. C. Matthews, Exponential time differencing for stiff systems, Journal of Computational Physics, 176 (2002), 430-455. doi: 10.1006/jcph.2002.6995. Google Scholar Q. Du and W. Zhu, Stability analysis and applications of the exponential time differencing schemes, Journal of Computational and Applied Mathematics, 22 (2004), 200-209. Google Scholar Q. Du and W. Zhu, Analysis and applications of the exponential time differencing schemes and their contour integration modifications, BIT Numerical Mathematics, 45 (2005), 307-328. doi: 10.1007/s10543-005-7141-8. Google Scholar W. Feller, On a generalization of Marcel Riesz potentials and the semi-groups generated by them, Middlelanden Lunds Universitets Matematiska Seminarium Comm. Sem. Mathm Universit de Lund (Suppl. ddi a M. Riesz), 1952 (1952), 72-81. Google Scholar W. Feller, An Introduction to Probability Theory and Its Applications, New York-London-Sydney, 1968. Google Scholar W. Gear and I. Kevrekidis, Projective methods for stiff differential equations: Problems with gaps in their eigenvalue spectrum, SIAM Journal on Scientific Computing, 24 (2003), 1091-1106. doi: 10.1137/S1064827501388157. Google Scholar I. Grooms and K. Julien, Linearly implicit methods for nonlinear PDEs with linear dispersion and dissipation, Journal of Computational Physics, 230 (2011), 3630-3650. doi: 10.1016/j.jcp.2011.02.007. Google Scholar E. Hairer and G. Wanner, Solving Ordinary Differential Equations Ⅱ: Stiff and Differential Algebraic Problems, Springer-Verlag, New York, 1996. doi: 10.1007/978-3-642-05221-7. Google Scholar A. K. Kassam and L. N. Trefethen, Fourth-order time-stepping for stiff PDEs, SIAM Journal Scientific Computing, 26 (2005), 1214-1233. doi: 10.1137/S1064827502410633. Google Scholar C. Kennedy and M. Carpenter, Additive Runge-Kutta schemes for covection-diffusion-reaction-diffusion equations, Applied Numerical Mathematics, 44 (2003), 139-181. doi: 10.1016/S0168-9274(02)00138-1. Google Scholar A. A. Kilbas, H. M. Srivastava and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier, Amsterdam, 2006. Google Scholar M. Kot, Elements of Mathematical Ecology, Cambridge University Press, United Kingdom, 2001. doi: 10.1017/CBO9780511608520. Google Scholar T. Koto, IMEX Runge-Kutta schemes for reaction-diffusion equations, Journal of Computational and Applied Mathematics, 215 (2008), 182-195. doi: 10.1016/j.cam.2007.04.003. Google Scholar C. Li and F. Zeng, Numerical Methods for Fractional Calculus, CRC Press, Taylor and Francis Group, London, 2015. Google Scholar D. Li, C. Zhang, W. Wang and Y. Zhang, Implicit-explicit predictor-corrector schemes for nonlinear parabolic differential equations, Applied Mathematical Modelling, 35 (2011), 2711-2722. doi: 10.1016/j.apm.2010.11.061. Google Scholar Y. F. Luchko, H. Matinez and J. J. Trujillo, Fractional Fourier transform and some of its applications, Fractional Calculus and Applied Analysis, 11 (2008), 457-470. Google Scholar R. L. Magin, Fractional Calculus in Bioengineering, Begell House, Connecticut, 2006. Google Scholar R. Magin, M. D. Ortigueira, I. Podlubny and J. 
Trujillo, On the fractional signals and systems, Signal Processing, 91 (2011), 350-371. doi: 10.1016/j.sigpro.2010.08.003. Google Scholar R. L. Magin, Fractional calculus models of complex dynamics in biological tissues, Computers and Mathematics with Applications, 59 (2010), 1586-1593. doi: 10.1016/j.camwa.2009.08.039. Google Scholar F. Mainardi, G. Pagnini and R. K. Saxena, Fox H functions in fractional diffusion, Journal of Computational and Applied Mathematics, 178 (2005), 321-331. doi: 10.1016/j.cam.2004.08.006. Google Scholar M. M. Meerschaert, D. A. Benson and S. W. Wheatcraft, Subordinated advection-dispersion equation for contaminant transport, Water Resource Research, 37 (2001), 1543-1550. Google Scholar M. M. Meerschaert and C. Tadjeran, Finite difference approximations for fractional advectiondispersion flow equations, Journal of Computational and Applied Mathematics, 172 (2004), 65-77. doi: 10.1016/j.cam.2004.01.033. Google Scholar M. M. Meerschaert, H. P. Scheffler and C. Tadjeran, Finite difference methods for twodimensional fractional dispersion equation, Journal of Computational Physics, 211 (2006), 249-261. doi: 10.1016/j.jcp.2005.05.017. Google Scholar F. C. Meral, T. J. Royston and R. Magin, Fractional calculus in viscoelasticity: An experimental study, Communications in Nonlinear Science and Numerical Simulation, 15 (2010), 939-945. doi: 10.1016/j.cnsns.2009.05.004. Google Scholar R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: A fractional dynamics approach, Physics Reports, 339 (2000), 1-77. doi: 10.1016/S0370-1573(00)00070-3. Google Scholar R. Metzler and J. Klafter, The restaurant at the end of the random walk: Recent developments in the description of anomalous transport by fractional dynamics, Journal of Physics A: Mathematical and General, 37 (2004), R161-R208. doi: 10.1088/0305-4470/37/31/R01. Google Scholar K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, Wiley, New York, 1993. Google Scholar J. D. Murray, Mathematical Biology Ⅰ: An Introduction, Springer-Verlag, New York, 2002. Google Scholar M. D. Ortigueira, Fractional Calculus for Scientists and Engineers, Springer, New York, 2011. doi: 10.1007/978-94-007-0747-4. Google Scholar K. M. Owolabi, Mathematical study of two-variable systems with adaptive numerical methods, Numerical Analysis and Applications, 19 (2016), 218-230. doi: 10.15372/SJNM20160304. Google Scholar K. M. Owolabi, Robust and adaptive techniques for numerical simulation of nonlinear partial differential equations of fractional order, Communications in Nonlinear Science and Numerical Simulations, 44 (2017), 304-317. doi: 10.1016/j.cnsns.2016.08.021. Google Scholar K. M. Owolabi and A. Atangana, Numerical solution of fractional-in-space nonlinear Schrödinger equation with the Riesz fractional derivative, The European Physical Journal Plus, 131 (2016), 335. doi: 10.1140/epjp/i2016-16335-8. Google Scholar K. M. Owolabi, Mathematical analysis and numerical simulation of patterns in fractional and classical reaction-diffusion systems, Chaos Solitons and Fractals, 93 (2016), 89-98. doi: 10.1016/j.chaos.2016.10.005. Google Scholar K. M. Owolabi, Robust and adaptive techniques for numerical simulation of nonlinear partial differential equations of fractional order, Communications in Nonlinear Science and Numerical Simulation, 44 (2017), 304-317. doi: 10.1016/j.cnsns.2016.08.021. Google Scholar K. M. 
Owolabi, Robust IMEX schemes for solving two-dimensional reaction-diffusion models, International Journal of Nonlinear Science and Numerical Simulations, 16 (2015), 271-284. doi: 10.1515/ijnsns-2015-0004. Google Scholar K. M. Owolabi and K. C. Patidar, Higher-order time-stepping methods for time-dependent reaction-diffusion equations arising in biology, Applied Mathematics and Computation, 240 (2014), 30-50. doi: 10.1016/j.amc.2014.04.055. Google Scholar S. Petrovskii, K. Kawasaki, F. Takasu and N. Shigesada, Diffusive waves, dynamic stabilization and spatio-temporal chaos in a community of three competitive species, Japan Journal of Industrial and Applied Mathematics, 18 (2001), 459-481. doi: 10.1007/BF03168586. Google Scholar E. Pindza and K. M. Owolabi, Fourier spectral method for higher order space fractional reaction-diffusion equations, Communications in Nonlinear Science and Numerical Simulation, 40 (2016), 112-128. doi: 10.1016/j.cnsns.2016.04.020. Google Scholar I. Podlubny, Fractional Differential Equations, Academic Press, San Diego, 1999. Google Scholar J. Sabatier, O. P. Agrawal and J. A. Tenreiro Machado, Advances in Fractional Calculus: Theoretical Developments and Applications in Physics and Engineering, Springer, Netherlands, 2007. Google Scholar S. G. Samko, A. A. Kilbas and O. I. Maritchev, Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach, Amsterdam, 1993. Google Scholar E. Scalas, R. Gorenflo and F. Mainardid, Fractional calculus and continuous-time finance, Physica A: Statistical Mechanics and its Applications, 284 (2000), 376-384. doi: 10.1016/S0378-4371(00)00255-7. Google Scholar Z. Tomovski, T. Sandev, R. Metzler and J. Dubbeldam, Generalized space-time fractional diffusion equation with composite fractional time derivative, Physica A, 391 (2012), 2527-2542. doi: 10.1016/j.physa.2011.12.035. Google Scholar V. Volpert and S. Petrovskii, Reaction-diffusion waves in biology, Physics of Life Reviews, 6 (2009), 267-310. Google Scholar E. Weinan, Analysis of the heterogeneous multiscale method for ordinary differential equations, Communications in Mathematical Sciences, 3 (2003), 423-436. doi: 10.4310/CMS.2003.v1.n3.a3. Google Scholar Q. Yang, F. Liu and I. Turner, Numerical methods for fractional partial differential equations with Riesz space fractional derivatives, Applied Mathematical Modelling, 34 (2010), 200-218. doi: 10.1016/j.apm.2009.04.006. Google Scholar Figure 1. Stability regions of (a) ETD3RK, (b) IMEX3PC with choice $(\mu, \psi, \eta) = (1, 0, 0)$ Figure 2. Convergence results of different schemes for one-dimensional problem (1) at (a) $t = 0.1$ and (b) $t = 2.0$ for $\alpha = 1.45$, $d = 8$. Simulation runs for $N = 200$ Figure 3. Solution of the fractional chemical system (42) in two-dimensions for subdiffusive (upper-row) and supperdiffusive (lower-row) scenarios. The parameters used are: $D = 0.39, d = 4, \varpi = 0.79, \beta = -0.91, \tau_2 = 0.278$ and $\tau_3 = 0.1$ at $t = 2$ for $N = 200$ Figure 3 caption">Figure 4. Superdiffusive distribution of chemical system (42) mitotic patterns in two dimensions at some instances of $\alpha$ with initial conditions: $u_0 = 1-\exp(-10(x-0.5)^2+(y-0.5)^2), \;\;v_0 = \exp(-10(x-0.5)^2+2(y-0.5)^2)$. Other parameters are given in Figure 3 caption Figure 3 caption">Figure 5. Three dimensional results of system (42) showing the species evolution at subdiffusive ($\alpha = 0.35$) and superdiffusive ($\alpha = 1.91$) cases for $\tau_3 = 0.21$, $N = 50$ and final time $t = 5$. 
Other parameters are given in the Figure 3 caption.
Figure 6. Three dimensional results for system (42) at different instances of fractional power $\alpha$, with $\tau_3 = 0.26$ and final time $t = 5$. The first and second columns correspond to subdiffusive and superdiffusive cases. Other parameters are given in the Figure 3 caption.
Table 1. The maximum norm error and timing results for solving equation (1) in one-dimensional space with the exact solution and source term (40) using the FDM and FSM in conjunction with the IMEX3RK scheme at some instances of fractional power $\alpha$ in sub- and super-diffusive scenarios for $t = 1$, $d = 0.5$ and $N = 200$:
Method | $\alpha=0.25$ | $\alpha=0.50$ | $\alpha=0.75$ | $\alpha=1.25$ | $\alpha=1.50$ | $\alpha=1.75$
FDM (error) | 9.2570e-06 | 1.8864e-05 | 2.8615e-05 | 4.8107e-05 | 5.7776e-05 | 6.7399e-05
FDM (CPU) | 0.1674s | 0.1682s | 0.1693s | 0.1718s | 0.1673s | 0.1685s
FSM (error) | 2.7055e-09 | 6.2174e-09 | 1.0710e-08 | 2.4231e-08 | 3.4382e-08 | 4.7695e-08
Table 2. The maximum norm errors and CPU times for the two-dimensional problem (1) with exact solution and local source term (41) obtained with different schemes at some instances of fractional power $\alpha$ (columns $\alpha = 0.15$ and $0.63$ cover $0<\alpha <1$; columns $\alpha = 1.37$ and $1.89$ cover $1<\alpha <2$) and $N$ at final time $t = 1.5$ and $d = 10$:
Method | $N$ | $\alpha=0.15$ | CPU(s) | $\alpha=0.63$ | CPU(s) | $\alpha=1.37$ | CPU(s) | $\alpha=1.89$ | CPU(s)
IMEX3RK | 100 | 9.15E-06 | 0.21 | 4.57E-05 | 0.27 | 1.33E-04 | 0.27 | 2.26E-04 | 0.27
IMEX3RK | 200 | 7.17E-06 | 0.27 | 3.58E-05 | 0.27 | 1.04E-04 | 0.27 | 1.77E-04 | 0.27
IMEX3PC | 100 | 4.49E-06 | 0.26 | 2.43E-05 | 0.27 | 7.30E-05 | 0.27 | 1.24E-04 | 0.26
ETD3RK | 100 | 1.87E-07 | 0.26 | 1.01E-06 | 0.27 | 3.04E-06 | 0.26 | 5.18E-06 | 0.26
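The Fourier spectral treatment underlying the FSM and ETD3RK results above rests on the fact that, on a periodic domain, the Riesz fractional derivative of order $\alpha$ acts in Fourier space as multiplication by $-|k|^{\alpha}$, so the stiff linear part of the semi-discretized reaction-diffusion equation is diagonal and can be handled exactly by exponential time differencing. The sketch below is our own illustrative Python/NumPy code, not the authors' implementation; for brevity it advances the solution with a first-order ETD step, whereas the paper's ETD3RK scheme adds third-order Runge-Kutta stages of the same form, and the reaction term and parameter values are placeholders.

```python
import numpy as np

def riesz_symbol(N, L, alpha):
    """Fourier symbol of the Riesz fractional derivative of order alpha on a
    periodic domain of length L: the operator acts as multiplication by -|k|^alpha."""
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    return -np.abs(k) ** alpha

def etd1_step(u, dt, lam, f):
    """One first-order exponential time differencing step for u_t = d*D^alpha u + f(u),
    where lam holds the diagonal Fourier symbol d*(-|k|^alpha)."""
    u_hat = np.fft.fft(u)
    f_hat = np.fft.fft(f(u))
    z = lam * dt
    phi1 = np.ones_like(z)               # phi_1(0) = 1
    nz = np.abs(z) > 1e-12
    phi1[nz] = np.expm1(z[nz]) / z[nz]   # phi_1(z) = (e^z - 1)/z
    u_hat = np.exp(z) * u_hat + dt * phi1 * f_hat
    return np.real(np.fft.ifft(u_hat))

# Illustrative run: Fisher-type reaction term, alpha = 1.5, d = 0.5 (placeholders).
N, L, alpha, d, dt = 256, 2.0 * np.pi, 1.5, 0.5, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
u = 0.5 + 0.1 * np.cos(x)
lam = d * riesz_symbol(N, L, alpha)
for _ in range(1000):
    u = etd1_step(u, dt, lam, lambda v: v * (1.0 - v))
```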
Mathematical Control & Related Fields, 2012, 2 (2) : 183-194. doi: 10.3934/mcrf.2012.2.183 Moulay Rchid Sidi Ammi, Ismail Jamiai. Finite difference and Legendre spectral method for a time-fractional diffusion-convection equation for image restoration. Discrete & Continuous Dynamical Systems - S, 2018, 11 (1) : 103-117. doi: 10.3934/dcdss.2018007 Ömer Oruç, Alaattin Esen, Fatih Bulut. A unified finite difference Chebyshev wavelet method for numerically solving time fractional Burgers' equation. Discrete & Continuous Dynamical Systems - S, 2019, 12 (3) : 533-542. doi: 10.3934/dcdss.2019035 Fahd Jarad, Sugumaran Harikrishnan, Kamal Shah, Kuppusamy Kanagarajan. Existence and stability results to a class of fractional random implicit differential equations involving a generalized Hilfer fractional derivative. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 723-739. doi: 10.3934/dcdss.2020040 Imtiaz Ahmad, Siraj-ul-Islam, Mehnaz, Sakhi Zaman. Local meshless differential quadrature collocation method for time-fractional PDEs. Discrete & Continuous Dynamical Systems - S, 2020, 13 (10) : 2641-2654. doi: 10.3934/dcdss.2020223 Xin Li, Feng Bao, Kyle Gallivan. A drift homotopy implicit particle filter method for nonlinear filtering problems. Discrete & Continuous Dynamical Systems - S, 2021 doi: 10.3934/dcdss.2021097 Junxiang Li, Yan Gao, Tao Dai, Chunming Ye, Qiang Su, Jiazhen Huo. Substitution secant/finite difference method to large sparse minimax problems. Journal of Industrial & Management Optimization, 2014, 10 (2) : 637-663. doi: 10.3934/jimo.2014.10.637 Hongsong Feng, Shan Zhao. A multigrid based finite difference method for solving parabolic interface problem. Electronic Research Archive, 2021, 29 (5) : 3141-3170. doi: 10.3934/era.2021031 Brittany Froese Hamfeldt, Jacob Lesniewski. A convergent finite difference method for computing minimal Lagrangian graphs. Communications on Pure & Applied Analysis, 2022, 21 (2) : 393-418. doi: 10.3934/cpaa.2021182 Ilknur Koca. Numerical analysis of coupled fractional differential equations with Atangana-Baleanu fractional derivative. Discrete & Continuous Dynamical Systems - S, 2019, 12 (3) : 475-486. doi: 10.3934/dcdss.2019031 Edès Destyl, Jacques Laminie, Paul Nuiro, Pascal Poullet. Numerical simulations of parity–time symmetric nonlinear Schrödinger equations in critical case. Discrete & Continuous Dynamical Systems - S, 2021, 14 (8) : 2805-2821. doi: 10.3934/dcdss.2020411 Florian De Vuyst, Francesco Salvarani. Numerical simulations of degenerate transport problems. Kinetic & Related Models, 2014, 7 (3) : 463-476. doi: 10.3934/krm.2014.7.463 Kolade M. Owolabi Abdon Atangana
Automatic optimal multi-energy management of smart homes Laura Fiorini1 & Marco Aiello2 Residential and commercial buildings are responsible for approximately 35% of carbon emissions in industrialized countries. Making buildings more efficient and sustainable is, therefore, a fundamental step toward a low-carbon energy society. A key to achieving sustainability is by leveraging on energy storage systems and smart technologies to switch between energy carriers in order to optimize environmental impact. However, the research on energy management in buildings has mostly focused on its economic aspect, overlooking the environmental dimension. Additionally, the concept of energy system flexibility has been mostly proposed as the ability to shift demand over time or, at most, to curtail it, aiming at reducing the system's operating costs. We propose a multi-energy multi-objective scheduling model to optimally manage the supply, demand, and interchange of multiple energy carriers, based on dynamic price and carbon emission signals. Our holistic and integrated approach is applied to a group of 200 smart homes with varying thermal and electric loads, and equipped with different types of smart technologies. The effectiveness of the approach in reducing the home carbon footprint, while remunerating the users, is evaluated using historical and statistical data of three European countries. "Create technologies and services for smart homes that provide smart solutions to energy consumers" is one of the ten strategic actions identified in 2015 by the European Commission to accelerate the transformation of the entire energy system (European Commission 2015). In fact, this is crucial as residential buildings are responsible for 22% of regulated energy consumption and 17% of \(\text {CO}_2\) emissions (UN Environment and International Energy Agency 2017). Smart homes can be major players towards a more efficient and sustainable energy future. At the same time doing so is not straightforward. Smart homes produce a vast amount of raw and heterogeneous data; they have many operation possibilities and control choices that can be performed; and they should operate to satisfy residents' safety and needs. Smart homes are not easy to be manually controlled by the average user and, even more complex is the task of optimal control, even for simple daily tasks (Fiorini et al. 2020). To support the design and usability of smart homes we propose a multi-energy multi-objective scheduling model to optimally manage the supply and demand of multiple energy carriers, taking into account both dynamic prices and CO2-emission intensity (CO2-EI). Our holistic and integrated approach considers the interdependencies in energy generation and consumption of devices, promoting not only load shifting over time, but also energy shifting between energy carriers, with the aim of minimizing energy costs and/or carbon emissions, while satisfying users' comfort preferences. The main research question we address is: To what extent and under which conditions can the integration of multiple energy carriers and technologies in smart buildings contribute to reduce their environmental footprint, while remunerating the building users? To answer this question, we model a group of 200 smart homes realistically by considering, depending on size and the season, them having up to five appliances and varying thermal and electric loads. 
We focus on how the coordinated management of multiple technologies and energy carriers can reduce the environmental impact in terms of \(\text {CO}_2\) emissions, while facilitating monetary and energy savings. To this end, we propose a model of the smart home as a multi-energy system equipped with several smart home technologies for production, transformation, storage, and consumption of energy, which are coordinated by a home energy management system (HEMS) according to dynamic prices and CO2-EI signals coming from the power grid. Such smart homes are configured as multi-energy systems, which supply electric and thermal loads by means of multiple technologies, namely a gas-fired system boiler (SB), a gas-fired combined heat and power (CHP) system, solar photovoltaic panels (PVs), an electric heat pump (EHP), and an immersion heater (IH). The role of electric storage systems in the form of a static home energy storage (HES) or a plug-in electric vehicle (PEV) is investigated as well. The optimization problem to be solved by the HEMS is subject to several uncertainties, due to errors in forecasting prices, emission factors, weather conditions, and electricity and hot water demand. The HEMS we propose employs a rolling horizon to adjust the optimal scheduling based on new available information. This work contributes to the field of smart grids and energy informatics by laying the foundation for future smart home automation systems that can effectively contribute towards decarbonization. The proposed multi-energy management encourages the coupling of energy carriers and services, thus further enhancing the flexibility of demand. At the same time, by following a combination of dynamic prices and CO2-EI signals, the proposed system finds a trade-off between economic and environmental objects, thus enabling cost savings, while improving the sustainability of the home environment. This work is a major extension of the conference publications (Fiorini and Aiello 2019b) and Fiorini and Aiello (2020). Namely, the preliminary model presented in Fiorini and Aiello (2019b) has been extended in Fiorini and Aiello (2020) to include PV panels and an uncontrolled electric load. Additionally, the HEMS proposed in Fiorini and Aiello (2020) relies on marginal CO2-EI signals instead of the average CO2-EI of the generation mix used in Fiorini and Aiello (2019b). The current work further expands upon the smart home model by adding a micro-combined heat and power (\(\mu\)-CHP) system and an additional storage device in the form of PEV. Moreover, the economic model accounts for the self-consumption and feed-in of renewable energy, according to incentive schemes of three European countries. Finally, the current results are based on a full one-year simulation and not just four representative days, as it was the case in the conference papers, hence offering more insights on the effectiveness of the proposed approach. The remainder of the paper is organized as follows. "Related work" section discusses the related work; "Smart home model" section presents the multi-energy smart home model; "Multi-energy multi-objective operation scheduling" section formulates the multi-objective problem that the HEMS aims at solving; "System implementation and simulation setup" section describes the implementation of the proposed approach and details the simulation setup. 
Results are presented and discussed in "Discussion of the results" section, followed by conclusions in "Conclusions" section.
Related work
Buildings are responsible for more than one third of the energy consumption and \(\text {CO}_2\) emissions in most industrialized countries. In spite of several studies on the economics of energy management in buildings, the environmental aspect has most often been overlooked (Etedadi Aliabadi et al. 2021). The most common approach is to formulate the energy management problem as a single economic objective function to be optimized within a defined time horizon (Fiorini and Aiello 2019a). The most common goals of the optimal scheduling of energy resources are minimizing the system operating costs (Good and Mancarella 2017; Neyestani et al. 2015; Salpakari and Lund 2016) or the consumer's energy bills (Mohsenzadeh and Pang 2018; Sheikhi et al. 2016). A few studies consider the environmental impact of energy consumption (Fiorini and Aiello 2019a). In these cases, the equivalent cost of pollutants is accounted for in the main economic goal (Setlhaolo et al. 2017) or their amount in tons constitutes an alternative or additional objective function of the optimization problem (Fiorini and Aiello 2018; Brahman et al. 2015; Tabar et al. 2017; Imran et al. 2020). The emissions tied to the energy consumption are commonly assessed using the average CO2-EI of the generation mix, either defined as a constant (Chen et al. 2022; Setlhaolo et al. 2017; Imran et al. 2020; Tabar et al. 2017; Brahman et al. 2015) or as an hourly value (Fiorini and Aiello 2018, 2019b). However, the generation output of the power plants composing the generation mix does not adjust evenly to a load variation owing to multiple technical and economic factors. The power plant that reacts to a change in the electricity demand is referred to as marginal; it may correspond to a single power plant, to a group of them, or to a cross-border flow (Graff Zivin et al. 2014). The environmental impact of a change in the electricity demand is, therefore, tied to the emission factor of the marginal power plant that increases or decreases its generation, which may vary from hour to hour. The marginal CO2-EI is estimated using the marginal power plant method (Graff Zivin et al. 2014), and its use is recommended when investigating the optimal operation of a building (Graabak et al. 2014; Andresen et al. 2017). Yet, very few related studies assess the emissions tied to the use of electricity imported from the main grid by means of its marginal CO2-EI (Fiorini and Aiello 2020). The coordinated management of multiple energy carriers, usually electricity and hot water, has shown potential economic and environmental benefits (Fiorini and Aiello 2019a). The coupling of energy carriers is usually sought at the level of energy generation, often by means of combined heat and power systems, aiming at reducing the import of electricity from the main grid and increasing the flexibility of electricity demand. Yet, a more holistic approach allows for the shifting of energy carriers at all levels. Storage and energy transformation units enable a more dynamic and flexible use of multiple energy carriers, while delivering the same service (Neyestani et al. 2015). Hybrid appliances and hybrid heating systems can support the increasingly complex task of coordination of demand and supply, thanks to their ability to adapt their operation to the energy supply by shifting between energy carriers (Mauser et al. 2016; Stamminger 2008).
Other energy carriers, such as hydrogen (Pan et al. 2020), and fuel conversion technologies, such as electrolyzers and methanation (Mehrjerdi et al. 2021), are not included in the proposed smart home model because they are in a pre-commercial, pilot stage for the building environment (Rongé and François 2021). "Hydrogen-ready" boilers could be easily integrated into the proposed model, as they are connected to the same natural gas infrastructure and, hence, would not require a different modeling approach. As for boilers running on 100% hydrogen, they are still in a prototype phase (British Gas 2022). With respect to the state of the art just overviewed, the present work attempts to fill the following gaps. First, we propose a multi-objective energy management that optimizes, following user's preferences, both system costs and carbon emissions based on dynamic price and marginal CO2-EI signals. Second, the homes are modeled as energy system where multiple energy carriers are coupled and can be used interchangeably thanks to the automatic coordination of several smart technologies for production, transformation, storage, and consumption of both electric and thermal energy. Such solutions can be deployed to fully support user needs automatically, without direct user intervention. Smart home model with hybrid appliances Smart home model The smart home model we propose is illustrated in Fig. 1. According to the classification we defined in Fiorini and Aiello (2019a), PV, SB, and \(\mu\)-CHP are generation resources; HES, PEV, and thermal store (TS) are storage systems, whereas electric heat pump (EHP) and immersion heater (IH) are transformation resources. The smart home is connected to both the electricity and gas distribution grids. The hybrid heating system is composed of four main units: the SB and the \(\mu\)-CHP contribute to the supply of hot water to the TS for both space heating and domestic hot water (DHW) demand; the IH installed into the TS contributes to the heating of DHW, whereas the air-to-air EHP is used for space heating and cooling. Electricity is imported from the distribution grid, or locally generated via the PV system and the \(\mu\)-CHP. Both the HES and the PEV are charged and discharged depending on the house needs. When the smart house features hybrid appliances, they draw hot water from the TS and gas from the distribution grid. The operation of all technologies and controllable loads are coordinated by a HEMS, while optimizing a goal set by the user. The HEMS generates schedules based on information on future electricity prices, electricity \(\text {CO}_2\) intensity, weather conditions, PV generation, and electric and thermal load. In practical application, the HEMS receives the electricity price and \(\text {CO}_2\)-intensity signals from the utility, whereas forecasts of weather conditions and PV generation may be provided by an external service, such as the web based Solecast (SOLCAST 2020). Information on electric and thermal load demand is provided by a load forecasting service based on historical smart meter readings and preferences for indoor conditions and starting time of appliances are set by the user. Based on this information and knowing the current state of the system, the HEMS finds the expected optimal schedule for the resources of the smart home and issues commands to all controllable devices via smart plugs. 
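To make the information flow around the HEMS concrete, the sketch below organizes the resources of Fig. 1 by their role (generation, storage, transformation, as classified above) together with the bundle of external signals gathered before each scheduling run. This is an illustrative data model of our own, not the authors' software design, and the ratings and field names are placeholders.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Resource:
    """A smart home technology, classified as in the text: generation (PV, SB,
    mu-CHP), storage (HES, PEV, TS), or transformation (EHP, IH)."""
    name: str
    role: str                 # "generation" | "storage" | "transformation"
    carriers_in: List[str]    # energy carriers consumed, e.g. ["gas"]
    carriers_out: List[str]   # energy carriers or services delivered
    rating_kw: float          # placeholder size, not taken from the paper

@dataclass
class HemsInputs:
    """Signals collected before each run: prices and CO2-EI from the utility,
    weather and PV forecasts from an external service (e.g. Solcast), load
    forecasts from smart-meter history, and user preferences."""
    electricity_price: List[float]       # EUR/kWh, per time step
    co2_intensity: List[float]           # kgCO2-eq/kWh, per time step
    outdoor_temperature: List[float]     # degrees Celsius
    pv_forecast: List[float]             # kW
    electric_load_forecast: List[float]  # kW
    dhw_demand_forecast: List[float]     # kW
    user_preferences: Dict[str, float]   # e.g. {"T_day": 20.0, "T_night_min": 16.0}

SMART_HOME = [
    Resource("PV", "generation", [], ["electricity"], 3.0),
    Resource("SB", "generation", ["gas"], ["hot_water"], 24.0),
    Resource("mu-CHP", "generation", ["gas"], ["electricity", "hot_water"], 1.0),
    Resource("EHP", "transformation", ["electricity"], ["hot_air", "cold_air"], 5.0),
    Resource("IH", "transformation", ["electricity"], ["hot_water"], 2.0),
    Resource("TS", "storage", ["hot_water"], ["hot_water"], 10.0),
    Resource("HES", "storage", ["electricity"], ["electricity"], 5.0),
    Resource("PEV", "storage", ["electricity"], ["electricity"], 7.4),
]
```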
Designing the HEMS from a software engineering perspective is beyond the scope of the present work; we refer to our proposal in Georgievski et al. (2020) and Aiello et al. (2021) for a service-oriented energy management system (EMS) for microgrids and general smart energy systems software architectures, respectively. In this work, we assume that the required information is provided by external services. To tackle information uncertainty, the responsible services regularly provide new, updated information for the hours to come and the HEMS adjusts the optimal schedules accordingly. The scheduling problem is formulated as a discrete-time mixed-integer linear programming (MILP) problem employing a rolling horizon approach, which iteratively finds an optimal schedule over finite, overlapping prediction horizons.
Multi-energy multi-objective operation scheduling
The majority of users care about their energy bills and are keen on reducing them; some of them may also care about the environmental impact of their energy consumption. The HEMS is responsible for the optimal operation of the system, while preserving the user's satisfaction and fulfilling environmental and/or economic objectives. The HEMS schedules the supply and demand of multiple energy carriers, namely, electricity, natural gas, hot water, and hot/cold air. The aim is to provide multiple services, namely, electricity, heat, and cooling. If a PEV is present, transportation becomes an additional service to be provided by the system, as the PEV must be charged before it leaves, while vehicle-to-home (V2H) technologies enhance the flexibility of the system via bidirectional smart charging. Next, we formulate the multi-objective problem within the rolling horizon framework. For the description and the model of the individual elements of the smart home we refer to Fiorini (2021).
Multi-objective problem
We define a multi-objective problem that considers both the environmental and the economic goals. The environmental objective function accounts for the \(\text {CO}_2\)-equivalent emissions due to the consumption of electricity and gas, local solar power generation, and use of electric storage units as follows: $$\begin{aligned} \Phi _{\text{ENV}}&= \Delta \text{t} \sum _{t=t_0}^{t_{\text{end}}}\biggl (EF_{\text{e},t}P_{\text{e},t} + EF_\text{g}P_{\text{g},t} + EF_{\text{PV}}P_{\text{PV},t} + \\ & \quad + EF_{\text{HES}} P_{\text{HES},t}^{\text{dis}} + EF_{\text{PEV}}P_{\text{PEV},t}^{\text{dis}}\biggr), \end{aligned}$$ where \([{t_0},t_{\text{end}}]\) is the optimization interval; \(P_{\text{e},t}\) and \(P_{\text{g},t}\) (kW) are the electricity and gas imported from the distribution grids at t, respectively; \(P_{\text{PV},t}\) is the PV output at t; \(EF_{\text{e},t}\), \(EF_\text{g}\), and \(EF_\text{PV}\) are the \(\text {CO}_2\)-equivalent emission factors of imported electricity, gas, and electricity locally generated with PV, respectively. \(EF_{\text{HES}}\) and \(EF_{\text{PEV}}\) account for the emissions during the production and construction phase of the HES and the PEV, respectively. We assume that \(EF_{\text{e},t}\) varies hourly, while \(EF_\text{g}\), \(EF_\text{PV}\), \(EF_{\text{HES}}\), and \(EF_{\text{PEV}}\) are constant.
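As a worked version of the environmental objective just defined, the function below evaluates \(\Phi_{\text{ENV}}\) for a realized 15-minute schedule. It is a plain post-hoc evaluation rather than part of the MILP formulation, and all emission factors and power profiles are illustrative placeholders.

```python
import numpy as np

DT_H = 0.25  # 15-minute time step, in hours

def phi_env(P_e, P_g, P_pv, P_hes_dis, P_pev_dis, EF_e, EF_g, EF_pv, EF_hes, EF_pev):
    """CO2-eq emissions over the horizon: DT * sum_t (EF_e,t*P_e,t + EF_g*P_g,t
    + EF_PV*P_PV,t + EF_HES*P_HES,t^dis + EF_PEV*P_PEV,t^dis).
    Power profiles are in kW, emission factors in kgCO2-eq/kWh."""
    P_e, P_g, P_pv = map(np.asarray, (P_e, P_g, P_pv))
    P_hes_dis, P_pev_dis = np.asarray(P_hes_dis), np.asarray(P_pev_dis)
    return DT_H * np.sum(np.asarray(EF_e) * P_e + EF_g * P_g + EF_pv * P_pv
                         + EF_hes * P_hes_dis + EF_pev * P_pev_dis)

# Illustrative 6-hour horizon (24 steps) with placeholder values.
T = 24
emissions = phi_env(
    P_e=np.full(T, 1.2), P_g=np.full(T, 2.0), P_pv=np.full(T, 0.5),
    P_hes_dis=np.zeros(T), P_pev_dis=np.zeros(T),
    EF_e=np.full(T, 0.65),          # hourly marginal CO2-EI, here kept flat
    EF_g=0.20, EF_pv=0.04, EF_hes=0.05, EF_pev=0.05,
)
print(f"Phi_ENV over the horizon: {emissions:.2f} kgCO2-eq")
```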
The economic objective function calculates the operating and maintenance costs of the household, which include the operating cost for buying electricity and gas from the distribution grids, the revenues from consuming and selling self-generated electricity, the maintenance costs of the major units, and the degradation costs of the PEV and HES. It is formulated as follows: $$\Phi _{\text{ECO}} = \sum _{t=t_0}^{t_{\text{end}}}(OC_t + MC_t + DC_t),$$ $$\begin{aligned} OC_t &= \Delta \text{t} (p_{\text{e},t}P_{\text{e},t} + p_\text{g}P_{\text{g},t} - FIT_{\text{PV}}P_{\text{PV},t}^{\text{EXP}} - \\ & \quad - FIT_{\text{CHP}}P_{\text{CHP},t}^{\text{EXP}} - SC_{\text{PV}}P_{\text{PV},t}^{\text{SELF}} - SC_{\text{CHP}}P_{\text{CHP},t}^{\text{SELF}}), \end{aligned}$$ $$MC_t = MC_{\text{SB},t} + MC_{\text{TS},t} + MC_{\text{PV},t} + MC_{\text{CHP},t},$$ $$DC_t = DC_{\text{PEV},t} + DC_{\text{HES},t},$$ where \(FIT_{\text{PV}}\) and \(FIT_{\text{CHP}}\) are the feed-in tariffs for selling the excess self-generated electricity; \(SC_{\text{PV}}\) and \(SC_{\text{CHP}}\) are the compensations for self-consumption of electricity generated via PV (\(P_{\text{PV},t}^{\text{SELF}}\)) and \(\mu\)-CHP (\(P_{\text{CHP},t}^{\text{SELF}}\)); \(p_{\text{e},t}\) and \(p_{\text{g}}\) are the electricity and gas prices, respectively. We assume that the electricity price varies hourly, while the gas price is constant. These are common assumptions when modeling services for the Smart Grid (Fiorini and Aiello 2019a; Pagani and Aiello 2015). Maintenance and degradation costs are calculated as: $$MC_{x,t} = \gamma _{x}^{\text{m}} \Delta \text{t} Z_{x,t},$$ $$DC_{x,t} = \gamma _{x}^{\text{d}} \Delta \text{t} Z_{x,t},$$ where \(\gamma _x^{\text{m}}\) and \(\gamma _x^{\text{d}}\) are the maintenance and degradation cost of the resource x, respectively, and \(Z_{x,t}\) is the (thermal) power generated by resource x. We define a multi-objective problem that takes both the environmental and the economic goals into account, and it is formulated as a weighted sum: $$\text{min}(c \cdot w\cdot \Phi _{\text{ENV}} + (1-w)\cdot \Phi _{\text{ECO}}),$$ where w is a weight factor that varies with the user preferences, and c is a scaling factor such that \(c\cdot \Phi _{\text{ENV}}\) and \(\Phi _{\text{ECO}}\) have the same unit. The scaling factor c is calculated as the ratio of the costs obtained by minimizing the emissions to the emissions obtained by minimizing the costs. Additional constraints define the dynamics of import and export between the smart home and the distribution grids. It is not possible to import from and export to the electricity distribution grid at the same time. Moreover, the imported power is limited in order to take the grid's technical characteristics into account. The maximum power that can be imported may be determined either by the type of connection available, as in The Netherlands (ENEXIS Netbeheer 2020), or by contract, as in France (Selectra 2020).
Rolling horizon
The optimization problem defined in the previous section is subject to several uncertainties due to changes in weather conditions, PV production, electricity emission factor and prices, and uncertainty in the users' demand. A real HEMS would adjust the load schedule and the operating point of controllable units as new information becomes available.
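To make the weighted-sum trade-off and the import/export rule concrete, the following is a minimal single-window sketch written against Gurobi's Python interface (gurobipy), consistent with the Python-plus-Gurobi setup described in the implementation section below. It is not the authors' model: only grid import, PV export, and PV self-consumption appear, the gas, storage, maintenance, and degradation terms are omitted, and every number is a placeholder.

```python
import gurobipy as gp
from gurobipy import GRB

T, DT = 24, 0.25                    # 6-h window: 24 steps of 15 min
load = [1.0] * T                    # uncontrollable electric load, kW (placeholder)
pv = [0.8] * 12 + [1.6] * 12        # PV forecast, kW (placeholder)
p_e = [0.30] * T                    # retail electricity price, EUR/kWh (placeholder)
ef_e = [0.65] * T                   # marginal CO2-EI, kgCO2-eq/kWh (placeholder)
FIT_PV, SC_PV, EF_PV = 0.08, 0.02, 0.04   # feed-in tariff, self-consumption bonus, PV emission factor
w, c, P_MAX = 0.5, 0.4, 17.0        # user weight, scaling factor, connection limit in kW

m = gp.Model("hems_window")
p_imp = m.addVars(T, lb=0.0, ub=P_MAX, name="P_e")        # grid import, kW
p_exp = m.addVars(T, lb=0.0, ub=P_MAX, name="P_PV_EXP")   # PV export, kW
pv_self = m.addVars(T, lb=0.0, name="P_PV_SELF")          # self-consumed PV, kW
y_imp = m.addVars(T, vtype=GRB.BINARY, name="importing")  # 1 if importing at t

for t in range(T):
    m.addConstr(p_imp[t] + pv_self[t] == load[t])     # electric balance
    m.addConstr(pv_self[t] + p_exp[t] == pv[t])       # PV split: self-use vs. export
    m.addConstr(p_imp[t] <= P_MAX * y_imp[t])         # no simultaneous import ...
    m.addConstr(p_exp[t] <= P_MAX * (1 - y_imp[t]))   # ... and export

phi_env = DT * gp.quicksum(ef_e[t] * p_imp[t] + EF_PV * pv[t] for t in range(T))
phi_eco = DT * gp.quicksum(p_e[t] * p_imp[t] - FIT_PV * p_exp[t] - SC_PV * pv_self[t]
                           for t in range(T))
m.setObjective(c * w * phi_env + (1 - w) * phi_eco, GRB.MINIMIZE)
m.optimize()
print(f"Phi_ENV = {phi_env.getValue():.3f} kgCO2-eq, Phi_ECO = {phi_eco.getValue():.3f} EUR")
```

In the rolling-horizon operation described above, such a window is rebuilt and re-solved at every 15-minute step with updated forecasts, and only the first step of the resulting schedule is applied to the devices.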
Our MILP model is formulated and solved at discrete time steps of 15 min and it employs a rolling horizon approach, which iteratively finds an optimal schedule over finite overlapping prediction horizon. Only the actions for the next time step are implemented, and the process is then repeated for the following prediction horizon considering new information. The main limitation of our approach lies in the underlying assumption that, between two consecutive time steps, the system and all exogenous factors follow the predicted values. In a real system, this may not be the case: in fact, the real-time energy consumption is affected by the unpredictability of the user behavior and final energy prices are set after the physical delivery of the energy. Therefore, the actual state of the system at the beginning of a new optimization run may deviate from the state predicted by the HEMS. In a real system, smart meters, smart plugs, and smart sensors would track the real-time state of all connected loads, of the main ambient parameters, and of the grid signals and communicate them to the HEMS, which, in turn, would adjust the future operation of all devices accordingly. The more frequently the HEMS receives updated information, the lower the impact of unexpected deviation on the future operation of the system. System implementation and simulation setup Rolling horizon parameters The optimization problem is modeled using time steps of 15 min, and it covers an entire year. The prediction horizon is set to 6 h, i.e., 24 time steps. Only the actions for the first time step are executed before repeating the optimization for the new period. Consequently the simulation of one year corresponds to 35,040 iterations. The MILP model and the rolling horizon framework are implemented using Python 3.7.2 and solved with the GUROBI optimizer (Gurobi Optimization LLC 2020). Smart homes are totally independent from each other, hence, they may be simulated in parallel. The simulations were run on the Peregrine cluster of the University of Groningen, using one CPU Intel Xeon E5-2680 v3, 2.50 GHz, or one CPU Intel Xeon E5-2680 v4, 2.40 GHz, with 1 Gb of RAM. In order to solve the problem over one year, the optimization process goes through 35,040 iterations; the average runtime of one iteration is 81 ms for the French case study, 96 ms for the German one, and 131 ms for the Dutch one. The longest runtime for a single iteration is 1463 ms, while the shortest is 8 ms. To evaluate the proposed multi-energy management approach, we consider a group of 200 independent smart homes, each of them equipped with a HEMS. We model six types of household differentiating on the basis of available technologies, and we define three configurations of the group with varying percentage of household types. We simulate the behavior of these configurations in three scenarios with varying penetration of PV and storage systems, and we consider historical data of energy prices and electricity CO2-EI of three European countries, namely, Germany, France, and the Netherlands. These countries are chosen because of their diversity in the energy consumption mix and in their energy policies. Since 2010 the German energy policy, called Energiewende, has been aiming at phasing out both coal and nuclear power plants, while expanding the renewable capacity, in particular, wind and solar (Agora Energiewende 2014). The announced goal is to supply at least 80% of electricity with renewables by 2050. 
Yet, in 2018, about 65% of the German electricity still came from conventional sources. In France, electricity supply mostly relies on nuclear and hydro power plants, thus having a very low emission factor, while the contribution of renewables to total generation in 2018 was still below one fifth. As for The Netherlands, more than two third of the electricity comes from fossil fuel power plants. The Climate Agreement passed in 2019 has set the targets for carbon emissions to be reduced by 55% compared to 1990 and for phasing out natural gas in the building environment by 2050 (Government of the Netherlands 2021). Households and appliances To simulate realistically the behavior of a large group of users, we model several types of households differentiating on the basis on their electric and thermal loads, flexibility, and operation preferences. Statistical national data are used to define the following household characteristics: household size, daily use of private cars (i.e., number of trips per day, average distance per trip, and probability of using a car at a particular time of the day), historical weather data, and annual energy consumption. Ownership probabilities of the most common household appliances, their number of operation cycles per household per week, probabilities of appliance usage, and their power demand profiles are derived from the literature, i.e., Mauser et al. (2016); Stamminger (2008); Destatis (2018); Schmitz and Stamminger (2014); Energuide.be (2021). Seasonal uncontrollable load profiles are derived from the REFIT Dataset (Murray and Stankovic 2016) and adjusted based on statistical national data on yearly energy consumption and usage composition. As for the thermal load, several DHW demand profiles are derived from Roux et al. (2018) and adjusted based on statistical data on hot water consumption. The preferred daytime indoor temperature is set equal to \(20\,^\circ \text {C}\). During night time, the indoor temperature cannot drop below 16 °C as recommended in Wookey et al. (2014). Commercial technologies in Table 1 are used as reference for the simulated devices; missing data are taken from the literature. Table 1 Reference technologies Smart home operation modes Based on the available technologies, we define eight possible smart home configurations and order them by increasing flexibility: Electricity-based (E): all the appliances are traditional, using only electricity as energy vector. The thermal load is supplied by the EHP and the IH installed into the TS; Gas-based thermal (G): all the appliances are traditional. The thermal load is supplied by the SB; Partial hybrid (P): all the appliances are traditional. The thermal load is supplied by a hybrid heating system that consists of a SB, an EHP, and an IH; Electricity-based with hybrid appliances (EH): all the appliances are hybrid, and the house is connected to the natural gas distribution grid to supply the hybrid oven (OV) and cooker hob (HB). The thermal load is supplied by the EHP and the IH, while no SB is installed; Gas-based thermal with hybrid appliances (GH): all the appliances are hybrid. The thermal load is supplied by the SB; Fully hybrid (H): all the appliances are hybrid. 
The thermal load is supplied by a hybrid heating system; Partial hybrid with \(\mu\)-CHP (P\(\mu\)): same as configuration P with the addition of a \(\mu\)-CHP to generate electricity and supply hot water to the TS; and Fully hybrid with \(\mu\)-CHP (H\(\mu\)): same as configuration H with the addition of a \(\mu\)-CHP to generate electricity and supply hot water to the TS. Table 2 Smart home configurations The different configurations, summarized in Table 2, promote increasing integration of energy carriers and technologies. Additionally, we define three levels of increasing user flexibility with respect to indoor and DHW temperature, and allowed delay of cooking and wet appliances. These flexibility levels are summarized in Table 3. Table 3 Levels of load flexibility We define six configurations for the group of 200 smart homes with varying percentages of smart home configurations and load flexibility; summarized in Table 4. The cases are defined so as to evaluate the impact of energy services coupling and of load flexibility. Case A1 is assumed as the reference case, featuring almost no load flexibility and the two most common smart home configurations, namely, electricity-based and gas-based thermal operation mode. Case A2 and case A3 feature low and high level of load flexibility, respectively. Case B1 includes some smart homes equipped with a hybrid heating system and some with a \(\mu\)-CHP system. Case C1 is derived from case B1, assuming that 50% of all smart homes are equipped with hybrid appliances. Table 4 Group configuration matrix The current penetration level of PV and storage technologies significantly varies across European countries. In Germany, according to recent statistics, there are approximately 1.7 million of PV systems, of which 60% has a capacity smaller than 10 kWp, corresponding to about 6 GW (Fraunhofer ISE 2019). Assuming 38 million households, we estimate 2.7% of them being currently equipped with a small-scale PV system. Similarly, we estimate that approximately 7.5% of Dutch households are equipped with PV systems, and 1.3% of French ones too (CBS 2020; Connexion Journalist 2020). As for HES devices, statistics are scarce. In Germany, recent researches estimated that 200,000 residential batteries have been installed by 2020 (Enkhardt 2020). More data are available on vehicles and electric vehicles. In 2019, electric passenger cars, including hybrid electric, battery electric, and plug-in hybrid, accounted for roughly 0.9% of German passenger cars, for 3.7% of Dutch ones, and for 0.4% of French ones (European Automobile Manufacturers Association 2019). We assume four scenarios with increasing ownership of PV and storage technologies, which we refer to as "current", "realistic future", "optimistic future", and "very optimistic future". In particular, in the current scenario we assume that 5% of smart homes are equipped with a PV system, hence becoming prosumers. In the realistic future scenario, 15% of smart homes are prosumers and one third of them are equipped with a storage device. In the optimistic future scenario, 50% of smart homes are prosumers and half of them are equipped with a storage device. In the very optimistic future scenarios, 50% of smart homes are prosumers, and all of them are equipped with a storage device Considering two storage technologies, we define six simulation scenarios; summarized in Table 5. An accurate prediction of the deployment of such technologies is beyond the scope of this study. 
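The ownership estimates above follow from simple ratios of the quoted statistics, and the scenario definitions translate directly into counts for the group of 200 homes. The short check below reproduces the German figure and tabulates the scenario assumptions; the storage share in the "current" scenario is not stated in the text and is set to zero here purely for illustration.

```python
# German estimate: 1.7 million PV systems, of which 60% are below 10 kWp,
# against an assumed 38 million households.
pv_systems, small_share, households = 1.7e6, 0.60, 38e6
print(f"Households with a small-scale PV system: {pv_systems * small_share / households:.1%}")  # ~2.7%

# Scenario assumptions as shares of the 200 smart homes (from the text).
scenarios = {
    "current":                {"prosumers": 0.05, "storage_among_prosumers": 0.0},   # storage share assumed
    "realistic future":       {"prosumers": 0.15, "storage_among_prosumers": 1 / 3},
    "optimistic future":      {"prosumers": 0.50, "storage_among_prosumers": 0.5},
    "very optimistic future": {"prosumers": 0.50, "storage_among_prosumers": 1.0},
}
n_homes = 200
for name, s in scenarios.items():
    n_pros = round(n_homes * s["prosumers"])
    n_store = round(n_pros * s["storage_among_prosumers"])
    print(f"{name}: {n_pros} prosumers, of which {n_store} with a storage device")
```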
Table 5 Simulation scenarios Emission and price data Due to large diversities in the energy mix, the CO2-EI greatly varies among France, Germany, and The Netherlands, as shown by the data reported in Table 6. The CO2-EI values calculated with the marginal method are generally higher than those calculated with the average method. The marginal power plant corresponds to the source that increases or decreases if a marginal change in the load occurs, and it may be a single power plant, a group of them, or a cross-border flow. In a market scenario, the marginal power plant is usually the most expensive generator committed at the time. As such, wind and solar power plants are unlikely to be the marginal power plant, as they enter the market with low marginal costs and have priority of dispatch (Regett et al. 2018). While the average-CO2-EI is a good indicator of the share of renewable in the energy mix, the marginal-CO2-EI is a better one for the impact of short-term decisions. Several publications recommend the use of the marginal-CO2-EI when investigating the short-term effect of a change in the electricity demand (Dandres et al. 2017) and the optimal operation of a building (Graabak et al. 2014; Andresen et al. 2017). Table 6 \(\text {CO}_{2}\)-emission intensity (CO2-EI) of 2018 in Germany, France, and The Netherlands: minimum, mean, and maximum values in kg\(\text {CO}_2\)-eq/MWh The dynamic electricity prices we use to calculate the operating costs of the smart homes are derived from the day-ahead prices. Table 7 shows the comparison of the minimum, mean, and maximum values of the day-ahead prices of 2018 in Germany, France, and the Netherlands. On top of the hourly values, we add an amount determined by the average taxes and network component paid by residential users in 2018, which are derived from European Commission (2019) and summarized in Table 8. Natural gas prices, on the contrary, are assumed constant; values are taken from Eurostat (2020), including taxes and levies, and are reported in Table 9. As for the incentive schemes supporting prosumers, we identify two main measures: a compensation for feeding-in the excess energy and a compensation for self-consumption of energy generated by PV panels and \(\mu\)-CHP systems. The values are summarized in Table 10. German data are mostly derived from Fraunhofer ISE (2019); Hendricks and Mesquita (2019); Heimann (2021). French data are derived from CEGIBAT (2020); Hendricks and Mesquita (2019); the feed-in tariff (FIT) for \(\mu\)-CHP generation varies throughout the year: in winter, the compensation is 135–150 €/MWh, while in summer is about 30 €/MWh (CEGIBAT 2020). Dutch data are derived from Hendricks and Mesquita (2019); according to PACE (2018), there are no subsidies for gas-fired \(\mu\)-CHP systems. Table 7 Day-ahead price of 2018 in Germany, France, and The Netherlands: minimum, mean, and maximum values in €/MWh Table 8 Taxes and network components assumed for residential users in Germany, France, and The Netherlands. Table 9 Natural gas price, including taxes and levies, for household consumers in 2018 in Germany, France, and The Netherlands. Table 10 Incentive schemes for prosumers We define a number of metrics to compare the scenarios and smart home configurations described in the previous section. 
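Returning briefly to the price inputs just described, the sketch below illustrates how an hourly retail price signal could be assembled from the day-ahead series plus the flat residential surcharge (taxes and network component, Table 8), with natural gas priced at a constant value. All numbers are placeholders, not the values used in the study.

```python
# Build an hourly retail price signal: day-ahead market price plus a flat
# surcharge for taxes and network charges (cf. Tables 7 and 8).
# All numbers below are placeholders for illustration only.

def retail_price(day_ahead_eur_per_mwh, surcharge_eur_per_mwh):
    """Hourly price paid by a residential user, in EUR/MWh."""
    return [p + surcharge_eur_per_mwh for p in day_ahead_eur_per_mwh]

day_ahead = [32.1, 29.8, 28.5, 30.2, 38.7, 45.3]   # first hours of a day (EUR/MWh)
surcharge = 180.0                                   # placeholder flat component (EUR/MWh)
gas_price = 60.0                                    # placeholder constant gas price (EUR/MWh)

hourly_prices = retail_price(day_ahead, surcharge)
print(hourly_prices[:3], "EUR/MWh for electricity; gas flat at", gas_price, "EUR/MWh")
```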
In particular, the \(\text {CO}_2\) emissions are calculated over a period of one year and are due to the consumption of imported electricity and gas, local generation by means of PV panels and the \(\mu\)-CHP system, and use of the HES and the PEV. Given that the main goal of this study lies in investigating to what extent the optimal multi-energy management of residential buildings can contribute to reduce their carbon footprint, we consider the marginal-CO2-EI as the most suitable indicator of the short-term environmental impact of alternative operational choices. The annual energy costs include operating and maintenance costs of the smart home and the degradation costs owning to the use of the storage devices. Carbon emissions and energy costs are used to evaluate the impact of the penetration of technologies, coupling of energy services, and load flexibility. When a smart home is equipped with PV panels or with a \(\mu\)-CHP system, we calculate its electricity self-sufficiency, which indicates the share of renewable energy produced by a smart home that it uses to satisfy its total energy demand, and its electricity self-consumption, which indicates the share of renewable energy produced and used by a smart home (Mauser 2017; Williams et al. 2012; Waffenschmidt 2014). Additionally, we look at the energy that a smart home imports from or feeds into the main distribution grid. Discussion of the results The simulation results of the three case studies demonstrate the effects of automated energy management for the supply of thermal and electric loads by optimizing the supply and consumption of multiple energy carriers. We now summarize the main findings with respect to the potential benefits of: coupling of energy carriers, penetration of PV panels and storage devices, and automated energy management. Coupling of energy carriers The introduction of hybrid heating systems and hybrid appliances diversifies the utilization of energy in smart homes. Depending on the available technologies and the goal set by the users, the HEMS decides how to supply the hot water demand; if by means of a IH, a SB, or a \(\mu\)-CHP system, all of which are connected to a TS. Similarly, the space heating demand may be supplied either by the hot water coming from the TS or by the hot air generated by an EHP. Hybrid appliances replace part of the electricity with natural gas or hot water, while delivering the same services of traditional appliances. The extent to which coupling of energy carriers may promote a reduction in costs and emissions depends on the case study. When German users attribute the same importance to the economic and the environmental savings, the hot water demand is mostly satisfied by burning natural gas in the SB, which halves the energy costs and reduces the emission by one third compared to the option of using an IH. Given the low costs of the natural gas, converting electricity to hot water through the IH is economically convenient only when coupled with a co-generation system, which enables the smart home to use the cheap natural gas and, at the same time, to earn a revenue from feeding the excess generation into the distribution grid. In contrast, using electricity to generate hot air for the space heating demand is particularly convenient given the high coefficient of performance (COP) of the EHP. 
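The two prosumer indicators introduced above can be made precise with a small sketch. Here they are computed from hourly generation and load profiles, ignoring storage for simplicity and following the usual definitions (self-consumption: share of local generation used on site; self-sufficiency: share of the demand covered by local generation, cf. Mauser 2017; Williams et al. 2012); the study's implementation may differ in detail.

```python
# Minimal sketch of the two prosumer indicators, computed from hourly
# profiles (kWh per hour). Storage is ignored, so the locally used
# generation is simply min(generation, load) in each hour.

def self_consumption_and_sufficiency(generation, load):
    used_on_site = sum(min(g, d) for g, d in zip(generation, load))
    self_consumption = used_on_site / sum(generation)  # share of own generation used locally
    self_sufficiency = used_on_site / sum(load)        # share of demand covered locally
    return self_consumption, self_sufficiency

pv = [0.0, 0.0, 0.4, 1.2, 1.8, 1.1, 0.3, 0.0]       # toy PV output
demand = [0.3, 0.3, 0.5, 0.6, 0.9, 1.4, 1.2, 0.8]   # toy household demand

sc, ss = self_consumption_and_sufficiency(pv, demand)
print(f"self-consumption = {sc:.0%}, self-sufficiency = {ss:.0%}")
```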
Combining the use of the SB for hot water demand and of the EHP for the space heating demand is also more sustainable than relying on a fully-electricity-based or fully-gas-based heating system. While the combination of the \(\mu\)-CHP with the IH is economically convenient, the overall efficiency of the conversion process from natural gas to electricity to hot water is such that the carbon footprint of the hot water is higher than if it was produced by a SB. As for the impact of hybrid appliances, they appear to have a positive benefit both in terms of costs (with savings as high as 20%) and the emissions (8%). Additionally, when a \(\mu\)-CHP is available, hybrid appliances contribute to reaching almost full self-sufficiency and self-consumption. In the French case, there is a remarkable contrast between the economic and the environmental impact of coupling of energy carriers. On the one hand, a gas-based heating system allows for at least one third lower energy costs compared to a electricity-based one. When a \(\mu\)-CHP is available, at least 70% of the hot water is produced by burning gas, either in the co-generation system or in the SB. On the other hand, using natural gas has a dramatic effect on the emissions, increasing them by at least 90% and up to 135%. Consequently, hybrid appliances further magnify this contrast, reducing energy costs by up to 10%, while increasing the emissions up to 16%, compared to traditional appliances. Yet when coordinated with a \(\mu\)-CHP, they have a positive impact on both the electricity self-consumption and self-sufficiency. As for the Dutch case, using natural gas for supplying the thermal demand is the most economical and sustainable solution, reducing costs by one fourth and emissions by one fifth compared to an electricity-based heating system. The combination of a gas-fired SB for the production of hot water with an EHP for the space heating allows for further savings. The IH enables a reduction in both costs and emissions only when coupled to a \(\mu\)-CHP. These results are similar to those for the German case study. However, different economic schemes, with natural gas being more expensive in The Netherlands than in Germany, while electricity being cheaper, limit the cost benefits of switching to natural gas. This is particularly evident when looking at the impact of co-generation systems, which are much more remunerative for German smart homes than for Dutch ones, also given the absence of a compensation scheme for \(\mu\)-CHP systems in The Netherlands. In contrast, the marginal-CO2-EI of Dutch electricity is so high that even the double conversion of natural gas to electricity to hot water may reduce the carbon footprint. PV panels and storage PV panels enable significant cost savings for all types of smart homes in all case studies, given the FIT schemes available for solar generation. Savings are especially significant in The Netherlands, where a net metering scheme is applied; smart homes with electric heating may reduce their costs by 50%, while those with a gas-bases or hybrid heating system up to 80%. In both Germany and The Netherlands, the combination of PV panels and co-generation allows for a positive net revenue, while this is not the case in France. As for the environmental impact of PV panels, we see opposite trends between the German and Dutch cases, on the one side, and the French one, on the other. 
In the former, smart homes producing solar energy reduce their carbon footprint irrespective of their configuration, the only exception being smart homes equipped with a \(\mu\)-CHP system, as they tend to increase their gas consumption in order to maximize their revenue. In contrast, French smart homes gain a minor sustainability benefit from the availability of PV panels, unless they may replace natural gas used from heating purposes with solar electricity. The effects of storage strongly depend on the type of device: static HES generally increase the energy costs in all case studies and enable little or no reduction in emissions; PEVs significantly reduce both costs and emissions in Germany, whereas they only decrease emissions in The Netherlands. In France, adding storage to PV panels has overall a negative impact, both in terms of costs and emissions. These differences among the case studies are all the more evident when looking at the total emissions caused by the entire group of 200 smart homes. In Germany and The Netherlands, the progressive penetration of PV panels and storage may significantly contribute to the decarbonization of the residential buildings, especially when hybrid heating systems are not widely available. In contrast, in France PV panels and storage systems have a minor (− 1.5%) or even negative (+ 0.5%) environmental impact, unless coupled with other technologies, such as hybrid heating systems and \(\mu\)-CHP. Multi-objective automated energy management The simulation results show that there are ample opportunities for using both CO2-EI and price signals in the landscape of home automation, in order to optimally use diverse energy carriers and coordinate the operation of multiple technologies, so as to achieve an economic or environmental goal, or a mix of them, set by the users. While smart homes operating in more traditional configurations, such as electricity-based or gas-based thermal ones, have little choice between reducing their emissions or their costs, the availability of multiple technologies, such as PV panels or hybrid heating systems, enables larger savings. Depending on the case study, a single smart home equipped with a hybrid heating system may reduce its annual carbon footprint by 10% to 40%, i.e., by 400 kg to more than one tonne of \(\text {CO}_2\) emissions. Yet this corresponds to an increase in costs that varies between 25 € and 700 € per year, particularly when a \(\mu\)-CHP system is used. It is, therefore, not only possible, but it is useful to consider \(\text {CO}_2\) and price signals in the landscape of home automation to remunerate the home users and, at the same time, to decrease the environmental footprint of residential buildings, so as to promote and realize bold, sustainable energy policies. Using the average-\({\text {CO}_{2}}\) When assessing the environmental impact of a change in the electricity consumption of a building, the marginal-CO2-EI should be used, since such a variation in the demand translates into an adjustment of the marginal power plant's production (Graabak et al. 2014). The conclusions drawn from the results rely on the assumption of using marginal-CO2-EI instead of the average-CO2-EI. If average-CO2-EI were used, the carbon footprint of the smart homes would obviously be lower in absolute values. In France, for instance, the mean value of the marginal-CO2-EI is twice as large as the mean value of the average-CO2-EI (see Table 6). 
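To make the difference between the two accounting choices concrete, the sketch below evaluates the same hourly consumption profile against a marginal and an average CO2-EI series; the intensity values are invented placeholders, not the 2018 data of Table 6. A HEMS minimizing one or the other signal would generally shift flexible loads to different hours.

```python
# Carbon footprint of one hourly consumption profile under two different
# CO2-emission-intensity signals. All numbers are invented placeholders.

def footprint_kg(consumption_mwh, intensity_kg_per_mwh):
    return sum(c * i for c, i in zip(consumption_mwh, intensity_kg_per_mwh))

consumption = [0.4, 0.3, 0.5, 0.9, 1.1, 0.6]      # MWh per hour (toy profile)
average_ei  = [60, 55, 50, 45, 48, 52]            # kgCO2-eq/MWh, average method
marginal_ei = [480, 500, 510, 620, 650, 590]      # kgCO2-eq/MWh, marginal method

print("average-method footprint :", footprint_kg(consumption, average_ei), "kg")
print("marginal-method footprint:", footprint_kg(consumption, marginal_ei), "kg")
```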
In Germany, a strongly renewable-based production mix would likely lead to a marginal generation mostly supplied by polluting coal power plants (Regett et al. 2018, 2019). Beside the obvious difference in absolute values, using the average-CO2-EI would result in different short-term scheduling decisions. For instance, given that the average generation mix is mostly affected by the increasing availability of solar generation around noon, a HEMS using average-CO2-EI would likely scheduled as many flexible appliances as possible to operate around that time, subject to the users' preferences. Consequently, higher peaks in demand would be noticed around midday, as we show in Fiorini and Aiello (2020). Such effects shall be taken into account, therefore, when designing future demand response programs based on carbon emissions. Buildings are responsible for more than one third the energy consumption and \(\text {CO}_2\) emissions in most industrialized countries. In spite of several studies on the economy of energy management in buildings, the environmental aspect has often been overlooked or trivially been equated to the economic one. Yet making buildings more efficient and sustainable is a fundamental step of the roadmap toward low-carbon energy systems and a sustainable society. To this end, a smarter and more coordinated use of new, complementary technologies is pivotal. To answer our research question, we focused on residential buildings and, in particular, on smart homes, for which we proposed to model them as multi-energy systems equipped with several smart technologies for production, transformation, storage, and consumption of electric and thermal energy. The model describes the dynamics of users' preferences and comfort conditions. A HEMS is responsible for determining the operation of all the flexible units that enables the achievement of an economical and/or environmental goal. The effectiveness of the proposed approach in reducing the carbon footprint of smart home energy use, while remunerating the home users, has been evaluated with three representative European case studies based on historical and statistical data from Germany, France, and The Netherlands. The behavior of a large group of smart homes of different size and equipped with different technologies has been simulated in six scenarios with varying penetration of PV panels and storage devices. The results have shown that the extent to which synergies among energy carriers and technologies enable cost savings and reduction in emissions strongly depend on the characteristics specific to the country where such interactions are sought. A hybrid heating system has a major role in the integration of natural gas and electricity. Combining a gas-fired SB for hot water production and an EHP for space heating is often the most cost-effective solution, though an IH enables a significant reduction in costs when coupled to a \(\mu\)-CHP system. However, the environmental impact of the coupling of energy carriers for heating purposes largely varies across the case studies. In Germany and in The Netherlands, where a large share of the marginal power is generated by fossil fuel-fired power plants, a heating system relying on an SB and an EHP is also more sustainable than a fully-electricity-based or fully-gas-based one. 
In contrast, the use of natural gas in French smart homes dramatically increases their emissions; the more electricity is used, the lower are the emissions, owing to the fact that most of the marginal power comes from low-carbon hydro and nuclear power plants. Similarly, the impact of the introduction of hybrid appliances depends on the case study, being largely beneficial in Germany and in The Netherlands, while having a negative environmental impact in France. The dual, often opposite nature of the multi-energy management problem is further magnified by the growing penetration of PV panels and storage devices. Current FITs for the excess solar generation make PV panels a remunerative solution for smart homes in all three countries, irrespective of the available technologies. Such compensation schemes aim at incentivizing the installation of small-scale solar capacity, as one of the main measure of the transition toward low-carbon energy systems. Yet, while PV panels significantly contribute to the decarbonization of the smart homes in Germany and The Netherlands, they have a minor or even a negative impact in France, unless coupled with other technologies. As for the storage devices, PEVs offering smart charging service to smart homes appear to be generally more beneficial than HES, though their actual impact varies by country. The results have shown, therefore, that there is no "one-size-fits-all" solution that can be considered suitable for all countries and all users. Additionally, our multi-objective approach has indicated that using both price and carbon signals in the landscape of smart home automation systems and, in general, of building automation, is not only possible but also useful, given the duality of the problem. Users that aim at minimizing their costs may end up increasing their emissions by up to two thirds, whereas users most concerned with their carbon footprint may pay three times as much as the others. Taking into account both the economic and environmental aspects of the multi-energy management problem is, therefore, of the utmost importance in order to promote the acceptance of technologies among users and to realize their full potential on which bold, sustainable energy policies rely. All data generated and analyzed during the current study are available from the corresponding author on reasonable request. Agora Energiewende (2014) The German Energiewende and its Climate Paradox Aiello M, Fiorini L, Georgievski I (2021) Software engineering smart energy systems. In: Fathi M, Zio E, Pardalos PM (eds) Handbook of smart energy systems. Springer, Cham Andresen I, Lien KM, Berker T, Sartori I (2017) Greenhouse gas balances in zero emission buildings—electricity conversion factors revisited. Technical Report, 17 Brahman F, Honarmand M, Jadid S (2015) Optimal electrical and thermal energy management of a residential energy hub, integrating demand response and energy storage system. Energy Build 90:65–75 British Gas (2022) Hydrogen boilers: everything you need to know. https://www.britishgas.co.uk/the-source/greener-living/hydrogen-boilers.html. Accessed 8 Oct 2022 CBS (2020) Business solar capacity now exceeds residential. https://www.cbs.nl/en-gb/news/2020/25/business-solar-capacity-now-exceeds-residential. Accessed 28 Apr 2022 CEGIBAT (2020) Cogénération: Tarif d'achat de l'éléctricité (contrat C16). https://cegibat.grdf.fr/dossier-techniques/marche-energie/tarif-electricite-cogeneration-c16. 
Accessed 28 Apr 2022 Chen R, Tsay Y-S, Ni S (2022) An integrated framework for multi-objective optimization of building performance: carbon emissions, thermal comfort, and global cost. J Clean Prod 359:131978 Connexion Journalist (2020) Solar panels for your house in France? Get them at Ikea. The Connexion. Accessed 28 Apr 2022 Daikin Europe N.V. (2019) Air conditioners heating & cooling. https://www.daikin.eu/ Dandres T, Farrahi Moghaddam R, Nguyen KK, Lemieux Y, Samson R, Cheriet M (2017) Consideration of marginal electricity in real-time minimization of distributed data centre emissions. J Clean Prod 143:116–124 Destatis (2018) Wirtschaftsrechnungen - Einkommens- und Verbrauchsstichprobe Ausstattung privater Haushalte mit ausgewählten Gebrauchsgütern und Versicherungen. Technical Report 1, Statistisches Bundesamt Electric Vehicle Database (2021) Nissan Leaf. https://ev-database.org/car/1106/Nissan-Leaf. Accessed 28 Apr 2022 Energuide.be (2021) How much energy do my household appliances use? https://www.energuide.be/en/questions-answers/how-much-energy-do-my-household-appliances-use/71/. Accessed 28 Apr 2022 EnergySage, LLC (2021) Learn about LG solar panels. https://www.energysage.com/solar-panels/lg-solar/. Accessed 28 Apr 2022 ENEXIS Netbeheer (2020) Wat is krachtstroom en wat kost het aanleggen? https://www.enexis.nl/consument/aansluiting-en-meter/aansluiting/krachtstroom. Accessed 28 Apr 2022 Enkhardt S (2020) Germany has 200,000 solar-plus-storage systems. pv magazine. Accessed 28 Apr 2022 Etedadi Aliabadi F, Agbossou K, Kelouwani S, Henao N, Hosseini SS (2021) Coordination of smart home energy management systems in neighborhood areas: a systematic review. IEEE Access 9:36417–36443 European Automobile Manufacturers Association (2019) Vehicles in use Europe 2019. In: ACEA report, pp 1–19. https://www.acea.be/publications/article/report-vehicles-in-use-europe-2019 European Commission (2015) Towards an integrated strategic energy technology (set) plan: accelerating the European energy system transformation. Communication from the Commission, 6317 European Commission (2019) Energy prices and costs in Europe. Technical report, Brussels Eurostat (2020) Gas prices by type of user. https://ec.europa.eu/eurostat/web/products-datasets/-/ten00118. Accessed 28 Apr 2022 Fiorini L (2021) Multi-energy management of buildings in smart grids. Ph.D. thesis, University of Groningen. https://doi.org/10.33612/diss.178373601 Fiorini L, Aiello M (2018) Household CO2-efficient energy management. Energy Inform 1(Suppl 1):21–34 Fiorini L, Aiello M (2019a) Energy management for user's thermal and power needs: a survey. Energy Rep 5:1048–1077 Fiorini L, Aiello M (2019b) Predictive CO\(_2\)-efficient scheduling of hybrid electric and thermal loads. In: Proceedings of the 2019 IEEE international conference on energy internet (ICEI). IEEE, Nanjing, China Fiorini L, Aiello M (2020) Predictive multi-objective scheduling with dynamic prices and marginal CO\(_2\)-emission intensities. In: Proceedings of the eleventh ACM international conference on future energy systems (e-Energy '20), virtual event, Australia. ACM, New York, pp 196–207 Fiorini L, Steg L, Aiello M (2020) Sustainability choices when cooking pasta. In: Proceedings of the eleventh ACM international conference on future energy systems (e-Energy '20), virtual event, Australia. ACM, New York, pp 161–166 Fraunhofer ISE (2019) Recent facts about photovoltaics in Germany, version of October 14, 2019. 
https://www.pv-fakten.de Georgievski I, Fiorini L, Aiello M (2020) Towards service-oriented and intelligent microgrids. In: Proceedings of the 3rd international conference on applications of intelligent systems (APPIS 2020), January 7–9, 2020, Las Palmas de Gran Canaria, Spain. ACM, New York, Las Palmas de Gran Canaria, Spain Glow-worm (2021) Energy Range Brochure. https://www.glow-worm.co.uk/glow-worm/products-1/energy/energy-brochure-1826318.pdf. Accessed 28 Apr 2022 Good N, Mancarella P (2017) Flexibility in multi-energy communities with electrical and thermal storage: a stochastic, robust approach for multi-service demand response. IEEE Trans Smart Grid 10(1):503–513 Government of the Netherlands (2021) Climate policy. https://www.government.nl/topics/climate-change/climate-policy. Accessed 28 Apr 2022 Graabak I, Bakken BH, Feilberg N (2014) Zero emission building and conversion factors between electricity consumption and emissions of greenhouse gases in a long term perspective. Environ Clim Technol 13(1):12–19 Graff Zivin JS, Kotchen MJ, Mansur ET (2014) Spatial and temporal heterogeneity of marginal emissions: implications for electric cars and other electricity-shifting policies. J Econ Behav Organ 107:248–268 Gurobi Optimization LLC (2020) Gurobi optimizer reference manual. https://www.gurobi.com/documentation/9.1/refman/index.html. Accessed 28 Apr2022 Heimann S (2021) KWK-Gesetz: Das gibt es für den erzeugten Strom. https://www.co2online.de/modernisieren-und-bauen/blockheizkraftwerk-kraft-waerme-kopplung/kwk-gesetz/. Accessed 28 Apr 2022 Hendricks D, Mesquita R (2019) PV prosumer guidelines for eight EU member states, Brussels. https://www.pvp4grid.eu/wp-content/uploads/2019/05/1904_PVP4Grid_Bericht_EUnat_web.pdf Imran A, Hafeez G, Khan I, Usman M, Shafiq Z, Qazi AB, Khalid A, Thoben K-D (2020) Heuristic-based programable controller for efficient energy management under renewable energy sources and energy storage system in smart grid. IEEE Access 8:139587–139608 Mauser I (2017) Multi-modal building energy management. PhD thesis, Karlsruher Institut für Technologie (KIT) April Mauser I, Müller J, Allerding F, Schmeck H (2016) Adaptive building energy management with multiple commodities and flexible evolutionary optimization. Renew Energy 87:911–921 McDonald Water Storage (2019) Therm flow—thermal store mains pressure system. https://mcdonaldwaterstorage.com/. Accessed 28 Apr 2022 Mehrjerdi H, Hemmati R, Shafie-khah M, Catalão JPS (2021) Zero energy building by multicarrier energy systems including hydro, wind, solar, and hydrogen. IEEE Trans Ind Inform 17(8):5474–5484 Mohsenzadeh A, Pang C (2018) Two stage residential energy management under distribution locational marginal pricing. Electr Power Syst Res 154:361–372 Murray D, Stankovic L (2016) REFIT: electrical load measurements (cleaned). University of Strathclyde. https://doi.org/10.15129/9ab14b0e-19ac-4279-938f-27f643078cec Neyestani N, Yazdani-Damavandi M, Shafie-Khah M, Chicco G, Catalão JP (2015) Stochastic modeling of multienergy carriers dependencies in smart local networks with distributed energy resources. IEEE Trans Smart Grid 6(4):1748–1762 PACE (2018) Snapshot from PACE target markets—insights from The Netherlands: fuel cell micro-cogeneration could help government achieve its targets. http://pace-energy.eu/fuel-cell-micro-cogeneration-market-insights-from-the-netherlands/. Accessed 28 Apr 2022 Pagani GA, Aiello M (2015) Generating realistic dynamic prices and services for the smart grid. 
IEEE Syst J 9(1):191–198 Pan G, Gu W, Lu Y, Qiu H, Lu S, Yao S (2020) Optimal planning for electricity-hydrogen integrated energy system considering power to hydrogen and heat and seasonal storage. IEEE Trans Sustain Energy 11(4):2662–2676 Regett A, Baing F, Conrad J, Fattler S, Kranner C (2018) Emission assessment of electricity: mix vs. marginal power plant method. In: International conference on the European energy market, EEM, vol. 2018-June Regett A, Kranner C, Fischhaber S, Böing F (2019) Using energy system modelling results for assessing the emission effect of vehicle-to-grid for peak shaving. Progress in life cycle assessment. Springer, Cham, pp 115–123 Rongé J, François I (2021) Use of hydrogen in buildings. https://www.waterstofnet.eu/_asset/_public/BatHyBuild/Hydrogen-use-in-builings-BatHyBuild-29042021.pdf Roux M, Apperley M, Booysen MJ (2018) Comfort, peak load and energy: centralised control of water heaters for demand-driven prioritisation. Energy Sustain Dev 44:78–86 Salpakari J, Lund P (2016) Optimal and rule-based control strategies for energy flexibility in buildings with PV. Appl Energy 161:425–436 Schmitz A, Stamminger R (2014) Usage behaviour and related energy consumption of European consumers for washing and drying. Energy Effic 7(6):937–954 Selectra (2020) How to set up electricity with Enedis (ERDF) in France. https://en.selectra.info/energy-france/guides/electricity/new-home. Accessed 28 Apr 2022 Setlhaolo D, Sichilalu S, Zhang J (2017) Residential load management in an energy hub with heat pump water heater. Appl Energy 208(August):551–560 Sheikhi A, Rayati M, Ranjbar AM (2016) Demand side management for a residential customer in multi-energy systems. Sustain Cities Soc 22:63–77 SOLCAST (2020) Global solar irradiance data and PV system power output data. https://solcast.com.au/. Accessed 28 Apr 2022 Solid Power (2021) Produktspezifikation BlueGEN. https://www.solidpower.com/en/bluegen-technology/ Stamminger R (2008) Synergy potential of smart appliances. Report of the Smart-A project deliverable 2.3 of WP2 Tabar VS, Jirdehi MA, Hemmati R (2017) Energy management in microgrid based on the multi objective stochastic programming incorporating portable renewable energy resource as demand response option. Energy 118:827–839 Technische Alternative (2021) EHS-R Adjustablbe electric immersion heater. https://www.ta.co.at/en/x2-freely-programmable-controllers/immersion-heater-3000-w-variable-control/. Accessed 28 Apr 2022 Tesla (2021) Meet Powerwall, your home battery. https://www.tesla.com/powerwall UN Environment and International Energy Agency (2017) Towards a zero-emission, efficient, and resilient buildings and construction sector. In: Global status report 2017, pp 1–48 Waffenschmidt E (2014) Dimensioning of decentralized photovoltaic storages with limited feed-in power and their impact on the distribution grid. Energy Procedia 46:88–97 Williams CJC, Binder JO, Kelm T (2012) Demand side management through heat pumps, thermal storage and battery storage to increase local self-consumption and grid compatibility of PV systems. In: IEEE PES innovative smart grid technologies conference Europe, pp 1–6 Wookey R, Bone A, Carmichael C, Crossley A (2014) Minimum home temperature thresholds for health in winter—a systematic literature review. Public Health England, London The authors thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster. 
This work is supported by the Netherlands Organization for Scientific Research under the NWO MERGE project, contract no. 647.002.006 (www.nwo.nl). Distributed Systems Group, University of Groningen, Nijenborgh 9, 9747 AG, Groningen, The Netherlands Laura Fiorini Service Computing Department, University of Stuttgart, Universitätsstraße 38, 70569, Stuttgart, Germany Marco Aiello LF designed the study, performed the data analysis, and wrote the manuscript. MA discussed the results, reviewed, and finalized the manuscript. Both authors read and approved the final manuscript. Correspondence to Laura Fiorini. Fiorini, L., Aiello, M. Automatic optimal multi-energy management of smart homes. Energy Inform 5, 68 (2022). https://doi.org/10.1186/s42162-022-00253-0 Hybrid appliances Multi-energy buildings Multi-objective optimization Optimal scheduling
Encyclopedia of Crystallographic Prototypes
Enantiomorphic space groups
In affine space — i.e., with no defined origin — there are only 219 space groups (referred to as the affine space groups). The eleven remaining space groups are mirror images (left-handed versus right-handed structures) of one of the other 219 space groups and are equivalent in affine space. These pairs of space groups are the enantiomorphic pairs, in which two prototypes can be formed as mirror images of a single structure. The eleven pairs of enantiomorphic space groups (Online Dictionary of Crystallography, ITC-A) are: $P4_{1}$ (#76) and $P4_{3}$ (#78), $P4_{1}22$ (#91) and $P4_{3}22$ (#95), $P4_{1}2_{1}2$ (#92) and $P4_{3}2_{1}2$ (#96), $P3_{1}$ (#144) and $P3_{2}$ (#145), $P3_{1}12$ (#151) and $P3_{2}12$ (#153), $P3_{1}21$ (#152) and $P3_{2}21$ (#154), $P6_{1}$ (#169) and $P6_{5}$ (#170), $P6_{2}$ (#171) and $P6_{4}$ (#172), $P6_{1}22$ (#178) and $P6_{5}22$ (#179), $P6_{2}22$ (#180) and $P6_{4}22$ (#181), and $P4_{1}32$ (#213) and $P4_{3}32$ (#212). The relationship between the enantiomorphic pairs is exploited in this encyclopedia to generate prototypes for otherwise unrepresented space groups. If we look at space group $P4_{1}$ (#76), we see that it has one Wyckoff position ($4a$), with operations (Bilbao Crystallographic Server) \[ \left(x, y, z\right) \left(-x, -y, z + \frac{1}{2}\right) \left(-y, x, z + \frac{1}{4}\right) \left(y, -x, z + \frac{3}{4}\right). \] If we then look at space group $P4_{3}$ (#78), we find it also has one ($4a$) Wyckoff position, with operations \[ \left(x, y, z\right) \left(-x, -y, z + \frac{1}{2}\right) \left(-y, x, z + \frac{3}{4}\right) \left(y, -x, z + \frac{1}{4}\right), \] where the only difference is that the 1/4 and 3/4 fractions have swapped positions. We can easily show that space group #78 is a mirror reflection of #76 in the $z = 0$ plane. To see this more clearly, consider the Cs3P7 structure (A3B7_tP40_76_3a_7a). This structure was found in space group #76, but if we reflect all of the coordinates through the $z = 0$ plane, it transforms into a structure in space group #78, as shown below. The distance between any pair of atoms is the same in the $P4_{3}$ structure as it is in the $P4_{1}$ structure, and the angle between any three atoms is the same in both structures. It follows that the structures are degenerate, there is no difference in energy between them, and they should be equally likely to form. Any structure in space group $P4_{1}$ can be transformed into $P4_{3}$ by this method. Pairs of space groups which allow these transformations are said to be enantiomorphic (Online Dictionary of Crystallography, ITC-A), or chiral. In addition, forty-three other space groups allow chiral crystal structures. The complete set of sixty-five space groups is known as the Sohncke groups (Online Dictionary of Crystallography).
If you are using this library, please cite:
M. J. Mehl, D. Hicks, C. Toher, O. Levy, R. M. Hanson, G. L. W. Hart, and S. Curtarolo, The AFLOW Library of Crystallographic Prototypes: Part 1, Comp. Mat. Sci. 136, S1-S828 (2017). (doi=10.1016/j.commatsci.2017.01.017)
D. Hicks, M. J. Mehl, E. Gossett, C. Toher, O. Levy, R. M. Hanson, G. L. W. Hart, and S. Curtarolo, The AFLOW Library of Crystallographic Prototypes: Part 2, Comp. Mat. Sci. 161, S1-S1011 (2019). (doi=10.1016/j.commatsci.2018.10.043)
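The reflection argument can be checked directly. The short Python sketch below (ours, not part of the AFLOW library) conjugates the four $P4_{1}$ operations listed above by the mirror $z \rightarrow -z$ and verifies that, modulo lattice translations along $c$, the result is exactly the set of $P4_{3}$ operations.

```python
from fractions import Fraction as F
import random

def mod1(v):
    # Reduce each fractional coordinate into [0, 1) to account for lattice translations.
    return tuple(c % 1 for c in v)

# Symmetry operations of space group P4_1 (#76), Wyckoff position 4a, as listed above.
p41 = [
    lambda x, y, z: (x, y, z),
    lambda x, y, z: (-x, -y, z + F(1, 2)),
    lambda x, y, z: (-y, x, z + F(1, 4)),
    lambda x, y, z: (y, -x, z + F(3, 4)),
]

# Operations of the enantiomorphic partner P4_3 (#78).
p43 = [
    lambda x, y, z: (x, y, z),
    lambda x, y, z: (-x, -y, z + F(1, 2)),
    lambda x, y, z: (-y, x, z + F(3, 4)),
    lambda x, y, z: (y, -x, z + F(1, 4)),
]

mirror = lambda x, y, z: (x, y, -z)  # reflection through the z = 0 plane

def conjugate(op):
    """Return mirror o op o mirror, i.e. the operation as seen in the mirrored structure."""
    return lambda x, y, z: mirror(*op(*mirror(x, y, z)))

# Check on exact rational sample points that each mirrored P4_1 operation
# coincides (mod 1) with one of the P4_3 operations.
random.seed(0)
points = [tuple(F(random.randint(0, 99), 100) for _ in range(3)) for _ in range(20)]
mirrored = [conjugate(op) for op in p41]
assert all(
    any(all(mod1(g(*p)) == mod1(h(*p)) for p in points) for h in p43)
    for g in mirrored
)
print("Reflecting P4_1 through z = 0 reproduces the P4_3 operations.")
```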
The Journal of Korean Physical Therapy, 2287-156X (eISSN), The Korean Society of Physical Therapy (대한물리치료학회)
Comparison of Motor Skill Acquisition according to Types of Sensory-Stimuli Cue in Serial Reaction Time Task
Kwon, Yong Hyun (Department of Physical Therapy, Yeungnam University College); Lee, Myoung Hee (Department of Physical Therapy, College of Science, Kyungsung University)
Received: 2014.05.19; Accepted: 2014.06.13
Purpose: The purpose of this study was to investigate whether the type of sensory-stimuli cue (visual, auditory, or visuoauditory) affects motor sequential learning in healthy adults, using a serial reaction time (SRT) task. Methods: Twenty-four healthy subjects participated in this study and were randomly allocated into three groups: a visual-stimuli (VS) group, an auditory-stimuli (AS) group, and a visuoauditory-stimuli (VAS) group. In the SRT task, eight Arabic numbers were adopted as presentational stimuli, composed of three different presentation modules: visual, auditory, and visuoauditory stimuli. In the experiment, all subjects performed a total of 3 sessions with the corresponding stimulus module, with a pause of 10 minutes, for training and the pre-/post-tests. At the pre- and post-tests, reaction time and accuracy were calculated. Results: In reaction time, significant differences were found for the between-subjects and within-subjects effects and for the group × repeated-factor interaction. In accuracy, no significant differences were observed between groups or for the group × repeated-factor interaction; however, a significant within-subjects main effect was observed. In addition, when comparing the changes between the pre- and post-tests among the three groups, a significant difference was found only in reaction time. Conclusion: This study suggests that short-term sequential motor training within a single day induced behavioral modification, such as changes in the speed and accuracy of the motor response. In addition, we found that motor training using a visual-stimuli cue produced a better motor skill acquisition effect than auditory or visuoauditory-stimuli cues.
Keywords: Motor sequential learning; Sensory-stimuli cues; Serial reaction time task
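As a purely illustrative aside (not the authors' analysis, which relied on repeated-measures ANOVA), the sketch below shows how reaction time and accuracy could be summarized per group and test phase from raw SRT trial records; the trial data and record layout are hypothetical.

```python
# Hypothetical SRT trial records: (group, phase, reaction_time_ms, correct).
# Data and layout are invented for illustration; this is not the study's analysis.
trials = [
    ("VS", "pre", 512.3, True), ("VS", "pre", 498.7, True), ("VS", "post", 430.1, True),
    ("AS", "pre", 547.9, True), ("AS", "pre", 561.2, False), ("AS", "post", 498.4, True),
    ("VAS", "pre", 530.0, True), ("VAS", "pre", 525.4, True), ("VAS", "post", 471.2, True),
]

def summarize(trials, group, phase):
    subset = [t for t in trials if t[0] == group and t[1] == phase]
    correct_rts = [rt for _, _, rt, ok in subset if ok]   # RT is often taken from correct trials only
    accuracy = sum(ok for *_, ok in subset) / len(subset)
    mean_rt = sum(correct_rts) / len(correct_rts)
    return mean_rt, accuracy

for group in ("VS", "AS", "VAS"):
    pre_rt, pre_acc = summarize(trials, group, "pre")
    post_rt, post_acc = summarize(trials, group, "post")
    print(f"{group:>3}: RT change = {pre_rt - post_rt:+.1f} ms, accuracy change = {post_acc - pre_acc:+.1%}")
```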
Michael Hartl Tau Day, 2010updated Tau Day, 2022 1 The circle constant The Tau Manifesto is dedicated to one of the most important numbers in mathematics, perhaps the most important: the circle constant relating the circumference of a circle to its linear dimension. For millennia, the circle has been considered the most perfect of shapes, and the circle constant captures the geometry of the circle in a single number. Of course, the traditional choice for the circle constant is \( \pi \) (pi)—but, as mathematician Bob Palais notes in his delightful article "\( \pi \) Is Wrong!",1 \( \pi \) is wrong. It's time to set things right. 1.1 An immodest proposal We begin repairing the damage wrought by \( \pi \) by first understanding the notorious number itself. The traditional definition for the circle constant sets \( \pi \) equal to the ratio of a circle's circumference (length) to its diameter (width):2 \begin{equation} \label{eq:pi} \pi \equiv \frac{C}{D} = 3.14159265\ldots \end{equation} The number \( \pi \) has many remarkable properties—among other things, it is irrational and indeed transcendental—and its presence in mathematical formulas is widespread. Figure 1: Anatomy of a circle. It should be obvious that \( \pi \) is not "wrong" in the sense of being factually incorrect; the number \( \pi \) is perfectly well-defined, and it has all the properties normally ascribed to it by mathematicians. When we say that "\( \pi \) is wrong", we mean that \( \pi \) is a confusing and unnatural choice for the circle constant. In particular, a circle is defined as the set of points a fixed distance, the radius, from a given point, the center (Figure 1). While there are infinitely many shapes with constant width (Figure 2),3 there is only one shape with constant radius. This suggests that a more natural definition for the circle constant might use \( r \) in place of \( D \): \begin{equation} \label{eq:circle_constant} \mbox{circle constant} \equiv \frac{C}{r}. \end{equation} Because the diameter of a circle is twice its radius, this number is numerically equal to \( 2\pi \). Like \( \pi \), it is transcendental and hence irrational, and (as we'll see in Section 2) its use in mathematics is similarly widespread. Figure 2: One of the infinitely many non-circular shapes with constant width. In "\( \pi \) Is Wrong!", Bob Palais argues persuasively in favor of the second of these two definitions for the circle constant, and in my view he deserves principal credit for identifying this issue and bringing it to a broad audience. He calls the true circle constant "one turn", and he also introduces a new symbol to represent it (Figure 3). As we'll see, the description is prescient, but unfortunately the symbol is rather strange, and (as discussed in Section 4) it seems unlikely to gain wide adoption. (Update: This indeed proved to be the case, and Palais himself has since become a strong supporter of the arguments in this manifesto.) Figure 3: The strange symbol for the circle constant from "\( \pi \) Is Wrong!". The Tau Manifesto is dedicated to the proposition that the proper response to "\( \pi \) is wrong" is "No, really." And the true circle constant deserves a proper name. 
As you may have guessed by now, The Tau Manifesto proposes that this name should be the Greek letter \( \tau \) (tau): \begin{equation} \label{eq:tau} \tau \equiv \frac{C}{r} = 6.283185307179586\ldots \end{equation} Throughout the rest of this manifesto, we will see that the number \( \tau \) is the correct choice, and we will show through usage (Section 2 and Section 3) and by direct argumentation (Section 4) that the letter \( \tau \) is a natural choice as well. 1.2 A powerful enemy Before proceeding with the demonstration that \( \tau \) is the natural choice for the circle constant, let us first acknowledge what we are up against—for there is a powerful conspiracy, centuries old, determined to propagate pro-\( \pi \) propaganda. Entire books are written extolling the virtues of \( \pi \). (I mean, books!) And irrational devotion to \( \pi \) has spread even to the highest levels of geekdom; for example, on "Pi Day" 2010 Google changed its logo to honor \( \pi \) (Figure 4). Figure 4: The Google logo on March 14 (3/14), 2010 ("Pi Day"). Meanwhile, some people memorize dozens, hundreds, even thousands of digits of this mystical number. What kind of sad sack memorizes even 40 digits of \( \pi \) (Figure 5)?4 Figure 5: Matt Groening, incorrectly reciting \( \pi \), says "Prove me wrong!"—so I do. Truly, proponents of \( \tau \) face a mighty opponent. And yet, we have a powerful ally—for the truth is on our side. 2 The number tau We saw in Section 1.1 that the number \( \tau \) can also be written as \( 2\pi \). As noted in "\( \pi \) Is Wrong!", it is therefore of great interest to discover that the combination \( 2\pi \) occurs with astonishing frequency throughout mathematics. For example, consider integrals over all space in polar coordinates: \[ \int_0^{2\pi}\int_0^\infty f(r, \theta)\, r\, dr\, d\theta. \] The upper limit of the \( \theta \) integration is always \( 2\pi \). The same factor appears in the definition of the Gaussian (normal) distribution, \[ \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \] and again in the Fourier transform, \[ f(x) = \int_{-\infty}^\infty F(k)\, e^{2\pi ikx}\,dk \] \[ F(k) = \int_{-\infty}^\infty f(x)\, e^{-2\pi ikx}\,dx. \] It recurs in Cauchy's integral formula, \[ f(a) = \frac{1}{2\pi i}\oint_\gamma\frac{f(z)}{z-a}\,dz, \] in the \( n \)th roots of unity, \[ z^n = 1 \Rightarrow z = e^{2\pi i/n}, \] and in the values of the Riemann zeta function for positive even integers:5 \[ \begin{split} \zeta(2n) & = \sum_{k=1}^\infty \frac{1}{k^{2n}} \\ & = \frac{|B_{2n}|}{2(2n)!}\,(2\pi)^{2n},\qquad n = 1, 2, 3, \ldots \end{split} \] These formulas are not cherry-picked—crack open your favorite physics or mathematics text and try it yourself. There are many more examples, and the conclusion is clear: there is something special about \( 2\pi \). To get to the bottom of this mystery, we must return to first principles by considering the nature of circles, and especially the nature of angles. Although it's likely that much of this material will be familiar, it pays to revisit it, for this is where the true understanding of \( \tau \) begins. 2.1 Circles and angles There is an intimate relationship between circles and angles, as shown in Figure 6. Since the concentric circles in Figure 6 have different radii, the lines in the figure cut off different lengths of arc (or arc lengths), but the angle \( \theta \) (theta) is the same in each case. In other words, the size of the angle does not depend on the radius of the circle used to define the arc. 
The principal task of angle measurement is to create a system that captures this radius-invariance. Figure 6: An angle \( \theta \) with two concentric circles. Perhaps the most elementary angle system is degrees, which breaks a circle into 360 equal parts. One result of this system is the set of special angles (familiar to students of trigonometry) shown in Figure 7. Figure 7: Some special angles, in degrees. A more fundamental system of angle measure involves a direct comparison of the arc length \( s \) with the radius \( r \). Although the lengths in Figure 6 differ, the arc length grows in proportion to the radius, so the ratio of the arc length to the radius is the same in each case: \[ s\propto r \Rightarrow \frac{s_1}{r_1} = \frac{s_2}{r_2}. \] This suggests the following definition of radian angle measure: \begin{equation} \label{eq:radians} \theta \equiv \frac{s}{r}. \end{equation} This definition has the required property of radius-invariance, and since both \( s \) and \( r \) have units of length, radians are dimensionless by construction. The use of radian angle measure leads to succinct and elegant formulas throughout mathematics; for example, the usual formula for the derivative of \( \sin\theta \) is true only when \( \theta \) is expressed in radians: \[ \frac{d}{d\theta}\sin\theta = \cos\theta. \qquad\mbox{(only in radians)} \] Naturally, the special angles in Figure 7 can be expressed in radians, and when you took high-school trigonometry you probably memorized the special values shown in Figure 8. (I call this system of measure \( \pi \)-radians to emphasize that they are written in terms of \( \pi \).) Figure 8: Some special angles, in \( \pi \)-radians. Figure 9: The "special" angles as fractions of a full circle. Now, a moment's reflection shows that the so-called "special" angles are just particularly simple rational fractions of a full circle, as shown in Figure 9. This suggests revisiting Eq. (4), rewriting the arc length \( s \) in terms of the fraction \( f \) of the full circumference \( C \), i.e., \( s = f C \): \begin{equation} \label{eq:theta_tau} \theta = \frac{s}{r} = \frac{fC}{r} = f\left(\frac{C}{r}\right) \equiv f\tau. \end{equation} Notice how naturally \( \tau \) falls out of this analysis. If you are a believer in \( \pi \), I fear that the resulting diagram of special angles (Figure 10) will shake your faith to its very core. Figure 10: Some special angles, in radians. Although there are many other arguments in \( \tau \)'s favor, Figure 10 may be the most striking. We also see from Figure 10 the genius of Bob Palais' identification of the circle constant as "one turn": \( \tau \) is the radian angle measure for one turn of a circle. Moreover, note that with \( \tau \) there is nothing to memorize: a twelfth of a turn is \( \tau/12 \), an eighth of a turn is \( \tau/8 \), and so on. Using \( \tau \) gives us the best of both worlds by combining conceptual clarity with all the concrete benefits of radians; the abstract meaning of, say, \( \tau/12 \) is obvious, but it is also just a number: \[ \begin{split} \mbox{a twelfth of a turn} = \frac{\tau}{12} & \approx \frac{6.283185}{12} \\ & = 0.5235988. \end{split} \] Finally, by comparing Figure 8 with Figure 10, we see where those pesky factors of \( 2\pi \) come from: one turn of a circle is \( 1\tau \), but \( 2\pi \). Numerically they are equal, but conceptually they are quite distinct. 
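For readers who want to check these values directly, here is a small Python sketch (not from the manifesto) that generates the special angles of Figure 10 as fractions of a turn; math.tau, available since Python 3.6, is defined as exactly 2π.

```python
import math
from fractions import Fraction

# The "special" angles of Figure 10, generated as fractions of a turn:
# theta = f * tau, where math.tau == 2 * math.pi.
fractions_of_a_turn = [Fraction(1, 12), Fraction(1, 8), Fraction(1, 6), Fraction(1, 4),
                       Fraction(1, 3), Fraction(1, 2), Fraction(3, 4), Fraction(1, 1)]

for f in fractions_of_a_turn:
    theta = float(f) * math.tau
    print(f"{str(f):>4} turn = {theta:9.7f} rad = {math.degrees(theta):5.1f} degrees")
```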
The ramifications The unnecessary factors of \( 2 \) arising from the use of \( \pi \) are annoying enough by themselves, but far more serious is their tendency to cancel when divided by any even number. The absurd results, such as a half \( \pi \) for a quarter turn, obscure the underlying relationship between angle measure and the circle constant. To those who maintain that it "doesn't matter" whether we use \( \pi \) or \( \tau \) when teaching trigonometry, I simply ask you to view Figure 8, Figure 9, and Figure 10 through the eyes of a child. You will see that, from the perspective of a beginner, using \( \pi \) instead of \( \tau \) is a pedagogical disaster. 2.2 The circle functions Although radian angle measure provides some of the most compelling arguments for the true circle constant, it's worth comparing the virtues of \( \pi \) and \( \tau \) in some other contexts as well. We begin by considering the important elementary functions \( \sin\theta \) and \( \cos\theta \). Known as the "circle functions" because they give the coordinates of a point on the unit circle (i.e., a circle with radius \( 1 \)), sine and cosine are the fundamental functions of trigonometry (Figure 11). Figure 11: The circle functions are coordinates on the unit circle. Let's examine the graphs of the circle functions to better understand their behavior.6 You'll notice from Figure 12 and Figure 13 that both functions are periodic with period \( T \). As shown in Figure 12, the sine function \( \sin\theta \) starts at zero, reaches a maximum at a quarter period, passes through zero at a half period, reaches a minimum at three-quarters of a period, and returns to zero after one full period. Meanwhile, the cosine function \( \cos\theta \) starts at a maximum, has a minimum at a half period, and passes through zero at one-quarter and three-quarters of a period (Figure 13). For reference, both figures show the value of \( \theta \) (in radians) at each special point. Figure 12: Important points for \( \sin\theta \) in terms of the period \( T \). Figure 13: Important points for \( \cos\theta \) in terms of the period \( T \). Of course, since sine and cosine both go through one full cycle during one turn of the circle, we have \( T = \tau \); i.e., the circle functions have periods equal to the circle constant. As a result, the "special" values of \( \theta \) are utterly natural: a quarter-period is \( \tau/4 \), a half-period is \( \tau/2 \), etc. In fact, when making Figure 12, at one point I found myself wondering about the numerical value of \( \theta \) for the zero of the sine function. Since the zero occurs after half a period, and since \( \tau \approx 6.28 \), a quick mental calculation led to the following result: \[ \theta_\mathrm{zero} = \frac{\tau}{2} \approx 3.14. \] That's right: I was astonished to discover that I had already forgotten that \( \tau/2 \) is sometimes called "\( \pi \)". Perhaps this even happened to you just now. Welcome to my world. 2.3 Euler's identity I would be remiss in this manifesto not to address Euler's identity, sometimes called "the most beautiful equation in mathematics". This identity involves complex exponentiation, which is deeply connected both to the circle functions and to the geometry of the circle itself. Depending on the route chosen, the following equation can either be proved as a theorem or taken as a definition; either way, it is quite remarkable: \begin{equation} \label{eq:eulers_formula} e^{i\theta} = \cos\theta + i\sin\theta. 
\qquad\mbox{Euler's formula} \end{equation} Known as Euler's formula (after Leonhard Euler), this equation relates an exponential with imaginary argument to the circle functions sine and cosine and to the imaginary unit \( i \). Although justifying Euler's formula is beyond the scope of this manifesto, its provenance is above suspicion, and its importance is beyond dispute. Evaluating Eq. (6) at \( \theta = \tau \) yields Euler's identity:7 \begin{equation} \label{eq:eulers_identity_tau} e^{i\tau} = 1. \qquad\mbox{Euler's identity ($\tau$ version)} \end{equation} In words, Eq. (7) makes the following fundamental observation: The complex exponential of the circle constant is unity. Geometrically, multiplying by \( e^{i\theta} \) corresponds to rotating a complex number by an angle \( \theta \) in the complex plane, which suggests a second interpretation of Euler's identity: A rotation by one turn is 1. Since the number \( 1 \) is the multiplicative identity, the geometric meaning of \( e^{i\tau} = 1 \) is that rotating a point in the complex plane by one turn simply returns it to its original position. As in the case of radian angle measure, we see how natural the association is between \( \tau \) and one turn of a circle. Indeed, the identification of \( \tau \) with "one turn" makes Euler's identity sound almost like a tautology. Not the most beautiful equation Of course, the traditional form of Euler's identity is written in terms of \( \pi \) instead of \( \tau \). To derive it, we start by evaluating Euler's formula at \( \theta = \pi \), which yields \begin{equation} \label{eq:eulers_identity_pi} e^{i\pi} = -1. \qquad\mbox{Euler's identity ($\pi$ version)} \end{equation} But that minus sign is so ugly that Eq. (8) is almost always rearranged immediately, giving the following "beautiful" equation: \begin{equation} \label{eq:eulers_pi_rearranged} e^{i\pi} + 1 = 0. \qquad\mbox{(rearranged)} \end{equation} At this point, the expositor usually makes some grandiose statement about how Eq. (9) relates \( 0 \), \( 1 \), \( e \), \( i \), and \( \pi \)—sometimes called the "five most important numbers in mathematics". In this context, it's remarkable how many people complain that Eq. (7) relates only four of those five. Fine: \begin{equation} \label{eq:euler_tau_zero} e^{i\tau} = 1 + 0. \end{equation} Eq. (10), without rearrangement, actually does relate the five most important numbers in mathematics: \( 0 \), \( 1 \), \( e \), \( i \), and \( \tau \).8 Eulerian identities Since you can add zero anywhere in any equation, the introduction of \( 0 \) in Eq. (10) is a somewhat tongue-in-cheek counterpoint to \( e^{i\pi} + 1 = 0 \), but the identity \( e^{i\pi} = -1 \) does have a more serious point to make. Let's see what happens when we rewrite it in terms of \( \tau \): \[ e^{i\tau/2} = -1. \] Geometrically, this says that a rotation by half a turn is the same as multiplying by \( -1 \). And indeed this is the case: under a rotation of \( \tau/2 \) radians, the complex number \( z = a + ib \) gets mapped to \( -a - ib \), which is in fact just \( -1\cdot z \). Written in terms of \( \tau \), we see that the "original" form of Euler's identity (Eq. (8)) has a transparent geometric meaning that it lacks when written in terms of \( \pi \). (Of course, \( e^{i\pi} = -1 \) can be interpreted as a rotation by \( \pi \) radians, but the near-universal rearrangement to form \( e^{i\pi} + 1 = 0 \) shows how using \( \pi \) distracts from the identity's natural geometric meaning.) 
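Because Euler's formula turns rotations into ordinary arithmetic, these identities are easy to check numerically. The following sketch is my own (it assumes Python with its standard cmath module; no such code appears in the manifesto):

```python
# Numerical check of the full-turn and half-turn identities via Euler's formula.
import cmath
import math

full_turn = cmath.exp(1j * math.tau)      # e^(i*tau): rotation by one turn
half_turn = cmath.exp(1j * math.tau / 2)  # e^(i*tau/2): rotation by half a turn

print(full_turn)  # ~ (1+0j), up to floating-point rounding
print(half_turn)  # ~ (-1+0j), up to floating-point rounding

assert abs(full_turn - 1) < 1e-12
assert abs(half_turn + 1) < 1e-12
```

Both results agree with Eq. (7) and Eq. (8) to within double-precision rounding error.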
The quarter-angle identities have similar geometric interpretations: evaluating Eq. (6) at \( \tau/4 \) gives \( e^{i\tau/4} = i \), which says that a quarter turn in the complex plane is the same as multiplication by \( i \); similarly, \( e^{i\cdot(3\tau/4)} = -i \) says that three-quarters of a turn is the same as multiplication by \( -i \). A summary of these results, which we'll call Eulerian identities, appears in Table 1.

Rotation angle    Eulerian identity
\( 0 \)           \( e^{i\cdot0} = 1 \)
\( \tau/4 \)      \( e^{i\tau/4} = i \)
\( \tau/2 \)      \( e^{i\tau/2} = -1 \)
\( 3\tau/4 \)     \( e^{i\cdot(3\tau/4)} = -i \)
\( \tau \)        \( e^{i\tau} = 1 \)
Table 1: Eulerian identities for half, quarter, and full rotations.

We can take this analysis a step further by noting that, for any angle \( \theta \), \( e^{i\theta} \) can be interpreted as a point lying on the unit circle in the complex plane. Since the complex plane identifies the horizontal axis with the real part of the number and the vertical axis with the imaginary part, Euler's formula tells us that \( e^{i\theta} \) corresponds to the coordinates \( (\cos\theta, \sin\theta) \). Plugging the values of the "special" angles from Figure 10 into Eq. (6) then gives the points shown in Table 2, and plotting these points in the complex plane yields Figure 14. A comparison of Figure 14 with Figure 10 quickly dispels any doubts about which choice of circle constant better reveals the relationship between Euler's formula and the geometry of the circle.

Polar form                  Rectangular form                                Coordinates
\( e^{i\theta} \)           \( \cos\theta + i\sin\theta \)                  \( (\cos\theta, \sin\theta) \)
\( e^{i\cdot0} \)           \( 1 \)                                         \( (1, 0) \)
\( e^{i\tau/12} \)          \( \frac{\sqrt{3}}{2} + \frac{1}{2}i \)         \( (\frac{\sqrt{3}}{2}, \frac{1}{2}) \)
\( e^{i\tau/8} \)           \( \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}}i \)  \( (\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}) \)
\( e^{i\tau/6} \)           \( \frac{1}{2} + \frac{\sqrt{3}}{2} i \)        \( (\frac{1}{2}, \frac{\sqrt{3}}{2}) \)
\( e^{i\tau/4} \)           \( i \)                                         \( (0, 1) \)
\( e^{i\tau/3} \)           \( -\frac{1}{2} + \frac{\sqrt{3}}{2} i \)       \( (-\frac{1}{2}, \frac{\sqrt{3}}{2}) \)
\( e^{i\tau/2} \)           \( -1 \)                                        \( (-1, 0) \)
\( e^{i\cdot(3\tau/4)} \)   \( -i \)                                        \( (0, -1) \)
\( e^{i\tau} \)             \( 1 \)                                         \( (1, 0) \)
Table 2: Complex exponentials of the special angles from Figure 10.

Figure 14: Complex exponentials of some special angles. 3 Circular area: the coup de grâce If you arrived here as a \( \pi \) believer, you must by now be questioning your faith. \( \tau \) is so natural, its meaning so transparent—is there no example where \( \pi \) shines through in all its radiant glory? A memory stirs—yes, there is such a formula—it is the formula for circular area! Behold: \[ A = \tfrac{1}{4} \pi D^2. \] No, wait. The area formula is always written in terms of the radius, as follows: \[ A = \pi r^2. \] We see here \( \pi \), unadorned, in one of the most important equations in mathematics—a formula first proved by Archimedes himself. Order is restored! And yet, the name of this section sounds ominous… If this equation is \( \pi \)'s crowning glory, how can it also be the coup de grâce? 3.1 Quadratic forms Let us examine this putative paragon of \( \pi \), \( A = \pi r^2 \). We notice that it involves the radius raised to the second power. This makes it a simple quadratic form. Such forms arise in many contexts; as a physicist, my favorite examples come from the elementary physics curriculum. We will now consider several in turn.
Falling in a uniform gravitational field Galileo Galilei found that the velocity of an object falling in a uniform gravitational field is proportional to the time fallen: \[ v \propto t. \] The constant of proportionality is the gravitational acceleration \( g \): \[ v = g t. \] Since velocity is the derivative of position, we can calculate the distance fallen by integration:9 \[ y = \int v\,dt = \int_0^t gt\,dt = \textstyle{\frac{1}{2}} gt^2. \] Potential energy in a linear spring Robert Hooke found that the external force required to stretch a spring is proportional to the distance stretched: \[ F \propto x. \] The constant of proportionality is the spring constant \( k \):10 \[ F = k x. \] The potential energy in the spring is then equal to the work done by the external force: \[ U = \int F\,dx = \int_0^x kx\,dx = \textstyle{\frac{1}{2}} kx^2. \] Energy of motion Isaac Newton found that the force on an object is proportional to its acceleration: \[ F \propto a. \] The constant of proportionality is the mass \( m \): \[ F = m a. \] The energy of motion, or kinetic energy, is equal to the total work done in accelerating the mass to velocity \( v \): \[ \begin{split} K = \int F\,dx = \int ma\,dx & = \int m\frac{dv}{dt}\,dx \\ & = \int m\frac{dx}{dt}\,dv \\ & = \int_0^v mv\,dv \\ & = \textstyle{\frac{1}{2}} mv^2. \end{split} \] 3.2 A sense of foreboding Having seen several examples of simple quadratic forms in physics, you may by now have a sense of foreboding as we return to the geometry of the circle. This feeling is justified. Figure 15: Breaking down a circle into rings. As seen in Figure 15, the area of a circle can be calculated by breaking it down into circular rings of length \( C \) and width \( dr \), where the area of each ring is \( C\,dr \): \[ dA = C\,dr. \] Now, the circumference of a circle is proportional to its radius: \[ C \propto r. \] The constant of proportionality is \( \tau \): \[ C = \tau r. \] The area of the circle is then the integral over all rings: \[ A = \int dA = \int_0^r C\,dr = \int_0^r \tau r\,dr = \textstyle{\frac{1}{2}} \tau r^2. \] If you were still a \( \pi \) partisan at the beginning of this section, your head has now exploded. For we see that even in this case, where \( \pi \) supposedly shines, in fact there is a missing factor of \( 2 \). Indeed, the original proof by Archimedes shows not that the area of a circle is \( \pi r^2 \), but that it is equal to the area of a right triangle with base \( C \) and height \( r \). Applying the formula for triangular area then gives \[ A = \textstyle{\frac{1}{2}} bh = \textstyle{\frac{1}{2}}Cr = \textstyle{\frac{1}{2}}\tau r^2. \] There is simply no avoiding that factor of a half (Table 3).

Quantity           Symbol     Expression
Distance fallen    \( y \)    \( \textstyle{\frac{1}{2}}gt^2 \)
Spring energy      \( U \)    \( \textstyle{\frac{1}{2}}kx^2 \)
Kinetic energy     \( K \)    \( \textstyle{\frac{1}{2}}mv^2 \)
Circular area      \( A \)    \( \textstyle{\frac{1}{2}}\tau r^2 \)
Table 3: Some common quadratic forms.

Quod erat demonstrandum We set out in this manifesto to show that \( \tau \) is the true circle constant. Since the formula for circular area was just about the last, best argument that \( \pi \) had going for it, I'm going to go out on a limb here and say: Q.E.D. 4 Conflict and resistance Despite the definitive demonstration of the superiority of \( \tau \), there are nevertheless many who oppose it, both as notation and as number. In this section, we address the concerns of those who accept the value but not the letter.
We then rebut some of the many arguments marshaled against \( C/r \) itself, including the so-called "Pi Manifesto" that defends the primacy of \( \pi \). In this context, we'll discuss the rather advanced subject of the volume of a hypersphere (Section 5.1), which augments and amplifies the arguments in Section 3 on circular area. 4.1 One turn The true test of any notation is usage; having seen \( \tau \) used throughout this manifesto, you may already be convinced that it serves its role well. But for a constant as fundamental as \( \tau \) it would be nice to have some deeper reasons for our choice. Why not \( \alpha \), for example, or \( \omega \)? What's so great about \( \tau \)? There are two main reasons to use \( \tau \) for the circle constant. The first is that \( \tau \) visually resembles \( \pi \): after centuries of use, the association of \( \pi \) with the circle constant is unavoidable, and using \( \tau \) feeds on this association instead of fighting it. (Indeed, the horizontal line in each letter suggests that we interpret the "legs" as denominators, so that \( \pi \) has two legs in its denominator, while \( \tau \) has only one. Seen this way, the relationship \( \tau = 2\pi \) is perfectly natural.)11 The second reason is that \( \tau \) corresponds to one turn of a circle, and you may have noticed that "\( \tau \)" and "turn" both start with a "t" sound. This was the original motivation for the choice of \( \tau \), and it is not a coincidence: the root of the English word "turn" is the Greek word τόρνος (tornos), which means "lathe". Using a math font for the first letter in τόρνος then gives us: \( \tau \). Since the original launch of The Tau Manifesto, I have learned that Peter Harremoës independently proposed using \( \tau \) to "\( \pi \) Is Wrong!" author Bob Palais in 2010, John Fisher proposed \( \tau \) in a Usenet post in 2004, and Joseph Lindenberg anticipated both the argument and the symbol more than twenty years before!12 Dr. Harremoës in particular has emphasized the importance of a point first made in Section 1.1: using \( \tau \) gives the circle constant a name. Since \( \tau \) is an ordinary Greek letter, people encountering it for the first time can pronounce it immediately. Moreover, unlike calling the circle constant a "turn", \( \tau \) works well in both written and spoken contexts. For example, saying that a quarter circle has radian angle measure "one quarter turn" sounds great, but "turn over four radians" sounds awkward, and "the area of a circle is one-half turn \( r \) squared" sounds downright odd. Using \( \tau \), we can say "tau over four radians" and "the area of a circle is one-half tau \( r \) squared." Ambiguous notation Of course, with any new notation there is the potential for conflict with present usage. As noted in Section 1.1, "\( \pi \) Is Wrong!" avoids this problem by introducing a new symbol (Figure 3). There is precedent for this; for example, in the early days of quantum mechanics Max Planck introduced the constant \( h \), which relates a light particle's energy to its frequency (through \( E = h\nu \)), but physicists soon realized that it is often more convenient to use \( \hbar \) (read "h-bar")—where \( \hbar \) is just \( h \) divided by… um… \( 2\pi \)—and this usage is now standard. But getting a new symbol accepted is difficult: it has to be given a name, that name has to be popularized, and the symbol itself has to be added to word processing and typesetting systems. 
Moreover, promulgating a new symbol for \( 2\pi \) would require the cooperation of the academic mathematical community, which on the subject of \( \pi \) vs. \( \tau \) has historically been apathetic at best and hostile at worst.13 Using an existing symbol allows us to route around the mathematical establishment.14 Rather than advocating a new symbol, The Tau Manifesto opts for the use of an existing Greek letter. As a result, since \( \tau \) is already used in some current contexts, we must address the conflicts with existing practice. Fortunately, there are surprisingly few common uses. Moreover, while \( \tau \) is used for certain specific variables—e.g., shear stress in mechanical engineering, torque in rotational mechanics, and proper time in special and general relativity—there is no universal conflicting usage.15 In those cases, we can either tolerate ambiguity or route around the few present conflicts by selectively changing notation, such as using \( N \) for torque,16 \( \tau_p \) for proper time, or even \( \tau_\odot \) or \( \uptau \) for the circle constant itself. Despite these arguments, potential usage conflicts have proven to be the greatest source of resistance to \( \tau \). Some correspondents have even flatly denied that \( \tau \) (or, presumably, any other currently used symbol) could possibly overcome these issues. But scientists and engineers have a high tolerance for notational ambiguity, and claiming that \( \tau \)-the-circle-constant can't coexist with other uses ignores considerable evidence to the contrary. One example of easily tolerated ambiguity occurs in quantum mechanics, where we encounter the following formula for the Bohr radius, which (roughly speaking) is the "size" of a hydrogen atom in its lowest energy state (the ground state): \begin{equation} \label{eq:bohr_radius} a_0 = \frac{\hbar^2}{m e^2}, \end{equation} where \( m \) is the mass of an electron and \( e \) is its charge. Meanwhile, the ground state itself is described by a quantity known as the wavefunction, which falls off exponentially with radius on a length scale set by the Bohr radius: \begin{equation} \label{eq:hydrogen} \psi(r) = N\,e^{-r/a_0}, \end{equation} where \( N \) is a normalization constant. Have you noticed the problem yet? Probably not, which is just the point. The "problem" is that the \( e \) in Eq. (11) and the \( e \) in Eq. (12) are not the same \( e \)—the first is the charge on an electron, while the second is the natural number (the base of natural logarithms). In fact, if we expand the factor of \( a_0 \) in the argument of the exponential in Eq. (12), we get \[ \psi(r) = N\,e^{-m e^2 r/\hbar^2}, \] which has an \( e \) raised to the power of something with \( e \) in it. It's even worse than it looks, because \( N \) itself contains \( e \) as well: \[ \psi(r) = \sqrt{\frac{1}{\pi a_0^3}}\,e^{-r/a_0} = \frac{m^{3/2} e^3}{\pi^{1/2} \hbar^3}\,e^{-m e^2 r/\hbar^2}. \] I have no doubt that if a separate notation for the natural number did not already exist, anyone proposing the letter \( e \) would be told it's impossible because of the conflicts with other uses. And yet, in practice no one ever has any problem with using \( e \) in both contexts above. There are many other examples, including situations where even \( \pi \) is used for two different things.17 It's hard to see how using \( \tau \) for multiple quantities is any different.
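To see how painless this kind of ambiguity is in practice, here is a short sketch of my own (not from the manifesto; it assumes Python with SciPy, and note that Eq. (11) is written in Gaussian units, so the SI expression below picks up a factor of \( 4\pi\epsilon_0 \)) in which the elementary charge and the natural number appear side by side:

```python
# The two "e"s of Eq. (11) and Eq. (12) in one script: the elementary charge
# (scipy.constants.e) and the base of natural logarithms (via math.exp).
import math
from scipy.constants import hbar, m_e, e, epsilon_0, pi  # e = elementary charge

# Bohr radius in SI units: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2), roughly 5.29e-11 m.
a0 = 4 * pi * epsilon_0 * hbar**2 / (m_e * e**2)
print(f"a0 = {a0:.4e} m")

# Ground-state wavefunction psi(r) = exp(-r/a0) / sqrt(pi * a0^3), evaluated at r = a0.
psi = math.exp(-1.0) / math.sqrt(pi * a0**3)
print(f"psi(a0) = {psi:.4e} m^-1.5")
```

In code the two constants are told apart by name and context, which is exactly how readers of Eq. (11) and Eq. (12) tell them apart on the page.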
By the way, the \( \pi \)-pedants out there (and there have proven to be many) might note that hydrogen's ground-state wavefunction has a factor of \( \pi \): \[ \psi(r) = \sqrt{\frac{1}{\pi a_0^3}}\,e^{-r/a_0}. \] At first glance, this appears to be more natural than the version with \( \tau \): \[ \psi(r) = \sqrt{\frac{2}{\tau a_0^3}}\,e^{-r/a_0}. \] As usual, appearances are deceiving: the value of \( N \) comes from the product \[ \frac{1}{\sqrt{2\pi}} \frac{1}{\sqrt{2}} \frac{2}{a_0^{3/2}}, \] which shows that the circle constant enters the calculation through \( 1/\sqrt{2\pi} \), i.e., \( 1/\sqrt{\tau} \). As with the formula for circular area, the cancellation to leave a bare \( \pi \) is a coincidence. 4.2 The Pi Manifesto Although most objections to \( \tau \) come from scattered email correspondence and miscellaneous comments on the Web, there is also an organized resistance. In particular, after the publication of The Tau Manifesto in June 2010, a "Pi Manifesto" appeared to make the case for the traditional circle constant. This section and the two after it contain a rebuttal of its arguments.18 Of necessity, this treatment is terser and more advanced than the rest of the manifesto, but even a cursory reading of what follows will give an impression of the weakness of the Pi Manifesto's case. While we can certainly consider the appearance of the Pi Manifesto a good sign of continuing interest in this subject, it makes several false claims. For example, it says that the factor of \( 2\pi \) in the Gaussian (normal) distribution is a coincidence, and that it can more naturally be written as \[ \frac{1}{\sqrt\pi(\sqrt 2\sigma)}e^{\frac{-x^2}{(\sqrt 2\sigma)^2}}. \] This is wrong: the factor of \( 2\pi \) comes from squaring the unnormalized Gaussian distribution and switching to polar coordinates, which leads to a factor of \( 1 \) from the radial integral and a \( 2\pi \) from the angular integral. As in the case of circular area, the factor of \( \pi \) comes from \( 1/2\times 2\pi \), not from \( \pi \) alone. A related claim is that the gamma function evaluated at \( 1/2 \) is more natural in terms of \( \pi \): \[ \Gamma(\textstyle{\frac{1}{2}}) = \sqrt{\pi}, \] \begin{equation} \label{eq:gamma} \Gamma(p) = \int_{0}^{\infty} x^{p-1} e^{-x}\,dx. \end{equation} But \( \Gamma(\frac{1}{2}) \) reduces to the same Gaussian integral as in the normal distribution (upon setting \( u = x^{1/2} \)), so the \( \pi \) in this case is really \( 1/2\times 2\pi \) as well. Indeed, in many of the cases cited in the Pi Manifesto, the circle constant enters through an integral over all angles, i.e., as \( \theta \) ranges from \( 0 \) to \( \tau \). The Pi Manifesto also examines some formulas for regular \( n \)-sided polygons (or "\( n \)-gons"). For instance, it notes that the sum of the internal angles of an \( n \)-gon is given by \[ \sum_{i=1}^n \theta_i=(n-2)\pi. \] This issue was dealt with in "\( \pi \) Is Wrong!", which notes the following: "The sum of the interior angles [of a triangle] is \( \pi \), granted. But the sum of the exterior angles of any polygon, from which the sum of the interior angles can easily be derived, and which generalizes to the integral of the curvature of a simple closed curve, is \( 2\pi \)." In addition, the Pi Manifesto offers the formula for the area of an \( n \)-gon with unit radius (the distance from center to vertex), \[ A=n\sin\frac{\pi}{n}\cos\frac{\pi}{n}, \] calling it "clearly… another win for \( \pi \)." 
But using the double-angle identity \( \sin\theta\cos\theta = \frac{1}{2} \sin 2\theta \) shows that this can be written as \[ A = n/2\, \sin\frac{2\pi}{n}, \] which is just \begin{equation} \label{eq:area_polygon} A = \frac{1}{2} n\, \sin\frac{\tau}{n}. \end{equation} In other words, the area of an \( n \)-gon has a natural factor of \( 1/2 \). In fact, taking the limit of Eq. (14) as \( n\rightarrow \infty \) (and applying L'Hôpital's rule) gives the area of a unit regular polygon with infinitely many sides, i.e., a unit circle: \begin{equation} \label{eq:lhopital} \begin{split} A & = \lim_{n\rightarrow\infty} \frac{1}{2} n\, \sin\frac{\tau}{n} \\ & = \frac{1}{2} \lim_{n\rightarrow\infty} \frac{\sin\frac{\tau}{n}}{1/n} \\ & = \tfrac{1}{2}\tau. \end{split} \end{equation} In this context, we should note that the Pi Manifesto makes much ado about \( \pi \) being the area of a unit disk, so that (for example) the area of a quarter (unit) circle is \( \pi/4 \). This, it is claimed, makes just as good a case for \( \pi \) as radian angle measure does for \( \tau \). Unfortunately for this argument, as noted in Section 3 and as seen again in Eq. (15), the factor of \( 1/2 \) arises naturally in the context of circular area. Indeed, the formula for the area of a circular sector subtended by angle \( \theta \) is \[ A(\theta) = \tfrac{1}{2}\theta r^2, \] so there's no way to avoid the factor of \( 1/2 \) in general. (We thus see that \( A = \frac{1}{2} \tau r^2 \) is simply the special case \( \theta = \tau \).) In short, the difference between angle measure and area isn't arbitrary. There is no natural factor of \( 1/2 \) in the case of angle measure. In contrast, in the case of area the factor of \( 1/2 \) arises through the integral of a linear function in association with a simple quadratic form. In fact, the case for \( \pi \) is even worse than it looks, as shown in the next section. 5 Getting to the bottom of pi and tau I continue to be impressed with how rich this subject is, and my understanding of \( \pi \) and \( \tau \) continued to evolve past the original Tau Day. Most notably, on Half Tau Day 2012 I had an epiphany about exactly what is wrong with \( \pi \). The argument hinges on an analysis of the surface area and volume of an \( n \)-dimensional ball, which makes clear that \( \pi \) as typically defined doesn't have any fundamental geometric significance. The resulting section is more advanced than the rest of the manifesto and can be skipped without loss of continuity; if you find it confusing, I recommend proceeding directly to the conclusion in Section 6. But if you're up for a mathematical challenge, you are invited to proceed… 5.1 Surface area and volume of a hypersphere We start our investigations with the generalization of a circle to arbitrary dimensions. This object, called a hypersphere, can be defined as follows. (For convenience, we assume that these spheres are centered on the origin.) A \( 0 \)-sphere is the set of all points satisfying \[ x^2 = r^2, \] which consists of the two points \( \pm r \). These points form the boundary of a (closed) \( 1 \)-ball, which is the set of all points satisfying \[ x^2 \leq r^2. \] This is a line segment from \( -r \) to \( r \); equivalently, it is the closed interval \( [-r, r] \). A \( 1 \)-sphere is a circle, which is the set of all points satisfying \[ x^2 + y^2 = r^2. \] This figure forms the boundary of a \( 2 \)-ball, which is the set of all points satisfying \[ x^2 + y^2 \leq r^2. 
\] This is a closed disk of radius \( r \); thus we see that the "area of a circle" is more properly defined as the area of a \( 2 \)-ball. Similarly, a \( 2 \)-sphere (also called simply a "sphere") is the set of all points satisfying \[ x^2 + y^2 + z^2 = r^2, \] which is the boundary of a \( 3 \)-ball, defined as the set of all points satisfying \[ x^2 + y^2 + z^2 \leq r^2. \] The generalization to arbitrary \( n \), although difficult to visualize for \( n > 3 \), is straightforward: an \( (n-1) \)-sphere is the set of all points satisfying \[ \sum_{i=1}^{n} x_i^2 = r^2, \] which forms the boundary of the corresponding \( n \)-ball, defined as the set of all points satisfying \[ \sum_{i=1}^{n} x_i^2 \leq r^2. \] The "volume of a hypersphere" of dimension \( n-1 \) is then defined as the volume \( V_n(r) \) of the corresponding \( n \)-dimensional ball. It can be obtained by integrating the surface area \( A_{n-1}(r) \) at each radius via \( V_n(r) = \int A_{n-1}(r)\,dr \). We will sometimes refer to \( A_{n-1} \) as the surface area of an \( n \)-dimensional ball, but strictly speaking it is the area of the boundary of the ball, which is just an \( (n-1) \)-dimensional sphere. The subscripts on \( V_n \) and \( A_{n-1} \) are chosen so that they always agree with the dimensionality of the corresponding geometric object;19 for example, the case \( n = 2 \) corresponds to a disk (dimensionality \( 2 \)) and a circle (dimensionality \( 2 - 1 = 1 \)). Then \( V_2 \) is the "volume" of a \( 2 \)-ball (i.e., the area of a disk, colloquially known as the "area of a circle"), and \( A_{2-1} = A_1 \) is the "surface area" of a \( 1 \)-sphere (i.e., the circumference of a circle). When in doubt, simply recall that \( n \) always refers to the dimensionality of the ball, with \( n-1 \) referring to the dimensionality of its boundary. Now, The Pi Manifesto (discussed in Section 4.2) includes a formula for the volume of a unit \( n \)-ball as an argument in favor of \( \pi \): \begin{equation} \label{eq:unit_n_sphere_pi} \frac{\sqrt{\pi}^{n}}{\Gamma(1 + \frac{n}{2})}, \end{equation} where the gamma function is given by Eq. (13). Eq. (16) is a special case of the formula for general radius, which is also typically written in terms of \( \pi \): \begin{equation} \label{eq:n_sphere_pi} V_n(r) = \frac{\pi^{n/2} r^n}{\Gamma(1 + \frac{n}{2})}. \end{equation} Because \( V_n(r) = \int A_{n-1}(r)\,dr \), we have \( A_{n-1}(r) = dV_n(r)/dr \), which means that the surface area can be written as follows: \begin{equation} \label{eq:n_sphere_pi_r} A_{n-1}(r) = \frac{n \pi^{n/2} r^{n-1}}{\Gamma(1 + \frac{n}{2})}. \end{equation} Rather than simply take these formulas at face value, let's see if we can untangle them to shed more light on the question of \( \pi \) vs. \( \tau \). We begin our analysis by noting that the apparent simplicity of the above formulas is an illusion: although the gamma function is notationally simple, in fact it is an integral over a semi-infinite domain (Eq. (13)), which is not a simple idea at all. Fortunately, the gamma function can be simplified in certain special cases. For example, when \( n \) is an integer, it is straightforward to show (using integration by parts) that \[ \Gamma(n) = (n-1)(n-2)\ldots 2\cdot 1 = (n-1)! 
\] Seen this way, \( \Gamma(x) \) can be interpreted as a generalization of the factorial function to real-valued arguments.20 In the \( n \)-dimensional surface area and volume formulas, the argument of \( \Gamma \) is not necessarily an integer, but rather is \( \left(1 + \frac{n}{2}\right) \), which is an integer when \( n \) is even and is a half-integer when \( n \) is odd. Taking this into account gives the following expression, which is adapted from a standard reference, Wolfram MathWorld, and as usual is written in terms of \( \pi \): \begin{equation} \label{eq:surface_area_mathworld} A_{n-1}(r) = \begin{cases} \displaystyle \frac{2\pi^{n/2}\,r^{n-1}}{(\frac{1}{2}n - 1)!} & \text{$n$ even}; \\ \\ \displaystyle \frac{2^{(n+1)/2}\pi^{(n-1)/2}\,r^{n-1}}{(n-2)!!} & \text{$n$ odd}. \end{cases} \end{equation} (Here we write \( A_{n-1} \) where MathWorld uses \( S_n \).) Integrating with respect to \( r \) then gives \begin{equation} \label{eq:volume_mathworld} V_n(r) = \begin{cases} \displaystyle \frac{\pi^{n/2}\,r^n}{(\frac{n}{2})!} & \text{$n$ even}; \\ \\ \displaystyle \frac{2^{(n+1)/2}\pi^{(n-1)/2}\,r^n}{n!!} & \text{$n$ odd}. \end{cases} \end{equation} Let's examine Eq. (20) in more detail. Notice first that MathWorld uses the double factorial function \( n!! \)—but, strangely, it uses it only in the odd case. (This is a hint of things to come.) The double factorial function, although rarely encountered in mathematics, is elementary: it's like the normal factorial function, but involves subtracting \( 2 \) at a time instead of \( 1 \), so that, e.g., \( 5!! = 5 \cdot 3 \cdot 1 \) and \( 6!! = 6 \cdot 4 \cdot 2 \). In general, we have \begin{equation} \label{eq:double_factorial} n!! = \begin{cases} n(n-2)\ldots6\cdot4\cdot2 & \text{$n$ even}; \\ \\ n(n-2)\ldots5\cdot3\cdot1 & \text{$n$ odd}. \end{cases} \end{equation} (By definition, \( 0!! = 1!! = 1 \).) Note that Eq. (21) naturally divides into even and odd cases, making MathWorld's decision to use it only in the odd case still more mysterious. To solve this mystery, we'll start by taking a closer look at the formula for odd \( n \) in Eq. (20): \[ \frac{2^{(n+1)/2}\pi^{(n-1)/2}\,r^n}{n!!} \] Upon examining the expression \[ 2^{(n+1)/2}\pi^{(n-1)/2}, \] we notice that it can be rewritten as \[ 2(2\pi)^{(n-1)/2}, \] and here we recognize our old friend \( 2\pi \). Now let's look at the even case in Eq. (20). We noted above how strange it is to use the ordinary factorial in the even case but the double factorial in the odd case. Indeed, because the double factorial is already defined piecewise, if we unified the formulas by using \( n!! \) in both cases we could pull it out as a common factor: \[ V_n(r) = \frac{1}{n!!}\times \begin{cases} \ldots & \text{$n$ even}; \\ \\ \ldots & \text{$n$ odd}. \end{cases} \] So, is there any connection between the factorial and the double factorial? Yes—when \( n \) is even, the two are related by the following identity: \[ \left(\frac{n}{2}\right)! = \frac{n!!}{2^{n/2}} \qquad (\text{$n$ even}). \] (This can be verified using mathematical induction.) Substituting this into the volume formula for even \( n \) in Eq. (20) then yields \[ \frac{2^{n/2}\pi^{n/2}\,r^n}{n!!}, \] which bears a striking resemblance to \[ \frac{(2\pi)^{n/2}\,r^n}{n!!}, \] and again we find a factor of \( 2\pi \). Putting these results together, we see that Eq. 
(20) can be rewritten as \begin{equation} \label{eq:volume_2pi} V_n(r) = \begin{cases} \displaystyle \frac{(2\pi)^{n/2}\,r^n}{n!!} & \text{$n$ even}; \\ \\ \displaystyle \frac{2(2\pi)^{(n-1)/2}\,r^n}{n!!} & \text{$n$ odd} \end{cases} \end{equation} and Eq. (19) can be rewritten as \begin{equation} \label{eq:surface_area_2pi} A_{n-1}(r) = \begin{cases} \displaystyle \frac{(2\pi)^{n/2}\,r^{n-1}}{(n-2)!!} & \text{$n$ even}; \\ \\ \displaystyle \frac{2(2\pi)^{(n-1)/2}\,r^{n-1}}{(n-2)!!} & \text{$n$ odd}. \end{cases} \end{equation} Making the substitution \( \tau=2\pi \) in Eq. (23) then yields \[ A_{n-1}(r) = \begin{cases} \displaystyle \frac{\tau^{n/2}\,r^{n-1}}{(n-2)!!} & \text{$n$ even}; \\ \\ \displaystyle \frac{2\tau^{(n-1)/2}\,r^{n-1}}{(n-2)!!} & \text{$n$ odd}. \end{cases} \] To unify the formulas further, we can use the floor function \( \lfloor x \rfloor \), which is simply the largest integer less than or equal to \( x \) (equivalent to chopping off the fractional part, so that, e.g., \( \lfloor 3.7 \rfloor = \lfloor 3.2 \rfloor = 3 \)). This gives \[ A_{n-1}(r) = \begin{cases} \displaystyle \frac{\tau^{\left\lfloor \frac{n}{2} \right\rfloor} r^{n-1}}{(n-2)!!} & \text{$n$ even}; \\ \\ \displaystyle \frac{2\tau^{\left\lfloor \frac{n}{2} \right\rfloor} r^{n-1}}{(n-2)!!} & \text{$n$ odd}, \end{cases} \] which allows us to write the formula as follows: \begin{equation} \label{eq:surface_area_tau} A_{n-1}(r) = \frac{\tau^{\left\lfloor \frac{n}{2} \right\rfloor} r^{n-1}}{(n-2)!!}\times \begin{cases} 1 & \text{$n$ even}; \\ \\ 2 & \text{$n$ odd}. \end{cases} \end{equation} Integrating Eq. (24) with respect to \( r \) then yields \begin{equation} \label{eq:volume_tau} V_n(r) = \frac{\tau^{\left\lfloor \frac{n}{2} \right\rfloor} r^n}{n!!}\times \begin{cases} 1 & \text{$n$ even}; \\ \\ 2 & \text{$n$ odd}. \end{cases} \end{equation} Note that, unlike the faux simplicity of Eq. (17), which hides a huge amount of complexity in the \( \Gamma \) function, Eq. (25) involves no fancy integrals—just the slightly exotic but nevertheless elementary floor and double-factorial functions.21 Recurrences As seen in Eq. (24) and Eq. (25), the surface area and volume formulas divide naturally into two families, corresponding to even- and odd-dimensional spaces. This means that the surface area of a four-dimensional ball, \( A_{4-1} = A_3 \), is related to \( A_1 \) but not to \( A_2 \), while \( A_2 \) is related to \( A_0 \) but not to \( A_1 \) (and likewise for \( V_4 \) and \( V_2 \), etc.). How exactly are they related? We can find the answer by deriving the recurrence relations between dimensions. In particular, let's divide the surface area of an \( n \)-dimensional ball by the surface area of an \( (n-2) \)-dimensional ball: \begin{equation} \label{eq:surface_area_recurrence} \begin{aligned}[b] \frac{A_{n-1}(r)}{A_{(n - 2) - 1}(r)} & = \frac{\tau^{\left\lfloor \frac{n}{2} \right\rfloor}}{\tau^{\left\lfloor \frac{n-2}{2} \right\rfloor}} \frac{(n-2-2)!!}{(n-2)!!} \frac{r^{n-1}}{r^{n-3}} \\ \\ & = \frac{\tau}{n-2}\,r^2. \end{aligned} \end{equation} Note that the different constants for even and odd \( n \) cancel out, thereby eliminating the dependence on parity. Similarly, for the ratio of the volumes we get this: \begin{equation} \label{eq:volume_recurrence} \begin{aligned}[b] \frac{V_n(r)}{V_{n-2}(r)} & = \frac{\tau^{\left\lfloor \frac{n}{2} \right\rfloor}}{\tau^{\left\lfloor \frac{n-2}{2} \right\rfloor}} \frac{(n-2)!!}{n!!} \frac{r^{n}}{r^{n-2}} \\ \\ & = \frac{\tau}{n}\,r^2. 
\end{aligned} \end{equation} We see from Eq. (26) and Eq. (27) that we can obtain the surface area and volume of an \( n \)-ball simply by multiplying the formula for an \( (n-2) \)-ball by \( r^2 \) (a factor required by dimensional analysis), dividing by \( n-2 \) or \( n \), respectively, and multiplying by \( \tau \). As a result, \( \tau \) provides the common thread tying together the two families of even and odd solutions, as illustrated by Joseph Lindenberg in Tau Before It Was Cool22 (Figure 16).23 Figure 16: Surface area and volume recurrences. 5.2 Three families of constants Equipped with the tools developed in Section 5.1, we're now ready to get to the bottom of \( \pi \) and \( \tau \). To complete the excavation, we'll use Eq. (24) and Eq. (25) to define two families of constants, and then use the definition of \( \pi \) (Eq. (1)) to define a third, thereby revealing exactly what is wrong with \( \pi \). First, we'll define a family of "surface area constants" \( \tau_{n-1} \) by dividing Eq. (24) by \( r^{n-1} \), the power of \( r \) needed to yield a dimensionless constant for each value of \( n \): \begin{equation} \label{eq:surface_area_constants} \tau_{n-1} \equiv \frac{A_{n-1}(r)}{r^{n-1}} = \frac{\tau^{\left\lfloor \frac{n}{2} \right\rfloor}}{(n-2)!!}\times \begin{cases} 1 & n \text{ even}; \\ \\ 2 & n \text{ odd}. \end{cases} \end{equation} Second, we'll define a family of "volume constants" \( v_n \) by dividing the volume formula Eq. (25) by \( r^n \), again yielding a dimensionless constant for each value of \( n \): \begin{equation} \label{eq:volume_constants} v_n \equiv \frac{V_n(r)}{r^n} = \frac{\tau^{\left\lfloor \frac{n}{2} \right\rfloor}}{n!!}\times \begin{cases} 1 & n \text{ even}; \\ \\ 2 & n \text{ odd}. \end{cases} \end{equation} With the two families of constants defined in Eq. (28) and Eq. (29), we can write the surface area and volume formulas (Eq. (24) and Eq. (25)) compactly as follows: \[ A_{n-1}(r) = \tau_{n-1}\,r^{n-1} \] \[ V_n(r) = v_n\,r^n. \] Because of the relation \( V_n(r) = \int A_{n-1}(r)\,dr \), we have the simple relationship \[ v_n = \frac{\tau_{n-1}}{n}. \] Let us make some observations about these two families of constants. The family \( \tau_{n-1} \) has an important geometric meaning: by setting \( r=1 \) in Eq. (28), we see that each \( \tau_{n-1} \) is the surface area of a unit \( (n-1) \)-sphere, which is also the angle measure of a full \( (n-1) \)-sphere. In particular, by writing \( s_{n-1}(r) \) for the \( (n-1) \)-dimensional "arclength" equal to a fraction \( f \) of the full surface area \( A_{n-1}(r) \), we have the exact analogue of Eq. (5) in \( n \) dimensions: \[ \theta_{n-1} \equiv \frac{s_{n-1}(r)}{r^{n-1}} = \frac{f A_{n-1}(r)}{r^{n-1}} = f\left(\frac{A_{n-1}(r)}{r^{n-1}}\right) = f\tau_{n-1}. \] Here \( \theta_{n-1} \) is simply the \( n \)-dimensional generalization of radian angle measure (where as usual \( n \) refers to the dimensionality of the corresponding ball), and we see that \( \tau_{n-1} \) is the generalization of "one turn" to \( n \) dimensions. In the special case \( n = 2 \), we have the \( 1 \)-sphere or circle constant \( \tau_{2-1} = \tau_1 = \tau \), leading to the diagram shown in Figure 10. Meanwhile, the \( v_n \) are the volumes of unit \( n \)-balls. In particular, \( v_2 \) is the area of a unit disk: \[ v_2 = \frac{\tau_1}{2} = \frac{\tau}{2}. \] This shows that \( v_2 = \tau/2 = 3.14159\ldots \) does have an independent geometric significance.
Note, however, that it has nothing to do with circumferences or diameters. In other words, \( \pi = C/D \) is not a member of the family \( v_n \). So, to which family of constants does \( \pi \) naturally belong? Let's rewrite Eq. (1) in terms more appropriate for generalization to higher dimensions: \[ \pi = \frac{C}{D} = \frac{A_1}{D^{2-1}}. \] We thus see that \( \pi \) is naturally associated with surface areas divided by the power of the diameter necessary to yield a dimensionless constant. This suggests introducing a third family of constants \( \pi_{n-1} \): \begin{equation} \label{eq:diameter_constants} \pi_{n-1} \equiv \frac{A_{n-1}(r)}{D^{n-1}}. \end{equation} We can express this in terms of the family \( \tau_{n-1} \) by substituting \( D = 2r \) in Eq. (30) and applying Eq. (28): \[ \pi_{n-1} = \frac{A_{n-1}(r)}{D^{n-1}} = \frac{A_{n-1}(r)}{(2r)^{n-1}} = \frac{A_{n-1}(r)}{2^{n-1}r^{n-1}} = \frac{\tau_{n-1}}{2^{n-1}}. \] We are now finally in a position to understand exactly what is wrong with \( \pi \). The principal geometric significance of \( 3.14159\ldots \) is that it is the area of a unit disk. But this number comes from evaluating \( v_n = \tau_{n-1}/n \) when \( n=2 \): \[ v_2 = \frac{\tau_1}{2} = \frac{\tau}{2}. \] It's true that this happens to equal \( \pi_1 \): \[ \pi_1 = \pi = \frac{\tau_1}{2^{2-1}} = \frac{\tau}{2}. \] But this equality is a coincidence: it occurs only because \( 2^{n-1} \) happens to equal \( n \) when \( n=2 \) (that is, \( 2^{2-1} = 2 \)). In all higher dimensions, \( n \) and \( 2^{n-1} \) are distinct. In other words, the geometric significance of \( \pi \) is the result of a mathematical pun. 6 Conclusion Over the years, I have heard many arguments against the wrongness of \( \pi \) and against the rightness of \( \tau \), so before concluding our discussion allow me to answer some of the most frequently asked questions. 6.1 Frequently Asked Questions Are you serious? Of course. I mean, I'm having fun with this, and the tone is occasionally lighthearted, but there is a serious purpose. Setting the circle constant equal to the circumference over the diameter is an awkward and confusing convention. Although I would love to see mathematicians change their ways, I'm not particularly worried about them; they can take care of themselves. It is the neophytes I am most worried about, for they take the brunt of the damage: as noted in Section 2.1, \( \pi \) is a pedagogical disaster. Try explaining to a twelve-year-old (or to a thirty-year-old) why the angle measure for an eighth of a circle—one slice of pizza—is \( \pi/8 \). Wait, I meant \( \pi/4 \). See what I mean? It's madness—sheer, unadulterated madness. How can we switch from \( \pi \) to \( \tau \)? The next time you write something that uses the circle constant, simply say "For convenience, we set \( \tau = 2\pi \)", and then proceed as usual. (Of course, this might just prompt the question, "Why would you want to do that?", and I admit it would be nice to have a place to point them to. If only someone would write, say, a manifesto on the subject…) The way to get people to start using \( \tau \) is to start using it yourself. Isn't it too late to switch? Wouldn't all the textbooks and math papers need to be rewritten? No on both counts. It is true that some conventions, though unfortunate, are effectively irreversible.
For example, Benjamin Franklin's choice for the signs of electric charges leads to the most familiar example of electric current (namely, free electrons in metals) being positive when the charge carriers are negative, and vice versa—thereby cursing beginning physics students with confusing negative signs ever since.24 To change this convention would require rewriting all the textbooks (and burning the old ones) since it is impossible to tell at a glance which convention is being used. In contrast, while redefining \( \pi \) is effectively impossible, we can switch from \( \pi \) to \( \tau \) on the fly by using the conversion \[ \pi \leftrightarrow \textstyle{\frac{1}{2}}\tau. \] It's purely a matter of mechanical substitution, completely robust and indeed fully reversible. The switch from \( \pi \) to \( \tau \) can therefore happen incrementally; unlike a redefinition, it need not happen all at once. Won't using \( \tau \) confuse people, especially students? If you are smart enough to understand radian angle measure, you are smart enough to understand \( \tau \)—and why \( \tau \) is actually less confusing than \( \pi \). Also, there is nothing intrinsically confusing about saying "Let \( \tau = 2\pi \)"; understood narrowly, it's just a simple substitution. Finally, we can embrace the situation as a teaching opportunity: the idea that \( \pi \) might be wrong is interesting, and students can engage with the material by converting the equations in their textbooks from \( \pi \) to \( \tau \) to see for themselves which choice is better. Does any of this really matter? Of course it matters. The circle constant is important. People care enough about it to write entire books on the subject, to celebrate it on a particular day each year, and to memorize tens of thousands of its digits. I care enough to write a whole manifesto, and you care enough to read it. It's precisely because it does matter that it's hard to admit that the present convention is wrong. (I mean, how do you break it to Rajveer Meena, a world-record holder, that he just recited 70,000 digits of one half of the true circle constant?) Since the circle constant is important, it's important to get it right, and we have seen in this manifesto that the right number is \( \tau \). Although \( \pi \) is of great historical importance, the mathematical significance of \( \pi \) is that it is one-half \( \tau \). Why did anyone ever use \( \pi \) in the first place? The origins of \( \pi \)-the-number are probably lost in the mists of time. I suspect that the convention of using \( C/D \) instead of \( C/r \) arose simply because it is easier to measure the diameter of a circular object than it is to measure its radius. But that doesn't make it good mathematics, and I'm surprised that Archimedes, who famously approximated the circle constant, didn't realize that \( C/r \) is the more fundamental number. As notation, \( \pi \) was popularized around 300 years ago by Leonhard Euler, based on the work of William Jones. For example, in his hugely influential two-volume work Introductio in analysin infinitorum, Euler uses \( \pi \) alternately to denote the semicircumference (half-circumference) of a unit circle or the measure of a \( 180^\circ \) arc.25 Unfortunately, Euler doesn't explain why he introduces this factor of \( 1/2 \), though it may be related to the occasional importance of the semiperimeter of a polygon. 
In any case, he immediately notes that sine and cosine have periodicity \( 2\pi \), so he was certainly in a position to see that he was measuring angles in terms of half the period of the circle functions, making his choice all the more perplexing. He almost got it right, though: somewhat incredibly, Euler actually used the symbol \( \pi \) to mean both \( 3.14\ldots \) and \( 6.28\ldots \) at different times!26 What a shame that he didn't standardize on the more convenient convention. Why does this subject interest you? First, as a truth-seeker I care about correctness of explanation. Second, as a teacher I care about clarity of exposition. Third, as a hacker I love a nice hack. Fourth, as a student of history and of human nature I find it fascinating that the absurdity of \( \pi \) was lying in plain sight for centuries before anyone seemed to notice. Moreover, many of the people who missed the true circle constant are among the most rational and intelligent people ever to live. What else might be staring us in the face, just waiting for us to discover it? Are you, like, a crazy person? That's really none of your business, but no. Like everyone, I do have my idiosyncrasies, but I am to all external appearances normal in practically every way. You would never guess from meeting me that, far from being an ordinary citizen, I am in fact a notorious mathematical propagandist. But what about puns? We come now to the final objection. I know, I know, "\( \pi \) in the sky" is so very clever. And yet, \( \tau \) itself is pregnant with possibilities. \( \tau \)ism tells us: it is not \( \tau \) that is a piece of \( \pi \), but \( \pi \) that is a piece of \( \tau \)—one-half \( \tau \), to be exact. The identity \( e^{i\tau} = 1 \) says: "Be one with the \( \tau \)." And though the observation that "A rotation by one turn is 1" may sound like a \( \tau \)-tology, it is the true nature of the \( \tau \). As we contemplate this nature to seek the way of the \( \tau \), we must remember that \( \tau \)ism is based on reason, not on faith: \( \tau \)ists are never \( \pi \)ous. 6.2 Embrace the tau We have seen in The Tau Manifesto that the natural choice for the circle constant is the ratio of a circle's circumference not to its diameter, but to its radius. This number needs a name, and I hope you will join me in calling it \( \tau \): \[ \begin{split} \mbox{circle constant} = \tau & \equiv \frac{C}{r} \\ & = 6.283185307179586\ldots \end{split} \] The usage is natural, the motivation is clear, and the implications are profound. Plus, it comes with a really cool diagram (Figure 17). We see in Figure 17 a movement through yang ("light, white, moving up") to \( \tau/2 \) and a return through yin ("dark, black, moving down") back to \( \tau \).27 Using \( \pi \) instead of \( \tau \) is like having yang without yin. Figure 17: Followers of \( \tau \)ism seek the way of the \( \tau \). 6.28 Tau Day The Tau Manifesto first launched on Tau Day: June 28 (6/28), 2010. Tau Day is a time to celebrate and rejoice in all things mathematical.28 If you would like to receive updates about \( \tau \), including notifications about possible future Tau Day events, please join the Tau Manifesto mailing list at tauday.com. And if you think that the circular baked goods on Pi Day are tasty, just wait—Tau Day has twice as much pi(e)! 6.283 Acknowledgments I'd first like to thank Bob Palais for writing "\( \pi \) Is Wrong!".
I don't remember how deep my suspicions about \( \pi \) ran before I encountered that article, but "\( \pi \) Is Wrong!" definitely opened my eyes, and every section of The Tau Manifesto owes it a debt of gratitude. I'd also like to thank Bob for his helpful comments on this manifesto, and especially for being such a good sport about it. I've been thinking about The Tau Manifesto for a while now, and many of the ideas presented here were developed through conversations with my friend Sumit Daftuar. Sumit served as a sounding board and occasional Devil's advocate, and his insight as a teacher and as a mathematician influenced my thinking in many ways. I have also received encouragement and helpful feedback from several readers. I'd like to thank Vi Hart and Michael Blake for their amazing \( \tau \)-inspired videos, as well as Don "Blue" McConnell and Skona Brittain for helping make \( \tau \) part of geek culture (through the time-in-\( \tau \) iPhone app and the tau clock, respectively). The pleasing interpretation of the yin-yang symbol used in The Tau Manifesto is due to a suggestion by Peter Harremoës, who (as noted above) has the rare distinction of having independently proposed using \( \tau \) for the circle constant. Another pre–Tau Manifesto \( \tau \)ist, Joseph Lindenberg, has also been a staunch supporter, and his enthusiasm is much-appreciated. I got several good suggestions from Christopher Olah, particularly regarding the geometric interpretation of Euler's identity, and Section 2.3.2 on Eulerian identities was inspired by an excellent suggestion from Timothy "Patashu" Stiles. Don Blaheta anticipated and inspired some of the material on hyperspheres, and John Kodegadulo put it together in a particularly clear and entertaining way. Then Jeff Cornell added a wonderful refinement with the introduction of \( \lambda = \tau/4 \), and Hjalmar Peters helped improve the exposition by persuading me to streamline the material on that subject. I'd also like to acknowledge my appreciation for the volunteer translators who have made The Tau Manifesto available in so many different languages: Juan Guijarro Ferreiro (Spanish); Daniel Rosen and Alexis Drai (French); Andrea Laretto (Italian); Gustavo Chaves (Portuguese); Axel Scheithauer, Jonas Wagner, and Johannes Clemens Huber (German); Aleksandr Alekseevich Adamov (Russian); and Daniel Li Qu (simplified Chinese). Finally, I'd like to thank Wyatt Greene for his extraordinarily helpful feedback on a pre-launch draft of the manifesto; among other things, if you ever need someone to tell you that "pretty much all of the [now deleted] final section is total crap", Wyatt is your man. 6.2831 About the author Michael Hartl is a physicist and entrepreneur. He is the author of over a dozen books, including Learn Enough Python to Be Dangerous and the Ruby on Rails Tutorial, and was cofounder and principal author at Learn Enough (acquired 2022). Previously, Michael taught theoretical and computational physics at the California Institute of Technology (Caltech), where he received a Lifetime Achievement Award for Excellence in Teaching and served as Caltech's editor for The Feynman Lectures on Physics. He is a graduate of Harvard College, has a Ph.D. in Physics from Caltech, and is an alumnus of the Y Combinator entrepreneur program. Michael is ashamed to admit that he knows \( \pi \) to 50 decimal places—approximately 48 more than Matt Groening. To atone for this, he has memorized 52 decimal places of \( \tau \). 6.28318 Copyright The Tau Manifesto. 
Copyright © 2010–2023 Michael Hartl. Please feel free to distribute The Tau Manifesto PDF for educational purposes, and consider buying one or more copies of the print edition for distribution to students and other interested parties. 6.283185 Dedication The Tau Manifesto is dedicated to Harry "Woody" Woodworth, my eighth-grade science teacher. Although I gratefully received support from many teachers over the years, Woody believed in my potential to an extraordinary, even irrational (dare I say transcendental?) degree—confidently predicting that "someday they'll be teaching the 'Hartl theory' in schools." Given how many teachers have reached out indicating their support for and teaching of the material in The Tau Manifesto, I suppose in a sense Woody's prediction has now come true. 1. Palais, Robert. "\( \pi \) Is Wrong!", The Mathematical Intelligencer, Volume 23, Number 3, 2001, pp. 7–8. Many of the arguments in The Tau Manifesto are based on or are inspired by "\( \pi \) Is Wrong!". It is available online at https://www.math.utah.edu/~palais/pi.html. 2. The symbol \( \equiv \) means "is defined as". 3. Image retrieved from Wikimedia on 2019-03-12. Copyright © 2016 by Ruleroll and used unaltered under the terms of the Creative Commons Attribution-Share Alike 4.0 International license. 4. The video in Figure 5 (available at https://vimeo.com/12914981) is an excerpt from a lecture given by Dr. Sarah Greenwald, a professor of mathematics at Appalachian State University. Dr. Greenwald uses math references from The Simpsons and Futurama to engage her students' interest and to help them get over their math anxiety. She is also the maintainer of the Futurama Math Page. 5. Here \( B_n \) is the \( n \)th Bernoulli number. 6. These graphs were produced with the help of Wolfram|Alpha. 7. Here I'm implicitly defining Euler's identity to be the complex exponential of the circle constant, rather than defining it to be the complex exponential of any particular number. If we choose \( \tau \) as the circle constant, we obtain the identity shown. As we'll see momentarily, this is not the traditional form of the identity, which of course involves \( \pi \), but the version with \( \tau \) is the most mathematically meaningful statement of the identity, so I believe it deserves the name. 8. Indeed, Eq. (7) can be written as \( e^{i\tau} = 1 + 0i \), which makes the relationship between the five numbers even more explicit. 9. Technically, all the integrals should be definite, and the variable of integration should be different from the upper limit (as in \( \int_0^t gt'\,dt' \), read as "the integral from zero to tee of gee tee prime dee tee prime"). These minor abuses of notation are common in physics and other less formal mathematical contexts such as we are considering here. 10. You may have seen this written as \( F = -kx \). In this case, \( F \) refers to the force exerted by the spring. By Newton's third law, the external force discussed above is the negative of the spring force. 11. Thanks to Tau Manifesto reader Jim Porter for pointing out this interpretation. 12. Lindenberg has included both his original typewritten manuscript and a large number of other arguments at his site Tau Before It Was Cool. 13. I have been heartened, however, to see \( \tau \) receive some significant support from mathematicians in the years following the publication of this manifesto. See, for example, "My Conversion to Tauism" by Stephen Abbott. 14. 
Perhaps someday academic mathematicians will come to a consensus on a different symbol for the number \( 2\pi \); if that ever happens, I reserve the right to support their proposed notation. But they have had over 300 years to fix this \( \pi \) problem, so I wouldn't hold my breath. 15. The only possible exception to this is the golden ratio, which is often denoted by \( \tau \) in Europe. But not only is there an existing common alternative to this notation—namely, the Greek letter \( \varphi \) (phi)—this usage shows that there is precedent for using \( \tau \) to denote a fundamental mathematical constant. 16. This alternative for torque is already in use; see, for example, Classical Mechanics (3rd edition) by Goldstein, Poole, and Safko, p. 2, and Introduction to Electrodynamics (4th edition) by David Griffiths, pp. 170–171. 17. See, for instance, An Introduction to Quantum Field Theory by Peskin and Schroeder, where \( \pi \) is used to denote both the circle constant and a "conjugate momentum" on the very same page (p. 282). 18. The original Pi Manifesto has been removed (perhaps my rebuttal was a bit too effective?), so the link is now to an archived version. 19. This choice of notation is fairly standard; see, e.g., the Wikipedia article on the volume of an \( n \)-ball. 20. Indeed, the generalization to complex-valued arguments is straightforward: just replace real \( x \) with complex \( z \) in Eq. (13). 21. Tau correspondent Jeff Cornell has pointed out that Eq. (24) and Eq. (25) can be further simplified by writing them in terms of the measure of a right angle, which he calls lambda: \( \lambda \equiv \tau/4 \). The resulting formulas effectively absorb the explicit dependence on parity into the floor function itself: \[ A_{n-1}(r) = \frac{2^n\,\lambda^{\left\lfloor \frac{n}{2} \right\rfloor} r^{n-1}}{(n-2)!!} \] \[ V_n(r) = \frac{2^n\,\lambda^{\left\lfloor \frac{n}{2} \right\rfloor} r^n}{n!!}. \] To my knowledge, these are the most compact expressions of the spherical surface area and volume formulas. Their simplicity comes at the cost of a factor of \( 2^n \), but this has a clear geometric meaning: a sphere in \( n \) dimensions divides naturally into \( 2^n \) congruent pieces, corresponding to the \( 2^n \) families of solutions to \( \sum_{i=1}^{n} x_i^2 = r^2 \) (one for each choice of \( \pm x_i \)). In two dimensions, they are the four quadrants; in three dimensions, they are the eight octants; and so on in higher dimensions. Nevertheless, because the dependence on parity is real and unavoidable (see, e.g., Figure 16), we will continue to write the formulas in terms of \( \tau \) as in Eq. (24) and Eq. (25). 22. Tau Before It Was Cool actually writes the recurrence in terms of \( 2\pi \); the version shown in Figure 16 was created for me by special request. As always, I am most grateful to Joseph Lindenberg for his continuing generosity and support. 23. Note that Figure 16 refers to the "size" of the interior; technically speaking, the interior is an open ball, whereas the volume in Eq. (25) is defined in terms of a closed ball, but the volumes of open and closed balls of the same radius are equal since the volume of the boundary is zero. 24. The sign of the charge carriers couldn't be determined with the technology of Franklin's time, so this isn't his fault. It's just bad luck. 25. "… pro quo numero, brevitatis ergo, scribam \( \pi \), ita ut sit \( \pi \) = Semicircumferentiae Circuli, cujus Radius = 1, seu \( \pi \) erit longitudo Arcus 180 graduum." 
"…for which number, because of brevity, I may write \( \pi \), so that \( \pi \) may be the equal of the semicircumference of a circle, whose radius equals 1, or \( \pi \) will be the length of an arc of 180 degrees." Euler, Leonhard, Introductio in analysin infinitorum (1748), Volume 1, Chapter VIII, p. 93. https://scholarlycommons.pacific.edu/euler-works/101. Both definitions are equivalent to \( C/D \) since \( D = 2 \) when \( r = 1 \) and \( 180^\circ \) is \( \tfrac{1}{2}\,C/r \). 26. For instance, in his 1727 Essay Explaining the Properties of Air, Euler writes: "Sumatur pro ratione radii ad peripheriem, \( \mathrm{I} : \pi \)…", "It is taken for the ratio of the radius to the periphery [circumference], \( 1 : \pi \)…" 27. The interpretations of yin and yang quoted here are from Zen Yoga: A Path to Enlightenment through Breathing, Movement and Meditation by Aaron Hoopes. 28. Since 6 and 28 are the first two perfect numbers, 6/28 is actually a perfect day.
Gabov, Sergei Aleksandrovich Statistics Math-Net.Ru Total publications: 85 Scientific articles: 84 This page: 1049 Abstract pages: 8479 Full texts: 4556 References: 158 http://www.mathnet.ru/eng/person22465 List of publications on Google Scholar List of publications on ZentralBlatt https://mathscinet.ams.org/mathscinet/MRAuthorID/190543 Publications in Math-Net.Ru 1. S. A. Gabov, A. G. Sveshnikov, "Mathematical problems of the dynamics of a floating fluid", Itogi Nauki i Tekhn. Ser. Mat. Anal., 28 (1990), 3–86 ; J. Soviet Math., 54:4 (1991), 979–1041 2. S. A. Gabov, S. T. Simakov, "Linear problems of the dynamics of a floating fluid. Existence theorems", Mat. Zametki, 48:5 (1990), 47–54 ; Math. Notes, 48:5 (1990), 1109–1114 3. S. A. Gabov, P. A. Krutitskii, "Energy characteristics of non-stationary internal waves in a vertical channel containing a stratified liquid", Zh. Vychisl. Mat. Mat. Fiz., 30:5 (1990), 751–766 ; U.S.S.R. Comput. Math. Math. Phys., 30:3 (1990), 81–92 4. S. A. Gabov, M. B. Tverskoy, "The flow of a heavy liquid in the presence of a periodic impressed force", Zh. Vychisl. Mat. Mat. Fiz., 30:4 (1990), 501–512 ; U.S.S.R. Comput. Math. Math. Phys., 30:2 (1990), 105–113 5. S. A. Gabov, A. V. Sundukova, "On an initial-boundary value problem which arises in the dynamics of a compressible stratified fluid", Zh. Vychisl. Mat. Mat. Fiz., 30:3 (1990), 457–465 ; U.S.S.R. Comput. Math. Math. Phys., 30:2 (1990), 79–85 6. S. A. Gabov, "Mathematical foundations of linear theory of ion-sound waves in non-magnetized plasma", Matem. Mod., 1:12 (1989), 133–148 7. S. A. Gabov, P. A. Krutitskii, "On a boundary value problem describing the electric current in a magnetized semiconductor", Matem. Mod., 1:5 (1989), 71–79 8. S. A. Gabov, M. B. Tverskoy, "The flow of flotable finite-depth fluid in the presence of varying pressure on the free surface", Matem. Mod., 1:3 (1989), 110–122 9. S. A. Gabov, M. B. Tverskoy, "Calculation of the parameters of steady-state waves of finite amplitude on the surface of a floating fluid", Matem. Mod., 1:2 (1989), 109–118 10. S. A. Gabov, "Cauchy–Poisson problem in the theory of floating liquid", Vestnik Moskov. Univ. Ser. 1. Mat. Mekh., 1989, 4, 26–30 11. S. A. Gabov, P. A. Krutitskii, "On the energy characteristics of a horizontal channel filled with a stratified fluid", Zh. Vychisl. Mat. Mat. Fiz., 29:11 (1989), 1662–1673 ; U.S.S.R. Comput. Math. Math. Phys., 29:6 (1989), 42–50 12. S. A. Gabov, "The flow of a floating fluid with the formation of standing waves over an uneven periodically varying bottom", Zh. Vychisl. Mat. Mat. Fiz., 29:8 (1989), 1129–1143 ; U.S.S.R. Comput. Math. Math. Phys., 29:4 (1989), 111–121 13. S. A. Gabov, P. A. Krutitskii, "On the small vibrations of a section placed at the boundary of separation between two stratified liquids", Zh. Vychisl. Mat. Mat. Fiz., 29:4 (1989), 554–564 ; U.S.S.R. Comput. Math. Math. Phys., 29:2 (1989), 154–162 14. S. A. Gabov, S. T. Simakov, "The theory of internal and surface waves in a stratified liquid", Zh. Vychisl. Mat. Mat. Fiz., 29:2 (1989), 225–238 ; U.S.S.R. Comput. Math. Math. Phys., 29:1 (1989), 155–164 15. S. A. Gabov, "A problem in the hydrodynamics of an ideal fluid that is connected with flotation", Differ. Uravn., 24:1 (1988), 16–21 ; Differ. Equ., 24:1 (1988), 12–15 16. S. A. Gabov, M. B. Tverskoy, "Investigation of the linear equations of the dynamics of a compressible stratified fluid", Zh. Vychisl. Mat. Mat. Fiz., 28:12 (1988), 1832–1842 ; U.S.S.R. Comput. Math. Math. 
Phys., 28:6 (1988), 159–166 17. S. A. Gabov, "On the existence of steady waves of finite amplitude on the surface of a floating liquid", Zh. Vychisl. Mat. Mat. Fiz., 28:10 (1988), 1507–1519 ; U.S.S.R. Comput. Math. Math. Phys., 28:5 (1988), 147–155 18. S. A. Gabov, M. B. Tverskoy, "On the flow of a stratified fluid past an obstacle", Zh. Vychisl. Mat. Mat. Fiz., 28:4 (1988), 608–613 ; U.S.S.R. Comput. Math. Math. Phys., 28:2 (1988), 197–200 19. S. A. Gabov, "The theory of interior waves when a free surface is present", Zh. Vychisl. Mat. Mat. Fiz., 28:3 (1988), 335–345 ; U.S.S.R. Comput. Math. Math. Phys., 28:2 (1988), 19–25 20. S. A. Gabov, Yu. D. Pletner, "The problem of the oscillations of a flat disc in a stratified liquid", Zh. Vychisl. Mat. Mat. Fiz., 28:1 (1988), 63–71 ; U.S.S.R. Comput. Math. Math. Phys., 28:1 (1988), 41–47 21. S. A. Gabov, P. A. Krutitskii, "On the non-stationary Larsen problem", Zh. Vychisl. Mat. Mat. Fiz., 27:8 (1987), 1184–1194 ; U.S.S.R. Comput. Math. Math. Phys., 27:4 (1987), 148–154 22. S. A. Gabov, A. A. Tikilyainen, "A third-order differential equation", Zh. Vychisl. Mat. Mat. Fiz., 27:7 (1987), 1100–1105 ; U.S.S.R. Comput. Math. Math. Phys., 27:4 (1987), 94–97 23. S. A. Gabov, Yu. D. Pletner, "Solvability of an exterior initial-boundary value problem for the gravitational gyroscopic wave equation", Zh. Vychisl. Mat. Mat. Fiz., 27:5 (1987), 711–719 ; U.S.S.R. Comput. Math. Math. Phys., 27:3 (1987), 44–49 24. S. A. Gabov, "On the non-stationary theory of internal waves. Existence theorems", Zh. Vychisl. Mat. Mat. Fiz., 27:2 (1987), 237–244 ; U.S.S.R. Comput. Math. Math. Phys., 27:1 (1987), 153–158 25. S. A. Gabov, Yu. D. Pletner, "The gravitational-gyroscopic wave equation: The angular potential and its applications", Zh. Vychisl. Mat. Mat. Fiz., 27:1 (1987), 102–113 ; U.S.S.R. Comput. Math. Math. Phys., 27:1 (1987), 66–73 26. S. A. Gabov, "An evolution equation arising in the hydroacoustics of a stratified fluid", Dokl. Akad. Nauk SSSR, 287:5 (1986), 1037–1040 27. S. A. Gabov, P. V. Shevtsov, "The method of descent and singular solutions of an equation of the dynamics of a stratified fluid", Differ. Uravn., 22:2 (1986), 279–285 28. S. A. Gabov, B. B. Orazov, A. G. Sveshnikov, "A fourth-order evolution equation encountered in underwater acoustics of a stratified fluid", Differ. Uravn., 22:1 (1986), 19–25 29. S. A. Gabov, A. G. Sveshnikov, "An equation for stationary capillary gravitational-gyroscopic waves in shallow water, and Kelvin waves", Itogi Nauki i Tekhn. Ser. Mat. Anal., 24 (1986), 207–269 ; J. Soviet Math., 42:6 (1988), 2138–2177 30. S. A. Gabov, B. B. Orazov, A. G. Sveshnikov, "On the non-stationary theory of internal waves", Zh. Vychisl. Mat. Mat. Fiz., 26:8 (1986), 1223–1233 ; U.S.S.R. Comput. Math. Math. Phys., 26:4 (1986), 170–177 31. S. A. Gabov, B. B. Orazov, "The equation $\frac{\partial^2}{\partial t^2}[u_{xx}-u]+u_{xx}=0$ and some problems associated with it", Zh. Vychisl. Mat. Mat. Fiz., 26:1 (1986), 92–102 ; U.S.S.R. Comput. Math. Math. Phys., 26:1 (1986), 58–64 32. S. A. Gabov, A. G. Sveshnikov, "Kelvin capillary waves and diffraction by a half plane", Dokl. Akad. Nauk SSSR, 282:4 (1985), 780–784 33. S. A. Gabov, A. G. Sveshnikov, "An equation of capillary gravitational-gyroscopic waves on shallow water", Dokl. Akad. Nauk SSSR, 282:1 (1985), 11–15 34. S. A. Gabov, K. S. Mamedov, "An operator pencil arising in the dynamics of a compressible stratified fluid", Dokl. Akad. Nauk SSSR, 280:1 (1985), 23–26 35. A. N. Tikhonov, A. A. 
Samarskii, A. S. Il'inskii, S. A. Gabov, "Alekseǐ Georgievich Sveshnikov (on the occasion of his sixtieth birthday)", Differ. Uravn., 21:2 (1985), 357–360 36. S. A. Gabov, Yu. D. Pletner, "An initial-boundary value problem for the gravitational-gyroscopic wave equation", Zh. Vychisl. Mat. Mat. Fiz., 25:11 (1985), 1689–1696 ; U.S.S.R. Comput. Math. Math. Phys., 25:6 (1985), 64–68 37. S. A. Gabov, J. Marin-Antuña, "An equation of two-dimensional waves in a compressible rotating fluid, and diffraction problems", Zh. Vychisl. Mat. Mat. Fiz., 25:6 (1985), 873–882 ; U.S.S.R. Comput. Math. Math. Phys., 25:3 (1985), 143–148 38. S. A. Gabov, "Solution of a problem of the dynamics of a stratified fluid and its stabilization as $t\to\infty$", Zh. Vychisl. Mat. Mat. Fiz., 25:5 (1985), 718–730 ; U.S.S.R. Comput. Math. Math. Phys., 25:3 (1985), 47–55 39. S. A. Gabov, "Angular potential for S. L. Sobolev's equation and its applications", Dokl. Akad. Nauk SSSR, 278:3 (1984), 527–530 40. S. A. Gabov, "Explicit solution and existence of a limit amplitude for a problem of dynamics of a stratified fluid", Dokl. Akad. Nauk SSSR, 277:5 (1984), 1039–1043 41. S. A. Gabov, P. V. Shevtsov, "A differential equation of the type of S. L. Sobolev's equation", Dokl. Akad. Nauk SSSR, 276:1 (1984), 14–17 42. V. V. Varlamov, S. A. Gabov, A. G. Sveshnikov, "Scattering of internal waves by an edge of an ice field. The case of finite depth", Differ. Uravn., 20:12 (1984), 2088–2095 43. S. A. Gabov, G. Yu. Malysheva, A. G. Sveshnikov, "An equation of the dynamics of a viscous stratified fluid", Differ. Uravn., 20:7 (1984), 1156–1165 44. S. A. Gabov, G. Yu. Malysheva, A. G. Sveshnikov, A. K. Shatov, "Some equations that arise in the dynamics of a rotating, stratified and compressible fluid", Zh. Vychisl. Mat. Mat. Fiz., 24:12 (1984), 1850–1863 ; U.S.S.R. Comput. Math. Math. Phys., 24:6 (1984), 162–170 45. S. A. Gabov, G. Yu. Malysheva, "A spectral problem connected with oscillations of a viscous stratified fluid", Zh. Vychisl. Mat. Mat. Fiz., 24:6 (1984), 893–899 ; U.S.S.R. Comput. Math. Math. Phys., 24:3 (1984), 170–174 46. S. A. Gabov, G. Yu. Malysheva, "The Cauchy problem for a class of motions of a viscous stratified fluid", Zh. Vychisl. Mat. Mat. Fiz., 24:3 (1984), 467–471 ; U.S.S.R. Comput. Math. Math. Phys., 24:2 (1984), 89–92 47. S. A. Gabov, K. S. Mamedov, "The potential function and problems on oscillations of an exponentially stratified fluid", Dokl. Akad. Nauk SSSR, 269:2 (1983), 270–273 48. S. A. Gabov, P. V. Shevtsov, "Basic boundary value problems for the equation of oscillations of a stratified fluid", Dokl. Akad. Nauk SSSR, 268:6 (1983), 1293–1296 49. S. A. Gabov, A. G. Sveshnikov, A. K. Shatov, "The scattering of waves described by the Klein–Gordon equation by an inclined half plane", Dokl. Akad. Nauk SSSR, 268:5 (1983), 1095–1098 50. S. A. Gabov, G. Yu. Malysheva, A. G. Sveshnikov, "An equation of composite type connected with oscillations of a compressible stratified fluid", Differ. Uravn., 19:7 (1983), 1171–1180 51. S. A. Gabov, A. G. Sveshnikov, "On the diffraction of internal waves by the edge of an ice field", Dokl. Akad. Nauk SSSR, 265:1 (1982), 16–20 52. S. A. Gabov, "Diffraction at a half-plane of internal waves described by the Klein–Gordon equation", Dokl. Akad. Nauk SSSR, 264:1 (1982), 73–75 53. S. A. Gabov, A. G. Sveshnikov, "Some problems connected with the oscillations of stratified fluids", Differ. Uravn., 18:7 (1982), 1150–1156 54. S. A. Gabov, P. V. 
Shevtsov, "On a problem that contains a spectral parameter in the boundary condition", Differ. Uravn., 18:4 (1982), 626–631 55. S. A. Gabov, "On a class of eigenvalue problems that contain a spectral parameter in a boundary condition", Differ. Uravn., 18:3 (1982), 450–457 56. S. A. Gabov, "A problem of diffraction of waves described by the Klein–Gordon equation", Zh. Vychisl. Mat. Mat. Fiz., 22:6 (1982), 1513–1518 ; U.S.S.R. Comput. Math. Math. Phys., 22:6 (1982), 244–250 57. V. V. Varlamov, S. A. Gabov, "Asymptotic behaviour of the solution of a problem in the theory of waves on the surface of a thin layer of stratified fluid", Zh. Vychisl. Mat. Mat. Fiz., 22:3 (1982), 690–699 ; U.S.S.R. Comput. Math. Math. Phys., 22:3 (1982), 198–208 58. S. A. Gabov, A. G. Sveshnikov, A. K. Shatov, "Asymptotic form of the solution of the problem of waves on the surface of a thin spherical layer of a stratified liquid in the presence of obstacles", Dokl. Akad. Nauk SSSR, 260:3 (1981), 579–583 59. S. A. Gabov, A. G. Sveshnikov, A. K. Shatov, "Theory of steady waves on the surface of a spherical layer of a stratified liquid", Dokl. Akad. Nauk SSSR, 256:2 (1981), 343–346 60. S. A. Gabov, A. G. Sveshnikov, A. K. Shatov, "On the asymptotic behavior of the solution of a problem on the diffraction of long waves on the surface of a stratified fluid", Differ. Uravn., 17:10 (1981), 1817–1825 61. S. A. Gabov, "Applications of angular potential. On the solution of a problem of Poincaré", Differ. Uravn., 17:8 (1981), 1483–1486 62. S. A. Gabov, "On the spectrum and bases of eigenfunctions of a problem connected with oscillations of a rotating fluid", Mat. Sb. (N.S.), 116(158):2(10) (1981), 245–252 ; Math. USSR-Sb., 44:2 (1983), 219–226 63. S. A. Gabov, "The selfadjoint V. A. Steklov problem with oblique derivative", Zh. Vychisl. Mat. Mat. Fiz., 21:4 (1981), 1046–1049 ; U.S.S.R. Comput. Math. Math. Phys., 21:4 (1981), 233–237 64. S. A. Gabov, "The spectrum and bases of eigenfunctions of a problem on acoustic oscillations of a rotating liquid", Dokl. Akad. Nauk SSSR, 254:4 (1980), 777–779 65. S. A. Gabov, "On the spectrum of a problem of S. L. Sobolev", Dokl. Akad. Nauk SSSR, 253:3 (1980), 521–524 66. S. A. Gabov, "On completeness and basicity of the system of eigenfunctions of a problem arising in the theory of tidal oscillations", Dokl. Akad. Nauk SSSR, 250:4 (1980), 780–782 67. S. A. Gabov, A. G. Sveshnikov, "Long-wave asymptotic behavior of the Green function of a wave problem on the surface of a stratified fluid", Zh. Vychisl. Mat. Mat. Fiz., 20:6 (1980), 1564–1579 ; U.S.S.R. Comput. Math. Math. Phys., 20:6 (1980), 195–211 68. V. V. Varlamov, S. A. Gabov, "Application of Watson's method to a problem of the theory of waves on the surface of a heavy fluid", Zh. Vychisl. Mat. Mat. Fiz., 20:4 (1980), 965–978 ; U.S.S.R. Comput. Math. Math. Phys., 20:4 (1980), 160–175 69. S. A. Gabov, P. V. Shevtsov, "The angular potential on curves of bounded rotation", Dokl. Akad. Nauk SSSR, 249:2 (1979), 271–274 70. S. A. Gabov, "On the property of destruction of solitary waves described by the Whitham equation", Dokl. Akad. Nauk SSSR, 246:6 (1979), 1292–1295 71. S. A. Gabov, A. G. Sveshnikov, "Applications of angular potential. Solution of problems of the diffraction of tide waves", Differ. Uravn., 15:7 (1979), 1271–1278 72. P. N. Vabishchevich, S. A. Gabov, P. V. Shvetsov, "The angular potential for the operator $\Delta+c(x)$", Zh. Vychisl. Mat. Mat. Fiz., 19:5 (1979), 1162–1177 ; U.S.S.R. Comput. Math. Math. 
Phys., 19:5 (1979), 75–92 73. P. N. Vabishchevich, S. A. Gabov, "Angular potential for solving an elliptic equation with variable coefficients", Zh. Vychisl. Mat. Mat. Fiz., 19:3 (1979), 652–664 ; U.S.S.R. Comput. Math. Math. Phys., 19:3 (1979), 95–109 74. S. A. Gabov, P. N. Vabishchevich, "The angular potential for a divergence elliptic operator", Dokl. Akad. Nauk SSSR, 243:4 (1978), 835–838 75. S. A. Gabov, "On Whitham's equation", Dokl. Akad. Nauk SSSR, 242:5 (1978), 993–996 76. S. A. Gabov, "Applications of angular potential: integral equations for the plane problem with an oblique derivative for harmonic functions", Zh. Vychisl. Mat. Mat. Fiz., 18:6 (1978), 1516–1528 ; U.S.S.R. Comput. Math. Math. Phys., 18:6 (1978), 157–170 77. S. A. Gabov, "The Green function of the Laplace operator in the plane problem with an oblique derivative with constant coefficients", Zh. Vychisl. Mat. Mat. Fiz., 18:3 (1978), 660–671 ; U.S.S.R. Comput. Math. Math. Phys., 18:3 (1978), 132–144 78. S. A. Gabov, "The angular potential and some of its applications", Mat. Sb. (N.S.), 103(145):4(8) (1977), 490–504 ; Math. USSR-Sb., 32:4 (1977), 423–436 79. S. A. Gabov, "Angular potential and the oblique derivative problem for harmonic functions", Zh. Vychisl. Mat. Mat. Fiz., 17:3 (1977), 706–717 ; U.S.S.R. Comput. Math. Math. Phys., 17:3 (1977), 142–153 80. S. A. Gabov, "On the oblique derivative problem for Laplace's equation", Dokl. Akad. Nauk SSSR, 226:2 (1976), 253–256 81. S. A. Gabov, "Kelvin-wave diffraction by a slit", Dokl. Akad. Nauk SSSR, 220:5 (1975), 1050–1052 82. S. A. Gabov, P. I. Ruban, S. Ya. Sekerzh-Zen'kovich, "Diffraction of Kelvin waves by a semi-infinite wall in a semi-bounded basin", Zh. Vychisl. Mat. Mat. Fiz., 15:6 (1975), 1512–1524 ; U.S.S.R. Comput. Math. Math. Phys., 15:6 (1975), 144–157 83. S. A. Gabov, "Application of L. N. Sretenskii's method to a problem of the theory of waves in channels", Zh. Vychisl. Mat. Mat. Fiz., 15:1 (1975), 217–226 ; U.S.S.R. Comput. Math. Math. Phys., 15:1 (1975), 208–218 84. S. A. Gabov, "Diffraction of Kelvin wave on semi-infinite wall", Dokl. Akad. Nauk SSSR, 217:2 (1974), 299–302 85. S. A. Gabov, A. A. Samarskii, A. N. Tikhonov, A. V. Tikhonravov, "Aleksei Georgievich Sveshnikov (on his sixtieth birthday)", Uspekhi Mat. Nauk, 40:6(246) (1985), 163–164 ; Russian Math. Surveys, 40:6 (1985), 147–149
April 2019, 24(4): 1989-2015. doi: 10.3934/dcdsb.2019026
Well-posedness and numerical algorithm for the tempered fractional differential equations
Can Li 1,2, Weihua Deng 3 and Lijing Zhao 4
1 Department of Applied Mathematics, Xi'an University of Technology, Xi'an, Shaanxi 710054, China
2 Beijing Computational Science Research Center, Beijing 10084, China
3 School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, China
4 Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi 710054, China
Received November 2015; Revised January 2019; Published January 2019
Trapped dynamics widely appears in nature, e.g., the motion of particles in viscous cytoplasm. The famous continuous time random walk (CTRW) model with power law waiting time distribution (having diverging first moment) describes this phenomenon. Because of the finite lifetime of biological particles, sometimes it is necessary to temper the power law measure so that the waiting time measure has a convergent first moment. The time operator of the Fokker-Planck equation corresponding to the CTRW model with tempered waiting time measure is then the so-called tempered fractional derivative. This paper focuses on discussing the properties of the time tempered fractional derivative, and on studying the well-posedness and the Jacobi-predictor-corrector algorithm for the tempered fractional ordinary differential equation. By adjusting the parameter of the proposed algorithm, high convergence order can be obtained and the computational cost increases linearly with time. The numerical results show that our algorithm converges with order $ N_I $, where $ N_I $ is the number of interpolating points used.
Keywords: Tempered fractional operators, well-posedness, Jacobi-predictor-corrector algorithm, convergence.
Mathematics Subject Classification: Primary: 34A08, 74S25; Secondary: 26A33.
Citation: Can Li, Weihua Deng, Lijing Zhao. Well-posedness and numerical algorithm for the tempered fractional differential equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (4) : 1989-2015. doi: 10.3934/dcdsb.2019026
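For readers unfamiliar with the operator named in the abstract, one widely used form of the time tempered fractional derivative is the tempered Riemann-Liouville derivative; the precise convention (Riemann-Liouville versus Caputo variants, and how the tempering is normalized) differs between papers, so the following should be read as a representative definition rather than necessarily the one adopted in this article. For $ 0 < \alpha < 1 $ and tempering parameter $ \lambda \ge 0 $,
$$ {}_{0}D_{t}^{\alpha,\lambda} u(t) = e^{-\lambda t}\, {}_{0}D_{t}^{\alpha}\left(e^{\lambda t} u(t)\right) = \frac{e^{-\lambda t}}{\Gamma(1-\alpha)}\, \frac{d}{dt} \int_{0}^{t} \frac{e^{\lambda s}\, u(s)}{(t-s)^{\alpha}}\, ds, $$
which reduces to the usual Riemann-Liouville fractional derivative when $ \lambda = 0 $.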
Table 1. Maximum errors and convergence orders of Example 1 solved by the scheme (56)-(57) with $ T = 1, N = 20, N_I = 7 $, and $ \alpha = 0.5 $
$ \tau $   |  $ \lambda=0 $: error, order   |  $ \lambda=2 $: error, order   |  $ \lambda=6 $: error, order
1/10   |  1.5207e-004, --   |  2.3516e-005, --   |  1.4300e-006, --
1/20   |  4.6202e-007, 8.3626   |  1.4040e-007, 7.3879   |  3.3507e-008, 5.4154
1/160  |  3.5305e-014, 7.8443   |  1.2794e-014, 7.6383   |  7.0913e-015, 7.6629
Table 2. Maximum errors and convergence orders of Example 1 solved by the scheme (56)-(57) with $ T = 1, N = 20, N_I = 6 $, and $ \alpha = 1.0 $
$ \tau $   |  $ \lambda=0 $: error, order   |  $ \lambda=2 $: error, order   |  $ \lambda=6 $: error, order
Table 4. Maximum errors and convergence orders of Example 2 solved by the scheme (66) with $ T = 1.1, N = 26, \tilde{N} = 40, N_I = 2, T_0 = 0.1, \mu = 1 $, and $ \lambda = 5 $
$ \alpha=0.2 $   |  $ \alpha=0.9 $   |  $ \alpha=1.8 $
Table 5. Maximum errors and convergence orders of Example 2 solved by the scheme (66) with $ T = 1.1, N = 26, \tilde{N} = 40, N_I = 2, T_0 = 0.1, \mu = 1 $, and $ \lambda = 10 $
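The "order" columns in Table 1 are empirical convergence orders computed from the maximum errors at successive step sizes. A minimal way to reproduce them (a Python sketch; the helper name is mine, and I assume each order is computed from two consecutive runs):

```python
import math

def observed_order(err_coarse, err_fine, tau_coarse, tau_fine):
    """Empirical convergence order from the maximum errors at two step sizes."""
    return math.log(err_coarse / err_fine) / math.log(tau_coarse / tau_fine)

# lambda = 0 column of Table 1, tau = 1/10 versus tau = 1/20:
print(observed_order(1.5207e-4, 4.6202e-7, 1/10, 1/20))   # ~8.36, consistent with the reported 8.3626
```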
Anaemia among adolescent girls in three districts in Ethiopia
Seifu Hagos Gebreyesus1, Bilal Shikur Endris1, Getahun Teka Beyene2, Alinoor Mohamed Farah4, Fekadu Elias3 & Hana Nekatebeb Bekele2
Adolescence is characterized by rapid growth and development with a significantly increased need for macro and micronutrients. However, there is little empirical evidence on the burden of anaemia among adolescent girls in developing countries such as Ethiopia. This study aims to address this gap by evaluating the magnitude of anaemia in order to guide the design of intervention modalities to address anaemia among adolescent girls.
The study employed a community based cross sectional design. The study was conducted on weekends to capture both in school and out of school adolescent girls. Data were collected from a total of 1323 adolescent girls. From each district, we randomly selected villages and ensured that the sampled households had a balanced geographical spread (lowlands, highlands) within the larger category of rural and urban. We performed anaemia testing using the HemoCue B-Haemoglobin analyser. We applied a complex survey data analysis method to estimate the level of anaemia. The hemoglobin level was adjusted for altitude and smoking status. We ran a logistic regression model to evaluate predictors of anaemia.
The overall anaemia prevalence ranged from 24 to 38%, with an average rate of 29%. Less than half of the girls had heard the term anaemia, and about one third knew the relationship between anaemia and the intake of iron rich foods. The risk of anaemia was higher among adolescent girls in their early adolescence period (10–14 years) (adjusted odds ratio [AOR] 1.98; 95% CI 1.03, 3.82) and among adolescent girls who lived in moderately food insecure households (AOR 1.48; 95% CI 1.05–2.09). However, knowing the term "anaemia" was found to be protective against the risk of anaemia.
The risk of anaemia was particularly high among adolescent girls in their early age and among those living in food insecure households. The prevalence of anaemia among adolescent girls is a moderate public health problem. According to the criteria set by the WHO, the districts could be candidates for an intermittent iron and folic acid supplementation program.
The World Health Organization (WHO) defines adolescence as the period from 10 to 19 years of age [1]. Nutritional anaemia is a common nutritional problem among adolescent girls in most developing countries and its prevalence in low and middle income countries ranges from 13.4 to 62.9% [2, 3]. Ethiopian adolescent girls are no exception to this problem; for example, according to the Ethiopia Demographic and Health Survey (EDHS) 2011, 13.4% of women aged 15 to 19 years were anaemic [4, 5]. However, the burden of anaemia among adolescent girls has progressively declined in Ethiopia and the country has made significant improvement in the reduction of nutritional anaemia. For example, the prevalence of anaemia among adolescent girls decreased by 45%, from 24.8% in 2005 [6] to 13.4% in 2011 [6]. However, a study conducted in North East Ethiopia has shown a prevalence of anaemia among adolescent girls as high as 22.8% [7]. Adolescent girls are at risk of iron deficiency and anaemia due to various factors including high requirements for iron, poor dietary intake of iron, high rates of infection and worm infestation, as well as pregnancy [8]. Female adolescents in particular are at a higher risk for anaemia compared to their male counterparts [9, 10].
As mentioned above, anaemia among adolescent girls is multi-factorial, and the most notable factor is the one related to heavy menses. For instance, studies have documented that girls who start menarche with excessive menstrual bleeding are more likely to develop anaemia [9, 11]. Another important risk factor for anaemia, which was consistently significant in many studies, is low maternal educational attainment [9, 12, 13]. Adolescent girls with lower educational attainment also had a higher risk of anaemia [14]. Furthermore, there is evidence that paternal educational status also predicts anaemia among adolescent girls [15]. These findings indicate that education in general is a very important determinant of anaemia among adolescent girls. Another published risk factor for anaemia among adolescent girls is household socio-economic status [7]; this finding is supported by studies conducted across different countries [9, 11, 13]. Inadequate dietary intake of iron-rich food independently predicts anaemia among adolescent girls [15, 16]. Adolescents who do not consume eggs, vegetables and meat were found to be at higher risk of anaemia [7]. This could be explained by reduced access to heme iron, which is abundant in meat and highly bio-available [17]. Infections such as malaria parasitemia and worm infestation are also important contributing factors for anaemia [15]. Malaria related anaemia results from increased destruction of infected and uninfected red blood cells (RBC) as well as impaired erythropoiesis [18].
In a nutshell, we have noted variation among studies in the reported magnitude as well as in the relative importance of factors associated with anaemia among adolescent girls. Of the studies that have investigated anaemia among adolescent girls, few provided estimates of the magnitude of anaemia for the entire adolescent period (10–19 years), and many of these studies are limited to school-going adolescent girls, which may potentially underestimate the magnitude of anaemia among adolescent girls. Failure to consider younger and out of school adolescent girls can also result in designing improper intervention modalities. Moreover, adolescent girls require a higher iron intake due to their rapid growth and menstrual losses [19]; therefore, the risk of anaemia increases during this period of rapid growth. The consequence of anaemia for maternal and infant health also brings adolescent girls to the spotlight, since the adolescent years are one of the windows of opportunity to break the intergenerational cycle of malnutrition.
The WHO Ethiopia country office in collaboration with the government of Ethiopia launched the Accelerating Nutrition Improvement (ANI) project aiming at reducing iron-deficiency anaemia among adolescent girls in 10 selected districts in Ethiopia. This baseline survey was planned to establish the magnitude of anaemia among adolescent girls aged 10–19 years and to identify possible delivery channels to provide anaemia control and prevention services to adolescent girls in the target districts. In the present work, we attempted to determine the burden of anaemia among adolescent girls. More specifically, we aimed to: (i) estimate the magnitude of anaemia among adolescent girls in the early, middle and late adolescence periods; (ii) evaluate adolescent girls' perceptions of anaemia and preferred delivery channels for iron and folic acid supplementation; and (iii) evaluate and recommend alternative platforms for iron supplementation among adolescent girls.
The findings from this study will provide insights into the burden of anaemia during the adolescent period and further help design appropriate implementation modalities for iron and folic acid supplementation among adolescent girls.
Study design and period
We employed a community based cross-sectional study design. The study was representative of adolescent girls aged 10–19 years. The study was conducted between October and December 2015. The WHO country office is supporting the Accelerating Nutrition Improvement (ANI) project. The WHO ANI activities aimed, among other things, to improve dietary habits and reduce iron deficiency anaemia among adolescent girls. The project was implemented in ten districts in the regions of Amhara, Oromia, and Southern Nations, Nationalities, and Peoples' Region (SNNPR), in partnership with John Snow, Incorporated and under the overall coordination of the Federal Ministry of Health (FMoH) of Ethiopia. The ten implementation districts were Beyeda, West Belesa, Laygaynt, Wondogent, Gedebasasa, Derra, Debrelibanos, Angacha, Damotegale, and Boloso bombie. The baseline study was conducted in three districts (namely Debrelibanos, Damotegale and Laygaynt) of the ten WHO Ethiopia country office supported districts.
Sample size determination
In this baseline study, sample size determination considered any form of anaemia (13.4% of adolescent girls aged 10–19 years are anaemic [5]), a 95% confidence level, 4% precision, a design effect of 1.5, and a contingency for non-response of 5%. This gave a sample size of 440 adolescent girls aged 10–19 years per district. Thus, a total of 1320 adolescent girls aged 10–19 years in the 3 selected districts were included.
Sampling procedures
The 3 study districts (Debrelibanos, Damotegale and Laygaynt districts) were selected randomly among the 10 WHO ANI project districts. From each district, we randomly selected 6 rural and 1 urban kebeles (smallest administrative units in Ethiopia). The selection of kebeles for the survey ensured that the sampled households had a balanced geographical spread (lowlands, highlands) within the larger category of rural and urban. The total sample size in each district (n = 440) was allocated to the selected kebeles proportionally, based on the number of households in the kebeles. In each selected kebele, a central point was identified. Data collectors worked in teams of two. Each team of two started out in a different direction (the directions were decided in a way that would transect the villages within a kebele). The first household was randomly selected from among the first 3 households. From then onwards, data collection teams visited every fourth household. If the selected household did not contain an adolescent girl aged 10–19 years, the data collection team moved to the next household (the direction was decided a priori) and resumed their sampling once they identified an eligible household. In households in which there was more than one eligible adolescent girl, the team randomly chose one girl to interview.
Data collection and instrument
A questionnaire, adapted from the Ethiopian Demographic and Health Survey (EDHS) and relevant literature, was developed. The questionnaire was translated into the local languages (Amharic and Afan Oromo). The questionnaire included socio-demographic characteristics of the adolescent girls and their respective parents, such as education, schooling status, religion, marital status, parental occupation, and family size.
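As a check on the reported figure of 440 girls per district, the standard single-proportion sample size formula (assumed here, since the exact formula is not spelled out) with p = 0.134, z = 1.96, d = 0.04, a design effect of 1.5 and a 5% non-response allowance gives
$$ n = \frac{z^{2}\, p(1-p)}{d^{2}} \times \mathrm{deff} \times 1.05 = \frac{1.96^{2} \times 0.134 \times 0.866}{0.04^{2}} \times 1.5 \times 1.05 \approx 279 \times 1.5 \times 1.05 \approx 439, $$
which rounds to the 440 adolescent girls per district stated above.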
Data on household characteristics such as ownership and size of land; type of house and construction materials; availability of fixed assets such as radio, television, phone, bed, chair and other household items; possession of domestic animals; and access to utilities and infrastructure (sanitation facility and source of water) were collected. Household food security was measured using the household food insecurity access scale (HFIAS) developed by the Food and Nutrition Technical Assistance (FANTA) Project through the Academy for Educational Development [20]. The geographic locations and elevations of visited households were determined using a hand-held global positioning system (GPS) device (Garmin GPSMAP®). Due to the lack of vital registration systems, we developed a local events calendar to estimate adolescent girls' year of birth. All enumerators were required to apply the local events calendar to estimate adolescent girls' age. The interviews were conducted during weekends, i.e. on Saturdays and Sundays, in order to capture both in school and out of school adolescent girls in the sample. Adults, preferably mothers or heads of household as appropriate, were interviewed about general household demographic and economic characteristics. Following the interviews, experienced nurses and lab personnel tested adolescent girls for anaemia using the HemoCue B-Haemoglobin analyser. The HemoCue B-Haemoglobin analyser is a portable, rapid and accurate method of measuring haemoglobin. Results are displayed after 45 to 60 s in g/dl on an LCD display. Bio-safety measures, such as the use of sterile gloves and alcohol/clean water during specimen collection as well as a safe disposal system, were employed (used gloves and other materials were collected using safety boxes). The enumerators and supervisors were trained for three days on general techniques of interviewing and supervision, administration of each item in the questionnaire, hemoglobin measurements, and instruction on the ethical treatment of participants. In addition, the questionnaire was pretested in a village not selected for the study, before the final study began, to assess the performance of the study tools. Some revisions were made to the study instruments based on the feedback obtained from the pretest. Interviews were administered by 20 enumerators (experienced nurses and lab personnel) and supervised by three supervisors. The enumerators had a minimum of a diploma-level education (experience in data collection preferable), fluently spoke the local languages, and were residents of the local area or vicinity. The supervisors had a minimum of a bachelor-level education and previous experience in supervising community based data collection. The supervisors addressed questions and queries from interviewers and corresponded with the investigators whenever necessary. In addition, 2–3 local residents were recruited from each of the districts to guide the data collectors through the villages and ease communication with the villagers. A field guide manual was also developed for use by the interviewers and supervisors.
Data quality assurance
A three-day training was given to data collectors and supervisors. The focus of the training was on understanding the instrument and interviewing skills, appropriate use of the HemoCue for anaemia testing, and GPS operation. Role plays and pretests were done before the actual data collection. The supervisors checked all filled questionnaires for completeness and consistency each day before turning them over to the investigator.
The HemoCue instrument is widely used to measure hemoglobin in anaemia surveys. Although the instrument is excellent on its own, data quality is dependent on good blood sample collection (capillary blood sample). The personnel were trained on the correct handling of the instrument and procedures. In order to get accurate and reliable hemoglobin values using the HemoCue, standardization exercises were conducted multiple times during training until the performance standard was met. The performance standard is met when the difference in hemoglobin levels between data collectors and the expert is less than 0.5 g/dl. We also checked the quality of the hemoglobin data in the sample by calculating its standard deviation (SD). A smaller SD of hemoglobin (1.1–1.5) is usually considered to denote better data quality than a larger SD. Bio-safety measures such as the use of sterile gloves and alcohol/water during specimen collection as well as a safe disposal system were employed. Materials like gloves and lancets were collected using safety boxes and were transported for safe disposal, either to be buried or incinerated. For adolescent girls who were anaemic, counseling to take an Iron and Folic Acid (IFA) supplement and referral to a nearby health facility were arranged.
Data management and analysis
We used Epi Data Version 3.1 for data entry and Stata 14.0 (Stata Corp, College Station, TX) for cleaning and further analysis. Descriptive analysis of the general characteristics of the adolescent girls, such as age, schooling, marital status, and knowledge of anaemia, as well as household characteristics such as household food insecurity and household dietary diversity, was done. In addition, data presentation using tables, graphs and appropriate summary figures was included.
Household wealth: We applied a principal component analysis (PCA) to construct a wealth index. In order to construct a relative household wealth index, a suite of socio-economic indicators was collected: land ownership, type of house and building materials, availability of fixed domestic assets (i.e. radio, television, bed, chairs and other household items), ownership of domestic animals, source of drinking and cooking water, and availability and type of latrine. A relative socio-economic status was constructed by dividing the resulting score into quintiles that indicate poorest, poor, medium, rich and richest households.
Household food insecurity and diversity
Household food insecurity (access) was derived from the HFIAS tool. The frequencies of affirmative responses to the HFIAS questions were used to classify households into one of four categories of food insecurity, i.e. food secure, mild, moderate and severe food insecurity. We generated a household-level dietary diversity score using the sum of all foods (food groups) eaten in the respective household during the day and night prior to the date of the survey. We classified households into three levels (lowest, medium and high) of dietary diversity: a household with the lowest dietary diversity score consumed three or fewer food groups, a household with a medium dietary diversity score consumed four or five food groups, while a household with high dietary diversity consumed six or more food groups.
Anaemia prevalence
The hemoglobin (Hb) level was adjusted for high altitude and smoking status before defining anaemia. The adjustment was done to account for a reduction in the oxygen saturation of blood. We used the following formula for the adjustment of hemoglobin for high altitude:
$$ \mathrm{Hb\ adjustment} = -0.032 \times (\mathrm{altitude} \times 0.003280) + 0.022 \times (\mathrm{altitude} \times 0.003280)^{2} $$
where altitude is in metres and the Hb adjustment is the amount subtracted from each individual's observed hemoglobin level. Moreover, hemoglobin adjustments for smoking were done by subtracting 0.3 g/dL from the individual's observed hemoglobin level. Adolescent girls who had Hb values below 12 g/dL were considered anaemic. Adolescent girls with hemoglobin values of 11–11.9 g/dL, 8–10.9 g/dL, and < 8 g/dL were categorized as having mild, moderate, and severe anaemia, respectively. A complex survey data analysis was employed to calculate the district level anaemia prevalence, designating the survey's primary sampling unit (villages) and strata (urban and rural). The variance was adjusted using the Taylor linearized variance estimation method. Anaemia prevalence was also calculated for each of the districts; among the age groups (early, middle and late adolescence periods); by urban and rural residence; and within the following categories: household wealth, food insecurity level, dietary diversity status, schooling status, agro ecology, having heard the term anaemia, and having heard of IFA, as appropriate.
Analysis of the determinants of anaemia
We ran a multivariate logistic regression model using the 'svy' command in Stata 14.0 (StataCorp, College Station, TX) to ensure that standard errors were adjusted for the complex survey design. This was done to identify factors that could potentially be associated with the occurrence of anaemia among adolescent girls. We selected theoretically relevant variables from the literature for the regression model, including household, personal and diet-related variables such as household food security, dietary diversity, socio-economic condition, place of residence, adolescent's age, schooling, smoking, and awareness of the term anaemia and of IFA tablets.
Results
We included a total of 1323 adolescent girls aged 10–19 years. Only two adolescent girls refused to give blood for a test due to fear of the needle prick. Table 1 shows a summary of the demographic and other characteristics of the adolescent girls in the sample. The majority (65.8%) of the adolescent girls were in the early adolescent period (10–14 years). The mean reported age at interview was 13.6 years. The majority (95.8%) of the adolescent girls were never married, while a few of them (3.7%) were married, widowed or divorced. Concerning education, 81.5% had some form of formal education at the primary school level (1, 2, 4–9). Although the majority (87.2%) were currently in school, a significant proportion of adolescent girls (12.6%) were out of school at the time of this survey. Among the out of school adolescent girls, 59.3% had formal education at the primary school level (1, 2, 4–9), 25.5% were illiterate, whereas 74.9% were single and 20.1% were married (not shown).
Table 1 Sample characteristics of adolescents aged 10–19 years in the three districts, Ethiopia 2015 (N = 1323)
Table 2 shows the pattern of food groups consumed in the study sample 24 h prior to the survey. The mean dietary diversity score for the sample was 4.8 (SD = 2.1). Most of the adolescent girls had lowest or medium diversity scores. About 47% had the lowest dietary diversity (consumed three or fewer food groups) and 42% had a medium dietary diversity score (consumed four or five food groups). Only 11.3% had the highest dietary diversity score.
A close look into the consumption of individual food groups indicates that the consumption of animal source food groups that are important sources of iron was very poor. The consumption of milk and milk products, meat, egg and fish was very low (22.5, 5.1, 3.9, and 3.4%, respectively). On the other hand, the consumption of cereals, vegetables, oils, grains and roots was relatively higher (91.3, 87.2, 73.1, 67.0 and 61.8%, respectively).
Table 2 Affirmative responses to food groups consumed in a household 24 h prior to the survey in three selected districts, Ethiopia, 2015
Socio demographic distribution of anaemia
The point and interval estimates of the percentage of adolescents aged 10–19 with anaemia, by selected background characteristics, are depicted in Table 3. The overall prevalence of any anaemia among adolescent girls in the three districts was 29.2% [95% CI: 24.4, 34.5]. Twenty-five percent of adolescents had mild anaemia, 3.8% had moderate anaemia, and 0.3% had severe anaemia.
Table 3 Percentage of adolescents aged 10–19 with anaemia, by area of residence, agro ecology and selected background characteristics, Ethiopia 2015
A higher proportion of early adolescent girls, aged 10–14 years (31.3%), and girls in middle adolescence, aged 15–17 years (28.1%), were anaemic than girls in their late adolescence period, aged 18–19 years (17.2%). We documented that anaemia prevalence varied by area and residence. A higher proportion of adolescent girls in Debrelibanos district were anaemic (38.3%, 95% CI: 26.7, 51.4). The magnitude of anaemia in Damotegale and Laygaynt was 24.0 and 25.2%, respectively. The prevalence of anaemia among adolescents also varied by urban and rural residence; a higher proportion of girls in rural areas were anaemic (31.6%) compared to those in urban areas (19%). Also, adolescent girls residing in the highlands had a relatively higher prevalence of anaemia (31.6%) than those residing in the lowlands (23.9%).
Adolescents' perceptions of anaemia and preferred delivery channels
We asked adolescent girls if they had ever heard of or knew the term anaemia in their local languages. We found that four out of ten adolescent girls had heard the term anaemia in their local language. We further asked those adolescent girls who had heard the term anaemia if they knew about any symptoms related to anaemia. Figure 1 shows the pattern of affirmative responses to selected symptoms of anaemia. Symptoms of anaemia such as dizziness, fatigue, and headache were commonly known by adolescents. Nearly 72% of adolescents knew dizziness, 26% knew fatigue and about 22% knew headache as symptoms of anaemia. However, less than 10% of adolescent girls understood that poor school performance could be a consequence of anaemia.
Percentage of adolescents who know symptoms of anaemia, Ethiopia 2015
In addition to this, out of 576 adolescents who had heard of or knew about anaemia, 35.1% believed that deficiency of iron in foods is a major cause of anaemia. Menstrual bleeding (18.9%) and excessive loss of blood (17.4%) were reported as the next main causes of anaemia. Furthermore, we asked adolescents whether they knew about IFA supplements, their sources of information, supply, and experiences of using them. We found that the majority of adolescents did not know about (had not heard of) IFA. Only 11.2% of adolescent girls had heard about the IFA supplements. Health extension workers and health care providers such as nurses and health officers were the primary sources of information on IFA supplements.
Out of the 153 adolescent girls who had heard about IFA, only 20.3% had taken the IFA supplement. Most of the adolescent girls who took IFA got the supplements from health posts (20 out of 31) or from health centers (9 out of 31). We asked these adolescent girls the reason they took IFA; most of them (26 out of 31) reported that they took the IFA because they believed it was helpful to "fill up blood". We evaluated adolescent girls' willingness to use IFA and their preferred IFA delivery channel(s). The majority of them (86.9%) reported that they would definitely take IFA to improve physical performance as well as learning and work capacity. Among those willing to take IFA, the channels most commonly chosen, in order of preference, were: (1) the health center (43.2%), (2) the health post (35.5%), (3) school clubs (12.0%), (4) at home (8.5%), (5) girls' clubs (3.9%), and (6) the youth centers (1.9%).
Risk factor analysis for adolescent girls' anaemia
Table 4 shows the association of adolescents' characteristics and selected household characteristics with adolescent girls' anaemia. The multivariate model indicated that adolescent's age, place of residence, agro-ecology, household food insecurity and knowing (having heard) the term "anaemia" were significantly associated with anaemia. We found that the risk of anaemia was significantly high for adolescent girls who lived in Debrelibanos (a highland) district. The odds of anaemia among adolescents who lived in Debrelibanos district were nearly 2 times the odds among those who lived in Damotegale (AOR 1.95; 95% CI: 1.27–3.01). Household food insecurity was associated with an increased risk of anaemia. We found that adolescent girls who lived in moderately food-insecure households were more likely to be anaemic than those living in food-secure households (AOR 1.48; 95% CI: 1.05–2.09). The risk of anaemia also varied across the different age brackets of the adolescence period. Girls in their early adolescence were more likely to be anaemic compared to adolescents in their late adolescence (OR 1.98; 95% CI: 1.03–3.82).
Table 4 Association of adolescents' and selected household characteristics with adolescent girls' anaemia in three districts, Ethiopia 2015
We evaluated whether knowing anaemia and its meaning had a protective effect against the risk of anaemia. The results showed that the risk of anaemia was lower among those adolescents who had heard the term anaemia. The odds of anaemia were about 60% higher among adolescents who had not heard the term compared to those who had heard the term anaemia (OR 1.58; 95% CI: 1.09–2.29). We did not find a statistically significant association between the risk of anaemia and characteristics such as household wealth, dietary diversity, schooling status, smoking status and place (location) of residence.
Discussion
In this study, we aimed to determine the magnitude and predictors of anaemia among adolescent girls, evaluate adolescent girls' perceptions of anaemia and preferred delivery channels for iron and folic acid supplementation, and recommend a platform for iron supplementation among adolescent girls. The results of the survey indicated that anaemia rates ranged from 24 to 38%, with an average rate of 29%. Adolescent girls in their early adolescence period and those who lived in food-insecure households had a higher burden of anaemia. Less than half of the girls had heard the term anaemia, and about one third knew of the relationship between anaemia and the intake of iron-rich foods.
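The adjusted odds ratios reported in the risk factor analysis above come from a design-adjusted multivariate logistic regression. A minimal, self-contained sketch in R is given below; the authors fitted the model with Stata's 'svy: logistic', so this is a parallel illustration only, and all variable names are hypothetical.

```r
# Sketch (R) of a design-adjusted logistic regression for the determinants of
# anaemia, analogous to the Stata 'svy' model whose AORs are reported above.
library(survey)

des <- svydesign(ids = ~village, strata = ~urban_rural,
                 weights = ~sampling_weight, data = adolescents, nest = TRUE)

fit <- svyglm(
  anaemic ~ age_group + district + urban_rural + agro_ecology +
            food_insecurity + wealth_quintile + dietary_diversity +
            in_school + heard_anaemia + heard_ifa,
  design = des,
  family = quasibinomial()   # binomial link with design-based variance
)

# Adjusted odds ratios and 95% confidence intervals
exp(cbind(AOR = coef(fit), confint(fit)))
```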
The great majority of girls interviewed were willing to take iron–folic acid supplements to improve their health as well as their capacity to learn and to work. Most indicated they would prefer receiving supplements through the health system. The magnitude of anaemia among adolescent girls in the studied districts was alarmingly high. We calculated the sample size for this study based on a 13% expected prevalence of anaemia derived from an estimate made from the EDHS 2011. However, our result is more than double what the EDHS reported in 2011 and what has been reported in other studies [20, 21]. This variation might be due to differences in the study populations, since we included a wider age range in this study. We also believe that the inclusion of both in-school and out-of-school adolescents, unlike other studies, might have produced different estimates.
A higher prevalence of anaemia among early adolescent girls may be attributed to a higher prevalence of pubertal menorrhagia around the time of menarche. A similarly higher prevalence of anaemia in the early adolescence period (10–14 years) than in the late adolescence period (15–19 years) was reported in India [21, 22]. Contrary to our finding, a study conducted in India indicated that age and menarcheal status did not affect the prevalence of anaemia among adolescent girls [23].
We used the dietary diversity score, derived from the sum of food groups eaten by household members, as a proxy indicator for dietary intake and quality. The dietary diversity score for the sampled adolescent girls was very low, and anaemia prevalence varied across these scores. Interestingly, the prevalence of anaemia was still considerably high for adolescents living in households with higher dietary diversity scores. This may be due to the fact that the consumption of iron-rich food was generally low among the adolescent girls.
Similar to a study conducted in Bangladesh [24], our findings showed that adolescents from food-insecure households are more likely to suffer from anaemia than their food-secure counterparts. Food insecurity can predispose individuals to anaemia through inadequate consumption of micronutrients [25], and food-insecure households tend to consume fewer micronutrients as a result of under-consumption of food or over-consumption of energy-dense diets that contain fewer of the micronutrients that facilitate the bioavailability of iron [26].
The current study also revealed that the prevalence of any form of anaemia was higher (31.6%) among rural adolescent girls compared with their urban counterparts (19%). This finding is analogous to studies conducted in India. The higher prevalence of anaemia among rural adolescents may be due to a higher likelihood of food shortage and thus consumption of food poor in iron and other micronutrients. As anticipated, knowing (having heard) the term anaemia was also found to be significantly associated with anaemia status among the adolescent girls. Therefore, nutrition education and counseling can be used to improve the nutrition knowledge of adolescent girls and help them adopt a healthy diet for healthy living.
According to the WHO classification of the public health significance of anaemia in populations, on the basis of prevalence estimated from blood levels of hemoglobin, all of the study districts (Debrelibanos, Damotegale and Laygaynt) can be classified under the category of moderate public health significance (prevalence of anaemia between 20.0 and 39.9%).
According to the WHO criteria for IFA supplementation for adolescents, the districts could be candidates for a preventive iron supplementation program [27]. We aimed to determine adolescent girls' perceptions of anaemia and their levels of awareness of its perceived causes and symptoms. Furthermore, we intended to find out their views on and experiences of using IFA, and their reported willingness to use it. We also considered how IFA supplementation fits within the larger picture of anaemia prevention, and how adolescents would respond to the concept of introducing IFA supplementation as a strategy to combat anaemia. We noted that more than four out of ten adolescents had heard the term anaemia. In addition, the girls recognized the relationship between anaemia and the intake of iron-rich foods, menstrual bleeding and excessive loss of blood. On top of this, some adolescents perceived that anaemia is related to poor school performance. These points could be used for messaging when introducing IFA supplementation.
In addition, although the majority of adolescents had not heard about IFA, they were willing to take it. The finding that the great majority of adolescents were willing to take IFA supplements should be taken as an opportunity to initiate a supplementation program. However, choosing delivery channels needs to take into consideration the preferences of adolescents. The differing views among adolescents on delivery channels highlight the differences in trust, experience, privacy and other aspects that adolescents require. It is also worth noting that the medicalization of IFA supplements could be one of the possible reasons that adolescents preferred the health system as an outlet for IFA delivery. The fact that 12% of adolescents were out of school has programmatic implications for choosing the type of delivery channels to use. Programs relying solely on school-based delivery channels might miss adolescents who are out of school. Alternative delivery channels, including health centers and health posts as well as home-to-home delivery, might be needed to reach adolescents who are out of school. We recommend a formative assessment with the objective of investigating in depth the appropriateness, feasibility and cost-effectiveness of different delivery channels.
In conclusion, the prevalence of anaemia among adolescent girls was found to be a moderate public health problem. According to the WHO criteria for IFA supplementation for adolescents, and given the high rate of anaemia found among adolescent girls, the districts could be candidates for an intermittent iron and folic acid supplementation program.
ANI: Accelerating Nutrition Improvement
EDHS: Ethiopian Demographic and Health Survey
HFIAS: Household Food Insecurity Access Scale
IFA: Iron and Folic Acid
PCA: Principal Component Analysis
RBC: Red Blood Cell
SD: Standard Deviation
References
Nakku JE, Okello ES, Kizza D, Honikman S, Ssebunnya J, Ndyanabangi S, et al. Perinatal mental health care in a rural African district, Uganda: a qualitative study of barriers, facilitators and needs. BMC Health Serv Res. 2016;16(1):295.
World Health Organization. Hemoglobin concentrations for the diagnosis of anemia and assessment of severity. In: Vitamin and mineral nutrition information system. Geneva: World Health Organization; 2011.
Yasutake S, He H, Decker MR, Sonenstein FL, Astone NM. Anemia among adolescent and young women in low-and-middle-income countries. Int J Child Health Nutr. 2013;2.
Mulugeta A, Hagos F, Stoecker B, Kruseman G, Linderhof V, Abraha Z, et al.
Nutritional status of adolescent girls from rural communities of Tigray, northern Ethiopia. Ethiop J Health Dev. 2009;23.
Central Statistical Agency [Ethiopia] and ICF International. Ethiopia Demographic and Health Survey 2011. Addis Ababa, Ethiopia and Calverton, Maryland, USA: Central Statistical Agency and ICF International; 2012.
Central Statistical Agency [Ethiopia] and ORC Macro. Ethiopia Demographic and Health Survey 2005. Addis Ababa, Ethiopia and Calverton, Maryland, USA: Central Statistical Agency and ORC Macro; 2006.
Adem OS, Tadsse K, Gebremedhin A. Iron deficiency aneamia is moderate public health problem among school going adolescent girls in Berahle district, Afar, Northeast Ethiopia. J Food Nutr Sci. 2015;3.
World Health Organization. Prevention of iron deficiency anaemia in adolescents. 2011.
Pattnaik S, Patnaik L, Kumar A, Sahu T. Prevalence of anemia among adolescent girls in a rural area of Odisha and its epidemiological correlates. Indian J Mater Child Health. 2013;15:1–1.
Sinha AK, Singh Karki GM, Karna KK. Prevalence of anemia amongst adolescents in Biratnagar, Morang Dist. Nepal. International Journal of Pharmaceutical & Biological Archives. 2012;3.
Rati SA, Jawadagi S. Prevalence of anemia among adolescent girls studying in selected schools. Int J Sci Res. 2014;3.
Premalatha T, Valarmathi S, Parameshwari Srijayanth, Jasmine S Sundar, Kalpana S. Prevalence of anemia and its associated factors among adolescent school girls in Chennai, Tamil Nadu, India. Epidemiol. 2012;2.
Al-Zabedi EM, Kaid FA, Sady H, Al-Adhroey AH, Amran AA, Al-Maktari MT. Prevalence and risk factors of iron deficiency anemia among children in Yemen. Am J Health Res. 2014;2.
Kulkarni MV, Durge PM, Kasturwar NB. Prevalence of anemia among adolescent girls in an urban slum. Natl J Community Med. 2012;3(1):108–11.
Nelima D. Prevalence and determinants of anaemia among adolescent girls in secondary schools in Yala division, Siaya District, Kenya. Universal J Food Nutr Sci. 2015;3.
Barugahara EI, Kikafunda J, Gakenia WM. Prevalence and risk factors of nutritional anaemia among female school children in Masindi District, Western Uganda. Afr J Food Agric Nutr Dev. 2013;13(3).
Kudaravalli J, Madhavi S, Nagaveni D, Deshpande N, Rao MR. Anemia, iron deficiency, meat consumption, and hookworm infection in women of reproductive age in rural area in Andhra Pradesh. Ann Biol Res. 2011;2.
Douglas NM, Anstey NM, Buffet PA, Poespoprodjo JR, Yeo TW, White NJ, et al. The anaemia of Plasmodium vivax malaria. Ann Biol Res. 2011;2.
Zimmermann MB, Hurrell RF. Nutritional iron deficiency. Lancet. 2007;370(9586):511–20.
Tesfaye M, Yemane T, Adisu W, Asres Y, Gedefaw L. Anemia and iron deficiency among school adolescents: burden, severity, and determinant factors in Southwest Ethiopia. Adolesc Health Med Ther. 2015;6:189.
Seid O, Tadesse K, Gebremedhin A. Iron deficiency anemia is moderate public health problem among school going adolescent girls in Berahle district, Afar, Northeast Ethiopia. J Food Nutr Sci. 2015;3:10–6.
Rajaratnam J, Abel R, Asokan J, Jonathan P. Prevalence of anemia among adolescent girls of rural Tamilnadu. Indian Pediatr. 2000;37:532–6.
Dhokar A, Ray S. Prevalence of anaemia among urban and rural adolescents. IJAR. 2016;2(6):965–7.
Ghose B, Tang S, Yaya S, Feng Z. Association between food insecurity and anemia among women of reproductive age. PeerJ. 2016;4:e1945.
Skalicky A, Meyers AF, Adams WG, Yang Z, Cook JT, Frank DA.
Child food insecurity and iron deficiency anemia in low-income infants and toddlers in the United States. Matern Child Health J. 2006;10(2):177.
Backstrand JR, Allen LH, Black AK, de Mata M, Pelto GH. Diet and iron status of nonpregnant women in rural Central Mexico. Am J Clin Nutr. 2002;76(1):156–64.
World Health Organization. Guideline: intermittent iron and folic acid supplementation in menstruating women. Geneva: World Health Organization; 2011.
Acknowledgements
We would like to say a big thank you to the Oromia Region Health Bureau staff (Mr. Yadeta and Mr. Daniel), the Amhara Region Health Bureau staff (Ms. Tenagne) and the SNNP Regional Health Bureau staff (Mr. Mesgana) for facilitating the baseline study. We would also like to give sincere thanks to the supervisors and interviewers who showed great dedication and care in obtaining the data. We would also like to acknowledge the JSI staff (Mr. Tollosa, Mr. Yohannes and Mr. Bekele), without whose valuable assistance and input the data collection process would have been far more challenging. We would also like to thank the Ethiopian Public Health Association (EPHA) for the outstanding administrative and logistical support provided to the entire field work. Finally, we must thank the adolescents and their parents who gave up their valuable time and took part in the interviews with a great deal of patience and enthusiasm.
The WHO Ethiopia country office funded the research work. We declare that the funding body had no role in the design of the study; in the collection, analysis and interpretation of the data; in the writing of this manuscript; or in the decision to submit it for publication. We would like to give our sincere and heartfelt appreciation to the WHO Ethiopia Country Office and WHO/IST/NUT Office for supporting this study. All materials, questionnaires and data are available upon request from the principal investigator.
Author information
Department of Reproductive Health and Health Service Management, School of Public Health, College of Health Sciences, Addis Ababa University, Addis Ababa, Ethiopia: Seifu Hagos Gebreyesus & Bilal Shikur Endris
World Health Organization (WHO) Ethiopia Country Office, Addis Ababa, Ethiopia: Getahun Teka Beyene & Hana Nekatebeb Bekele
Department of Public Health, College of Medicine and Health Sciences, Jigjiga University, Jigjiga, Ethiopia: Fekadu Elias
School of Public Health, College of Health Sciences, Sodo University, Sodo, Ethiopia: Alinoor Mohamed Farah
Contributions: SHG, BSE, GTB and HNB designed the study. SHG, BSE, GTB, HNB, AMF and FE participated in the data analysis and the drafting of the manuscript. All the authors read and approved the final manuscript.
Correspondence to Seifu Hagos Gebreyesus.
Ethics approval and consent to participate
The study protocol was approved by the ethics committees of the Amhara and Oromia Regional Health Bureaus, Federal Ministry of Health. The surveys were anonymous, so there was minimal or no risk of breaching respondents' confidentiality. All adolescent girls approached to take part in the study were informed about the voluntary nature of participation and were, likewise, informed that they could choose not to answer any questions they felt disinclined to answer. During the training of interviewers, supervisors and coordinators, emphasis was placed on the importance of obtaining informed consent. Consent to participate: for those aged < 16 years, verbal consent was taken from their parents or guardians on their behalf.
For those aged > 16 years, verbal consent was obtained from the adolescents themselves. In addition, permission was sought from the parents or legal guardians of all adolescent girls involved in this study. The interviewers signed the consent form, thereby verifying and taking responsibility for obtaining informed consent. We decided to use verbal rather than written consent because written consent involving the signing of a paper is perceived to have legal implications for the responses made during the interview, and this could create bias and non-response during the interviews. These procedures of obtaining verbal rather than written consent and assent were approved by the ethics committee.
Gebreyesus, S.H., Endris, B.S., Beyene, G.T. et al. Anaemia among adolescent girls in three districts in Ethiopia. BMC Public Health 19, 92 (2019). https://doi.org/10.1186/s12889-019-6422-0
Mathematician:Mathematicians/Sorted By Birth/501 - 1000 CE
For more comprehensive information on the lives and works of mathematicians through the ages, see the MacTutor History of Mathematics archive, created by John J. O'Connor and Edmund F. Robertson.
The army of those who have made at least one definite contribution to mathematics as we know it soon becomes a mob as we look back over history; 6,000 or 8,000 names press forward for some word from us to preserve them from oblivion, and once the bolder leaders have been recognised it becomes largely a matter of arbitrary, illogical legislation to judge who of the clamouring multitude shall be permitted to survive and who be condemned to be forgotten.
-- Eric Temple Bell: Men of Mathematics, 1937, Victor Gollancz, London
$\text {501}$ – $\text {600}$
Metrodorus $($$\text {c. 500}$$)$
Greek grammarian and mathematician, who collected mathematical epigrams which appear in The Greek Anthology Book XIV. He is believed to have authored nos. $116$ to $146$. Nothing else is known about him.
Varahamihira $($$\text {505}$ – $\text {587}$$)$
Indian astronomer, mathematician, and astrologer. One of several early mathematicians to discover what is now known as Pascal's triangle. Defined the algebraic properties of zero and negative numbers. Improved the accuracy of the sine tables of Aryabhata I. Made some insightful observations in the field of optics.
Severus Sebokht $($$\text {575}$ – $\text {667}$$)$
Syrian scholar and bishop. The first Syrian to mention the Indian number system.
Brahmagupta $($$\text {598}$ – $\text {668}$$)$
Indian mathematician and astronomer. Gave definitive solutions to the general linear equation, and also the general quadratic equation. Best known for the Brahmagupta-Fibonacci Identity.
Bhaskara I $($$\text {c. 600}$ – $\text {c. 680}$$)$
Indian mathematician who was the first on record to use Hindu-Arabic numerals complete with a symbol for zero. Gave an approximation of the sine function in his Āryabhaṭīyabhāṣya of $629$ CE.
$\text {601}$ – $\text {700}$
Bede $($$\text {c. 673}$ – $\text {735}$$)$
English Benedictine monk at the monastery of St. Peter and its companion monastery of St. Paul in the Kingdom of Northumbria of the Angles. Studied the academic discipline of computus, that is the science of calculating calendar dates.
Worked on computing the date of Easter. Helped establish the "Anno Domini" practice of numbering years. Produced works on finger-counting, the sphere, and division. These works are probably the first works on mathematics written in England by an Englishman.
$\text {701}$ – $\text {800}$
Alcuin of York $($$\text {c. 735}$ – $\text {804}$$)$
Hugely influential English scholar, clergyman, poet, and teacher. Wrote elementary texts on arithmetic, geometry and astronomy. Leader of a renaissance in learning in Europe.
Muhammad ibn Musa al-Khwarizmi $($$\text {c. 780}$ – $\text {c. 850}$$)$
Mathematician who lived and worked in Baghdad. Famous for his book The Algebra, which contained the first systematic description of the solution to linear and quadratic equations. Sometimes referred to as "the father of algebra", but some claim the title should belong to Diophantus.
Leon the Mathematician $($$\text {c. 790}$ – $\text {c. 870}$$)$
Archbishop of Thessalonike between $840$ and $843$. Byzantine sage at the time of the first Byzantine renaissance of letters and the sciences in the $9$th century. He was born probably in Constantinople, where he studied grammar. He later learnt philosophy, rhetoric, and arithmetic in Andros.
$\text {801}$ – $\text {900}$
Mahaviracharya $($$\text {c. 800}$ – $\text {c. 870}$$)$
Indian mathematician best known for separating the subject of mathematics from that of astrology. Gave the sum of a series whose terms are squares of an arithmetical sequence, and empirical rules for the area and perimeter of an ellipse.
Al-Kindi $($$\text {c. 801}$ – $\text {c. 873}$$)$
Persian mathematician, philosopher and prolific writer famous for providing a synthesis of the Greek and Hellenistic tradition into the Muslim world. Played an important role in introducing the Arabic numeral system to the West.
Thabit ibn Qurra $($$\text {836}$ – $\text {901}$$)$
Sabian mathematician, physician, astronomer, and translator who lived in Baghdad in the second half of the ninth century during the time of the Abbasid Caliphate. Made important discoveries in algebra, geometry, and astronomy. One of the first reformers of the Ptolemaic system in astronomy. A founder of the discipline of statics.
Abu Kamil $($$\text {c. 850}$ – $\text {c. 930}$$)$
Egyptian mathematician during the Islamic Golden Age. Considered the first mathematician to systematically use and accept irrational numbers as solutions and coefficients to equations. His mathematical techniques were later adopted by Fibonacci, thus allowing Abu Kamil an important part in introducing algebra to Europe.
$\text {901}$ – $\text {1000}$
Abu'l-Wafa Al-Buzjani $($$\text {940}$ – $\text {998}$$)$
Persian mathematician and astronomer who made important innovations in spherical trigonometry. His work on arithmetic for businessmen contains the first instance of using negative numbers in a medieval Islamic text. Credited with compiling the tables of sines and tangents at $15'$ intervals. Introduced the secant and cosecant functions, and studied the interrelations between the six trigonometric lines associated with an arc. His Almagest was widely read by medieval Arabic astronomers in the centuries after his death. He is known to have written several other books that have not survived. Known for his study of geometrical dissections. Pioneered the technique of geometrical construction using a rusty compass.
Abu Bakr al-Karaji $($$\text {c. 953}$ – $\text {c. 1029}$$)$
Persian mathematician best known for the Binomial Theorem and what is now known as Pascal's Rule for their combination.
Also one of the first to use the Principle of Mathematical Induction.
Abu Ali al-Hasan ibn al-Haytham $($$\text {965}$ – $\text {c. 1039}$$)$
Persian philosopher, scientist and all-round genius who made significant contributions to number theory and geometry. His work influenced the work of René Descartes and the calculus of Isaac Newton.
Abu Rayhan Muhammad ibn Ahmad Al-Biruni $($$\text {973}$ – $\text {1048}$$)$
Khwarazmi scholar and polymath. Thoroughly documented the Indian calendar with relation to the various Islamic calendars of his day. Appears to be the first to have defined a second (of time) as being $\dfrac 1 {24 \times 60 \times 60}$ of a day.
Halayudha $($$\text {c. 1000}$$)$
Indian mathematician who wrote the Mṛtasañjīvanī, a commentary on Pingala's Chandah-shastra, containing a clear description of Pascal's triangle (called meru-prastaara).
Defining priority areas for blue whale conservation and investigating overlap with vessel traffic in Chilean Patagonia, using a fast-fitting movement model
Luis Bedriñana-Romano1,2, Rodrigo Hucke-Gaete1,2, Francisco A. Viddi1,2, Devin Johnson3, Alexandre N. Zerbini3,4,5,6, Juan Morales7, Bruce Mate8 & Daniel M. Palacios8
Defining priority areas and evaluating risk are of utmost relevance for endangered species' conservation. For the blue whale (Balaenoptera musculus), we aim to assess environmental habitat selection drivers, priority areas for conservation and overlap with vessel traffic off northern Chilean Patagonia (NCP). For this, we implemented a single-step continuous-time correlated-random-walk model which accommodates observational error and movement parameter variation in relation to oceanographic variables. Spatially explicit predictions of whales' behavioral responses were combined with density predictions from previous species distribution models (SDM) and vessel tracking data to estimate the relative probability of vessels encountering whales and to identify areas where interaction is likely to occur. These estimations were conducted independently for the aquaculture, transport, artisanal fishery, and industrial fishery fleets operating in NCP. Blue whale movement patterns strongly agreed with SDM results, reinforcing our knowledge regarding oceanographic habitat selection drivers. By combining movement and density modeling approaches we provide stronger support for purported priority areas for blue whale conservation and how they overlap with the main vessel traffic corridor in the NCP. The aquaculture fleet was one order of magnitude larger than any other fleet, indicating it could play a decisive role in modulating potential negative vessel-whale interactions within NCP.
Animal movement integrates several scales of ecological phenomena, including individual physiological state, locomotive, and navigational capabilities, and how these interact with external (environmental) factors affecting prey distribution. This has been explicitly acknowledged by theoretical approaches that place movement into a wider ecological and evolutionary framework1,2,3. Coupled with this growth in movement ecological theory, the rapid increase in animal tracking technology has allowed researchers to expand the frontiers of the questions that can be answered4,5. It is not surprising then, that movement approaches are being increasingly used as an ecological tool for informing conservation and management actions6,7,8. In fulfilling this goal, telemetry data have become particularly useful for oceanic species with wide-ranging life histories, for which other more traditional monitoring approaches are logistically challenging9. For the endangered Eastern South Pacific (ESP) blue whale (Balaenoptera musculus) population, northern Chilean Patagonia (NCP) is regarded as its most important summer foraging and nursing ground10,11,12. Previous studies on blue whale occurrence and movement patterns indicated that until the onset of austral autumn/winter migration, blue whales focus most of their activities within these productive coastal waters12,13,14,15. However, variations in how this population utilizes this region and other areas within the ESP appear to result from changes in prevailing oceanographic conditions16. Species distribution models (SDM) have shown that austral spring chlorophyll-a concentration, prior to the whales' arrival, and thermal fronts are important oceanographic proxies for describing the abundance and distribution patterns of blue whales within the NCP16. Krill, the primary prey of blue whales17, can take advantage of seasonally enhanced productivity for biomass production, with some time lag linking early life-history stages (e.g. larval recruitment) with adult densities17,18,19,20,21. Adult krill biomass is subsequently concentrated by thermal fronts into high-density patches which blue whales prey upon22,23,24,25. This prey aggregation effect driven by thermal fronts could be critical for blue whales, and other large baleen whales, given their energetically costly feeding behavior26,27,28,29. We hypothesize that both time-lagged distribution of primary productivity and thermal front aggregating effect generates foraging conditions for blue whales within NCP. To further test predictions from this hypothesis, here we propose that individual blue whales modify their behavior within areas of high spring chlorophyll-a concentrations and/or thermal front occurrence. As foraging behavior cannot be directly assessed solely by inspecting tracking data, we consider area-restricted search behavior (ARS, lower velocity and less directional persistence) as a proxy for this type of behavior30,31. Potential local threats affecting blue whales in NCP include collisions with vessels due to intense maritime traffic16,32, negative interactions with aquaculture and fisheries activities33,34,35, direct and indirect effects from poorly regulated whale-watching operations36, and general disturbance from noise and acoustic pollution37. 
As such, identifying priority areas for focusing conservation actions is of utmost relevance considering a population numbering the low hundreds with a very low potential biological removal from anthropogenic origin estimated at 1 individual every 1.8 years16 for continued growth. Vessel collisions with cetaceans have become recognized worldwide as a significant source of anthropogenic mortality and serious injuries38,39,40,41. Empirical work on this issue has been conducted in a few areas and populations, mostly in the northern Hemisphere32,39,42,43, with little effort conducted in South America32,44. In most countries, unreported cases, limited monitoring and insufficiently documented incidents have precluded any accurate assessment of the true collision prevalence and trend analyses32. Given the earlier results from SDMs, we considered using telemetry data as a complementary tool for improving our understanding of blue whale habitat selection process16,17,45 and investigating overlap with vessel traffic in NCP. In fulfilling these goals, here we provide: i) a novel fast-fitting model application for data gathered from satellite-monitored Argos tags (hereafter Argos tags), ii) model-derived spatial predictions of how whales use the area based on prevailing oceanographic conditions during the tracking period, iii) spatial estimates on the relative probability of encountering blue whales, based on the integration of movement model predictions with those of a previous SDM, and iv) spatial estimates on the relative probability of whales encountering vessels as a measure of risk for four different vessel fleets operating in NCP. The NCP (41–47°S) is characterized by an intricate array of inner passages, archipelagos, channels, and fjords enclosing roughly 12,000 km of convoluted and protected shoreline (Fig. 1). Primary biological productivity here is modulated by the mixing of sub-Antarctic waters, rich in macro-nutrients, and the abundant input of freshwater (derived from river discharges, heavy precipitation and glacier/snow melt), rich in micro-nutrients, particularly silica46,47,48. Within the NCP, several micro-basins have been described, some of them having particularly high seasonal primary and secondary production46,47,48,49,50, providing resources that upper-trophic level species rely on12,17,50,51,52,53,54. The area also hosts one of the largest salmon aquaculture industries in the world, among other anthropogenic activities that negatively affects local biodiversity33,34,55. Map of the Chilean Northern Patagonia depicting relevant geographical landmarks, tagging locations and the year of each deployment. Maps were created in R ver. 4.0.2 (https://www.r-project.org) and ensembled in QGIS ver. 3.8.0 (https://www.qgis.org) for final rendering. Maps were created using data on bedrock topography from the National Centers for Environmental Information (https://maps.ngdc.noaa.gov/viewers/grid-extract/index.html). Values above 0 were considered land coverage. Tagging and telemetry data Argos tags were deployed on 15 blue whales during the austral summer and early autumn at their summering grounds off the NCP (Fig. 1), following procedures described elsewhere14. Briefly, whales were tagged in waters of Corcovado Gulf during February 2004 (n = 4), and the Chiloe Inner Sea during late March and early April 2013 (n = 2), 2015 (n = 3), 2016 (n = 2) and 2019 (n = 4). 
Tags were deployed using a custom-modified compressed-air line-thrower (ARTS/RN, Restech Norway56) set at pressures ranging between 10 and 14 bar. Several models of custom-designed fully implantable satellite tags were used, including: ST-15 [n = 4], manufactured by Telonics (Mesa, Arizona, USA), SPOT5 [n = 3], SPOT6 [n = 4], and MK10 [n = 4], manufactured by Wildlife Computers (Redmond, Washington, USA). Raw Argos data included locations within NCP and outside the area after the onset of migratory movement. Because we were concerned with understanding movement patterns within the NCP, we applied a cut-off point of 24 h before a clear sign of migration was observed. This subset of the data was filtered using the R package "argosfilter"57, removing relocations with velocities exceeding 3 m s−1; this upper limit was defined based on previous maximum speed assessments for this population14.
Oceanographic covariates
Chlorophyll-a and sea surface temperature (SST) data were extracted using the R package "rerddapXtracto"58, which accesses the ERDDAP server at the NOAA/SWFSC Environmental Research Division. Chlorophyll-a data corresponded to satellite level-3 images from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard the Aqua satellite (Dataset ID: erdMH1chlamday), as monthly averages on a 4.64 × 4.64 km grid. Distance to areas of high chlorophyll-a concentration during spring (DAHCC), defined as the distance to polygons enclosing areas with an average chlorophyll-a concentration equal to or higher than 5 mg/m3 during austral spring months (September, October, November), was the best explanatory variable in an SDM applied to line-transect survey data for blue whales in NCP16. Here we used the same procedure to construct this covariate but used the 95th percentile of each year's concentration distribution within the study area as the cut-off point for defining areas of high chlorophyll-a concentration. This was preferred because whales might select areas with the highest productivity regardless of their absolute values. Maps for DAHCC were created for each year for which telemetry data were available, and their values were log-transformed to reduce overdispersion before use in the models. For SST, data corresponded to daily averages of level-4 satellite images derived from the Multi-Scale Ultra-High Resolution (MUR) SST Analysis database (Dataset ID: jplMURSST41). MUR-SST maps merge data from different satellites, combined with in-situ measurements, using the Multi-Resolution Variational Analysis statistical interpolation59, in a grid size of 0.01 × 0.01 degrees (ca. 1 km2). From MUR-SST maps, thermal gradient maps were generated for each day that whale locations were available using the R package "grec" v. 1.3.060 with the Contextual Median Filter algorithm61 as the method for calculating gradients. MUR-SST and thermal gradient maps were used to extract the associated covariate values for each whale location.
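To make the construction of the DAHCC covariate described above more concrete, the following is a minimal sketch using the R package terra. The authors' exact raster workflow is not specified beyond the packages named, so the file names, object names and the commented extraction step are hypothetical, and the implementation shown (thresholding at the 95th percentile and taking distance to the nearest high-chlorophyll cell) is an approximation of the procedure described in the text.

```r
# Sketch (R, 'terra') of the DAHCC covariate: distance to areas whose spring
# chlorophyll-a concentration is at or above the 95th percentile.
library(terra)

# Mean chlorophyll-a over the austral spring months (Sep-Nov) of a given year
chl_spring <- mean(rast("chl_2015_09.tif"),
                   rast("chl_2015_10.tif"),
                   rast("chl_2015_11.tif"), na.rm = TRUE)

# 95th percentile of that year's concentration distribution in the study area
thr <- as.numeric(quantile(values(chl_spring), probs = 0.95, na.rm = TRUE))

# Cells at or above the threshold define "areas of high chlorophyll-a";
# all other cells are set to NA so that distance() measures distance to them.
high_chl <- chl_spring
high_chl[high_chl < thr] <- NA

dahcc     <- distance(high_chl)   # distance to the nearest high-chlorophyll cell
dahcc_log <- log(dahcc + 1)       # log-transform to reduce overdispersion

# Covariate values could then be averaged within a 3 km radius of each estimated
# whale location (a SpatVector 'whale_pts'), e.g.:
# vals <- extract(dahcc_log, buffer(whale_pts, width = 3000), fun = mean, na.rm = TRUE)
```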
Vessel traffic data
To characterize vessel traffic patterns in the area, daily vessel tracking information (time-stamped GPS locations for individualized vessels) was obtained from the Chilean National Fisheries and Aquaculture Service (SERNAPESCA), available at www.sernapesca.cl. This database was released by the Chilean government during 2020 and comprises data involving the industrial and artisanal fisheries, aquaculture, and transport fleets, from March 2019 to present (updated daily). According to Chilean legislation it is mandatory for these fleets to provide tracking information to SERNAPESCA, except for artisanal fishing vessels smaller than 15 m and, in the case of artisanal purse seiners, those smaller than 12 m (www.bcn.cl). The artisanal fishing fleet comprises vessels up to 18 m in length and less than 80 cubic meters of storage capacity; above these metrics fishing vessels are considered part of the industrial fishing fleet. The transport fleet comprises vessels with no size limitations, engaged solely in the transportation of fishery resources. The aquaculture fleet is the most diverse one, considering its different operations (e.g. staff commuting, live and processed resource transportation, and supplies and infrastructure movement), with vessel sizes ranging from 5 to 100 m. All procedures described next were conducted independently for each fleet during data analyses. We used an 8 × 8 km grid to calculate vessel density (VDi) for each grid-cell i. Vessel data are provided daily, with data gaps occurring on some days. Therefore, VDi was calculated by summing the daily number of unique vessels crossing each grid-cell i in a month and dividing by the total number of days with available data (range: 25–31 days). This procedure was conducted for austral summer and austral autumn months (March-June of 2019 and January-June of 2020) and the monthly layers were then averaged into a single layer. Potential large differences in traffic patterns between these months were visually inspected through plots, which can be found as Supplementary Figures S1–S4 online. Data from austral winter and austral spring months were not used as most of the blue whale population is absent from the study area during these months13,14.
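The vessel density calculation just described amounts to counting unique vessels per grid cell per day and averaging over the days with available data. A minimal sketch in R (dplyr) for a single month is shown below; the data frame `vessel_tracks` and its columns are hypothetical, and coordinates are assumed to be projected in metres so that an 8 km cell can be defined by simple division.

```r
# Sketch (R, 'dplyr') of the vessel density (VD) calculation: the mean number of
# unique vessels crossing each 8 x 8 km grid cell per day, for one month.
library(dplyr)

cell_size <- 8000  # 8 km, assuming projected coordinates in metres
n_days    <- n_distinct(vessel_tracks$date)   # days with available data (25-31)

vd <- vessel_tracks %>%                       # one row per GPS fix per vessel
  mutate(cell_x = floor(x / cell_size),
         cell_y = floor(y / cell_size)) %>%
  group_by(cell_x, cell_y, date) %>%
  summarise(n_vessels = n_distinct(vessel_id), .groups = "drop") %>%
  group_by(cell_x, cell_y) %>%
  summarise(VD = sum(n_vessels) / n_days, .groups = "drop")

# Monthly VD layers computed this way (per fleet) are then averaged into a
# single summer-autumn layer, as described in the text.
```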
Modeling approach
Telemetry data analysis has motivated the development and increasing use of various state-space modeling (SSM) approaches, which deal with path reconstruction and complex latent behavioral states30,31,62,63. Most practical applications of SSM, however, are computationally intensive and therefore require long fitting times. Recently, SSM has been implemented via Template Model Builder (TMB), an R package that relies on the Laplace approximation combined with automatic differentiation to fast-fit models with latent variables64,65,66. Based on "TMB" tools, we fitted a continuous-time correlated-random-walk model (CTCRW) which estimates two state variables (velocity and true locations) from error-prone observed locations, and two parameters: β, controlling autocorrelation in directionality and velocity, and σ, controlling the overall variability in velocity62. Variances for modelling error in locations were derived from the Argos error ellipse67. As error ellipse data were not available for tags deployed in 2004, we calculated the mean error ellipse for all location classes in the newer tags (2013–2019) and assigned these values to the corresponding location classes for tags deployed in 2004. The original version of this model (with no behavioral variation) was fitted to obtain estimates of the true locations in the whales' paths, and these were used to extract the corresponding covariate values from the DAHCC, SST and thermal gradient rasters. The mean of the covariate values within a 3 km radius from each estimated location was used to partially account for uncertainty in covariate data arising from observation error. This error radius corresponded to twice the known error for Argos location classes 3, 2 and 1.
Covariate data were standardized, and missing values were filled with zeros, which correspond to the mean of standardized variables. This only affected 6 whales (ID#s 1, 6, 7, 10, 11 and 12), was restricted to SST and thermal gradient data, and except for one whale never exceeded 2.7% of the data (ID#7 had 10.4% of the data affected). We modified the original version of the CTCRW by allowing βt and σt to be random variables that vary in time as a function of environmental covariates.
$$\log(\sigma_t) \sim \mathrm{Normal}(\mu_{1,t}, \varepsilon_1)$$
$$\mu_{1,t} = A_0 + A X_t$$
$$\log(\beta_t) \sim \mathrm{Normal}(\mu_{2,t}, \varepsilon_2)$$
$$\mu_{2,t} = B_0 + B X_t$$
where A0 and B0 are intercepts, A and B are vectors of slopes, Xt is the corresponding design matrix holding the standardized covariates, and ε1 and ε2 correspond to standard deviations. In every case, the estimated standard deviation ε2 for βt was extremely small and presented exceptionally large standard errors; therefore, instead of trying to estimate this parameter, we fixed it at 0.01. In cases where no covariate presented a significant effect on βt, this variable was reduced to a single parameter β, which was estimated. Estimated values of β larger than 4 produce persistence values lower than 0.05 h, indicating that at very short time differences velocity and location are poorly correlated with previous values. Therefore, in cases where model estimates for β were higher than 4 (ID#s 5 and 10), β was fixed at 4, indicating overall poorly autocorrelated movement patterns. Our modelling approach allowed us to quantify the influence of environmental covariates on βt and σt, with higher values of σt indicating higher velocities and higher values of βt indicating lower directional persistence, which might be expressed as pt = 3/βt in units of time62. As no discrete behavioral states were explicitly included in our model, we defined behavioral states as post hoc categories based on pt and σt values and their medians. The expected ARS state (slower and less persistent movement) was defined for locations jointly holding values of pt and σt below their medians, and the opposite was defined as the transit state. The other two logical combinations (high pt with low σt, and low pt with high σt) were also provided and their interpretation is further discussed below. We also calculated \(\nu_t = \frac{\sqrt{\pi}\,\sigma_t}{2\sqrt{\beta_t}}\), which corresponds to long-term velocity68. This variable is a function of both σt and βt (or pt), and hence higher νt can be obtained by either increasing σt or reducing βt. As νt is a function of both σt and βt, we considered it as a proxy for the ARS-transit continuum, with higher values of νt representing more transit-like behavior. Expected responses of νt to covariate variation were inspected through prediction curves. Finally, model results were used to generate spatial predictions of νi for each grid-cell i using a 1 × 1 km grid. These predictions indicate the expected behavioral responses for whales traversing areas not necessarily visited during the tracking period. Predictive layers were generated for individual whales and averaged across individuals to depict an overall pattern.
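Once σt and βt have been estimated for each location, the persistence, long-term velocity and post hoc behavioral categories defined above reduce to a few lines of arithmetic. The sketch below illustrates this in R; the vectors `sigma_t` and `beta_t` are hypothetical placeholders for the fitted per-location values returned by the movement model.

```r
# Sketch (R): deriving persistence, long-term velocity and the post hoc
# behavioral categories from fitted per-location parameters sigma_t and beta_t.
# 'sigma_t' and 'beta_t' are placeholders for the actual model output.

p_t  <- 3 / beta_t                               # persistence, in units of time
nu_t <- sqrt(pi) * sigma_t / (2 * sqrt(beta_t))  # long-term velocity

low_sigma <- sigma_t < median(sigma_t)
low_p     <- p_t     < median(p_t)

state <- ifelse(low_sigma & low_p,   "ARS",               # slow, low persistence
         ifelse(!low_sigma & !low_p, "transit",           # fast, high persistence
         ifelse(low_sigma & !low_p,  "slow-persistent",
                                     "fast-nonpersistent")))
table(state)
```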
Integrating movement and species distribution models
Results from a previous SDM were used for assessing spatial overlap between blue whale distribution and marine traffic. Briefly, this model consisted of a Bayesian binomial N-mixture model used to model blue whale group counts in line-transect data (2009, 2012 and 2014), using distance sampling techniques and oceanographic covariate data16. Using an 8 × 8 km grid, spatial predictions of blue whale density at each grid-cell i (Ni) were generated for eight years (2009–2016) and averaged into a single layer. To integrate outputs from movement models and SDM, the relative probability of encountering a whale (RPEW) was calculated as follows
$$\mathrm{RPEW}_i = \frac{N_i \frac{1}{\nu_i}}{\sum_{i=1}^{n} N_i \frac{1}{\nu_i}}.$$
RPEWi assumes that the probability of encountering whales increases with predicted density39,69. Here we consider that behavior might also be part of this function, as slow and less persistent movement (ARS) will result in more time spent (1/νi) allocated to each grid-cell i relative to all other grid cells n. As Ni had a spatial resolution of 8 × 8 km, we resampled the νi grid to match the coarser grid resolution prior to any calculation, using the mean of aggregated grid-cells.
Defining spatial overlap with marine traffic
A quantitative measure of risk associated with vessel traffic can be considered as a monotonic function of the number of vessels and the probability of encountering a whale39,70. As described above, the relative amount of time allocated to each grid-cell can be obtained from 1/νi. Therefore, as a measure of risk we calculated the relative probability of vessel encountering whale (RPVEW)39,69 by combining Ni, νi and VDi as follows.
$$\mathrm{RPVEW}_i = \frac{Pw_i\, Pt_i\, Pv_i}{\sum_{i=1}^{n} Pw_i\, Pt_i\, Pv_i}$$
where \(Pw_i = \frac{N_i}{\sum_{i=1}^{n} N_i}\) corresponds to the probability of observing a whale within each grid-cell i relative to all other grid cells n, \(Pt_i = \frac{1/\nu_i}{\sum_{i=1}^{n} 1/\nu_i}\) corresponds to the time allocated to each grid-cell i relative to all other grid cells n, and \(Pv_i = \frac{VD_i}{\sum_{i=1}^{n} VD_i}\) corresponds to the observed number of vessels within grid-cell i relative to all other grid cells n; these calculations were conducted separately for each of the four fleets. Finally, to generate quantitative estimates of the degree of overlap between blue whale distribution and vessel traffic, we used the Schoener's D and Warren's I similarity statistics71. These statistics range from 0, indicating no overlap, to 1, indicating that the distributions are identical. To use these statistics, the variables Ni times 1/νi and VDi were rescaled to range between 0 and 1 and input to the nicheOverlap function from the R package dismo72,73. A schematic representation of our workflow can be found as Supplementary Figure S5 online.
Statement of approval
The tagging methods employed in this study were approved by the Institutional Animal Care and Use Committee of the National Marine Mammal Laboratory of the Alaska Fisheries Science Center, National Marine Fisheries Service, U.S. National Oceanic and Atmospheric Administration. All methods employed in this study were carried out in accordance with guidelines from Subsecretaría de Pesca y Acuicultura (SUBPESCA), which provided full authorization to undertake this research through resolution #2267 of the Chilean Ministry of Economy and Tourism.
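As a worked illustration of the RPEW, RPVEW and overlap calculations defined above, the sketch below operates on a plain data frame of grid cells rather than raster layers (the authors used rasters and dismo::nicheOverlap); the data frame `cells` and its columns are hypothetical stand-ins, and the D and I statistics are computed directly from their standard formulas on sum-to-one layers.

```r
# Sketch (R) of the RPEW, RPVEW and overlap calculations described above.
# 'cells' is a hypothetical data frame with one row per 8 x 8 km grid cell and
# columns Ni (predicted whale density), nu_i (long-term velocity) and VDi
# (vessel density for one fleet).

Pw <- cells$Ni         / sum(cells$Ni)           # relative prob. of observing a whale
Pt <- (1 / cells$nu_i) / sum(1 / cells$nu_i)     # relative time allocation
Pv <- cells$VDi        / sum(cells$VDi)          # relative vessel density

RPEW  <- (cells$Ni / cells$nu_i) / sum(cells$Ni / cells$nu_i)
RPVEW <- (Pw * Pt * Pv) / sum(Pw * Pt * Pv)      # computed per fleet

# Schoener's D and Warren's I between the normalized whale and vessel layers
# (dismo::nicheOverlap returns the same statistics when fed raster layers).
p_whale  <- RPEW
p_vessel <- Pv

D <- 1 - 0.5 * sum(abs(p_whale - p_vessel))
I <- 1 - 0.5 * sum((sqrt(p_whale) - sqrt(p_vessel))^2)
c(D = D, I = I)
```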
Tracking duration for instrumented whales while within the study area ranged from 8.1 to 105 days (mean = 52.03, sd = 29.3, median = 48.7), yielding tracks that ranged from 49 to 1,728 locations (mean = 460.27, sd = 582.36, median = 140) used for modelling (after filtering, Table 1). In general, whales tended to remain in very localized coastal areas, where high productivity occurs during each austral spring (Fig. 2). No instrumented individuals departed from NCP until the onset of austral autumn–winter months (April-July)14. Pearson correlation analyses showed that none of the used covariates were strongly correlated (r < 0.5, p < 0.01). Except for one instrumented whale (ID#12), all animals showed a significant positive correlation between σt and DAHCC, six animals showed a significant negative correlation between σt and thermal gradients (Table 1). These results imply a clear pattern of whales reducing their velocities near areas that were highly productive during spring each year and/or where higher thermal gradients occur. The relationship with SST was less clear as three individuals showed a significant negative correlation and five a significant positive one (Table 1). Table 1 Parameters estimations for each individual whale (log scale). Individual ID, tag deploying date, number of available locations (locs) and tracking days are provided for each whale. Missing values for parameters estimating variation in log(β) represent the cases where this was considered as a single parameter instead of a random variable. For each covariate estimates, standard errors (SE), and p values are provided for each parameter. Behavioral variation for tagged whales. Panels (a–e) summarize results for 2004, 2013, 2015, 2016 and 2019, respectively and panel f combines all tracks. Red to blue four-color ramp indicates the percentile to which each location belongs regarding variation in σt and 3/βt (persistence). By using the medians, the four possible combinations are presented as a posteriori behavioral state identification. Locations jointly holding values of σt and 3/βt below their medians across all whales (low s and low p) can be considered ARS behavior, while the opposite (high s and high p) can be considered transit. Blue (far) to yellow (close) color ramp in the background indicates variation in standardized distance to areas of high chlorophyll concentration (DAHCC) in log scale, which was the most consistent covariate shaping blue whale movement patterns in this study. Data layers (including maps) were created in R ver. 4.0.2 (www.r-project.org) and ensembled in QGIS ver. 3.8.0 (www.qgis.org) for final rendering. Maps were created using data on bedrock topography from the National Centers for Environmental Information (https://maps.ngdc.noaa.gov/viewers/grid-extract/index.html). Values above 0 were considered land coverage. Regarding correlations between βt and environmental covariates, it was expected that whenever significant, they would present the opposite sign of those that were significant regarding σt, rendering a continuum between ARS and transit behavior. This was the case for three individuals with respect to DAHCC (ID#s1, 4 and 8), four individuals with respect to SST (ID#s 1, 4, 8 and 11) and one individual with respect to thermal gradients (ID#8, Table 1). Interestingly, two individuals showed the same signal in their correlation between DAHCC and βt, as well as, between DAHCC and σt (ID# 11 and 15). 
The same occurred for one individual regarding SST (ID#9) and one individual regarding thermal gradients (ID#13, Table 1). Post hoc definition of behavioral states showed the expected occurrence of both transit and ARS behavior. However, it also showed the occurrence of intermediate behavioral states at locations associated with low speed and high persistence and vice versa (Fig. 2). These types of intermediate behaviors were more predominant in individuals tagged in 2016 and 2019. Prediction curves for νt based on covariate variation provided unrealistic predictions for individuals for which a relatively small number of locations were available (< 200 locations, Fig. 3). For this reason, we only generated spatial predictions of νt (Fig. 4) for individuals having tracks with more than 200 locations (ID#s 5, 7, 11, 12, 13, 14 and 15). Interindividual variation was observed regarding absolute values for νt, indicating that some whales moved, in general, faster and in a more persistent manner (Fig. 4 b,c,e) than others, and also in terms of where their lowest values (ARS behavior) were expected. Despite this individual variation, some areas were consistently depicted as having the lowest values for νt, which are highlighted when the spatial predictions for these seven whales were averaged into an overall mean (Fig. 4h). Spatial predictions on RPEW highlighted areas of aggregation for blue whales in NCP, mainly located in the western part of Chiloe Island, Ancud Gulf, Adventure Bay and northern Moraleda Channel (Fig. 5). Prediction curves indicate expected variation in long-term velocity (νt) in relation to environmental covariates, (a) distance to areas of high chlorophyll concentration (DAHCC) in log scale, (b) sea surface temperature (SST) and c) thermal gradients. Red lines indicate predictions for whales exhibiting more than 200 locations (ID#s 5, 7, 11, 12, 13, 14 and 15) and black lines correspond to those with less locations available. Spatial predictions of expected long-term velocity (νt) responses in the entire study area, for every instrumented whale with more than 200 locations (panels a–g). The bottom right panel (h) shows the overall mean for all seven individuals. Data layers (including maps) were created in R ver. 4.0.2 (www.r-project.org) and ensembled in QGIS ver. 3.8.0 (www.qgis.org) for final rendering. Maps were created using data on bedrock topography from the National Centers for Environmental Information (https://maps.ngdc.noaa.gov/viewers/grid-extract/index.html). Values above 0 were considered land coverage. Relative probability of encountering a blue whale (RPEW). This integrates the output of the movements and species distribution models for areas within 25 km from shore. Data layers (including the map) were created in R ver. 4.0.2 (www.r-project.org) and ensembled in QGIS ver. 3.8.0 (www.qgis.org) for final rendering. Map was created using data on bedrock topography from the National Centers for Environmental Information (https://maps.ngdc.noaa.gov/viewers/grid-extract/index.html). Values above 0 were considered land coverage. VD absolute values were highest for the aquaculture fleet (range:0–78.4) followed by artisanal fishery (0–13.9), transport (range: 0–8) and industrial fishery (range: 0–1.9) fleets. The number of active vessels per day was highest for the aquaculture fleet (range: 602–729), followed by the artisanal fishery (range: 37–76), transport (range: 6–57) and industrial fishery (range: 1–13) fleets. 
Although the four fleets studied here showed spatial variation in RPVEW, all of them coincided in a high probability of whales interacting with vessels throughout the Chiloe inner sea (Fig. 6). Among the four fleets studied, the artisanal fishing fleet showed the highest overlap with blue whale distribution patterns (D = 0.34; I = 0.64). The industrial fishery (D = 0.28; I = 0.48), aquaculture (D = 0.24; I = 0.46) and transport (D = 0.23; I = 0.45) fleets showed similar, lower overlap (Fig. 6).

Top panels show vessel density (VD) as the mean number of vessels visiting each 8 × 8 km grid-cell per day, for the industrial fishery (a), artisanal fishery (b), aquaculture (c) and transport (d) fleets. Note the large difference in color bar increments for the aquaculture fleet. Bottom panels show the relative probability of a vessel encountering a whale (RPVEW) for the industrial fishery (e), artisanal fishery (f), aquaculture (g) and transport (h) fleets. The data for the different fleets were provided by the Chilean National Service of Fisheries and Aquaculture (SERNAPESCA) and are freely available at www.sernapesca.cl. Data layers (including maps) were created in R ver. 4.0.2 (www.r-project.org) and assembled in QGIS ver. 3.8.0 (www.qgis.org) for final rendering. Maps were created using data on bedrock topography from the National Centers for Environmental Information (https://maps.ngdc.noaa.gov/viewers/grid-extract/index.html). Values above 0 were considered land coverage.

Blue whale habitat selection and priority areas for conservation

Understanding the environmental drivers of blue whale habitat selection16,17 is paramount for defining priority areas for its conservation and developing recommendations for marine spatial planning11,74. In pursuing this goal, our approach combined a previous SDM fit to line-transect data with a movement model fit to telemetry data in a complementary manner. The telemetry data support the spatial pertinence of the areas previously defined for assessing blue whale abundance and distribution patterns through ship-borne surveys. Although some whales performed brief excursions to adjacent offshore waters, they tended to remain within the NCP coastal areas during most of the tracking time, which in two cases extended for up to 3 months (Table 1). Potential caveats to this approach include tagging-location bias (i.e. tagging was only performed in coastal waters, Fig. 1) and sample size, which should be overcome through the ongoing tagging program. The previous SDM16 showed that spring productivity and, secondarily, thermal fronts were important covariates for predicting blue whale densities. Results here show that the same covariates selected by the SDM are important for understanding blue whale movement patterns. As with the aforementioned SDM, DAHCC was the most prevalent covariate retained in our models and, combined with thermal gradients, displayed an unequivocal pattern in its correlation with σt. That is, whales tended to reduce their velocity near areas of high primary productivity that had occurred during austral spring and where strong thermal gradients take place (Table 1, Fig. 3a). As with many other large whale species, worldwide abundance and distribution patterns of blue whales have been linked to predictably and seasonally productive waters associated with high chlorophyll-a, among other proxies for enhanced productivity19,20,24,75,76,77,78.
Nevertheless, as blue whales feed almost exclusively on krill, temporal lags are expected to occur between seasonally high primary productivity, euphausiid early life-history processes (e.g. larval recruitment), the peak in adult euphausiid densities and the peak in whale abundance17,20,78. Refining our understanding of how temporal lags relate chlorophyll-a to euphausiid spatial patterns, and these in turn to blue whale distribution, remains a pending task79,80, especially considering that euphausiid spatial ecology in the NCP is poorly understood49,81. Although spring chlorophyll-a appears to be a suitable general proxy for blue whale prey availability in the NCP, whales are expected to respond in a much more complex manner to environmental heterogeneity. Previously, blue whale density in the NCP was found to be higher near areas of recurrent thermal fronts16. By using telemetry data, we were able to refine the assessment scale and test whether blue whales responded to daily changes in thermal gradients. Despite the relatively coarse resolution of Argos data, we found evidence of a behavioral response in six whales traversing thermal gradients of less than 1 °C (Fig. 3c). This may even be an underestimate, given the reported response of blue whales to gradients as low as 0.03 °C82. Thus, our results provide additional support for the relevance of coarse- to meso-scale thermal gradients in shaping marine predator distribution16,23,82,83. The underlying mechanism for this pattern, however, is not clear, as thermal fronts might increase prey availability by boosting local productivity and/or by aggregating prey patches22,23,24,25,83,84. Within the NCP, both processes are likely to be tightly coupled. Fresh water rich in silicic acid, among other nutrients, delivered by high river discharge from glacier melt and heavy rain, fertilizes the photic zone by mixing with macronutrient-loaded deep oceanic water46,49,81,85,86. This large freshwater input, in conjunction with the higher irradiance reaching the surface during spring and summer, wind stress, tides and complex bottom topography, promotes alternating processes of vertical and horizontal stratification/mixing of the water column, enhancing primary production as well as plankton aggregation87,88,89. In this context, areas selected by blue whales in the NCP might not just be highly productive, but also places where frontal dynamics lead to highly concentrated prey patches. SST presented an equivocal pattern with respect to blue whale movement, suggesting a preference for colder waters in four individuals and the opposite in four others (Table 1, Fig. 3b). This might be a temporal effect if whales in some years or seasons found their prey in colder or warmer waters. For instance, Ancud Gulf tends to present higher temperatures during spring and summer than Corcovado Gulf, as the latter is the main entrance path for colder subsurface oceanic waters into the Chiloe inner sea. Alternatively, the lack of a clear trend in observed blue whale movement patterns with respect to SST might result from a preference for intermediate temperatures that linear predictors failed to detect76. Blue whales appear to respond to dynamic water-column processes by performing continuous behavioral changes without necessarily departing from relatively discrete areas (e.g. Ancud Gulf and Moraleda Channel, Fig. 2).
For instance, whales ID#11, ID#13 and ID#15 presented a higher probability of reducing their velocity near areas of high productivity and strong thermal gradients, a higher probability of increasing persistence near areas of high productivity (whales ID#11 and ID#15, Table 1), and all three spent from one to 3 months within specific micro-basins (Ancud Gulf and Moraleda Channel). This suggests that both transit-like and ARS behaviors co-occur spatially, oscillating temporally with the suitability of foraging conditions. The higher blue whale densities observed in a previous study16 in the same areas where tagged individuals presented ARS behavior could have been attributed to multiple individuals entering and leaving these areas. However, the results presented here show that instrumented blue whales concentrate in relatively discrete areas for extended periods of time (up to 3 months), searching for and exploiting available resources. The limited movement elicited by blue whales might be regarded as an indicator of low intraspecific competition, considering that their population abundance is still estimated to be considerably below pre-whaling levels16,90,91. Other mechanisms, such as dominance92 and predator avoidance93, have been proposed to explain limited animal movement. Thus, additional factors should be considered in the future for understanding other dimensions of the blue whale habitat selection process, as well as its temporal variation. Independently, both the SDM and the movement model predictions highlighted similar areas of aggregation for blue whales in the NCP based on observed oceanographic conditions (see Supplementary Fig. S5 online). These are clearly delimited by our RPEW map and comprise Ancud Gulf, the western coast of Chiloe Island, Corcovado Gulf/Moraleda Channel (CGMC) and Adventure Bay (Figs. 1 and 5). As the previous SDMs were restricted to areas within 25 km from shore, some offshore areas visited by blue whales were not considered during RPEW computation. However, as the overall tendency of instrumented whales to remain in coastal waters was clear (Fig. 2f), we consider the RPEW to be adequate.

Quantifying overlap with vessel traffic

For Chile, detailed and freely available vessel traffic data such as those used here are limited to recent years (2019–2020), precluding long-term assessments of spatiotemporal variation in vessel traffic95. Although limited to 10 months of data, the results showed little intra-fleet variation for the transport and aquaculture vessel activities, as well as for the inner-sea activities of both fishing fleets (see Supplementary Figs. S1–S4 online). This was expected, as transport and the logistic support operations of aquaculture are less variable than the shifting, resource-tracking operations of fishing vessels. In addition, the inner waters concentrate obligatory marine corridors for entering and leaving the area, which are used similarly regardless of vessel type. Hence, our estimates are expected to adequately reflect general vessel traffic patterns for each fleet, but possible temporal variation in these patterns should be examined in the future. The four vessel fleets considered here differed in VD values and in their spatial use of the study area (Fig. 6). While the artisanal and industrial fishing fleets use inner waters to the east and open waters to the west of the study area, the aquaculture and transport fleets are mainly constrained to inner waters (Fig. 6).
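The fleet-versus-whale overlap values reported above (D and I) are niche-similarity statistics computed by comparing two normalized spatial probability surfaces, here the whale RPEW and each fleet's vessel surface; the paper cites Warren et al. (2008) and the dismo package for this step. The snippet below is only a minimal, stand-alone sketch of the two formulas applied to toy surfaces stored as vectors of cell values, not the authors' script.

```r
# Schoener's D and Warren's I for two spatial surfaces given as vectors of cell values.
# Both surfaces are normalized to sum to 1 before comparison.
# (dismo::nicheOverlap offers a packaged raster-based equivalent.)
overlap_stats <- function(p_whale, p_vessel) {
  px <- p_whale / sum(p_whale)
  py <- p_vessel / sum(p_vessel)
  D <- 1 - 0.5 * sum(abs(px - py))                 # Schoener's D
  I <- 1 - 0.5 * sum((sqrt(px) - sqrt(py))^2)      # Warren's I (Hellinger-based)
  c(D = D, I = I)
}

# Toy example: two loosely similar surfaces over the same 100 grid cells.
set.seed(1)
whale_surface  <- runif(100)
vessel_surface <- 0.7 * whale_surface + 0.3 * runif(100)
overlap_stats(whale_surface, vessel_surface)
```

Both statistics range from 0 (no overlap) to 1 (identical surfaces), which is why the artisanal fleet's higher D and I values indicate a distribution that more closely matches that of the whales.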
According to Chilean legislation, the artisanal fishing fleet is restricted to operating within 5 nm (9.3 km) from the coast in open and inner waters, while industrial fishing operations must be performed beyond this area to the west. This might explain the artisanal fishing fleet's high score on the similarity statistics, indicating the largest degree of overlap with blue whale coastal distribution. In other words, this fleet distributes RPVEW more homogeneously, matching blue whale distribution, while the other fleets concentrate only in specific areas (lower degree of overlap). In comparison with the results presented here, a study using the same overlap statistics showed a higher degree of overlap between vessels and three species of cetaceans in the Mediterranean Sea73. This was expected, as the Mediterranean Sea is an area of high-intensity vessel traffic96. However, most of the marine traffic recorded in that study (73.3%) corresponded to small sailing boats, suggesting low probabilities of lethal ship strikes in general but pinpointing that shipping routes (where larger vessels navigate) might pose higher risk. This brings forward the fact that spatial overlap is just one of the factors affecting collision risk and its outcome, with vessel density, speed and size also contributing39,40,97. Although the industrial fishing fleet presents a lower degree of spatial overlap with blue whales and the lowest number of operating vessels, industrial vessels might yield a higher probability of lethal interactions when these occur, owing to their larger size. This fleet also presented a particular pattern of high RPVEW values off Adventure Bay (Fig. 6). With up to 729 active vessels operating per day (83% of the total) and up to 78 vessels per day crossing a single grid-cell (VD), the aquaculture fleet is the largest and most densely distributed fleet in the NCP. Hence, while the RPVEW predictions highlight the specific areas where interactions are more likely to occur for each vessel fleet, in absolute terms it is possible that the aquaculture fleet represents the major driver of negative vessel-whale interactions in the NCP. When considering results from all fleets together, it is clear that the inner waters concentrate the highest VD and RPVEW values for all fleets (Fig. 6). This area holds the largest number of human settlements in the NCP and the main port of the regional capital, Puerto Montt, raising concerns about potential collisions, behavioral disturbance and/or heavy noise exposure38,94,98,99,100,101 for blue whales there. Although no systematic monitoring or reporting protocol exists in this region, statements by local authorities and the local press have documented at least three large whale mortality events linked to vessel collisions in the NCP (two blue whales and one sei whale), two occurring near Puerto Montt and the other at CGMC (Fig. 5). The ability of blue whales to avoid approaching vessels appears to be limited to relatively slow descents and ascents, with no horizontal movements away from a vessel102,103; therefore, collision events might pose significant threats to the survival and recovery97 of this endangered population. As the inner waters of the NCP may currently be the area with the highest relative and absolute probabilities of negative interactions between blue whales and vessels, management actions urgently need to be implemented.
For now, the most effective way to reduce collision risk is to keep whales and vessels apart, either in space or in time; where and when this is not possible, other measures (such as speed regulation) can be sought and applied singly or in combination, considering variations in vessel activity and whale distribution40,102,104, as data become available. In addition, it is important to acknowledge that all analyses performed here were restricted to vessels carrying transponders and legally mandated to submit position data. Therefore, several vessel types operating in the area that could contribute to collision risk (e.g. international cargo ships and tankers, cruise liners, as well as artisanal, recreational and military vessels) are currently unaccounted for. Because widely migratory species such as the blue whale do not recognize political boundaries, it is of great importance to identify the location of corridors and critical areas where they perform their vital activities (i.e., feeding, migrating, breeding, calving) to provide baseline information for their conservation. Efforts must be implemented at the local, national and international scales if success is to be reached, as ESP blue whale population recovery might be jeopardized by the loss of even a few individuals a year16 after the population was severely depleted by the whaling industry during the 20th century.

Modelling approach

One of the main differences between our modelling approach and previously published SSMs is that behavioral variation arises from the dependence on time-varying parameters (σt and βt) rather than from switches between discrete, pre-determined behavioral states30,31,65,107. While the latter approach allows formal prediction and testing of the spatio-temporal occurrence of known behavioral modes (e.g. areas where ARS is likely to occur), time-varying approaches permit investigating variation in movement patterns that cannot, or are not intended to be, categorized a priori65,107,108. This poses a significant advantage in cases where animal movement fails to conform to the usual transit/ARS binary view. For instance, a previous study14 fitted a switching SSM to most of the data analyzed here and found that transit states were very rare within the NCP. In agreement with this, our results show that ca. 75% of all estimated whale locations presented persistence values lower than 1.6 h, which is consistent with the biological expectation of whales primarily engaged in foraging-related activities within the NCP12. In this scenario, attempting to explore the effect of environmental variables on the switching probability between ARS and transit states76 would have been difficult, as very few locations and associated covariates would have been available for the transit state. By exploring changes in movement parameters, we can assess how animals' velocity and/or persistence respond to environmental covariates without the need for further assumptions. Following the transit/ARS rationale of conventional switching SSMs, one would expect that if a covariate is correlated with σt it would also be correlated with βt, but with the opposite sign. That is, at certain covariate values an animal's velocity and persistence are both likely to decrease, indicating ARS behavior, as was the case for several individuals and variables (Table 1). However, this need not always be the case, as shown by whales ID#11 and ID#15, which reduced their velocity near areas of high productivity in conjunction with increased persistence (Table 1).
In general, this might occur because both transit and ARS behavior co-occur in similar areas with respect to DAHCC but differ in other variables (SST and thermal gradients). Nonetheless, alternative explanations involving behaviors other than transit/ARS might arise. For instance, short-lived chasing bursts (escorting-like behavior) have been described for the NCP109, and these are expected to present high velocities but not necessarily high persistence. On the other hand, slow persistent behavior, mostly present in whales tagged in the years with the highest data-transmission throughput (2016–2019, Fig. 2d–e, Table 1), might be explained by the ratio of the location error to the scale of movement. Thus, if short time periods separate two or more locations with limited movement, high persistence might arise from negligible variation in both speed and location, as observation error increases disproportionately relative to the scale of the movement process. Overall, our modelling approach accounted for observational error and allowed the incorporation of environmental covariates to inform movement parameters without the need to regularize location data into fixed time intervals30,65, all in a single step. By fitting the model with the R package "TMB", analyses took an average of 60.5 s to run (range: 2.6–310.6 s; processor: Intel Core i7-7700HQ at 2.8 GHz, RAM: 32 GB), which is a significant advantage when processing large amounts of data.

Blue whale movement patterns agree with previous studies on their distribution, highlighting the importance of coastal waters and reinforcing our knowledge of primary production and thermal fronts as important environmental drivers of this species' habitat selection process in the NCP. Considering the defined priority areas for blue whale conservation in the region, those located in inner waters concentrated the highest probabilities of whales interacting with vessels. Among the studied vessel fleets, the unparalleled size of the aquaculture fleet indicates that it could play a decisive role in modulating potential negative vessel-whale interactions within the NCP. The results of this study clearly pinpoint specific areas where management actions are urgently needed, especially considering the undetermined number of vessel strikes and levels of noise exposure in the region. This information should be considered by governmental and international organizations to inform, design, and rapidly implement mitigation actions using existing national and international conservation instruments.

C++/TMB code for fitting the model (CTCRW_matrix_cov.cpp), raw telemetry data and accompanying covariate data are available as Supplementary Information.

Hays, G. C. et al. Key questions in marine megafauna movement ecology. Trends Ecol. Evol. 0 (2016). Nathan, R. et al. A movement ecology paradigm for unifying organismal movement research. PNAS 105, 19052–19059 (2008). Spiegel, O., Leu, S. T., Bull, C. M. & Sih, A. What's your move? Movement as a link between personality and spatial dynamics in animal populations. Ecol. Lett. 20, 3–18 (2017). Hussey, N. E. et al. Aquatic animal telemetry: a panoramic window into the underwater world. Science 348, 1255642 (2015). Kays, R., Crofoot, M. C., Jetz, W. & Wikelski, M. Terrestrial animal tracking as an eye on life and planet. Science 348, aaa2478 (2015). Cooke, S. J.
Biotelemetry and biologging in endangered species research and animal conservation: relevance to regional, national, and IUCN Red List threat assessments. Endanger. Species Res. 4, 165–185 (2008). Costa, D. P., Breed, G. A. & Robinson, P. W. New insights into pelagic migrations: implications for ecology and conservation. Annu. Rev. Ecol. Evol. Syst. 43, 73–96 (2012). Žydelis, R. et al. Dynamic habitat models: using telemetry data to project fisheries bycatch. Proc. R. Soc. Lond. B: Biol. Sci. 278, 3191–3200 (2011). Hays, G. C. et al. Translating marine animal tracking data into conservation policy and management. Trends Ecol. Evol. 34, 459–473 (2019). Cooke, J. IUCN Red List of Threatened Species: Blue Whale. IUCN Red List of Threatened Species. https://www.iucnredlist.org/en (2018). Hucke-Gaete, R., Moro, P. L. & Ruiz, J. Conservando el mar de Chiloé, Palena y las Guaitecas. Síntesis del estudio Investigación para el desarrollo de Área Marina Costera Protegida Chiloé, Palena y Guaitecas. Valdivia, Chile: Universidad Austral de Chile and Lucas Varga para The Natural Studio. Accessed June 30, 2014 (2010). Hucke-Gaete, R., Osman, L. P., Moreno, C. A., Findlay, K. P. & Ljungblad, D. K. Discovery of a blue whale feeding and nursing ground in southern Chile. Proc. R. Soc. Lond. B 271, S170–S173 (2004). Buchan, S. J., Stafford, K. M. & Hucke-Gaete, R. Seasonal occurrence of southeast Pacific blue whale songs in southern Chile and the eastern tropical Pacific. Mar. Mamm. Sci. 31, 440–458 (2015). Hucke-Gaete, R. et al. From Chilean Patagonia to Galapagos, Ecuador: novel insights on blue whale migratory pathways along the Eastern South Pacific. PeerJ 6, e4695 (2018). Torres-Florez, J. P. et al. First documented migratory destination for eastern South Pacific blue whales. Mar. Mamm. Sci. https://doi.org/10.1111/mms.12239 (2015). Bedriñana-Romano, L. et al. Integrating multiple data sources for assessing blue whale abundance and distribution in Chilean Northern Patagonia. Divers. Distrib. https://doi.org/10.1111/ddi.12739 (2018). Buchan, S. J. & Quiñones, R. A. First insights into the oceanographic characteristics of a blue whale feeding ground in northern Patagonia, Chile. Mar. Ecol. Prog. Ser. 554, 183–199 (2016). Atkinson, A., Siegel, V., Pakhomov, E. & Rothery, P. Long-term decline in krill stock and increase in salps within the Southern Ocean. Nature 432, 100–103 (2004). Branch, T. A. et al. Past and present distribution, densities and movements of blue whales Balaenoptera musculus in the Southern Hemisphere and northern Indian Ocean. Mamm. Rev. 37, 116–175 (2007). Croll, D. A. et al. From wind to whales: trophic links in a coastal upwelling system. Mar. Ecol. Prog. Ser. 289, 117–130 (2005). Zerbini, A. N. et al. Baleen whale abundance and distribution in relation to environmental variables and prey density in the Eastern Bering Sea. Deep Sea Res. Part II 134, 312–330 (2016). Acha, E. M., Mianzan, H. W., Guerrero, R. A., Favero, M. & Bava, J. Marine fronts at the continental shelves of austral South America: Physical and ecological processes. J. Mar. Syst. 44, 83–105 (2004). Doniol-Valcroze, T., Berteaux, D., Larouche, P. & Sears, R. Influence of thermal fronts on habitat selection by four rorqual whale species in the Gulf of St. Lawrence. Mar. Ecol. Prog. Ser. 335, 207–216 (2007). Littaye, A., Gannier, A., Laran, S. & Wilson, J. P. F.
The relationship between summer aggregation of fin whales and satellite-derived environmental conditions in the northwestern Mediterranean Sea. Remote Sens. Environ. 90, 44–52 (2004). Lutjeharms, J. R. E., Walters, N. M. & Allanson, B. R. Oceanic frontal systems and biological enhancement. In Antarctic Nutrient Cycles and Food Webs 11–21 (Springer, Berlin, Heidelberg, 1985). doi:https://doi.org/10.1007/978-3-642-82275-9_3. Acevedo-Gutiérrez, A., Croll, D. A. & Tershy, B. R. High feeding costs limit dive time in the largest whales. J. Exp. Biol. 205, 1747–1753 (2002). Goldbogen, J. A. et al. Prey density and distribution drive the three-dimensional foraging strategies of the largest filter feeder. Funct. Ecol. 29, 951–961 (2015). Goldbogen, J. A. et al. Mechanics, hydrodynamics and energetics of blue whale lunge feeding: efficiency dependence on krill density. J. Exp. Biol. 214, 131–146 (2011). Potvin, J., Goldbogen, J. A. & Shadwick, R. E. Passive versus active engulfment: verdict from trajectory simulations of lunge-feeding fin whales Balaenoptera physalus. J. R. Soc. Interface 6, 1005–1025 (2009). Jonsen, I. D., Flemming, J. M. & Myers, R. A. Robust state–space modeling of animal movement data. Ecology 86, 2874–2880 (2005). Morales, J. M., Haydon, D. T., Frair, J., Holsinger, K. E. & Fryxell, J. M. Extracting more out of relocation data: building movement models as mixtures of random walks. Ecology 85, 2436–2445 (2004). Waerebeek, K. V. et al. Vessel collisions with small cetaceans worldwide and with large whales in the Southern Hemisphere, an initial assessment. Latin Am. J. Aquat. Mamm. 6, 43–69 (2007). Buschmann, A. H. et al. A review of the impacts of salmonid farming on marine coastal ecosystems in the southeast Pacific. ICES J. Mar. Sci. 63, 1338–1345 (2006). Niklitschek, E. J., Soto, D., Lafon, A., Molinet, C. & Toledo, P. Southward expansion of the Chilean salmon industry in the Patagonian Fjords: main environmental challenges. Rev. Aquac. 5, 172–195 (2013). Viddi, F. A., Harcourt, R. G. & Hucke-Gaete, R. Identifying key habitats for the conservation of Chilean dolphins in the fjords of southern Chile. Aquat. Conserv: Mar. Freshw. Ecosyst. https://doi.org/10.1002/aqc.2553 (2015). Hoyt, E. & Iñiguez, M. Estado del avistamiento de cetáceos en América Latina. WDCS, Chippenham, UK 60 (2008). Colpaert, W., Briones, R. L., Chiang, G. & Sayigh, L. Blue whales of the Chiloé-Corcovado region, Chile: potential for anthropogenic noise impacts. Proc. Mtgs. Acoust. 27, 040009 (2016). Lesage, V., Omrane, A., Doniol-Valcroze, T. & Mosnier, A. Increased proximity of vessels reduces feeding opportunities of blue whales in the St. Lawrence Estuary, Canada. Endanger. Species Res. 32, 351–361 (2017). Nichol, L. M., Wright, B. M., O'Hara, P. & Ford, J. K. B. Risk of lethal vessel strikes to humpback and fin whales off the west coast of Vancouver Island, Canada. Endanger. Species Res. 32, 373–390 (2017). Vanderlaan, A. S. M. & Taggart, C. T. Vessel collisions with whales: the probability of lethal injury based on vessel speed. Mar. Mamm. Sci. 23, 144–156 (2007). Schoeman, R. P., Patterson-Abrolat, C. & Plön, S. A global review of vessel collisions with marine animals. Front. Mar. Sci. 7, 292 (2020). Guzman, H. M., Gomez, C. G., Guevara, C. A. & Kleivane, L. Potential vessel collisions with Southern Hemisphere humpback whales wintering off Pacific Panama. Mar. Mamm. Sci. 29, 629–642 (2013). Schick, R. S. et al. Striking the right balance in right whale conservation. Can. J. Fish. Aquat. Sci. 
66, 1399–1403 (2009). Guzman, H. M., Capella, J. J., Valladares, C., Gibbons, J. & Condit, R. Humpback whale movements in a narrow and heavily-used shipping passage, Chile. Mar. Policy 118, 103990 (2020). Viddi, F. A., Hucke-Gaete, R., Torres-Florez, J. P. & Ribeiro, S. Spatial and seasonal variability in cetacean distribution in the fjords of northern Patagonia, Chile. ICES J. Mar. Sci. https://doi.org/10.1093/icesjms/fsp288 (2010). Iriarte, J. L., León-Muñoz, J., Marcé, R., Clément, A. & Lara, C. Influence of seasonal freshwater streamflow regimes on phytoplankton blooms in a Patagonian fjord. NZ J. Mar. Freshw. Res. 51, 304–315 (2017). Iriarte, J. L., Pantoja, S. & Daneri, G. Oceanographic processes in Chilean Fjords of Patagonia: from small to large-scale studies. Prog. Oceanogr. 129, 1–7 (2014). Iriarte, J. L., González, H. E. & Nahuelhual, L. Patagonian Fjord ecosystems in Southern Chile as a highly vulnerable region: problems and needs. AMBIO: J. Hum. Environ. 39, 463–466 (2010). González, H. E. et al. Seasonal plankton variability in Chilean Patagonia fjords: carbon flow through the pelagic food web of Aysen Fjord and plankton dynamics in the Moraleda Channel basin. Cont. Shelf Res. 31, 225–243 (2011). Pavés, H. J., González, H. E., Castro, L. & Iriarte, J. L. Carbon flows through the pelagic sub-food web in two basins of the Chilean Patagonian coastal ecosystem: the significance of coastal-ocean connection on ecosystem parameters. Estuar. Coasts 38, 179–191 (2015). Pavés, H. J. & Schlatter, R. P. Breeding season of the southern fur seal, Arctocephalus australis, at Guafo Island, southern Chile. Revista Chilena de Historia Natural 81, 137–149 (2008). Reyes-Arriagada, R., Campos-Ellwanger, P., Schlatter, R. P. & Baduini, C. Sooty Shearwater (Puffinus griseus) on Guafo Island: the largest seabird colony in the world? Biodivers. Conserv. 16, 913–930 (2007). Shaffer, S. A. et al. Migratory shearwaters integrate oceanic resources across the Pacific Ocean in an endless summer. PNAS 103, 12799–12802 (2006). Wakefield, E. D. et al. Habitat preference, accessibility, and competition limit the global distribution of breeding black-browed albatrosses. Ecol. Monogr. 81, 141–167 (2011). Outeiro, L. & Villasante, S. Linking salmon aquaculture synergies and trade-offs on ecosystem services to human wellbeing constituents. Ambio 42, 1022–1036 (2013). Heide-Jørgensen, M. P., Kleivane, L., Øien, N., Laidre, K. L. & Jensen, M. V. A new technique for deploying satellite transmitters on baleen whales: tracking a blue whale (Balaenoptera musculus) in the North Atlantic. Mar. Mamm. Sci. 17, 949–954 (2001). Freitas, C., Lydersen, C., Fedak, M. A. & Kovacs, K. M. A simple new algorithm to filter marine mammal Argos locations. Mar. Mamm. Sci. 24, 315–325 (2008). Mendelssohn, R. rerddapXtracto: Extracts Environmental Data from 'ERDDAP' Web Services (2020). Chin, T. M., Milliff, R. F. & Large, W. G. Basin-scale, high-wavenumber sea surface wind fields from a multiresolution analysis of scatterometer data. J. Atmos. Oceanic Technol. 15, 741–763 (1998). Lau-Medrano, W. grec: Gradient-Based Recognition of Spatial Patterns in Environmental Data (2020). Belkin, I. M. & O'Reilly, J. E. An algorithm for oceanic front detection in chlorophyll and SST satellite imagery. J. Mar. Syst. 78, 319–326 (2009). Johnson, D. S., London, J. M., Lea, M.-A. & Durban, J. W. Continuous-time correlated random walk model for animal telemetry data. Ecology 89, 1208–1215 (2008). Patterson, T.
A., Thomas, L., Wilcox, C., Ovaskainen, O. & Matthiopoulos, J. State–space models of individual animal movement. Trends Ecol. Evol. 23, 87–94 (2008). Auger-Méthé, M. et al. Spatiotemporal modelling of marine movement data using Template Model Builder (TMB). Mar. Ecol. Prog. Ser. 565, 237–249 (2017). Jonsen, I. D. et al. Movement responses to environment: fast inference of variation among southern elephant seals with a mixed effects model. Ecology 100, e02566 (2019). Kristensen, K., Nielsen, A., Berg, C. W., Skaug, H. & Bell, B. TMB: Automatic Differentiation and Laplace Approximation. J. Stat. Softw. 70, 1–21 (2016). McClintock, B. T., London, J. M., Cameron, M. F. & Boveng, P. L. Modelling animal movement using the Argos satellite telemetry location error ellipse. Methods Ecol. Evol. 6, 266–277 (2015). Michelot, T. & Blackwell, P. G. State-switching continuous-time correlated random walks. Methods Ecol. Evol. 10, 637–649 (2019). Vanderlaan, A. S. M., Taggart, C. T., Serdynska, A. R., Kenney, R. D. & Brown, M. W. Reducing the risk of lethal encounters: vessels and right whales in the Bay of Fundy and on the Scotian Shelf. Endanger. Species Res. 4, 283–297 (2008). Fonnesbeck, C. J., Garrison, L. P., Ward-Geiger, L. I. & Baumstark, R. D. Bayesian hierarchichal model for evaluating the risk of vessel strikes on North Atlantic right whales in the SE United States. Endanger. Species Res. 6, 87–94 (2008). Warren, D. L., Glor, R. E. & Turelli, M. Environmental niche equivalency versus conservatism: quantitative approaches to niche evolution. Evolution 62, 2868–2883 (2008). Hijmans, R. J., Phillips, S., Leathwick, J., Elith, J. & Hijmans, M. R. J. Package 'dismo'. Circles 9, 1–68 (2017). Pennino, M. G. et al. A spatially explicit risk assessment approach: Cetaceans and marine traffic in the Pelagos Sanctuary (Mediterranean Sea). PLoS ONE 12, e0179686 (2017). Outeiro, L. et al. Using ecosystem services mapping for marine spatial planning in southern Chile under scenario assessment. Ecosyst. Serv. 16, 341–353 (2015). Gill, P. C. et al. Blue whale habitat selection and within-season distribution in a regional upwelling system off southern Australia. Mar. Ecol. Prog. Ser. 421, 243–263 (2011). Palacios, D. M. et al. Ecological correlates of blue whale movement behavior and its predictability in the California current ecosystem during the summer-fall feeding season. Mov. Ecol. 7, 26 (2019). Redfern, J. V. et al. Predicting cetacean distributions in data-poor marine ecosystems. Divers. Distrib. 23, 394–408 (2017). Visser, F., Hartman, K. L., Pierce, G. J., Valavanis, V. D. & Huisman, J. Timing of migratory baleen whales at the Azores in relation to the North Atlantic spring bloom. Mar. Ecol. Prog. Ser. 440, 267–279 (2011). Barlow, D. R., Bernard, K. S., Escobar-Flores, P., Palacios, D. M. & Torres, L. G. Links in the trophic chain: modeling functional relationships between in situ oceanography, krill, and blue whale distribution under different oceanographic regimes. Mar. Ecol. Prog. Ser. 642, 207–225 (2020). Rockwood, R. C., Elliott, M. L., Saenz, B., Nur, N. & Jahncke, J. Modeling predator and prey hotspots: management implications of baleen whale co-occurrence with krill in Central California. PLoS ONE 15, e0235603 (2020). He, G. et al. Primary production and plankton dynamics in the Reloncaví Fjord and the Interior Sea of Chiloé northern Patagonia Chile. Mar. Ecol. Prog. Ser. 10, 15–20. https://doi.org/10.3354/meps08360 (2014). Etnoyer, P. et al. 
Sea-surface temperature gradients across blue whale and sea turtle foraging trajectories off the Baja California Peninsula, Mexico. Deep Sea Res. Part II 53, 340–358 (2006). Lydersen, C. et al. The importance of tidewater glaciers for marine mammals and seabirds in Svalbard, Norway. J. Mar. Syst. 129, 452–471 (2014). Bost, C. A. et al. The importance of oceanographic fronts to marine birds and mammals of the southern oceans. J. Mar. Syst. 78, 363–376 (2009). Silva, N., Calvete, C. & Sievers, H. Masas de agua y circulación general para algunos canales australes entre Puerto Montt y Laguna San Rafael, Chile (Crucero Cimar-Fiordo 1). Cienc. Tecnol. Mar 21, 17–48 (1998). Silva, N. & Guzmán, D. Condiciones oceanográficas físicas y químicas, entre boca del Guafo y fiordo Aysén (Crucero Cimar 7 Fiordos). Ciencia y Tecnología del Mar 29, 25–44 (2006). Molinet, C. et al. Effects of sill processes on the distribution of epineustonic competent larvae in a stratified system of Southern Chile. Mar. Ecol. Prog. Ser. 324, 95–104 (2006). Montero, P. et al. Seasonal variability of primary production in a fjord ecosystem of the Chilean Patagonia: Implications for the transfer of carbon within pelagic food webs. Cont. Shelf Res. 31, 202–215 (2011). Tello G., A. & Rodríguez Benito, C. Characterization of mesoscale spatio-temporal patterns and variability of remotely sensed Chl a and SST in the Interior Sea of Chiloe (41.4–43.5° S). International Journal of Remote Sensing http://repositoriodigital.uct.cl/handle/10925/652 (2012). Galletti-Vernazzani, B., Jackson, J. A., Cabrera, E., Carlson, C. A. & Brownell Jr., R. L. Estimates of abundance and trend of Chilean Blue Whales off Isla de Chiloé, Chile. PLoS ONE 12, e0168646 (2017). Williams, R. et al. Chilean blue whales as a case study to illustrate methods to estimate abundance and evaluate conservation status of rare species. Conserv. Biol. 25, 526–535 (2011). Nakano, S. Individual differences in resource use, growth and emigration under the influence of a dominance hierarchy in fluvial red-spotted Masu Salmon in A NATURAL HABITAT. J. Anim. Ecol. 64, 75–84 (1995). Jorgensen, S. J. et al. Limited movement in blue rockfish Sebastes mystinus: internal structure of home range. Mar. Ecol. Prog. Ser. 327, 157–170 (2006). Williams, R., Trites, A. W. & Bain, D. E. Behavioural responses of killer whales (Orcinus orca) to whale-watching boats: opportunistic observations and experimental approaches. J. Zool. 256, 255–270 (2002). Lammers, M., Pack, A., Lyman, E. & Espiritu, L. Trends in collisions between vessels and North Pacific humpback whales (Megaptera novaeangliae) in Hawaiian waters (1975–2011). J. Cetacean Res. Manag. 13, 73–80 (2013). Panigada, S. et al. Mediterranean fin whales at risk from fatal ship strikes. Mar. Pollut. Bull. 52, 1287–1298 (2006). Rockwood, R. C., Calambokidis, J. & Jahncke, J. High mortality of blue, humpback and fin whales from modeling of vessel collisions on the U.S. West Coast suggests population impacts and insufficient protection. PLoS ONE 12, e0183052 (2017). Lusseau, D., Bain, D. E., Williams, R. & Smith, J. C. Vessel traffic disrupts the foraging behavior of southern resident killer whales Orcinus orca. Endanger. Species Res. 6, 211–221 (2009). Ribeiro, S., Viddi, F. A. & Freitas, T. R. Behavioural responses of Chilean dolphins (Cephalorhynchus eutropia) to boats in Yaldad Bay, southern Chile. Aquat. Mamm. 31, 234 (2005). Van Parijs, S. M. & Corkeron, P. J. 
Boat traffic affects the acoustic behaviour of Pacific humpback dolphins, Sousa chinensis. J. Mar. Biol. Assoc. U.K. 81, 533 (2001). Berman-Kowalewski, M. et al. Association between blue whale (Balaenoptera musculus) mortality and ship strikes along the California coast. Aquat. Mamm. 36, 59–66 (2010). McKenna, M. F., Calambokidis, J., Oleson, E. M., Laist, D. W. & Goldbogen, J. A. Simultaneous tracking of blue whales and large ships demonstrates limited behavioral responses for avoiding collision. Endanger. Species Res. 27, 219–232 (2015). Szesciorka, A. R. et al. A case study of a near vessel strike of a blue whale: perceptual cues and fine-scale aspects of behavioral avoidance. Front. Mar. Sci. 6, 761 (2019). van der Hoop, J. M. et al. Vessel strikes to large whales before and after the 2008 ship strike rule. Conserv. Lett. 8, 24–32 (2015). Calambokidis, J. et al. Differential vulnerability to ship strikes between day and night for blue, fin, and humpback whales based on dive and movement data from medium duration archival tags. Front. Mar. Sci. 6, 543 (2019). Iorio, L. D. & Clark, C. W. Exposure to seismic survey alters blue whale acoustic communication. Biol. Lett. 6, 51–54 (2010). Breed, G. A., Costa, D. P., Jonsen, I. D., Robinson, P. W. & Mills-Flemming, J. State-space methods for more completely capturing behavioral dynamics from animal tracks. Ecol. Model. 235–236, 49–58 (2012). Gurarie, E., Andrews, R. D. & Laidre, K. L. A novel method for identifying behavioural changes in animal movement data. Ecol. Lett. 12, 395–408 (2009). Schall, E. et al. Visual and passive acoustic observations of blue whale trios from two distinct populations. Mar. Mamm. Sci. 36, 365–374 (2020).

We are grateful to the L/M Noctiluca crew, M. Novy, J. Barros, R. Contreras, N. Subercaseaux and R. Westcott, whose commitment made this research possible. LBR held a doctoral CONICYT-Chile fellowship. This research was funded by the Whitley Fund for Nature, the Kilverstone Wildlife Charitable Trust, the Agencia de Sustentabilidad y Cambio Climático and WWF Germany (to RHG), and also by the US Office of Naval Research and donors to the Marine Mammal Institute at Oregon State University (BM).

Instituto de Ciencias Marinas y Limnológicas, Facultad de Ciencias, Universidad Austral de Chile, Casilla 567, Valdivia, Chile: Luis Bedriñana-Romano, Rodrigo Hucke-Gaete & Francisco A. Viddi. NGO Centro Ballena Azul, Valdivia, Chile. Marine Mammal Laboratory, Alaska Fisheries Science Center/NOAA, 7600 Sand Point Way NE, Seattle, WA, USA: Devin Johnson & Alexandre N. Zerbini. Marine Ecology and Telemetry Research, 2468 Camp McKenzie Tr NW, Seabeck, WA 98380, USA: Alexandre N. Zerbini. Cascadia Research Collective, 218 ½ 4th Ave, Olympia, WA 98502, USA. Instituto Aqualie, Av. Dr. Paulo Japiassú Coelho, 714, Sala 206, Juiz de Fora, MG 36033-310, Brazil. Grupo de Ecología Cuantitativa, INIBIOMA-CONICET, Universidad Nacional del Comahue, Bariloche, Argentina: Juan Morales. Marine Mammal Institute and Department of Fisheries and Wildlife, Hatfield Marine Science Center, Oregon State University, Newport, OR, USA: Bruce Mate & Daniel M. Palacios.

L.B., R.H. and F.A.V. conceived the idea. L.B., D.J. and J.M. analyzed the data. R.H., D.M.P., F.A.V., B.M. and A.N.Z. provided the data and/or coordinated field campaigns. All authors participated in manuscript writing. Correspondence to Luis Bedriñana-Romano or Rodrigo Hucke-Gaete.
Bedriñana-Romano, L., Hucke-Gaete, R., Viddi, F.A. et al. Defining priority areas for blue whale conservation and investigating overlap with vessel traffic in Chilean Patagonia, using a fast-fitting movement model. Sci Rep 11, 2709 (2021). https://doi.org/10.1038/s41598-021-82220-5
Anti-HIV, antitumor and immunomodulatory activities of paclitaxel from fermentation broth using molecular imprinting technique

Junhyok Ryang1, Yan Yan1, Yangyang Song1, Fang Liu1 & Tzi Bun Ng2

In this study, single-component paclitaxel was obtained from a fermentation broth by a molecular imprinting technique, and its antiviral, antitumor and immunomodulatory activities were studied. The results showed that paclitaxel had good inhibitory activity against human breast cancer MCF-7 cells in a concentration-dependent manner, with an IC50 of about 15 μg/mL in the sulforhodamine B assay. Paclitaxel exerted only weak inhibitory activity against cervical cancer HeLa cells. In addition, paclitaxel not only inhibited the entry of HIV-1 pseudovirus into cells, but also exhibited inhibitory activity to a certain extent after viral entry. At a paclitaxel concentration of 20 μg/mL, inhibition of HIV-1 pseudovirus reached about 66%. Inhibition of HIV-1 protease activity was concentration-dependent; at 20 μg/mL, the inhibitory effect of paclitaxel on HIV-1 protease was similar to that of the positive control pepstatin A, being 15.8%. The HIV-1 integrase inhibitory activity of paclitaxel was relatively weak. Paclitaxel also significantly up-regulated the expression of interleukin-6.

Among plant-derived natural products, paclitaxel (C47H51NO14), a chemotherapeutic diterpenoid first isolated from the bark of the Pacific yew Taxus brevifolia and marketed under the name Taxol, exhibits potent anticancer activity (Kasaei et al. 2017; Zhou et al. 2010). Owing to its complex structure, unique mechanism of action and good anti-tumor activity, paclitaxel has been the subject of research by many groups (Lasala et al. 2006; Oberlies and Kroll 2004). Moreover, paclitaxel has been studied for its potential in treating other diseases, including neurodegenerative diseases and polycystic kidney disease (Zhang et al. 2005), and for the prevention of restenosis (Herdeg et al. 2000). Thus, paclitaxel is in high demand, and research interest in it continues to grow (Li et al. 2017). At present, the main source of paclitaxel still depends on the yew tree. Because of the increasing demand for paclitaxel, yew trees are on the verge of extinction, so it is urgent to find new ways to produce paclitaxel (Ismaiel et al. 2017). Researchers have therefore pursued taxol production by several modern techniques, including chemical synthesis, semi-synthesis and plant tissue culture, each of which has advantages and disadvantages (Ismaiel et al. 2017; Shankar Naik 2019). Kusari et al. (2012) reported tremendous interest in locating alternative sources of paclitaxel, including fungal endophytes (Kusari et al. 2012). In general, work on the isolation and identification of taxol-producing fungi has shown that microbial fermentation is a good strategy for taxol production (Ismaiel et al. 2017). Thus, developing a cost-effective process for paclitaxel production by microbial fermentation has become a sustainable solution (Shankar Naik 2019; Somjaipeng et al. 2015). Molecular imprinting is a technique for preparing polymeric materials capable of recognizing a specific target molecule (the template molecule).
With the advancement of science and technology, molecular imprinting has become a mature technology, and numerous researchers employ it to separate and obtain target products (Li et al. 2017). Molecularly imprinted polymers (MIPs) are therefore widely employed for the separation and enrichment of active ingredients of natural products (Ishkuh et al. 2014). In recent years, the application of MIP technology in the separation of active ingredients from natural medicinal resources has received increasing attention. For instance, some researchers have extensively investigated the interactions between paclitaxel and common functional monomers, such as methacrylic acid (MAA), acrylamide (AM), 2-vinylpyridine (2-VP) and 4-vinylpyridine, in different solvents by ultraviolet spectrophotometry, and found that the strongest interaction, between paclitaxel and 2-VP in chloroform, occurred at a ratio of 1:6 (Li et al. 2013, 2015). Ishkuh et al. (2014) employed MSP with ethylene glycol dimethacrylate to prepare highly crosslinked MIPs for paclitaxel and found that the highest binding capacity for paclitaxel was 48.4%. However, the particle sizes of those MIPs were mainly distributed around 100 nm; hence, despite the excellent imprinting effect, the MIPs could not be used further for separation and analysis (Ishkuh et al. 2014). Compared with conventional separation techniques such as liquid–liquid extraction and column chromatography, molecular imprinting has the advantages of economy, speed and simplicity (Li et al. 2017). A large number of studies have established that paclitaxel exhibits high anticancer activity. Its anti-tumor mechanism involves binding to tubulin and the formation of stable microtubule bundles, which disrupts the balance between tubulin dimers and microtubules and promotes microtubule assembly and polymerization (Li et al. 2017). Consequently, cancer cells are arrested in the late G2 or M phase, mitosis is inhibited, proliferation is impeded, and the cells gradually shrink and eventually die. However, there are very few reports on other biological activities of paclitaxel, such as inhibition of HIV-1 replication and regulation of immunity (Shankar Naik 2019; Wang et al. 2015). Wang et al. (2015) compared the activities of taxol produced by the endophytic fungus Nodulisporium sylviforme HDFS4-26 with those of taxol extracted from yew bark in inhibiting growth and inducing apoptosis of cancer cells (Wang et al. 2015). Cellular morphology, the cell counting kit-8 (CCK-8) assay, staining (HO33258/PI and Giemsa), DNA agarose gel electrophoresis and flow cytometry (FCM) analyses were used to determine the apoptosis status of cancer cell lines such as MCF-7, HeLa and ovarian cancer HO8910 cells. Fungal taxol has also exhibited cytotoxic activity against HeLa cells in vitro and displayed antifungal and antibacterial activities against different pathogenic strains (Das et al. 2017). In this study, paclitaxel obtained from an endophytic fungus fermentation broth by molecular imprinting and solid-phase extraction was used to investigate its antiviral, antitumor and immunomodulatory activities, thereby broadening the application value of paclitaxel. These results may also point to an intrinsic relationship between malignant tumors and AIDS and provide a theoretical foundation for the future diagnosis of such diseases.
The fermentation broth was a commercial lyophilized powder (Professor Xudong Zhu's laboratory, State Key Program of Microbiology and Department of Microbiology, College of Life Sciences, Nankai University, Tianjin, China). 4-Vinylpyridine (4-VP) was purchased from Acros Organics (USA). Methacrylic acid (MAA) and ethylene glycol dimethacrylate (EGDMA) were obtained from Aldrich (USA). Acrylamide (AA) was obtained from Union Star Biotechnology Co., Ltd, Tianjin, China. 2,2′-Azobisisobutyronitrile (AIBN) (Kuwait Company, Tianjin, China) was recrystallized in ethanol before use. Dimethyl sulfoxide (DMSO) was purchased from Sigma (USA). Paclitaxel (> 98%) was purchased from Shanghai Jinhe Biotechnology Co., Ltd. Methanol, acetone, tetrahydrofuran and isooctane were of HPLC grade. Other reagents were of analytical grade.

Cell lines and cell culture

All cell lines were kindly provided by Professor Wentao Qiao (Department of Microbiology, Nankai University) and maintained in DMEM supplemented with 10% fetal bovine serum (Gibco, Invitrogen), 100 IU/mL of penicillin and 100 μg/mL of streptomycin at 37 °C in a humidified atmosphere of 95% air/5% CO2.

Preparation of molecularly imprinted polymers

MAA was selected as the functional monomer, acetone as the porogen, EGDMA as the crosslinking agent and AIBN as the thermal initiator; the ratio of template: functional monomer: crosslinking agent was 1:6:30, with paclitaxel as the template, and the imprinted polymer was synthesized accordingly. The affinity and transfer selectivity of the molecularly imprinted polymers were then evaluated. The non-imprinted polymer was prepared in the same manner as described above for the imprinted polymer, except that the template molecules were omitted.

MIP-SPE procedures

Two hundred milligrams of the prepared polymer were accurately weighed as the packing material and added to an empty solid-phase extraction column (3 mL, 8 mm in diameter). The solid-phase extraction column was made of polypropylene; its commercial joints and interfaces are standardized and can be connected directly to a vacuum device. The sample loaded onto the column was dissolved in methanol:water (2:8, v/v), and the SPE cartridge was equilibrated with the same methanol–water mixture. A methanol–water mixture was used as the washing solution during solid-phase extraction. Methanol:glacial acetic acid (9:1, v/v) served as the eluent; the collected eluate was rotary-evaporated to remove all solvent, and the enriched product was then dissolved in 2 mL of methanol and analyzed by HPLC.

Preparation of samples

Ten grams of lyophilized fermentation broth were dissolved in 100 mL of distilled water and filtered. The filtrate was evaporated under reduced pressure and then dissolved in 100 mL of methanol. The extract was partitioned in a mixture of dichloromethane:n-hexane:methanol (5:4:1, v/v/v), and the lower-layer fraction was evaporated under reduced pressure. The crude paclitaxel was dissolved in a mixture of methanol:water (1:9, v/v) and processed as described above in the MIP-SPE procedures.

High-performance liquid chromatographic (HPLC) analysis

The analysis was performed using a LabAlliance high-performance liquid chromatograph equipped with an ultraviolet detector set at 280 nm. All separations were achieved on an analytical reversed-phase Kromasil 100-5 C18 column (4.6 mm × 250 mm). The injection volume was 20 μL, and the flow rate was maintained at 1.0 mL/min. Water containing 0.05% acetic acid–methanol (30:70, v/v) was used as the mobile phase.
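Quantification of paclitaxel from HPLC runs such as these is typically done against an external calibration curve built from the paclitaxel standard. The authors do not report their calibration details, so the sketch below is only a generic illustration with invented concentrations and peak areas: a linear fit of peak area against standard concentration, then inversion of the fit to estimate the concentration in an unknown such as the MIP-SPE eluate.

```r
# Hypothetical external-standard calibration for paclitaxel (concentrations in ug/mL).
std <- data.frame(
  conc = c(5, 10, 25, 50, 100),
  area = c(1.1e4, 2.2e4, 5.4e4, 1.08e5, 2.15e5)  # invented peak areas
)

fit <- lm(area ~ conc, data = std)   # linear calibration curve: area = a + b * conc
summary(fit)$r.squared               # quick linearity check

# Estimate the concentration of an unknown (e.g., the MIP-SPE eluate) from its peak area.
unknown_area <- 7.6e4
est_conc <- (unknown_area - coef(fit)[["(Intercept)"]]) / coef(fit)[["conc"]]
est_conc  # estimated paclitaxel concentration in the injected sample (ug/mL)
```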
Assay of antitumor activity

The inhibitory effects on tumor cell lines were determined using the protein-staining sulforhodamine B (SRB) assay, which is based on the ability of the SRB dye to bind basic amino acid residues of proteins (Skehan et al. 1990) and performs similarly to the MTT (dimethylthiazol-diphenyltetrazolium bromide) assay as a measure of cytotoxicity. Different human cell lines, including embryonic kidney 293T, cervical cancer HeLa and breast cancer MCF-7 cells, were used as targets for the paclitaxel sample. The cell lines were maintained in DMEM medium supplemented with streptomycin, penicillin and FBS at 37 °C in a humidified atmosphere of 5% CO2. A 100 μL aliquot of cell suspension (1 × 10⁵ cells/mL), with cells in the exponential growth phase, was seeded into each well of a 96-well culture microplate. After incubation for 24 h, paclitaxel solution was added and incubation was continued for another 48 h. The cells were fixed in cold trichloroacetic acid (25 μL, 50%) and stained with 0.4% SRB solution. The protein-bound SRB dye was solubilized with 100 μL Tris–HCl buffer (10 mM, pH 7.4) for determination of the optical density (OD) at 490 nm. The negative control was composed of cells treated with DMSO. The vehicle control was composed of cells without any treatment.

$$\text{Cell viability}\ (\%) = \frac{\text{mean OD of treated cells}}{\text{mean OD of vehicle-treated cells}} \times 100\%$$

$$\text{Inhibition rate}\ (\%) = \left(1 - \frac{\text{mean OD of treated cells}}{\text{mean OD of vehicle-treated cells}}\right) \times 100\%$$

Assays of anti-HIV activity

Assay of inhibition of HIV-1 entry

TZM-BL-croGFP cells were cultured at 37 °C in 5% CO2. When the cells reached 80–90% confluence, they were digested with trypsin, collected by centrifugation at 1000g for 3 min, and washed three times with PBS buffer. The cell density was adjusted to 10⁵ cells/mL. The MIP-purified samples (MIPs) at different concentrations to be tested were added either 2 h before or 2 h after the addition of HIV-1 pseudovirus, and the culture was continued for 24 h. The medium in each well was aspirated and 150 μL of cell lysis buffer was added. The cells were fully lysed at room temperature for 15 min. After lysis was complete, the lysate was collected and centrifuged at 5000g for 1 min to obtain the supernatant. Then 200 μL of the supernatant were mixed evenly with 50 μL of luciferase substrate solution in a 96-well plate. The mixture was placed in a luminometer for determination of the luminescence intensity at 560 nm (Is). The blank control was free of pseudovirus (Ib). The negative control contained only the pseudovirus, without the sample (In). AZT was used as the positive control. The inhibition of pseudovirus entry into the cells was calculated as follows:

$$\text{Inhibition rate}\ (\%) = \frac{I_n - I_s}{I_n - I_b} \times 100\%$$

Assay of HIV-1 protease inhibitory activity

The strain used was E. coli BL21-pPR (harboring a plasmid containing the HIV-1 protease gene). The cells were cultured in LB medium for 12 h. Then 50 μL of the bacterial suspension were transferred to 4 mL of fresh LB liquid medium containing 50 μg/mL kanamycin sulfate. The test sample and 40 μM of the inducer IPTG were added before culture at 37 °C. A 100 μL aliquot was transferred at hourly intervals into a 96-well plate. The growth of the cells was determined by measuring the absorbance at 490 nm.
The culture time was plotted on the abscissa and the absorbance on the ordinate to draw the growth curve of the cells; the slope K of each curve was calculated, and the inhibition rate of HIV-1 protease expression was calculated from these slopes. The protease inhibitor pepstatin A was used as a positive control. In the negative control, LB medium was used instead of the sample, giving slope K0; in the blank control, LB medium was used instead of the sample and no IPTG was added, giving slope K1; the slope of each sample system was Ks. The HIV-1 protease inhibitory activity was calculated as follows:

$$\text{Inhibition rate}\ (\%) = \frac{K_s - K_0}{K_1 - K_0} \times 100\%$$

Assay of HIV-1 integrase inhibitory activity

The plasmid pET28a-LTR, carrying the cloned HIV-1 LTR, was transformed into the E. coli DH5α strain, and the plasmid was extracted after expansion culture. The following 10-μL reaction system was then constructed: 1 μL Tris–HCl buffer (20 mM, pH 8.0), 1 μL β-mercaptoethanol (2 mM), 1 μL MnCl2 (2 mM), 1 μL sample at different concentrations, 3 μL substrate plasmid and 3 μL HIV-1 integrase (10 pmol). The mixture was incubated at 37 °C for 30 min, followed by agarose gel electrophoresis. In this experiment, the control was DBZ, the negative control used Tris–HCl buffer (10 mM, pH 7.4) instead of the sample, and the blank control used Tris–HCl buffer instead of the sample with no HIV-1 integrase added.

Determination of cytokine gene expression levels

Female BALB/c mice (7 weeks old, 20–25 g) were obtained from the Laboratory Animal Center of the Chinese Academy of Military Medical Sciences. All animal experimental procedures were approved by the Animal Research Ethics Committee of the Chinese Academy of Military Medical Sciences. Ten BALB/c mice were randomly divided into two groups, a control group and a paclitaxel group, with 5 mice in each group. Normal saline was injected intraperitoneally into the control group. The MIPs group was injected intraperitoneally at a dose of 50 mg/kg once daily for 7 consecutive days. On the 8th day, both groups of mice were treated with lipopolysaccharide at a dose of 3 mg/kg and euthanized after 12 h. Total splenic RNA was isolated using the Trizol kit. The reverse transcription mixture was composed of 1–5 μg total RNA, 0.5 mM dNTPs, 50 ng oligo(dT) primer, reverse transcription buffer and 3 units of reverse transcriptase. It was heated at 65 °C for 5 min, incubated at 42 °C for 30 min, and finally inactivated by heating to 95 °C for 5 min. The primers for real-time quantitative PCR are shown in Additional file 1: Table S1, and the reaction system for real-time quantitative PCR is shown in Additional file 1: Table S2. The following thermocycler program was used for real-time PCR: 1 min pre-incubation at 95 °C, followed by 40 cycles of 94 °C for 15 s, 55 °C for 30 s and 72 °C for 45 s. The 2^−ΔΔCt method was used to analyze the results, with GAPDH as the internal control. Relative cytokine gene expression was calculated as follows:

$$\Delta C_t = C_{t(\text{target gene})} - C_{t(\text{GAPDH})} \quad (\text{amplification from the same cDNA})$$

$$\Delta\Delta C_t = \Delta C_{t(\text{treated})} - \Delta C_{t(\text{control})}$$
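The relative expression values follow directly from the ΔΔCt formulas above, with fold change given by 2^−ΔΔCt. The short R sketch below illustrates the arithmetic for one target gene (e.g., IL-6) in control and treated mice; the Ct values are invented for illustration, and this is not the authors' analysis script.

```r
# 2^-ddCt relative expression, with GAPDH as internal control (Ct values are invented).
ct <- data.frame(
  group     = c("control", "control", "control", "treated", "treated", "treated"),
  ct_target = c(27.8, 28.1, 27.9, 25.6, 25.9, 25.4),   # e.g., IL-6
  ct_gapdh  = c(18.2, 18.4, 18.1, 18.3, 18.2, 18.4)
)

ct$dct <- ct$ct_target - ct$ct_gapdh                   # dCt per sample

# ddCt relative to the mean dCt of the control group, then fold change = 2^-ddCt.
mean_dct_control <- mean(ct$dct[ct$group == "control"])
ct$fold_change <- 2^-(ct$dct - mean_dct_control)

# Mean and SD of fold change per group, as reported in the Results.
aggregate(fold_change ~ group, data = ct,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```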
Statistical significance was evaluated using analysis of variance (ANOVA, SPSS software version 22; IBM Corp., NY) test followed by the least significant difference (LSD) test at p ≤ 0.05 level. Preparation and HPLC analysis of paclitaxel by molecularly imprinted method The presence of paclitaxel in the crude extracts was confirmed by HPLC analysis. As shown in Fig. 1a, in the fermentation broth, compared with paclitaxel standard, starting peak appeared at the same retention time of about 13 min, which proves that the crude extract from the fermentation broth contains paclitaxel, by the way, the composition of the product was complex as well as the content of paclitaxel was smaller than that of other ingredients. HPLC diagram of paclitaxel from the fermentation broth (a paclitaxel standard and crude extract, b paclitaxel standard and MIPs) Figure 1b shows that, after crude extract was treated by the molecularly imprinted column, the impurities were almost completely separated, and paclitaxel in the sample was considerably enriched and the homogeneity was greatly enhanced. Compared with the paclitaxel standard, the target material was seldom contained in the crude extract. The solution of MIPs eluted was mostly paclitaxel with only a minute extraneous peak. It proved that the synthesized molecularly imprinted polymer had a substantial enriching effect on the extraction of paclitaxel from the fermentation broth. Morphological characterization of the MIPs The apparent morphology of the polymer surface was observed directly by SEM to gain an intuitive understanding of the polymer. In the MIPs, the roughness of the particle surface itself causes the increase in the surface area compared with the NIPs, which possessed a uniform, compact, and smooth shape. The nonporous structure in the NIPs particles was due to the lack of specific binding sites which were created for MIPs and suggested that the MIPs had great potential in application as sorbents (Fig. 2). Scanning electron micrographs of the MIPs (a) and NIPs (b) Effect of paclitaxel on tumor cell proliferation As shown in Fig. 3a, when MIPs concentration was 20 μg/mL, there was no significant effect on embryonic kidney 293T cells (p > 0.05). Moreover, the damage to the cells caused by the addition of DMSO was negligible. Antitumor activity of paclitaxel. a Cytotoxicity of paclitaxel on 293T cells; b inhibitory activity of paclitaxel on MCF-7 cells; c inhibitory activity of paclitaxel on Hela cells. *p < 0.05, **p < 0.01 versus non-treatment with MIPs As can be seen from Fig. 3b, the antitumor activity of the paclitaxel standard was slightly higher than that of MIPs at the same concentration. However, MIPs also exhibited good antitumor activity with an IC50 of 15 μg/mL for human breast cancer MCF-7 cells in a concentration- dependent manner. In Fig. 3c, paclitaxel inhibited cervical cancer Hela cells by 40% at 20 μg/mL, and the antitumor activity against Hela cells was significantly lower than that of MCF-7 cells, which might be related to the mechanism of action. However, concentration dependence was also observed. Inhibitory effect of paclitaxel on the entry of HIV-1 pseudovirus The life cycle of hiv shows HIV-1 enters host cells after 1–2 h of infection, adsorption inhibitors must be added before HIV-1 infects host cells to have a corresponding effect. In this study, samples were added before and 2 h after infection. 
AZT (zidovudine, a listed nucleoside reverse transcriptase inhibitor) was used as the control group to test the inhibitory effect of the paclitaxel of different concentrations on the entry of hiv-1 pseudovirus into TZM-BL cells. As shown in Fig. 4, the positive control azidothymidine (AZT) maintained potent inhibitory activity before and 15 h after infection. The inhibition rate was 90%, and there was no significant difference before and after infection (\(p > 0.05\)). This indicates that AZT can inhibit the entry of HIV-1 pseudovirus to a certain extent, but it does not target the process of virus invasion. Inhibitory effects of paclitaxel on HIV-1 entrance activation induced by pseudovirus. "Positive" refers to the positive control AZT In the pre-protected group, both the paclitaxel standard and MIPs manifested good inhibitory activity, although the paclitaxel sample displayed better and concentration-dependent inhibitory activity. Although the inhibition rate decreased after 2 h of infection, the difference was not significant (\(p > 0.05\)), indicating that the paclitaxel sample had similar inhibitory activity toward AZT in the process of HIV-1 pseudovirus entry. The target is not limited to viral entry. Inhibitory effect of paclitaxel on HIV-1 protease activity Pepstatin A (an HIV-1 protease inhibitor) served as a positive control. At 80 μg/mL, pepstatin A brought about a certain degree of recovery of cell growth (Fig. 5). As shown in Fig. 5, the paclitaxel sample inhibited HIV-1 protease, but the naturally extracted paclitaxel sample inhibited protease activity to a less extent than the paclitaxel standard. This might be due to the damage of the biological activity of paclitaxel caused by the extraction process. However, at an effective concentration of 20 μg/mL, the inhibitory effect of paclitaxel on HIV-1 protease was approximately similar to that of the positive control pepstatin A (80 μg/mL). Thus paclitaxel demonstrated good HIV-1 protease inhibitory activity. Effect of HIV-1 protease on IPTG-induced E. coli growth. (Positive control: E. coli in the presence of IPTG and pepstatin A; negative control: E. coli in the presence of IPTG; blank control: E. coli only) Inhibitory effects of different concentrations of paclitaxel, MIPs and pepstatin A on HIV-1 protease. *p < 0.05, **p < 0.01 versus positive Inhibitory effect of paclitaxel on HIV-1 integrase activity In the in vitro assay model, the purified His-tagged HIV-1 integrase was applied to the substrate plasmid pET28a-LTR using HIV-1 integrase. The cleavage activity of the sample inhibiting HIV-1 integrase was characterized by detecting changes in plasmid linearity. The positive control used was raltegravir (the only commercially available HIV-1 integrase inhibitor). As shown in Fig. 7, raltegravir was able to significantly inhibit integrase cleavage activity, and the open-loop state of the plasmid under its action was comparable to that of the blank control group. The inhibitory activity of MIPs was better than the negative control. Inhibitory effect of paclitaxel on HIV-1 integrase Effect of MIPs on cytokine gene expression As shown in Fig. 8, the total RNA of mouse spleen lymphocytes was clearly separated into three bands. The upper two bands were 28s rRNA and 18s rRNA. The two bands were well separated and the integrity of the sample was good, and the next one was 5s rRNA. 
The A260/A280 values were 2.05 and 2.05 respectively, indicating high RNA purity, and the samples could be reverse transcribed and further experiments could be conducted. Total RNA in mouse spleen lymphocytes. 1: 0.9% NaCl, 2: MIPs (paclitaxel) The increase of DNA products was monitored in real time by determining the change of fluorescence intensity. As shown in Fig. 9, after the intraperitoneal injection of MIPs, there were some changes in the cytokines, especially the expression of IL-6 was significantly up-regulated. IL-6 responded to tissue damage and stimulated the production of other cytokines. As an anti-inflammatory factor, IL-6 can inhibit TNF-α, IL-1 and IL-10. It was found from the table that both TNF-α and IL-10 were down-regulated, while IFN-γ and IL-4 remained almost unaltered. Effects of MIPs on the expression of target genes (n = 5) Paclitaxel (taxol) belonging to a class of complex diterpenoids, called taxanes is a effective anti-cancer drug against breast cancer, and it has received extensive attention due to its unique anticancer mechanism. Initially, paclitaxel was isolated from the bark of yew (Shankar Naik 2019). Since then, paclitaxel has been isolated from some plants belonging to genus Taxus (family Taxaceae, syn Coniferales) and other genera of the same family such as AmenoTaxus, Austro Taxus, PseudoTaxus (Flores-Bustamante et al. 2010; Hao et al. 2013). The traditional methods of extracting paclitaxel from the bark of Taxus species have the disadvantage of high cost and environmental damage. Unprecedented yew cutting, low amounts of paclitaxel production, laborious and slow process of paclitaxel extraction prompted the discovery of the alternative source of paclitaxel (Flores-Bustamante et al. 2010). Thus researchers focus on paclitaxel production by means of several modern techniques, for example chemical synthesis, and plant tissue culture (Jennewein and Croteau 2001) and microbial fermentation etc. (Frense 2007; Visalakchi and Muthumary 2010). Each method has its own pros and cons. However, by microbial fermentation method, it is easy to reduce costs of production and increase the yield of paclitaxel, which is very economic and practical. In general, microbial fermentation has demonstrated that isolation and identification of taxol-producing fungi is a good strategy in the production of paclitaxel. Molecular imprinting technology is now mature, and many researchers use this technology to separate and obtain the desired product (Li et al. 2013, 2015). In the process of synthesizing molecularly imprinted polymers, the difficulty lies in the choice of functional monomers and porogens. Since the structure of paclitaxel has various functional groups such as a phenolic hydroxyl group, an ester group, an amino group, and a hydroxyl group, it has both an acidic as well as a basic functional monomer (Li et al. 2015). Previously, we studied MAA (acidic), AM (neutral) and 4-vp (alkaline) as functional monomers. The results showed that when MAA was used as a functional monomer, it had good specificity for the enrichment of paclitaxel, and the recovery attained was 70%. At the same time, acetone was found to have good specificity as a porogen, and the recovery was 83%. In addition, when the ratio of the template: functional monomer: cross-linker was 1:6:30, the adsorption rate and recovery rate of the paclitaxel sample are the highest. 
After the molecularly imprinted polymer was determined, the crude paclitaxel in the bark of the yew was first examined by HPLC to determine whether it had an enrichment effect (Li et al. 2017). Figure 1 shows that a single component consistent with the retention time of the paclitaxel standard was obtained, indicating that the synthesized molecularly imprinted polymer has a specific enrichment effect on the paclitaxel sample. Paclitaxel has been reported to have good antitumor activity, especially against MCF-7 cells (Kasaei et al. 2017; Wang et al. 2015). In this study, we examined the antitumor activity of the enriched paclitaxel samples. The experimental results show that the paclitaxel sample enriched by the synthesized molecularly imprinted polymer exhibited a good anti-proliferative activity toward both MCF-7 and Hela cells (Figs. 3, 4). Moreover, the antitumor activity of the paclitaxel sample was higher than that of the paclitaxel standard. It is speculated that the activity of the natural product was retained, and the antitumor activity was displayed. Up to now, there are few reports on the antiviral activity of paclitaxel (Krawczyk et al. 2005; Stebbing et al. 2003), in particular, there are very few reports on anti-HIV activity of microbial paclitaxel by molecular imprinted polymer. In this paper, we explored the potential of paclitaxel in the prevention and treatment of AIDS and conducted research in three areas: HIV-1 viral invasion, HIV-1 protease activity, and HIV-1 integrase activity. Based on the life cycle of viral replication, we designed to add paclitaxel samples 2 h before virus invading cells and 2 h after invasion, and to determine whether paclitaxel samples inhibit HIV-1 virus invasion into host cells with a fluorescence microplate reader. Paclitaxel sample inhibited the entry of HIV-1 virus into TZM-BL cells, and addition of the pre-invasive drug had a higher inhibition rate on the virus, indicating that in future practice, the method of preventive dosing can be employed to prevent and treat AIDS more effectively (Fig. 4). Interestingly, the paclitaxel sample not only acts on the process before the virus invades, but also has an inhibitory effect upon viral invasion of the host cells. This result provides a theoretical basis for the subsequent inhibition of HIV-1 protease and integrase activity by paclitaxel samples (Figs. 5, 6, 7). The results showed that the paclitaxel sample had different inhibitory activities against the two enzymes and had stronger HIV-1 protease inhibitory activity. We assumed that it may be related to the model of detection. The test of integrase inhibition is an in vivo detection model that can directly act on host cells; the assay of protease inhibition is an in vitro detection model, but only the degree of cleavage of the plasmid is detected. The body is in an immune-regulated state under normal conditions, and once stimulated by the outside world, the immune system will be disordered (Yuan et al. 2010). Therefore, we can detect the effect of paclitaxel samples from the aspect of cytokine changes. Under normal conditions, the body's immune cells Th1/Th2 are in dynamic equilibrium (Yuan et al. 2010). 
Many studies have disclosed that in the early stage of HIV-1 virus-infected host, the immune balance is disrupted, causing the immune cells Th1 to shift to Th2, and the immune-related cytokines are characterized by down-regulation of IL-2 and up-regulation of IL-4 and IL-10, and The pro-inflammatory factor TNF-α, which is closely related to the virus, is activated, allowing NF-kB to bind to the LTR of HIV-1 virus, thereby activating viral replication, causing immune imbalance and disease progression (Coghill et al. 2017; Otiti-Sengeri et al. 2018). As shown in Fig. 9, MIPs can down-regulate the expression of IL-10 and up-regulate the up-regulation of IL-6, which has a certain positive effect on balancing the cytokines in immune imbalance. In this study, we obtained paclitaxel from the fermentation broth of endophytic fungus by molecular imprinting technology and MIPs were used to investigate the antiviral activity, antitumor activity and immunomodulatory effects of the paclitaxel. The findings enriched the application value of paclitaxel, and provided theoretical support for the development of small molecule natural products. Availability of date and materials All datasets on which the conclusions of the manuscript rely are presented in the main paper MIPs: molecular imprinted polymers non molecularly imprinted polymer 4-vp: 4-vinylpyridine AA: EGDMA: dimethacrylate AIBN: 2,2′-azobisiso-butyronitrile DMSO: DMEM: Dulbecco's modified eagle medium MTT: dimethylthiazol diphenyltetrazolium bromide SRB: sulforhodamine FBS: 293T cell: embryonic kidney cell HeLa cell: cervical cancer cell MCF-7: breast cancer cells HIV-1: human immunodeficiency virus-1 IPTG: isopropyl-beta-d-thiogalacto-pyranoside GAPDH: IL-2, -4, -6, -10: interleukin-2, -4, -6, -10 TNF-α: tumor necrosis factor-α Coghill AE, Schenk JM, Mahkoul Z, Orem J, Phipps W, Casper C (2017) Omega-3 decreases interleukin-6 levels in HIV and HHV-8 co-infected patients: results from a randomized supplementation trial in Uganda. AIDS 32(4):505–512. https://doi.org/10.1097/QAD.0000000000001722 Das A, Rahman MI, Ferdous AS, Amin A, Rahman MM, Nahar N, Uddin MA, Islam MR, Khan H (2017) An endophytic Basidiomycete, Grammothele lineata, isolated from Corchorus olitorius, produces paclitaxel that shows cytotoxicity. PLoS ONE 12(6):e0178612. https://doi.org/10.1371/journal.pone.0178612 Flores-Bustamante ZR, Rivera-Orduna FN, Martinez-Cardenas A, Flores-Cotera LB (2010) Microbial paclitaxel: advances and perspectives. J Antibiot 63(8):460–467. https://doi.org/10.1038/ja.2010.83 Frense D (2007) Taxanes: perspectives for biotechnological production. Appl Microbiol Biot 73(6):1233–1240. https://doi.org/10.1007/s00253-006-0711-0 Hao X, Pan J, Zhu X (2013) Taxol producing fungi. In: Ramawat KG, Mérillon J-M (eds) Natural products: phytochemistry, botany and metabolism of alkaloids, phenolics and terpenes. Springer, Berlin, pp 2797–2812 Herdeg C, Oberhoff M, Baumbach A, Blattner A, Axel DI, Schroder S, Heinle H, Karsch KR (2000) Local paclitaxel delivery for the prevention of restenosis: biological effects and efficacy in vivo. J Am Coll Cardiol 35(7):1969–1976. https://doi.org/10.1016/S0735-1097(00)00614-8 Ishkuh FA, Javanbakht M, Esfandyari-Manesh M, Dinarvand R, Atyabi F (2014) Synthesis and characterization of paclitaxel-imprinted nanoparticles for recognition and controlled release of an anticancer drug. J Mater Sci 49(18):6343–6352. 
https://doi.org/10.1007/s10853-014-8360-7 Ismaiel AA, Ahmed AS, Hassan IA, El-Sayed ER, Karam El-Din AA (2017) Production of paclitaxel with anticancer activity by two local fungal endophytes, Aspergillus fumigatus and Alternaria tenuissima. Appl Microbiol Biot 101(14):5831–5846. https://doi.org/10.1007/s00253-017-8354-x Jennewein S, Croteau R (2001) Taxol: biosynthesis, molecular genetics, and biotechnological applications. Appl Microbiol Biot 57:13–19. https://doi.org/10.1007/s002530100757 Kasaei A, Mobini-Dehkordi M, Mahjoubi F, Saffar B (2017) Isolation of taxol-producing endophytic fungi from iranian yew through novel molecular approach and their effects on human breast cancer cell line. Curr Microbiol 74(6):702–709. https://doi.org/10.1007/s00284-017-1231-0 Krawczyk E, Luczak M, Majewska A (2005) Antiviral and cytotoxic activities of new derivatives of natural sesquiterpenes and taxol. Med Dosw Mikrobiol 57(1):93–99 Kusari S, Hertweck C, Spitellert M (2012) Chemical ecology of endophytic fungi: origins of secondary metabolites. Chem Biol 19(7):792–798. https://doi.org/10.1016/j.chembiol.2012.06.004 Lasala JM, Stone GW, Dawkins KD, Serruys PW, Colombo A, Grube E, Koglin J, Ellis S (2006) An overview of the TAXUS express, paclitaxel-eluting stent clinical trial program. J Interv Cardiol 19(5):422–431. https://doi.org/10.1111/j.1540-8183.2006.00183.x Li N, Ng TB, Wong JH, Qiao JX, Zhang YN, Zhou R, Chen RR, Liu F (2013) Separation and purification of the antioxidant compounds, caffeic acid phenethyl ester and caffeic acid from mushrooms by molecularly imprinted polymer. Food Chem 139(1–4):1161–1167. https://doi.org/10.1016/j.foodchem.2013.01.084 Li N, Zhao LJ, Ng TB, Wong JH, Yan Y, Shi Z, Liu F (2015) Separation and purification of the antioxidant compound hispidin from mushrooms by molecularly imprinted polymer. Appl Microbiol Biot 99(18):7569–7577. https://doi.org/10.1007/s00253-015-6499-z Li P, Wang T, Lei F, Peng X, Wang H, Qin L, Jiang J (2017) Preparation and evaluation of paclitaxel-imprinted polymers with a rosin-based crosslinker as the stationary phase in high-performance liquid chromatography. J Chromatogr A 1502:30–37. https://doi.org/10.1016/j.chroma.2017.04.048 Oberlies NH, Kroll DJ (2004) Camptothecin and taxol: historic achievements in natural products research. J Nat Prod 67(2):129–135. https://doi.org/10.1021/np030498t Otiti-Sengeri J, Colebunders R, Reynolds SJ, Muwonge M, Nakigozi G, Kiggundu V, Nalugoda F, Nakanjako D (2018) Elevated inflammatory cytokines in aqueous cytokine profile in HIV-1 infected patients with cataracts in Uganda. BMC Ophthalmol 18:12. https://doi.org/10.1186/s12886-018-0680-y Shankar Naik B (2019) Developments in taxol production through endophytic fungal biotechnology: a review. Orient Pharm Exp Med 19(1):1–13. https://doi.org/10.1007/s13596-018-0352-8 Skehan P, Storeng R, Scudiero D, Monks A, McMahon J, Vistica D, Warren JT, Bokesch H, Kenney S, Boyd MR (1990) New colorimetric cytotoxicity assay for anticancer-drug screening. J Natl Cancer Inst 82(13):1107–1112 Somjaipeng S, Medina A, Kwasna H, Ordaz Ortiz J, Magan N (2015) Isolation, identification, and ecology of growth and taxol production by an endophytic strain of Paraconiothyrium variabile from English yew trees (Taxus baccata). Fungal Biol 119(11):1022–1031. 
https://doi.org/10.1016/j.funbio.2015.07.007 Stebbing J, Wildfire A, Portsmouth S, Powles T, Thirlwell C, Hewitt P, Nelson M, Patterson S, Mandalia S, Gotch F (2003) Paclitaxel for anthracycline-resistant AIDS-related Kaposi's sarcoma: clinical and angiogenic correlations. Ann Oncol 14:1660–1666. https://doi.org/10.1093/annonc/mdg461 Visalakchi S, Muthumary J (2010) Taxol (anticancer drug) producing endophytic fungi: an overview. Int J Pharma Bio Sci 1(3):1–9 Wang X, Wang C, Sun YT, Sun CZ, Zhang Y, Wang XH, Zhao K (2015) Taxol produced from endophytic fungi induces apoptosis in human breast, cervical and ovarian cancer cells. Asian Pac J Cancer Prev 16(1):125–131. https://doi.org/10.7314/apjcp.2015.16.1.125 Yuan L, Wu LH, Chen JA, Wu QA, Hu SH (2010) Paclitaxel acts as an adjuvant to promote both Th1 and Th2 immune responses induced by ovalbumin in mice. Vaccine 28(27):4402–4410. https://doi.org/10.1016/j.vaccine.2010.04.046 Zhang B, Maiti A, Shively S, Lakhani F, McDonald-Jones G, Bruce J, Lee EB, Xie SX, Joyce S, Li C, Toleikis PM, Lee VMY, Trojanowski JQ (2005) Microtubule-binding drugs offset tau sequestration by stabilizing microtubules and reversing fast axonal transport deficits in a tauopathy model. Proc Natl Acad Sci USA 102(1):227–231. https://doi.org/10.1073/pnas.0406361102 Zhou X, Zhu H, Liu L, Lin J, Tang K (2010) A review: recent advances and future prospects of taxol-producing endophytic fungi. Appl Microbiol Biot 86(6):1707–1717. https://doi.org/10.1007/s00253-010-2546-y This study has been funded by National Natural Science Foundation (Grant No. 31870006) and the award of Health and Medical Research Fund (No. 12131221) from Food and Health Bureau, The Government of Hong Kong Special Administrative Region. Junhyok Ryang and Yan Yan contributed equally to this work Department of Microbiology, The Key Laboratory of Molecular Microbiology and Technology, Ministry of Education, College of Life Science, Nankai University, Tianjin, 300071, China Junhyok Ryang , Yan Yan , Yangyang Song & Fang Liu School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China Tzi Bun Ng Search for Junhyok Ryang in: Search for Yan Yan in: Search for Yangyang Song in: Search for Fang Liu in: Search for Tzi Bun Ng in: JR, YY and FL designed the experiments. JR, YY and YS executed the experiments, analyzed all data and produced figures. JR, FL and TNB provided technical and theoretical support. FL conceived and coordinated the study, and helped in the drafting of the manuscript. JR and FL wrote and revised the manuscript. TBN gave advice in the study and edited the manuscript. All authors read and approved the final manuscript. Correspondence to Fang Liu or Tzi Bun Ng. The investigations described herein did not involve human participants, human data or human tissues. All applicable international guidelines for the care and use of animals were followed. Additional file 1: Table S1. Primers for real-time quantitative PCR. Table S2. Systems for real-time quantitative PCR. Ryang, J., Yan, Y., Song, Y. et al. Anti-HIV, antitumor and immunomodulatory activities of paclitaxel from fermentation broth using molecular imprinting technique. AMB Expr 9, 194 (2019) doi:10.1186/s13568-019-0915-1 Received: 15 November 2019 Taxol Antitumor activity Anti-HIV activity Molecularly imprinted polymer Immunomodulatory activity
Does Nori's fundamental group scheme appear in Kim's work? This is a very vague question. I was just reading the introduction to M. Kim's article on motivic fundamental groups and the theorem of Siegel and noticed that there are essentially three fundamental groups appearing in his work: the de Rham fundamental group, the crystalline fundamental group and the étale fundamental group. Now, does Nori's fundamental group scheme also appear somewhere in his work? If not, why not? ag.algebraic-geometry Harry It depends on what you call Nori's fundamental group scheme, of course. Nori himself has given several versions of his fundamental group scheme, and it has been vastly generalized. If you think of the classical definition (the Tannaka group of the category of essentially finite vector bundles), I don't think it does. It is of a somewhat different nature from the Tannaka groups you list above: it is pro-finite, whereas the ones appearing in your list are all pro-unipotent. They are not designed for the same purposes: Nori's fundamental group was built to take into account positive-characteristic phenomena, especially torsors under finite group schemes that are not necessarily étale, whereas Kim's fundamental groups, both the de Rham version and the étale one, are defined for a variety over a field of characteristic zero, and my vague understanding of the subject is that all of them are algebraic incarnations of the unipotent envelope of the (non-existent) topological fundamental group. Of course, this is not so simple. Nori defined in his PhD thesis a unipotent version of his fundamental group scheme which is related to Kim's de Rham fundamental group. Also, a version of Nori's fundamental group scheme (rather, groupoid) was recently used in characteristic zero by Esnault and Hai to study the section conjecture, which is deeply interconnected with Kim's work. Niels
Compatibility of the Feigin-Frenkel Isomorphism and the Harish-Chandra Isomorphism for jet algebras by Masoud Kamgarpour PDF Let $\mathfrak {g}$ be a simple finite-dimensional complex Lie algebra with a Cartan subalgebra $\mathfrak {h}$ and Weyl group $W$. Let $\mathfrak {g}_n$ denote the Lie algebra of $n$-jets on $\mathfrak {g}$. A theorem of Raïs and Tauvel and Geoffriau identifies the centre of the category of $\mathfrak {g}_n$-modules with the algebra of functions on the variety of $n$-jets on the affine space $\mathfrak {h}^*/W$. On the other hand, a theorem of Feigin and Frenkel identifies the centre of the category of critical level smooth modules of the corresponding affine Kac-Moody algebra with the algebra of functions on the ind-scheme of opers for the Langlands dual group. We prove that these two isomorphisms are compatible by defining the higher residue of opers with irregular singularities. We also define generalized Verma and Wakimoto modules and relate them by a nontrivial morphism. A. A. Beilinson and V. G. Drinfeld. Quantization of Hitchin's fibration and Langlands' program. http://www.math.uchicago.edu/ mitya/langlands/hitchin/BD-hitchin.pdf, 1997. A. A. Beilinson and V. G. Drinfeld. Opers. http://arxiv.org/abs/math/0501398, 2005. Colin J. Bushnell and Philip C. Kutzko, Semisimple types in $\textrm {GL}_n$, Compositio Math. 119 (1999), no. 1, 53–97. MR 1711578, DOI 10.1023/A:1001773929735 A. V. Chervov and A. I. Molev, On higher-order Sugawara operators, Int. Math. Res. Not. IMRN 9 (2009), 1612–1635. MR 2500972, DOI 10.1093/imrn/rnn168 V. G. Drinfel′d and V. V. Sokolov, Lie algebras and equations of Korteweg-de Vries type, Current problems in mathematics, Vol. 24, Itogi Nauki i Tekhniki, Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1984, pp. 81–180 (Russian). MR 760998 Michel Duflo, Opérateurs différentiels bi-invariants sur un groupe de Lie, Ann. Sci. École Norm. Sup. (4) 10 (1977), no. 2, 265–288 (French, with English summary). MR 444841 Roman M. Fedorov, Irregular Wakimoto modules and the Casimir connection, Selecta Math. (N.S.) 16 (2010), no. 2, 241–266. MR 2679482, DOI 10.1007/s00029-010-0019-x Boris Feigin and Edward Frenkel, Affine Kac-Moody algebras at the critical level and Gel′fand-Dikiĭ algebras, Infinite analysis, Part A, B (Kyoto, 1991) Adv. Ser. Math. Phys., vol. 16, World Sci. Publ., River Edge, NJ, 1992, pp. 197–215. MR 1187549, DOI 10.1142/s0217751x92003781 B. Feigin, E. Frenkel, and V. Toledano Laredo, Gaudin models with irregular singularities, Adv. Math. 223 (2010), no. 3, 873–948. MR 2565552, DOI 10.1016/j.aim.2009.09.007 Edward Frenkel and Dennis Gaitsgory, Local geometric Langlands correspondence and affine Kac-Moody algebras, Algebraic geometry and number theory, Progr. Math., vol. 253, Birkhäuser Boston, Boston, MA, 2006, pp. 69–260. MR 2263193, DOI 10.1007/978-0-8176-4532-8_{3} Daniel Friedan, Emil Martinec, and Stephen Shenker, Conformal invariance, supersymmetry and string theory, Nuclear Phys. B 271 (1986), no. 1, 93–165. MR 845945, DOI 10.1016/0550-3213(86)90356-1 Edward Frenkel, Wakimoto modules, opers and the center at the critical level, Adv. Math. 195 (2005), no. 2, 297–404. MR 2146349, DOI 10.1016/j.aim.2004.08.002 Edward Frenkel, Langlands correspondence for loop groups, Cambridge Studies in Advanced Mathematics, vol. 103, Cambridge University Press, Cambridge, 2007. MR 2332156 François Geoffriau, Homomorphisme de Harish-Chandra pour les algèbres de Takiff généralisées, J. Algebra 171 (1995), no. 
2, 444–456 (French). MR 1315906, DOI 10.1006/jabr.1995.1021 M Kamgarpour, Analogies between smooth representation of p-adic groups and affine Kac-Moody algebras, Talk at Oberwolfach session on enveloping algebras, 2012. Nicholas M. Katz, On the calculation of some differential Galois groups, Invent. Math. 87 (1987), no. 1, 13–61. MR 862711, DOI 10.1007/BF01389152 M. Kamgarpour and T. Schedler, Geometrization of principal series of reductive $p$-groups. http://arxiv.org/abs/1011.4529, 2011. Masoud Kamgarpour and Travis Schedler, Ramified Satake isomorphisms for strongly parabolic characters, Doc. Math. 18 (2013), 1275–1300. MR 3138847 Alexander Molev, Casimir elements for certain polynomial current Lie algebras. In Group 21, Physical Applications and Mathematical Aspects of Geometry, Groups, and Algebras, volume 1, pages 172–176. World Scientific, Singapore, 1997. A. I. Molev, Feigin-Frenkel center in types $B$, $C$ and $D$, Invent. Math. 191 (2013), no. 1, 1–34. MR 3004777, DOI 10.1007/s00222-012-0390-7 Alan Roche, Types and Hecke algebras for principal series representations of split reductive $p$-adic groups, Ann. Sci. École Norm. Sup. (4) 31 (1998), no. 3, 361–413 (English, with English and French summaries). MR 1621409, DOI 10.1016/S0012-9593(98)80139-0 Mustapha Raïs and Patrice Tauvel, Indice et polynômes invariants pour certaines algèbres de Lie, J. Reine Angew. Math. 425 (1992), 123–140 (French). MR 1151316 S. J. Takiff, Invariant polynomials on Lie algebras of inhomogeneous unitary and special orthogonal groups, Trans. Amer. Math. Soc. 170 (1972), 221–230. MR 304564, DOI 10.1090/S0002-9947-1972-0304564-5 Benjamin J. Wilson, Highest-weight theory for truncated current Lie algebras, J. Algebra 336 (2011), 1–27. MR 2802528, DOI 10.1016/j.jalgebra.2011.04.015 Retrieve articles in Transactions of the American Mathematical Society with MSC (2010): 17B67, 17B69, 22E50, 20G25 Retrieve articles in all journals with MSC (2010): 17B67, 17B69, 22E50, 20G25 Masoud Kamgarpour Affiliation: School of Mathematics and Physics, The University of Queensland, St. Lucia, Queensland 4072, Australia Email: [email protected] Received by editor(s): August 21, 2013 Received by editor(s) in revised form: February 7, 2014 Published electronically: October 3, 2014 Additional Notes: The author was supported by the Australian Research Council Discovery Early Career Research Award MSC (2010): Primary 17B67, 17B69, 22E50, 20G25
A.Neves, math.stackexchange.com user profile (http://math.stackexchange.com/users/1747/neves). Questions asked:
- How to prove this identity $\pi=\sum\limits_{k=-\infty}^{\infty}\left(\frac{\sin(k)}{k}\right)^{2}$? (sequences-and-series, trigonometry, pi) (asked Mar 15, 2013, math.stackexchange.com)
- Prime powers, patterns similar to $\lbrace 0,1,0,2,0,1,0,3\ldots \rbrace$ and formulas for $\sigma_k(n)$ (real-analysis, number-theory, prime-numbers, riemann-zeta, arithmetic-functions) (asked Dec 29, 2010)
- How to draw a line between two paragraphs of my text? (paragraphs, rules) (asked Oct 13, 2010, tex.stackexchange.com)
- Riemann's zeta as a continued fraction over prime numbers (number-theory, reference-request, prime-numbers, riemann-zeta, continued-fractions) (asked Apr 8, 2014)
- Proving that $\frac{\pi}{4}=1-\frac{\eta(1)}{2}+\frac{\eta(2)}{4}-\frac{\eta(3)}{8}+\cdots$ (sequences-and-series, pi, transcendental-numbers, dirichlet-series, constants) (asked Sep 3, 2013)
- Finding the value of $\sum\limits_{k=0}^{\infty}\frac{2^{k}}{2^{2^{k}}+1}$ (sequences-and-series, summation) (asked Apr 8, 2013)
- Arithmetic of continued fractions, does it exist? (reference-request, continued-fractions) (asked Oct 26, 2011)
- Proving that $\frac{\pi^{3}}{32}=1-\sum_{k=1}^{\infty}\frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}}$ (sequences-and-series, riemann-zeta, pi, constants) (asked Dec 19, 2013)
- Proving that $\left(\frac{\pi}{2}\right)^{2}=1+\sum_{k=1}^{\infty}\frac{(2k-1)\zeta(2k)}{2^{2k-1}}$ (sequences-and-series, riemann-zeta, pi) (asked Dec 18, 2013)
- How to prove by arithmetical means that $\sum\limits_{k=1}^\infty \frac{((k-1)!)^2}{(2k)!} =\frac{1}{3}\sum\limits_{k=1}^{\infty}\frac{1}{k^{2}}$ (calculus, sequences-and-series) (asked Jan 17, 2012)
Related questions: Funny identities; Golden Number Theory; Proving $\prod_{j=1}^n \left(4-\frac2{j}\right)$ is an integer; Patterns in Prime numbers, and the null hypothesis; Lebesgue integral basics
The XYZ$^2$ hexagonal stabilizer code Basudha Srivastava1, Anton Frisk Kockum2, and Mats Granath1 1Department of Physics, University of Gothenburg, SE-41296 Gothenburg, Sweden 2Department of Microtechnology and Nanoscience, Chalmers University of Technology, SE-41296 Gothenburg, Sweden We consider a topological stabilizer code on a honeycomb grid, the "XYZ$^2$" code. The code is inspired by the Kitaev honeycomb model and is a simple realization of a "matching code" discussed by Wootton [J. Phys. A: Math. Theor. 48, 215302 (2015)], with a specific implementation of the boundary. It utilizes weight-six ($XYZXYZ$) plaquette stabilizers and weight-two ($XX$) link stabilizers on a planar hexagonal grid composed of $2d^2$ qubits for code distance $d$, with weight-three stabilizers at the boundary, stabilizing one logical qubit. We study the properties of the code using maximum-likelihood decoding, assuming perfect stabilizer measurements. For pure $X$, $Y$, or $Z$ noise, we can solve for the logical failure rate analytically, giving a threshold of 50%. In contrast to the rotated surface code and the XZZX code, which have code distance $d^2$ only for pure $Y$ noise, here the code distance is $2d^2$ for both pure $Z$ and pure $Y$ noise. Thresholds for noise with finite $Z$ bias are similar to the XZZX code, but with markedly lower sub-threshold logical failure rates. The code possesses distinctive syndrome properties with unidirectional pairs of plaquette defects along the three directions of the triangular lattice for isolated errors, which may be useful for efficient matching-based or other approximate decoding. Featured image: Distance 5 XYX$^2$ code with link, plaquette, and boundary stabilizers, and syndromes for isolated $X$, $Y$ and $Z$ errors. Quantum computers are more sensitive to noise than their classical digital counterparts. For the latter, errors are bit-flip errors (0 flipped to 1, or vice versa), whereas for the former, phase-flip errors (e.g. 0+1 flipped to 0-1) are also an issue since quantum bits (qubits) can be in a superposition of 0 and 1. Topological stabilizer codes are quantum error-correcting codes that store quantum information in logical qubits, consisting of groups of physical qubits, and protect against errors by repeated local measurements. For biased noise, for example, if phase-flip errors are more likely than bit-flip errors, these stabilizer measurements can be modified to increase the threshold of the code, that is, the physical error rate below which the logical error rate decreases by increasing the number of physical qubits in the code. We propose a stabilizer code implemented on a hexagonal (honeycomb) grid of physical qubits and show that, under the assumption of perfect measurements, the code possesses high threshold values for highly biased noise. Building a quantum computer with a high connectivity that allows for measuring the six qubits of a hexagon may thus provide an advantage over lower-connectivity structures such as a square lattice. @article{Srivastava2022xyzhexagonal, doi = {10.22331/q-2022-04-27-698}, url = {https://doi.org/10.22331/q-2022-04-27-698}, title = {The {XYZ}{$^2$} hexagonal stabilizer code}, author = {Srivastava, Basudha and Frisk Kockum, Anton and Granath, Mats}, journal = {{Quantum}}, issn = {2521-327X}, publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}}, volume = {6}, pages = {698}, month = apr, year = {2022} } [1] James R Wootton. 
``A family of stabilizer codes for $D({{\mathbb{Z}}_{2}})$ anyons and Majorana modes''. Journal of Physics A: Mathematical and Theoretical 48, 215302 (2015). [2] Michael A. Nielsen and Isaac L. Chuang. ``Quantum Computation and Quantum Information''. Cambridge University Press. Cambridge (2010). [3] I.M. Georgescu, S. Ashhab, and Franco Nori. ``Quantum simulation''. Reviews of Modern Physics 86, 153–185 (2014). https:/​/​doi.org/​10.1103/​RevModPhys.86.153 [4] Ashley Montanaro. ``Quantum algorithms: an overview''. npj Quantum Information 2, 15023 (2016). https:/​/​doi.org/​10.1038/​npjqi.2015.23 [5] G Wendin. ``Quantum information processing with superconducting circuits: a review''. Reports on Progress in Physics 80, 106001 (2017). https:/​/​doi.org/​10.1088/​1361-6633/​aa7e1a [6] John Preskill. ``Quantum Computing in the NISQ era and beyond''. Quantum 2, 79 (2018). [7] Sam McArdle, Suguru Endo, Alán Aspuru-Guzik, Simon C. Benjamin, and Xiao Yuan. ``Quantum computational chemistry''. Reviews of Modern Physics 92, 015003 (2020). [8] Bela Bauer, Sergey Bravyi, Mario Motta, and Garnet Kin-Lic Chan. ``Quantum Algorithms for Quantum Chemistry and Quantum Materials Science''. Chemical Reviews 120, 12685–12717 (2020). https:/​/​doi.org/​10.1021/​acs.chemrev.9b00829 [9] Román Orús, Samuel Mugel, and Enrique Lizaso. ``Quantum computing for finance: Overview and prospects''. Reviews in Physics 4, 100028 (2019). https:/​/​doi.org/​10.1016/​j.revip.2019.100028 [10] M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. ``Variational quantum algorithms''. Nature Reviews Physics 3, 625–644 (2021). [11] Peter W. Shor. ``Scheme for reducing decoherence in quantum computer memory''. Physical Review A 52, R2493–R2496 (1995). https:/​/​doi.org/​10.1103/​PhysRevA.52.R2493 [12] A. M. Steane. ``Error Correcting Codes in Quantum Theory''. Physical Review Letters 77, 793–797 (1996). [13] Daniel Gottesman. ``Stabilizer Codes and Quantum Error Correction'' (1997). arXiv:quant-ph/​9705052. [14] Barbara M. Terhal. ``Quantum error correction for quantum memories''. Reviews of Modern Physics 87, 307–346 (2015). [15] Steven M. Girvin. ``Introduction to quantum error correction and fault tolerance'' (2021). arXiv:2111.08894. [16] S. B. Bravyi and A. Yu. Kitaev. ``Quantum codes on a lattice with boundary'' (1998). arXiv:quant-ph/​9811052. [17] Eric Dennis, Alexei Kitaev, Andrew Landahl, and John Preskill. ``Topological quantum memory''. Journal of Mathematical Physics 43, 4452–4505 (2002). [18] A.Yu. Kitaev. ``Fault-tolerant quantum computation by anyons''. Annals of Physics 303, 2–30 (2003). [19] Robert Raussendorf and Jim Harrington. ``Fault-Tolerant Quantum Computation with High Threshold in Two Dimensions''. Physical Review Letters 98, 190504 (2007). [20] Austin G. Fowler, Matteo Mariantoni, John M. Martinis, and Andrew N. Cleland. ``Surface codes: Towards practical large-scale quantum computation''. Physical Review A 86, 032324 (2012). [21] P.W. Shor. ``Fault-tolerant quantum computation''. In Proceedings of 37th Conference on Foundations of Computer Science. Pages 56–65. IEEE Comput. Soc. Press (1996). https:/​/​doi.org/​10.1109/​SFCS.1996.548464 [22] Emanuel Knill, Raymond Laflamme, and Wojciech H. Zurek. ``Resilient Quantum Computation''. Science 279, 342–345 (1998). https:/​/​doi.org/​10.1126/​science.279.5349.342 [23] J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. 
Sank, J. Y. Mutus, B. Campbell, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.-C. Hoi, C. Neill, P. J. J. O'Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, A. N. Cleland, and John M. Martinis. ``State preservation by repetitive error detection in a superconducting quantum circuit''. Nature 519, 66–69 (2015). [24] Maika Takita, Andrew W. Cross, A. D. Córcoles, Jerry M. Chow, and Jay M. Gambetta. ``Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits''. Physical Review Letters 119, 180501 (2017). [25] Christian Kraglund Andersen, Ants Remm, Stefania Lazar, Sebastian Krinner, Nathan Lacroix, Graham J. Norris, Mihai Gabureac, Christopher Eichler, and Andreas Wallraff. ``Repeated quantum error detection in a surface code''. Nature Physics 16, 875–880 (2020). [26] K. J. Satzinger et al. ``Realizing topologically ordered states on a quantum processor''. Science 374, 1237 (2021). https:/​/​doi.org/​10.1126/​science.abi8378 [27] Laird Egan, Dripto M. Debroy, Crystal Noel, Andrew Risinger, Daiwei Zhu, Debopriyo Biswas, Michael Newman, Muyuan Li, Kenneth R. Brown, Marko Cetina, and Christopher Monroe. ``Fault-tolerant control of an error-corrected qubit''. Nature 598, 281–286 (2021). [28] Zijun Chen et al. ``Exponential suppression of bit or phase errors with cyclic error correction''. Nature 595, 383–387 (2021). [29] Alexander Erhard, Hendrik Poulsen Nautrup, Michael Meth, Lukas Postler, Roman Stricker, Martin Stadler, Vlad Negnevitsky, Martin Ringbauer, Philipp Schindler, Hans J. Briegel, Rainer Blatt, Nicolai Friis, and Thomas Monz. ``Entangling logical qubits with lattice surgery''. Nature 589, 220–224 (2021). [30] Ming Gong, Xiao Yuan, Shiyu Wang, Yulin Wu, Youwei Zhao, Chen Zha, Shaowei Li, Zhen Zhang, Qi Zhao, Yunchao Liu, Futian Liang, Jin Lin, Yu Xu, Hui Deng, Hao Rong, He Lu, Simon C Benjamin, Cheng-Zhi Peng, Xiongfeng Ma, Yu-Ao Chen, Xiaobo Zhu, and Jian-Wei Pan. ``Experimental exploration of five-qubit quantum error correcting code with superconducting qubits''. National Science ReviewPage nwab011 (2021). https:/​/​doi.org/​10.1093/​nsr/​nwab011 [31] C. Ryan-Anderson, J. G. Bohnet, K. Lee, D. Gresh, A. Hankin, J. P. Gaebler, D. Francois, A. Chernoguzov, D. Lucchetti, N. C. Brown, T. M. Gatterman, S. K. Halit, K. Gilmore, J. A. Gerber, B. Neyenhuis, D. Hayes, and R. P. Stutz. ``Realization of real-time fault-tolerant quantum error correction''. Phys. Rev. X 11, 041058 (2021). [32] J. F. Marques, B. M. Varbanov, M. S. Moreira, H. Ali, N. Muthusubramanian, C. Zachariadis, F. Battistel, M. Beekman, N. Haider, W. Vlothuizen, A. Bruno, B. M. Terhal, and L. DiCarlo. ``Logical-qubit operations in an error-detecting surface code''. Nature Physics 18, 80–86 (2021). [33] Lukas Postler, Sascha Heußen, Ivan Pogorelov, Manuel Rispler, Thomas Feldker, Michael Meth, Christian D. Marciniak, Roman Stricker, Martin Ringbauer, Rainer Blatt, Philipp Schindler, Markus Müller, and Thomas Monz. ``Demonstration of fault-tolerant universal quantum gate operations'' (2021). arXiv:2111.12654. [34] Sebastian Krinner, Nathan Lacroix, Ants Remm, Agustin Di Paolo, Elie Genois, Catherine Leroux, Christoph Hellings, Stefania Lazar, Francois Swiadek, Johannes Herrmann, Graham J. Norris, Christian Kraglund Andersen, Markus Müller, Alexandre Blais, Christopher Eichler, and Andreas Wallraff. ``Realizing Repeated Quantum Error Correction in a Distance-Three Surface Code'' (2021). arXiv:2112.03708. [35] Dolev Bluvstein, Harry Levine, Giulia Semeghini, Tout T. 
Wang, Sepehr Ebadi, Marcin Kalinowski, Alexander Keesling, Nishad Maskara, Hannes Pichler, Markus Greiner, Vladan Vuletić, and Mikhail D. Lukin. ``A quantum processor based on coherent transport of entangled atom arrays''. Nature 604, 451–456 (2022). [36] J. Pablo Bonilla Ataides, David K. Tuckett, Stephen D. Bartlett, Steven T. Flammia, and Benjamin J. Brown. ``The XZZX surface code''. Nature Communications 12, 2172 (2021). [37] H. Bombin and M. A. Martin-Delgado. ``Optimal resources for topological two-dimensional stabilizer codes: Comparative study''. Physical Review A 76, 012305 (2007). [38] David K. Tuckett, Andrew S. Darmawan, Christopher T. Chubb, Sergey Bravyi, Stephen D. Bartlett, and Steven T. Flammia. ``Tailoring Surface Codes for Highly Biased Noise''. Physical Review X 9, 041031 (2019). [39] K. Lalumière, J. M. Gambetta, and A. Blais. ``Tunable joint measurements in the dispersive regime of cavity QED''. Physical Review A 81, 040301 (2010). [40] A. F. Kockum, L. Tornberg, and G. Johansson. ``Undoing measurement-induced dephasing in circuit QED''. Physical Review A 85, 052318 (2012). [41] D. Ristè, M. Dukalski, C. A. Watson, G. de Lange, M. J. Tiggelman, Y. M. Blanter, K. W. Lehnert, R. N. Schouten, and L. DiCarlo. ``Deterministic entanglement of superconducting qubits by parity measurement and feedback''. Nature 502, 350 (2013). [42] William P. Livingston, Machiel S. Blok, Emmanuel Flurin, Justin Dressel, Andrew N. Jordan, and Irfan Siddiqi. ``Experimental demonstration of continuous quantum error correction'' (2021). arXiv:2107.11398. [43] Craig Gidney, Michael Newman, Austin Fowler, and Michael Broughton. ``A fault-tolerant honeycomb memory''. Quantum 5, 605 (2021). [45] Maria Hermanns, Itamar Kimchi, and Johannes Knolle. ``Physics of the Kitaev model: Fractionalization, dynamic correlations, and material connections''. Annual Review of Condensed Matter Physics 9, 17–33 (2018). [46] David Poulin. ``Stabilizer Formalism for Operator Quantum Error Correction''. Physical Review Letters 95, 230504 (2005). [47] Martin Suchara, Sergey Bravyi, and Barbara Terhal. ``Constructions and noise threshold of topological subsystem codes''. Journal of Physics A: Mathematical and Theoretical 44, 155301 (2011). [48] Yi-Chan Lee, Courtney G Brell, and Steven T Flammia. ``Topological quantum error correction in the Kitaev honeycomb model''. Journal of Statistical Mechanics: Theory and Experiment 2017, 083106 (2017). https:/​/​doi.org/​10.1088/​1742-5468/​aa7ee2 [49] Matthew B. Hastings and Jeongwan Haah. ``Dynamically Generated Logical Qubits''. Quantum 5, 564 (2021). [50] Jeongwan Haah and Matthew B. Hastings. ``Boundaries for the Honeycomb Code'' (2021). arXiv:2110.09545. [51] James R Wootton. ``Demonstrating non-Abelian braiding of surface code defects in a five qubit experiment''. Quantum Science and Technology 2, 015006 (2017). [52] James R. Wootton. ``Hexagonal matching codes with 2-body measurements'' (2021). arXiv:2109.13308. [53] Alexei Kitaev and Chris Laumann. ``Topological phases and quantum computation'' (2009). arXiv:0904.2771. [54] Julien Vidal, Kai Phillip Schmidt, and Sébastien Dusuel. ``Perturbative approach to an exactly solved problem: Kitaev honeycomb model''. Physical Review B 78, 245121 (2008). [55] Xiao-Gang Wen. ``Quantum Orders in an Exact Soluble Model''. Physical Review Letters 90, 016803 (2003). [56] Alastair Kay. ``Capabilities of a Perturbed Toric Code as a Quantum Memory''. Physical Review Letters 107, 270502 (2011). 
[57] Benjamin J Brown, Wonmin Son, Christina V Kraus, Rosario Fazio, and Vlatko Vedral. ``Generating topological order from a two-dimensional cluster state using a duality mapping''. New Journal of Physics 13, 065010 (2011). [58] David K. Tuckett, Stephen D. Bartlett, and Steven T. Flammia. ``Ultrahigh Error Threshold for Surface Codes with Biased Noise''. Physical Review Letters 120, 050505 (2018). [59] Sergey Bravyi, Martin Suchara, and Alexander Vargo. ``Efficient algorithms for maximum likelihood decoding in the surface code''. Physical Review A 90, 032326 (2014). [60] James R. Wootton and Daniel Loss. ``High Threshold Error Correction for the Surface Code''. Physical Review Letters 109, 160503 (2012). [61] Adrian Hutter, James R. Wootton, and Daniel Loss. ``Efficient Markov chain Monte Carlo algorithm for the surface code''. Physical Review A 89, 022326 (2014). [62] Karl Hammar, Alexei Orekhov, Patrik Wallin Hybelius, Anna Katariina Wisakanto, Basudha Srivastava, Anton Frisk Kockum, and Mats Granath. ``Error-rate-agnostic decoding of topological stabilizer codes'' (2021). arXiv:2112.01977. [63] David Kingsley Tuckett. ``Tailoring surface codes: Improvements in quantum error correction with biased noise''. PhD thesis. University of Sydney. (2020). https:/​/​doi.org/​10.25910/​x8xw-9077 [64] David Poulin. ``Optimal and efficient decoding of concatenated quantum block codes''. Physical Review A 74, 052333 (2006). [65] Ben Criger and Imran Ashraf. ``Multi-path summation for decoding 2D topological codes''. Quantum 2, 102 (2018). [66] Andrew S. Darmawan, Benjamin J. Brown, Arne L. Grimsmo, David K. Tuckett, and Shruti Puri. ``Practical Quantum Error Correction with the XZZX Code and Kerr-Cat Qubits''. PRX Quantum 2, 030345 (2021). [67] EWD-QEC. code: QEC-project-2020/​EWD-QEC. https:/​/​github.com/​QEC-project-2020/​EWD-QEC [1] Jonathan F. San Miguel, Dominic J. Williamson, and Benjamin J. Brown, "A cellular automaton decoder for a noise-bias tailored color code", arXiv:2203.16534, (2022). [2] James R. Wootton, "Hexagonal matching codes with two-body measurements", Journal of Physics A Mathematical General 55 29, 295302 (2022). [3] Karl Hammar, Alexei Orekhov, Patrik Wallin Hybelius, Anna Katariina Wisakanto, Basudha Srivastava, Anton Frisk Kockum, and Mats Granath, "Error-rate-agnostic decoding of topological stabilizer codes", Physical Review A 105 4, 042616 (2022). [4] Eric Huang, Arthur Pesah, Christopher T. Chubb, Michael Vasmer, and Arpit Dua, "Tailoring three-dimensional topological codes for biased noise", arXiv:2211.02116, (2022).
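As an aside on the stabilizer formalism summarized in the abstract above: two Pauli-string operators commute exactly when they act with different non-identity Paulis on an even number of shared qubits. A minimal Python sketch of that check; the qubit labels and operator supports below are illustrative only, not the actual XYZ$^2$ lattice layout from the paper:

```python
def paulis_commute(op1, op2):
    """Two Pauli strings commute iff they act with different non-identity
    Paulis on an even number of common qubits."""
    anticommuting_sites = sum(
        1 for q in set(op1) & set(op2) if op1[q] != op2[q]
    )
    return anticommuting_sites % 2 == 0

# Illustrative operators: a weight-two XX link stabilizer on qubits 0 and 1,
# and a hypothetical weight-six XYZXYZ plaquette that touches those two qubits.
link = {0: "X", 1: "X"}
plaquette = {0: "Y", 1: "Z", 2: "X", 3: "Y", 4: "Z", 5: "X"}

print(paulis_commute(link, plaquette))  # True: X/Y and X/Z each anticommute,
                                        # and the two signs cancel (even count)
```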
Association of physical activity, vitamin E levels, and total antioxidant capacity with academic performance and executive functions of adolescents Ahmad H. Alghadir1, Sami A. Gabr1,2, Zaheen A. Iqbal ORCID: orcid.org/0000-0002-0504-68631 & Einas Al-Eisa1 Although various studies have shown the effect of vigorous physical activity on academic achievement, no studies have investigated the effect of vitamin E levels on academic performance. The present study aimed to assess the associations of physical activity, vitamin E levels, and total antioxidant capacity with the academic performance and executive functions of adolescents aged 15–18 years. The physical activity of participants was assessed according to the time spent engaging in moderate and intense exercise programs. Participants were classified into three groups representing mild, moderate, and high activity. Serum total antioxidant capacity was measured using a colorimetric assay kit. Vitamin E was estimated from the α- and γ-tocopherol levels in fasting serum samples using high-performance liquid chromatography paired with a diode array detector. School grades (ranging from 1.0, very poor, to 10.0, outstanding) were obtained at the end of the academic year to evaluate academic performance and executive functions. A total of 120 school students (mean age 16.36 ± 0.77 years; 70 boys, 50 girls) participated in the study. Academic performance was higher for students classified as moderately or highly active compared with those in the mild activity group. Serum vitamin E levels, total antioxidant capacity, and leisure-time physical activity were also higher in the moderate and high activity groups. There was a significant correlation between age, gender, body mass index, α- and γ-tocopherol, total antioxidant capacity, leisure-time physical activity and academic performance. The academic performance and executive function scores were positively correlated with age, gender, α- and γ-tocopherol, total antioxidant capacity, and physical activity, and were negatively correlated with body mass index. Our findings indicate that physical activity should be promoted during and after school hours, along with a healthy balanced diet including vitamin E. Physical activity (PA) has been shown to have various positive effects, including on bone strength, muscle health, and predisposition to obesity [1, 2]. Furthermore, high levels of PA have also been shown to improve cognitive performance, such as memory and learning, in individuals of all ages [3]. However, with the advancement of technology, the time spent by children pursuing physical activities has been overtaken by sedentary activities involving television, computer devices and other screen-based technologies [4]. Regular PA has been shown to augment the antioxidant system and reduce lipid peroxidation [5]. Antioxidants such as vitamins C and E and selenium play important roles in protecting the cell membrane from oxidative damage, and supplementation with antioxidants has been shown to enhance physical performance and general health [6,7,8]. Vitamin E has also been shown to have a role in reducing inflammation and muscle soreness [9, 10], and recent research has suggested that increased intake can help to preserve brain function [11] and protect against nerve cell degeneration [12]. A study in the United States of America (USA) has revealed that a significant number of adults are deficient in vitamin E [13]. 
Men have been reported to have a greater risk of vitamin E deficiency than women in both developed and developing countries [14,15,16]. A recent study has reported that at least 31% of the USA population is at risk of at least one vitamin deficiency or anemia, and further showed that this risk is linked to socio-demographic, life-stage, dietary supplement use, and dietary adequacy categories [17]. Studies have shown inadequate vitamin E in the diets of toddlers aged 18–30 months in Mexico, Kenya, and Egypt [18]. Depletion of vitamin E among children, adolescents, and older populations in developing countries has been related to limited sources of food containing vitamin E and to the high prevalence of malaria and human immunodeficiency virus in the region [19]. Deficiency of vitamin E has also been associated with low circulating levels of vitamin C, β-carotene and other antioxidants [20]. This supports the theory that vitamin E deficiency is associated with poor intake and greater oxidative stress [14]. Other factors that have been attributed to vitamin E deficiency include a low-fat diet, limited inclusion of fruits, vegetables, and whole grains in the diet, and increased consumption of processed food [21, 22]. However, whilst vitamin E deficiency can be deleterious, uncontrolled supplementary intake may lead to toxic effects [23]. Vitamin E has been shown to be of critical importance in early infancy, as deficiency at this stage may predispose to severe consequences, particularly intraventricular hemorrhage, bronchopulmonary dysplasia and delays in the development of the central nervous system [24]. Previous reports have shown that the effects of vitamin E are related to both the anti-inflammatory and pro-inflammatory properties of the α-tocopherol and γ-tocopherol isoforms, which also protect biological cells from oxidative free-radical stress via antioxidant enzymes and increase the production of cytokines such as interleukin-2 (IL-2), interleukin-6 (IL-6), and tumor necrosis factor (TNF) [25,26,27,28]. In addition to its role as an antioxidant, vitamin E has also been shown to be involved in various physiological processes including immune function, control of inflammation, regulation of gene expression and cognitive performance [29, 30]. Vitamin E deficiency has been shown to play a role in brain disorders such as cognitive decline and Alzheimer's disease [31, 32]. Pathological mechanisms affecting motor activity have been shown to be reversed with vitamin E supplementation [33,34,35]. Physical exercise plays a protective role against the hippocampal cell injury that produces memory loss [36, 37], facilitates recovery from injury, and improves cognitive function by increasing the expression of many neurotrophic and physiological factors involved in neural survival, differentiation, and improvement of memory function [38, 39]. Recently, the effects of physical activity on cognitive performance were evaluated among healthy older adults; outcome measures of cognitive performance, including motor praxis, vasomotor organization, thinking operations, and attention and concentration, improved significantly following moderate aerobic training for 24 weeks. This has been related to improvement in antioxidant capacity and a decline in oxidative stress free radicals [40]. Despite these positive effects, parents and teachers often pressurize students to perform better in academia [41], and the inclusion of students in PA such as physical education and sports has received limited support [42]. 
Although various studies have demonstrated the beneficial effects of vigorous PA on academic achievement, there are, to the best of our knowledge, no studies that correlate physical activity and vitamin E levels with academic performance. The aim of our study was, therefore, to assess the association of daily physical activity, vitamin E levels, and total antioxidant capacity (TAC) with the academic performance and executive functions of adolescents aged 15–18 years.
This study was conducted during the October 2014–March 2015 semester at a secondary school. Three hundred students from grades 7 to 9 (aged 15–18 years) from six different senior secondary schools following the same academic curricula were invited to participate, of which 220 (73.3%) agreed. They were screened for any health problems, disability, or mental/concentration deficit, and excluded if any such symptom was observed. Finally, 120 participants (70 boys, 50 girls) were included in the study after the inclusion and exclusion criteria were applied. During the study period, all students were instructed not to change their normal eating habits. The demographics and baseline characteristics of the participants are detailed in Table 1. Age, body mass index (BMI), waist to hip ratio (WHR), blood pressure, hemoglobin, and maximum oxygen uptake (VO2 max) were measured.
Table 1 General characteristics of study participants according to their level of physical activity
Assessment of VO2max
Maximum oxygen uptake (VO2 max) was evaluated using ergospirometry on a treadmill (inclination of 1%), with an initial velocity of 4.5 km/h and an increase of 0.5 km/h each minute until voluntary exhaustion or until one of the following criteria was reached: an increase in VO2 of less than 2 ml/kg/min despite an increase in exercise intensity (plateau); an expiratory exchange ratio higher than 1.1; or attainment of the maximum heart rate expected for the participant's age, calculated by the formula (220 − age). Prior to the beginning of the test, the individuals performed 3 minutes of warm-up at a velocity of 3.1 km/h. Heart rate (HR) was monitored by electrocardiogram. The respiratory parameters were measured in an open-circuit ergospirometric system using the mixing-chamber technique [43,44,45].
$$ \mathrm{VO_2\,max}\ (\mathrm{ml/kg/min}) = \mathrm{VO_2} \times \frac{220 - \mathrm{age} - 73 - (\mathrm{sex} \times 10)}{\mathrm{HR} - 73 - (\mathrm{sex} \times 10)} $$
$$ \mathrm{VO_2}\ (\mathrm{ml/kg/min}) = \frac{1.8 \times \mathrm{work\ heart\ rate}}{\mathrm{body\ weight}} $$
Sex = 0 for girls and 1 for boys; HR = heart rate at the final stage.
Assessment of physical activity
The PA of the participants was assessed by the time spent performing moderate and intense exercise programs. The activity denoted as leisure-time physical activity (LTPA) was measured in metabolic equivalents (METs), as previously reported [46, 47].
METs were calculated using the previously validated Global Physical Activity Questionnaire (GPAQ) [48], as MET-minutes/week for each intensity of physical activity, according to the following formulas for the computation of MET-minutes/week [49, 50]:
walking MET-minutes/week = 3.3 × walking minutes × walking days
moderate MET-minutes/week = 4.0 × moderate-intensity activity minutes × moderate days
vigorous MET-minutes/week = 8.0 × vigorous-intensity activity minutes × vigorous days
total PA MET-minutes/week = sum of walking + moderate + vigorous MET-minutes/week scores
The participants were classified into three groups according to PA level: mild (< 500 MET-min/week), moderate (500–2500 MET-min/week) or active (> 2500 MET-min/week). The basal metabolic rate (BMR) and total daily energy expenditure (TEE) were estimated from body mass, height, age, and PA according to the Harris and Benedict equation [51] for obese and non-obese children.
Blood sampling and analysis
Blood samples were obtained from all the participants in the morning after overnight fasting. Venous blood samples (5 ml) were collected into plain tubes and allowed to clot for half an hour, after which they were centrifuged for 15 min at 2000 rpm. Samples were given a coded study identification number and were stored frozen at − 80 °C until analysis.
Assessment of total antioxidant capacity
Serum TAC was measured using a colorimetric assay kit (catalog #K274–100; BioVision Incorporated; CA 95035 USA). The antioxidant equivalent concentrations were measured at 570 nm as a function of Trolox concentration according to the manufacturer's instructions and calculated using Eq. 1:
$$ \mathrm{Sa}/\mathrm{Sv} = \mathrm{nmol}/\mu\mathrm{l}\ \mathrm{(or\ mM)\ Trolox\ equivalent} $$
where Sa is the sample amount (in nmol) read from the standard curve, and Sv is the undiluted sample volume added to the wells.
Assessment of vitamin E level
Vitamin E levels were estimated from the α- and γ-tocopherol levels measured in fasting serum samples of the participants using high-performance liquid chromatography paired with a diode array detector (Hitachi L-2455; Hitachi Ltd., Tokyo, Japan) (HPLC-DAD). The concentrations were calculated by interpolation against α-tocopherol and γ-tocopherol standards (Sigma-Aldrich, Inc., St. Louis, MO, USA), as reported in the literature [52]. The inter-assay coefficients of variation were 10.5 and 11.7% for serum α-tocopherol and γ-tocopherol, respectively.
Assessment of school performance and executive function
School grades (ranging from 1.0, very poor, to 10.0, outstanding) of the participants were obtained from the school principals at the end of the academic year. The mean of each participant's grades in biology, chemistry, physics, Arabic, English, French, mathematics, social sciences, history, geography, religious studies, physical education and health sciences was taken to represent academic performance. The performance in mathematics alone was reported separately as a measure of executive functioning [53] among participants.
Data were statistically analyzed and expressed as mean ± standard deviation using SPSS 15.0 for Windows (SPSS Inc., Chicago, IL, USA). Comparison of variables was performed using the Mann-Whitney U test and the t-test. The correlations of PA, serum α-tocopherol and γ-tocopherol with academic performance were examined using Pearson's correlation coefficient. P values < 0.05 were considered to be significant.
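The MET-minutes/week computation and the three-level activity classification described above are straightforward to express in code. The following is a minimal illustrative sketch in Python; the function names and the example weekly minutes are our own and purely hypothetical, not data from the study.

```python
# Minimal sketch of the GPAQ-style MET-minutes/week computation and the
# activity classification used above (3.3/4.0/8.0 MET weights; cut-offs at
# 500 and 2500 MET-min/week). Function names and example values are ours.

def met_minutes_per_week(walk_min, walk_days,
                         moderate_min, moderate_days,
                         vigorous_min, vigorous_days):
    """Return (walking, moderate, vigorous, total) MET-minutes/week."""
    walking = 3.3 * walk_min * walk_days
    moderate = 4.0 * moderate_min * moderate_days
    vigorous = 8.0 * vigorous_min * vigorous_days
    return walking, moderate, vigorous, walking + moderate + vigorous

def activity_group(total_met_min):
    """Classify total PA as in the study: mild, moderate or (highly) active."""
    if total_met_min < 500:
        return "mild"
    elif total_met_min <= 2500:
        return "moderate"
    return "active"

if __name__ == "__main__":
    # Hypothetical participant: 30 min walking on 5 days, 40 min moderate
    # activity on 3 days, 20 min vigorous activity on 2 days per week.
    _, _, _, total = met_minutes_per_week(30, 5, 40, 3, 20, 2)
    print(round(total, 1), activity_group(total))  # 1295.0 moderate
```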
The correlations of PA, vitamin E and TAC levels with academic performance were examined using stepwise linear regression analysis. Variables with the highest r-squared values and strong significance were added to this model. In this study, only gender, age, BMI, the vitamin E forms (α- and γ-tocopherol), and PA scores showed high r-squared values and strong significance, whereas VO2 max, BMR, TEE, and LTPA showed lower r-squared values and were removed from the proposed model.
The mean age of the participants was 16.36 ± 0.77 years. Classification by PA revealed the following distribution of participants among activity groups: mild, 33% (25 male, 15 female); moderate, 42% (30 male, 20 female); and high, 25% (20 male, 10 female) (Table 1).
Comparison between mild, moderate and high activity groups
Compared with the moderate and high activity groups, the mild activity group exhibited significantly higher BMI and WHR values (p < 0.05) and lower fitness (VO2 max) (p < 0.01). On the other hand, there were significantly higher levels of α- and γ-tocopherol, TAC activity and LTPA in the moderate and high activity groups (p < 0.05 and p < 0.01, respectively) in comparison with the mild activity group (Table 2).
Table 2 Association of α- and γ-tocopherol, BMR, TEE rate, and TAC with level of physical activity
Analysis of academic performance and executive function revealed that participants classified into the mild activity group had lower scores of 4.78 ± 0.73 and 4.58 ± 0.68, respectively, compared with the moderately (6.38 ± 0.39, 6.46 ± 0.37) and highly (7.43 ± 0.34, 6.84 ± 0.55) active groups (p < 0.01) (Table 3).
Table 3 Association of academic performance and physical activity among subjects (n = 120)
Correlation between independent variables
A significant correlation was identified between age, gender, BMI, α- and γ-tocopherol, TAC activity, PA level, and academic performance. Academic performance and executive function scores were positively correlated with age, gender, α- and γ-tocopherol, TAC activity and PA score, and negatively correlated with BMI (Table 4).
Table 4 Stepwise multiple regression analysis of academic performance predicted by α- and γ-tocopherol, total antioxidant capacity, body mass index, and physical activity of study participants (n = 120)
This study demonstrates the association of different levels of PA, vitamin E levels, and TAC with the academic performance and executive functions of adolescents aged 15–18 years. We found that BMI and WHR were significantly higher in participants who were classified as having mild activity levels, while fitness and achievement scores with respect to academic performance and executive function were lower in comparison with the moderate and highly active groups. On the other hand, significantly higher levels of α- and γ-tocopherol, TAC activity, and LTPA were observed in participants in the moderate and high activity groups. Academic performance and executive function scores were found to be positively correlated with age, gender, α- and γ-tocopherol, TAC activity, and PA score, and negatively correlated with BMI. Various studies have reported a possible correlation between PA and academic achievement. Some of these studies report randomized controlled trials, and most of them focus only on primary school students [54]. Most of these studies include only vigorous activities [55] and do not consider mild and moderate PA.
This study included all types of PA that involve increased energy expenditure, including exercise, sports activities and activities that comprise school physical education curricula. Furthermore, previous studies neglected to include any investigation of the nutritional status of children. To the best of our knowledge, this is the first study to consider serum antioxidant levels while correlating PA and academic performance. There is a paucity of research into the relationship between PA and academic performance based on ethnicity [1]. The present study was conducted at a secondary school in the Middle East, where no such studies have been carried out previously. However, sedentary behavior, including increased time spent sitting while watching TV and using computers or similar devices, has been linked to poor physical fitness, lower energy expenditure, and higher BMI leading to obesity in this region [4]. Our results showing significantly higher BMI and WHR values and lower fitness in the mild activity group (more sedentary and low-intensity physical activities) further support these findings. Encouraging PA in and after school could promote fitness along with a decreased predisposition to obesity, and consequently improved academic performance. Past studies have reported an association between school-time PA and academic performance through various direct and indirect mechanisms, including physiological, psychological, cognitive, emotional, and learning mechanisms [56, 57]. PA has been shown to increase cerebral blood flow and perfusion of motor areas of the brain [58, 59]. It also increases brain neural activity and arousal [60], resulting in increased cerebral serotonin levels, which are reported to have a calming effect on the body [61]. Increased synaptic transmission, neurotrophin concentration, and neurogenesis, and decreased free radicals following PA, have been shown to facilitate the memory process [62, 63]. While published reviews of the literature have revealed that the relationship between PA and academic performance usually presents as a positive correlation or no correlation [64, 65], there are some studies that suggest a negative correlation between these variables [56]. This difference in results may arise from the specific varieties of PA and academic performance outcomes that were evaluated in the different studies. The majority of these studies did not include PA other than that included in the school curricula. To overcome this, our study included evaluation of PA that students engaged in outside of school hours. A recent survey of the PA, diet habits and TV habits of students in an Arab region [2] revealed that the majority (71%) of respondents reported that they "eat and study" during school breaks rather than "eat and play". Lunch breaks or recess periods are provided to students in school to take a break from study and rejuvenate themselves. Teachers and school authorities should make sure that students engage in some form of PA during this period. Research has shown that vitamin E helps to preserve brain function and protect against nerve cell degeneration [11, 12]. This in turn contributes to the preservation of cognitive function and promotes the learning process. Our results reflect this as a positive correlation between serum vitamin E levels and overall academic performance. It has been shown that detection of neuropsychological and physiological deficits early in a child's development can predict poor academic performance [66].
Therefore, the serum levels of vitamin E and other antioxidants should be included in such an assessment in order to develop a rehabilitative plan for children with such deficits. Serum levels of TAC were estimated in all participants of this study and were shown to be significantly higher in those with moderate and high PA compared with those with mild PA. This increment in TAC was also shown to be closely related to higher levels of the vitamin E isoforms α- and γ-tocopherol and to LTPA. Additionally, academic performance and executive function scores were also positively correlated with α- and γ-tocopherol, TAC activity, and PA score. Previous reports show that the potential effects of vitamin E are related to both anti-inflammatory and pro-inflammatory properties of the α- and γ-tocopherol isoforms, which protect biological cells from oxidative free radical stress via the production of antioxidant enzymes and increase the production of cytokines such as IL-2, IL-6, and TNF [25,26,27,28, 67]. Thus, populations with a controlled intake of food containing the vitamin E isoforms (α- and γ-tocopherol) were protected from severe consequences such as elevated oxidative stress, greater cellular inflammation, and poor cognitive performance [19, 68]. PA has been shown to play an important role in improving cognitive functions among older adults by modulating redox and inflammatory status, consequently increasing TAC activity and reducing oxidative free radical stress parameters such as MDA and 8-OHdG [40]. Vitamin E is a lipid-soluble antioxidant that scavenges free radicals to protect cell membranes and lipoproteins from oxidative damage and significantly increases TAC in physically active people [69, 70]. Vitamin E has been shown to be increased in lymphocytes following endurance exercise [71], which further supports the correlation between PA, vitamin E levels, and TAC activity seen in our study. Children who face difficulties in learning may benefit from the provision of a balanced diet including vitamin E and other antioxidants. Breakfast has been suggested to be the most important meal of the day [72], and the regular intake of a healthy and timely breakfast has been shown to improve cognitive and memory functions of the brain [73], as well as decreasing the likelihood of weight gain [74]. Many children indicate that they miss breakfast either because of a dislike of the offered foods or because of time pressure [2]. With the availability of fast food and carbonated drinks in school cafeterias, children often do not consume the recommended quantities of healthy foods, including milk [74]. Furthermore, consumption of processed food has been shown to be a major cause of vitamin E deficiency [13]. Although participants of this study were instructed to maintain their normal diet during data collection, the correlation between nutrition, executive function, and academic performance could not be measured. A self-reported questionnaire was used to measure physical activity among participants. To further elucidate the correlation of vitamin E and PA, especially in relation to age and gender, a supervised aerobic training model is recommended for further studies. Groups of adolescents who were moderately or highly active were found to have significantly higher levels of vitamin E, TAC activity, LTPA, and academic performance compared with mildly active participants.
The academic performance and executive function scores were found to be positively correlated with age, gender, α- and γ-tocopherol, TAC activity, and PA score, and negatively correlated with BMI. Vitamin E levels and TAC activity could be used as indicators for improving executive function, PA and academic performance among students.
Abbreviations: BMR: Basal metabolic rate; GPAQ: Global physical activity questionnaire; IL-2: Interleukin-2; LTPA: Leisure-time physical activity; MET: Metabolic equivalents; TAC: Total antioxidant capacity; TDEE: Total daily energy expenditure; TNF: Tumor Necrosis Factor; VO2 max: Maximum oxygen uptake; WHR: Waist to hip ratio
Rasberry CN, Lee SM, Robin L, Laris BA, Russell LA, Coyle KK, Nihiser AJ. The association between school-based physical activity, including physical education, and academic performance: a systematic review of the literature. Prev Med. 2011;52(Suppl 1):S10–20. Alghadir A, Gabr S, Iqbal ZA. Television watching, diet and body mass index of school children in Saudi Arabia. Pediatr Int. 2015;58(4):290–4. Kramer AF, Erickson KI, Colcombe SJ. Exercise, cognition, and the aging brain. J Appl Physiol (1985). 2006;101(4):1237–42. Alghadir AH, Gabr SA, Iqbal ZA. Effects of sitting time associated with media consumption on physical activity patterns and daily energy expenditure of Saudi school students. J Phys Ther Sci. 2015;27(9):2807–12. Watson TA, MacDonald-Wicks LK, Garg ML. Oxidative stress and antioxidants in athletes undertaking regular exercise training. Int J Sport Nutr Exerc Metab. 2005;15(2):131–46. Powers SK, DeRuisseau KC, Quindry J, Hamilton KL. Dietary antioxidants and exercise. J Sports Sci. 2004;22(1):81–94. Gleeson M, Nieman DC, Pedersen BK. Exercise, nutrition and immune function. J Sports Sci. 2004;22(1):115–25. Louis J, Hausswirth C, Bieuzen F, Brisswalter J. Vitamin and mineral supplementation effect on muscular activity and cycling efficiency in master athletes. Appl Physiol Nutr Metab. 2010;35(3):251–60. van Essen M, Gibala MJ. Failure of protein to improve time trial performance when added to a sports drink. Med Sci Sports Exerc. 2006;38(8):1476–83. Takanami Y, Iwane H, Kawai Y, Shimomitsu T. Vitamin E supplementation and endurance exercise: are there benefits? Sports Med. 2000;29(2):73–83. Traber MG. Does vitamin E decrease heart attack risk? Summary and implications with respect to dietary recommendations. J Nutr. 2001;131(2):395S–7S. Morris MC, Evans DA, Bienias JL, Tangney CC, Wilson RS. Vitamin E and cognitive decline in older persons. Arch Neurol. 2002;59(7):1125–32. Ford ES, Sowell A. Serum alpha-tocopherol status in the United States population: findings from the third National Health and Nutrition Examination Survey. Am J Epidemiol. 1999;150(3):290–300. Assantachai P, Lekhakula S. Epidemiological survey of vitamin deficiencies in older Thai adults: implications for national policy planning. Public Health Nutr. 2007;10(1):65–70. Cheng W-Y, Fu M-L, Wen L-J, Chen C, Pan W-H, Huang C-J. Plasma retinol and α-tocopherol status of the Taiwanese elderly population. Asia Pac J Clin Nutr. 2005;14(3):256–62. Kang M-J, Lin Y-C, Yeh W-H, Pan W-H. Vitamin E status and its dietary determinants in Taiwanese. Eur J Nutr. 2004;43(2):86–92. Bird JK, Murphy RA, Ciappio ED, McBurney MI. Risk of deficiency in multiple concurrent micronutrients in children and adults in the United States. Nutrients. 2017;9(7):655. Calloway D, Murphy S, Beaton G, Lein D. Estimated vitamin intakes of toddlers: predicted prevalence of inadequacy in village populations in Egypt, Kenya, and Mexico. Am J Clin Nutr. 1993;58(3):376–84.
Dror DK, Allen LH. Vitamin E deficiency in developing countries. Food Nutr Bull. 2011;32(2):124–43. Oldewage-Theron WH, Samuel FO, Djoulde RD. Serum concentration and dietary intake of vitamins a and E in low-income south African elderly. Clin Nutr. 2010;29(1):119–23. Lukaski HC. Vitamin and mineral status: effects on physical performance. Nutrition. 2004;20(7–8):632–44. McClung JP, Gaffney-Stomberg E, Lee JJ. Female athletes: a population at risk of vitamin and mineral deficiencies affecting health and performance. J Trace Elem Med Biol. 2014;28(4):388–92. Hathcock JN, Shao A, Vieth R, Heaney R. Risk assessment for vitamin D. Am J Clin Nutr. 2007;85(1):6–18. Bell EF, Hansen NI, Brion LP, Ehrenkranz RA, Kennedy KA, Walsh MC, Shankaran S, Acarregui MJ, Johnson KJ, Hale EC. Serum tocopherol levels in very preterm infants after a single dose of vitamin E at birth. Pediatrics. 2013;132(6):e1626–33. Abdala-Valencia H, Berdnikovs S, Cook-Mills JM. Vitamin E isoforms as modulators of lung inflammation. Nutrients. 2013;5(11):4347–63. McCary CA, Abdala-Valencia H, Berdnikovs S, Cook-Mills JM. Supplemental and highly elevated tocopherol doses differentially regulate allergic inflammation: Reversibility of α-tocopherol and γ-tocopherol's effects. J Immunol. 2011;186(6):3674–85. Cook-Mills JM, Abdala-Valencia H, Hartert T. Two faces of vitamin E in the lung. Am J Respir Crit Care Med. 2013;188(3):279–84. McCary CA, Yoon Y, Panagabko C, Cho W, Atkinson J, Cook-Mills JM. Vitamin E isoforms directly bind PKCα and differentially regulate activation of PKCα. Biochem J. 2012;441(1):189–98. Masaki K, Losonczy K, Izmirlian G, Foley D, Ross G, Petrovitch H, Havlik R, White L. Association of vitamin E and C supplement use with cognitive function and dementia in elderly men. Neurology. 2000;54(6):1265–72. Grodstein F, Chen J, Willett WC. High-dose antioxidant supplements and cognitive function in community-dwelling elderly women. Am J Clin Nutr. 2003;77(4):975–84. Di Donato I, Bianchi S, Federico A. Ataxia with vitamin E deficiency: update of molecular diagnosis. Neurol Sci. 2010;31(4):511–5. Huebbe P, Lodge JK, Rimbach G. Implications of apolipoprotein E genotype on inflammation and vitamin E status. Mol Nutr Food Res. 2010;54(5):623–30. Guggenheim MA, Ringel SP, Silverman A, Grabert BE. Progressive neuromuscular disease in children with chronic cholestasis and vitamin E deficiency: diagnosis and treatment with alpha tocopherol. J Pediatr. 1982;100(1):51–8. Osoegawa M, Ohyagi Y, Inoue I, Tsuruta Y, Iwaki T, Taniwaki T, Kira J-i. A patient with vitamin E deficient, myopathy presenting with amyotrophy. Rinsho Shinkeigaku. 2001;41(7):428–31. Tomasi LG. Reversibility of human myopathy caused by vitamin E deficiency. Neurology. 1979;29(8):1182. Tyndall AV, Davenport MH, Wilson BJ, Burek GM, Arsenault-Lapierre G, Haley E, Eskes GA, Friedenreich CM, Hill MD, Hogan DB. The brain-in-motion study: effect of a 6-month aerobic exercise intervention on cerebrovascular regulation and cognitive function in older adults. BMC Geriatr. 2013;13(1):21. Ahlskog, JE, Geda, YE, Graff‐Radford, NR, Petersen, RC. Physical exercise as a preventive or disease-modifying treatment of dementia and brain aging. Mayo Clin Proc. 2011;86:876–84. Davenport MH, Hogan DB, Eskes GA, Longman RS, Poulin MJ. Cerebrovascular reserve: the link between fitness and cognitive function? Exerc Sport Sci Rev. 2012;40(3):153–8. Deley G, Kervio G, Van Hoecke J, Verges B, Grassi B, Casillas J-M. 
Effects of a one-year exercise training program in adults over 70 years old: a study with a control group. Aging Clin Exp Res. 2007;19(4):310–5. Alghadir AH, Gabr SA, Al-Eisa ES. Effects of moderate aerobic exercise on cognitive abilities and redox state biomarkers in older adults. Oxidative Med Cell Longev. 2016;2016:2545168. Ahamed Y, Macdonald H, Reed K, Naylor PJ, Liu-Ambrose T, McKay H. School-based physical activity does not compromise children's academic performance. Med Sci Sports Exerc. 2007;39(2):371–6. Kwak L, Kremers SP, Bergman P, Ruiz JR, Rizzo NS, Sjostrom M. Associations between physical activity, fitness, and academic achievement. J Pediatr. 2009;155(6):914–918 e911. Grant JA, Joseph AN, Campagna PD. The prediction of VO2max: a comparison of 7 indirect tests of aerobic power. J Strength Cond Res. 1999;13(4):346–52. American College of Sports Medicine. Guidelines for exercise testing and prescription. 6th ed. Baltimore: Lippincott Williams & Wilkins; 2000. Karila C, de Blic J, Waernessyckle S, Benoist M-R, Scheinmann P. Cardiopulmonary exercise testing in children: an individualized protocol for workload increase. Chest. 2001;120(1):81–7. Bull FC, Maslin TS, Armstrong T. Global physical activity questionnaire (GPAQ): nine country reliability and validity study. J Phys Act Health. 2009;6(6):790–804. Trinh OT, Nguyen ND, van der Ploeg HP, Dibley MJ, Bauman A. Test-retest repeatability and relative validity of the global physical activity questionnaire in a developing country context. J Phys Act Health. 2009;6(Suppl 1):S46–53. Armstrong T, Bull F. Development of the world health organization global physical activity questionnaire (GPAQ). J Public Health. 2006;14(2):66–70. Han TS, Sattar N, Lean M. ABC of obesity: assessment of obesity and its clinical implications. BMJ. 2006;333(7570):695. Ashok P, Kharche JS, Raju R, Godbole G. Metabolic equivalent task assessment for physical activity in medical students. Natl J Physiol Pharm Pharmacol. 2017;7(3):236. Harris JA, Benedict FG. Biometric standards for energy requirements in human nutrition. Sci Mon. 1919;8(5):2–19. Gunter EW, Driskell WJ, Yeager PR. Stability of vitamin E in long-term stored serum. Clin Chim Acta. 1988;175(3):329–35. Bull R, Scerif G. Executive functioning as a predictor of children's mathematics ability: inhibition, switching, and working memory. Dev Neuropsychol. 2001;19(3):273–93. Sigfusdottir ID, Kristjansson AL, Allegrante JP. Health behaviour and academic achievement in Icelandic school children. Health Educ Res. 2007;22(1):70–80. Coe DP, Pivarnik JM, Womack CJ, Reeves MJ, Malina RM. Effect of physical education and activity levels on academic achievement in children. Med Sci Sports Exerc. 2006;38(8):1515–9. Trudeau F, Shephard RJ. Physical education, school physical activity, school sports and academic performance. Int J Behav Nutr Phys Act. 2008;5:10. Kirkcaldy BD, Shephard RJ, Siefen RG. The relationship between physical activity and self-image and problem behaviour among adolescents. Soc Psychiatry Psychiatr Epidemiol. 2002;37(11):544–50. Ide K, Horn A, Secher NH. Cerebral metabolic response to submaximal exercise. J Appl Physiol. 1999;87(5):1604–8. Graf C, Koch B, Klippel S, Buttner S, Coburger S, Christ H, Lehmacher W, Bjarnason-Wehrens B, Platen P, Hollmann W, et al. Correlation between physical activities and concentration in children results of the CHILT project. Deut Z Sportmed. 2003;54(9):242–6. Wininger SR. Improvement of affect following exercise: methodological artifact or real finding? 
Anxiety Stress Coping. 2007;20(1):93–102. Cook EH Jr, Leventhal BL, Freedman DX. Serotonin and measured intelligence. J Autism Dev Disord. 1988;18(4):553–9. Cooke SF, Bliss TV. Plasticity in the human central nervous system. Brain. 2006;129(Pt 7):1659–73. Kempermann G, van Praag H, Gage FH. Activity-dependent regulation of neuronal plasticity and self repair. Prog Brain Res. 2000;127:35–48. Shephard RJ. Curricular physical activity and academic performance. Pediatr Exerc Sci. 1997;9(2):113–26. Carlson SA, Fulton JE, Lee SM, Maynard LM, Brown DR, Kohl HW 3rd, Dietz WH. Physical education and academic achievement in elementary school: data from the early childhood longitudinal study. Am J Public Health. 2008;98(4):721–7. Rourke BP, Conway JA. Disabilities of arithmetic and mathematical reasoning: perspectives from neurology and neuropsychology. J Learn Disabil. 1997;30(1):34–46. Berdnikovs S, Abdala-Valencia H, McCary C, Somand M, Cole R, Garcia A, Bryce P, Cook-Mills JM. Isoforms of vitamin E have opposing immunoregulatory functions during inflammation by regulating leukocyte recruitment. J Immunol. 2009;182(7):4395–405. Jiang Q, Christen S, Shigenaga MK, Ames BN. γ-Tocopherol, the major form of vitamin E in the US diet, deserves more attention. Am J Clin Nutr. 2001;74(6):714–22. Traber MG. Vitamin E regulatory mechanisms. Annu Rev Nutr. 2007;27:347–62. Traber M, Kamal-Eldin A. Oxidative stress and vitamin E in anemia. In: Kraemer K, Zimmermann MB, editors. Nutritional Anemia. Basel: SIGHT AND LIFE Press; 2007. p. 155–187. Cases N, Aguilo A, Tauler P, Sureda A, Llompart I, Pons A, Tur J. Differential response of plasma and immune cell's vitamin E levels to physical activity and antioxidant vitamin supplementation. Eur J Clin Nutr. 2005;59(6):781. Miller GD, Forgoc T, Cline T, McBean LD. Breakfast benefits children in the US and abroad. J Am Coll Nutr. 1998;17(1):4–6. Sjoberg A, Hallberg L, Hoglund D, Hulthen L. Meal pattern, food choice, nutrient intake and lifestyle factors in the Goteborg adolescence study. Eur J Clin Nutr. 2003;57(12):1569–78. Rampersaud GC, Pereira MA, Girard BL, Adams J, Metzl JD. Breakfast habits, nutritional status, body weight, and academic performance in children and adolescents. J Am Diet Assoc. 2005;105(5):743–60; quiz 761-742.
The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for funding this research through the research group No. RGP-VPP-209. The funding body had no role in designing the study, collection, analysis, and interpretation of data, or in writing the manuscript. The data used and analyzed during the current study are available from the corresponding author on reasonable request.
Department of Rehabilitation Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Kingdom of Saudi Arabia: Ahmad H. Alghadir, Sami A. Gabr, Zaheen A. Iqbal & Einas Al-Eisa. Department of Anatomy, Faculty of Medicine, Mansoura University, Mansoura, Egypt: Sami A. Gabr.
Research ideas and study design were proposed by SG and AA. Review of the literature was carried out by ZI and EE. Data collection and analysis were executed by ZI and SG.
Manuscript preparation and submission were done by ZI and AA. All authors read and approved the final manuscript. Correspondence to Zaheen A. Iqbal. The aims and methodology of this study were explained to all participants and their parents, and written informed consent was obtained. In the case of minor participants (age < 16 years), informed consent was obtained from the parents/legal guardians. Ethical approval in accordance with the Declaration of Helsinki was obtained from the Rehabilitation Research Review Board of King Saud University before data collection (Ref. KSU/RRC/85/11/2017). Alghadir, A.H., Gabr, S.A., Iqbal, Z.A. et al. Association of physical activity, vitamin E levels, and total antioxidant capacity with academic performance and executive functions of adolescents. BMC Pediatr 19, 156 (2019). https://doi.org/10.1186/s12887-019-1528-1
June 2018, 12(3): 773-799. doi: 10.3934/ipi.2018033
Backward problem for a time-space fractional diffusion equation
Junxiong Jia1, Jigen Peng2, Jinghuai Gao3 and Yujiao Li4
1. School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
2. Department of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
3. School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
4. Department of Bioengineering, Xi'an Jiaotong University, Xi'an 710049, China
* Corresponding author: Jigen Peng
Received March 2016; Revised January 2018; Published March 2018
In this paper, a backward problem for a time-space fractional diffusion process is considered. For this problem, we propose to construct the initial function by minimizing the data residual error in the Fourier space domain with a variable total variation (TV) regularizing term, which can protect edges, as a TV regularizing term does, while reducing the staircasing effect. The well-posedness of this optimization problem is obtained under a very general setting. Specifically, we rewrite the time-space fractional diffusion equation as an abstract fractional differential equation and deduce our results by using fractional operator semigroup theory; hence, our theoretical results can be applied to other backward problems for differential equations with more general fractional operators. A modified Bregman iterative algorithm is then proposed to approximate the minimizer. The new feature of this algorithm is that the regularizing term is altered in each step, so that the complex Euler-Lagrange equation of the variable TV regularizing term need not be solved (only a simple Euler-Lagrange equation has to be solved). The convergence of this algorithm and the strategy for choosing the parameters are also obtained. Numerical implementations are provided to support our theoretical analysis and to show the flexibility of our minimization model.
Keywords: Backward time-space fractional diffusion, fractional operator semigroup, Bregman iterative method, variable TV regularization.
Mathematics Subject Classification: Primary: 35R30, 35R11; Secondary: 65L09.
Citation: Junxiong Jia, Jigen Peng, Jinghuai Gao, Yujiao Li. Backward problem for a time-space fractional diffusion equation. Inverse Problems & Imaging, 2018, 12 (3) : 773-799. doi: 10.3934/ipi.2018033
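The modified Bregman algorithm of the paper is not reproduced here, but the general flavor of Bregman-type iterations for TV-regularized reconstruction can be illustrated with a textbook split Bregman scheme for one-dimensional TV denoising. The sketch below, in Python with NumPy, is a generic stand-in only: it uses a fixed (not variable) TV term, works directly in the signal domain rather than the Fourier domain, and all parameter values, the discretization and the stopping rule are our own assumptions rather than choices made by the authors.

```python
# Generic split Bregman iteration for 1D TV denoising:
#   min_u  (mu/2)*||u - f||^2 + ||D u||_1,   with D the forward difference.
# Shown only to illustrate the class of Bregman-type methods referred to in
# the abstract; it is NOT the paper's modified algorithm.
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding (closed-form minimizer of the l1 subproblem)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def tv_denoise_split_bregman(f, mu=20.0, lam=10.0, n_iter=100):
    n = len(f)
    # Forward-difference matrix D of shape (n-1, n): (Du)_i = u_{i+1} - u_i.
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    A = mu * np.eye(n) + lam * D.T @ D            # system matrix for the u-update
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # quadratic step
        d = shrink(D @ u + b, 1.0 / lam)                      # l1 (shrinkage) step
        b = b + D @ u - d                                     # Bregman update
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.repeat([0.0, 1.0, 0.3], 50)            # piecewise-constant signal
    f = x + 0.1 * rng.standard_normal(x.size)     # noisy observation
    u = tv_denoise_split_bregman(f)
    print(float(np.abs(u - x).mean()))            # mean reconstruction error
```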
Figure 1. Left: Initial data; Right: The solution of the fractional diffusion equation (2) at time $T = 1$ with $\alpha = 0.6$, $\beta = 0.9$.
Figure 2. Left: Boundaries of the initial data; Right: Boundaries of the solution of the fractional diffusion equation (2) at time $T = 1$ with $\alpha = 0.6$, $\beta = 0.9$.
Figure 3.
Left: Original function; Middle: Recovered function by variable TV model with $\delta = 0.0005$; Right: Recovered function by variable TV model with $\delta = 0.005$ for Example 1.
Figure 4. Initial function for Example 2.
Figure 5. Left: Recovered function by variable TV model with $\delta = 0.0005$ for Example 2; Right: Recovered function by the variable TV model with $\delta = 0.005$ for Example 2.
Figure 6. The curve of the relative error of the recovered data for different values of parameter $\alpha$.
Table 1. The values of RelErr of three methods for Example 1
RelErr | TV model | Tikhonov model | Variable TV model
$\sigma = 0.0005$ | $3.8283\%$ | $0.3857\%$ | $0.3696\%$
$\sigma = 0.005$ | $8.8646\%$ | $0.6559\%$ | $0.6597\%$
Table 2. The values of RelErr of three methods for Example 2
RelErr | TV model | Tikhonov model | Variable TV model
$\sigma = 0.0005$ | $13.0053\%$ | $13.7772\%$ | $13.0666\%$
$\sigma = 0.005$ | $22.7222\%$ | $25.2101\%$ | $22.7810\%$
Table 3. The values of RelErr with different parameters $\lambda$ of Variable TV model for Example 2
 | $\lambda = 10^{11}$ | $\lambda = \frac{1}{4}\times 10^{11}$ | $\lambda = \frac{1}{16}\times 10^{11}$
$\sigma = 0.0005$ | $\text{M} = 9$ | $\text{M} = 34$ | $\text{M} = 150$
 | $\text{RelErr} = 13.0792\%$ | $\text{RelErr} = 13.0342\%$ | $\text{RelErr} = 13.0275\%$
Should I capitalise the "N" in "Normal Distribution" in British English?
This question is a bit left-field, but I figured that the community here probably has strong views on the subject! I am writing up my PhD thesis. Consistently, when talking about quantities which are formally related to a Gaussian distribution, I have capitalised the "N" in "Normal" to refer to them. For example, "[...Under such circumstances] the resulting distribution is not Normal, but rather described by [...]". My supervisor has read through the relevant chapter, and replaced every single one of these with a lower-case 'n'. I can't find any definitive literature on the subject -- Springer apparently wanted names properly capitalised, and according to another random dude on the internet, capitalising distribution names is a Good Idea. Lacking a definitive style guide for my thesis, I thought I'd turn to the community of experts -- what is commonly done, and why?
normal-distribution terminology
Landak
I tend to capitalize "Normal" to emphasize that no member of this family of distributions is "normal." – whuber♦ Sep 21 '15 at 21:54
For what it's worth, Wikipedia says this on the origin of the name: Since its introduction, the normal distribution has been known by many different name... Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual". However, by the end of the 19th century some authors had started using the name normal distribution, where the word "normal" was used as an adjective... https://en.wikipedia.org/wiki/Normal_distribution#Naming It is also not capitalized in the Wikipedia article, nor have I seen it capitalized in general as an American English speaker. For all intents and purposes normal IS an adjective, though not one that's meant to imply all other distributions are 'abnormal'.
Cat'r'pillar
(+1) this should be the accepted answer. I was misled by my non-English background. After reviewing multiple sources it seems that in the vast majority lowercase names are used, except names based on surnames. – Tim♦ Sep 21 '15 at 15:17
@Tim that's the standard I've always seen and used – shadowtalker Sep 22 '15 at 3:07
@ssdecontrol I've seen different usages, check my edited answer. But yes, it seems that lowercase is the standard. – Tim♦ Sep 22 '15 at 5:52
On one hand, "Normal" seems not to be an adjective, nor a feature of some distribution that makes it more normal than any other (or more "beta", more "binomial"). "Normal" is the name of a distribution and can be considered a proper noun, and so be capitalized. As @Scortchi noticed in his comment, this is also a general term and people seem to capitalize such terms. If you look into the literature, you'll see that some authors capitalize all the names of distributions, while some seem to never do so. On the other hand, currently (e.g. by Forbes et al., Krishnamoorty, Fisher, Cox et al. and others), it seems that most commonly names of distributions are written in lowercase (e.g. normal, beta, binomial) and are capitalized if they come from surnames (e.g. Cauchy, Gaussian, Poisson). There are also some names that are always written in lowercase, such as the $t$-distribution (example here). While Halperin et al.
(1965) in their recommendations do not mention distribution names; in their text they write about chi-squared and standardized normal distributions in lowercase. This convention may be confusing since in formulas the names of distributions are almost always written capitalized (e.g. $X \sim \mathrm{Normal}(\mu, \sigma)$ or $X \sim \mathcal{N}(\mu, \sigma)$) and also because many names come from surnames. However, contrary to my initial answer, it seems that lowercase names are used more commonly and so can be considered the current convention. (image source: Freeman, 2006)
Halperin, M., Hartley, H.O., and Hoel, P.G. (1965). Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation. The American Statistician, 19(3): 12–14.
Freeman, A. (2006). A visual comparison of normal and paranormal distributions. J Epidemiol Community Health, 60(1): 6.
Tim♦
I'd say "normal" in this context is an adjective and therefore should not be capitalised. However, "Gaussian" would be capitalised. This seems to be the accepted usage on the Wiki page for the normal distribution. – babelproofreader Sep 21 '15 at 10:36
Shouldn't it be "Normal Distribution" then, if it's a proper name? Seems to me it's rather like the "Mountain Bluebird" example in that Wikipedia article. – Scortchi - Reinstate Monica♦ Sep 21 '15 at 10:42
It's a bit deceptive to quote Wikipedia in support of capitalization, when they uniformly lowercase "normal distribution": en.wikipedia.org/wiki/Normal_distribution – Charles Sep 21 '15 at 15:07
I down-voted not because the answer is bluntly wrong but to steer people towards Grace's answer. (Otherwise your answer is quite nice!) – usεr11852 says Reinstate Monic Sep 21 '15 at 15:56
@David "Just wrong" seems to be going too far. No part of the meaning of "proper noun" concerns the singularity or plurality of the referent. Plenty of proper nouns refer to families, such as the Obamas, the Beatles, or a Gaussian (which is even a synonym for "normal distribution"!). – whuber♦ Sep 22 '15 at 12:24
Markov modeling in hepatitis B screening and linkage to care
Martin A. Sehr1, Kartik D. Joshi2, John M. Fontanesi3, Robert J. Wong4, Robert R. Bitmead1 & Robert G. Gish5,6,7
Theoretical Biology and Medical Modelling volume 14, Article number: 11 (2017)
With up to 240 million people chronically infected with hepatitis B worldwide, including an estimated 2 million in the United States, widespread screening is needed to link the infected to care and decrease the possible consequences of untreated infection, including liver cancer, cirrhosis and death. Screening is currently fraught with challenges in both the developed and developing world. New point-of-care tests may have advantages over standard-of-care tests in terms of cost-effectiveness and linkage to care. Stochastic modeling is applied here for relative utility assessment of point-of-care tests and standard-of-care tests for screening. We analyzed effects of point-of-care versus standard-of-care testing using Markov models for disease progression in individual patients. Simulations of large cohorts with distinctly quantified models permitted the assessment of particular screening schemes. The validity of the trends observed is supported by sensitivity analyses for the simulation parameters. Increased utilization of point-of-care screening was shown to decrease hepatitis B-related mortalities and increase life expectancy at low projected expense. The results suggest that standard-of-care screening should be substituted by point-of-care tests, resulting in improved linkage to care and a decrease in long-term complications.
With up to 240 million people chronically infected with hepatitis B virus (HBV) worldwide [1], including an estimated 2 million people in the United States [2, 3], widespread testing to identify the infected is needed in order to link them to care and decrease the possible consequences of untreated HBV infection, which include approximately 500,000 to 1.2 million deaths yearly from liver cirrhosis and its complications, including primary liver cancer [1]. Limitations related to funding and access to commercially available tools for chronic HBV testing are particularly important in developing countries where the burden of chronic HBV is heaviest. Success of traditional standard-of-care (SOC) testing for HBV infection hinges on the existence of a systematic process of following up test results that return several days after testing, notifying patients of results, and arranging for follow-ups to discuss antiviral therapy, a system that requires resources that are limited in developing regions. The development of rapid point-of-care (POC) tests for HBV has the potential to address many of these limiting factors and establish a more effective medical care model for chronic HBV. In a recent study of patients undergoing HBV screening, the performance characteristics of the NanoSign® HBs POC chromatographic immunoassay were compared with standard commercial laboratory HBsAg testing (Quest Diagnostics EIA). The POC tests yielded a sensitivity of 73.7% and a specificity of 97.8% [4]. In a meta-analysis evaluating the accuracy of POC testing, Shivkumar et al. reported POC testing sensitivity of 93–98% and specificity of 93–99% [5]. Furthermore, the low cost ($0.50) and rapid turnaround (20 min from phlebotomy to test results) of POC tests give them the potential to significantly improve the widespread implementation of HBV screening, especially in resource-limited regions.
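To put the quoted sensitivity and specificity figures in context, the corresponding predictive values can be worked out for an assumed prevalence. The short Python sketch below does this with Bayes' rule; the 2% prevalence (and the function name) are purely illustrative assumptions and are not values reported in the cited studies.

```python
# Illustrative positive/negative predictive values for a screening test.
# Sensitivity/specificity are the POC figures quoted above; the 2% prevalence
# is a hypothetical value chosen only for illustration.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    tn = specificity * (1.0 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.737, specificity=0.978, prevalence=0.02)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV ~ 40.6%, NPV ~ 99.5% under these assumptions
```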
Modeling in HBV analysis and treatment is an active research topic and multiple approaches have been considered recently [6,7,8]. A variety of mathematical modeling strategies have been used to address in particular the cost-effectiveness of HBV screening, using predominantly combinations of decision trees and/or Markov chain models [9]. In this paper, we propose time-varying Markov chain models of detailed structure, reflecting disease propagation in individuals to quantify the effects of large-scale utilization of POC tests to succeed the SOC screening model.

In comparing effectiveness of POC and SOC screening strategies for HBV, we made use of two Markov models with identical structure but different transition probabilities. Each of these models has six states capturing the medical progression of individual patients and is formed by aggregating states from a more detailed Markov model describing chronic HBV disease progression of individuals. The two aggregated models were used to simulate consequences of POC or SOC utilization in HBV screening strategies on large populations of individuals. Before they can be iterated numerically, the two Markov models rely on the specification of certain numerical values dealing with the rate of uptake of POC, the rate of infected patients seeking medical care, death rates, and so on. Some of these numbers can be determined (at least within a range) from the medical literature, which we did. Others are more hypothetical or might be the outcome of policy initiatives. The utility of the models is in their low computational cost and attendant capacity for iteration with many possible candidate values and the determination of the sensitivity of the observed behavior to the specific parameter values. Where available, the transition parameters in our models were selected from the literature. The remaining model parameters were estimated and their effects on the overall results analyzed in terms of sensitivity.

For the aggregated models, we considered the six patient states depicted in Fig. 1, where arrows symbolize state transitions admissible in a single time step. At year t, a patient is in state i with probability π_{i,t}. Arranging these probabilities into a state row-vector, we have \( \Pi_t = [\pi_{1,t}\;\; \pi_{2,t}\;\; \dots\;\; \pi_{6,t}] \). This vector is then propagated over time via Π_{t+1} = Π_t P_t, where P_t denotes the Markov state transition matrix at time t with element p_{ij,t} denoting the conditional probability of a patient in state i at year t transitioning to state j at year t + 1. Evidently, each row of the state transition matrix sums to one at all times. Special cases are p_{ij,t} = 0 for inadmissible transitions and p_{ij,t} = 1 for certain transitions. For instance, a transition from being immune to having an undetected HBV infection is inadmissible, whereas a deceased patient is going to remain so.

Fig. 1 State transition diagram for aggregated Markov model. Connections illustrate feasible transitions per step. Dashed box encloses absorbing states

As illustrated by the connection between States 3 and 4 in Fig. 1, we assume that no patient starting treatment ever abandons this treatment. This assumption is justified in that it alters the transition probabilities p_{36,t} and p_{46,t} of dying from HBV with or without medical treatment by a relatively small degree, which is covered by the sensitivity analysis described below.
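As a concrete illustration of this propagation, the following minimal Python sketch (not part of the original article) iterates a cohort distribution under a single fixed, purely illustrative transition matrix. The off-diagonal entries loosely follow the nominal values quoted later in the text (p_{12} = 0.2%, p_{13} = 0.15%, p_{34} = 15%, p_{36} = 1.35%, p_{46} = 0.54%), while the fixed mortality term of 0.95% is an arbitrary stand-in for the time-varying p_{5,t}.

```python
import numpy as np

# States: 1 susceptible, 2 immune, 3 untreated infection,
#         4 infection under treatment, 5 death (HBV-unrelated), 6 death (HBV-related)
P = np.array([
    [0.9870, 0.0020, 0.0015, 0.0000, 0.0095, 0.0000],   # susceptible
    [0.0000, 0.9905, 0.0000, 0.0000, 0.0095, 0.0000],   # immune
    [0.0000, 0.0000, 0.8270, 0.1500, 0.0095, 0.0135],   # untreated infection
    [0.0000, 0.0000, 0.0000, 0.9851, 0.0095, 0.0054],   # infection under treatment
    [0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000],   # death, HBV-unrelated (absorbing)
    [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000],   # death, HBV-related (absorbing)
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row must sum to one

pi = np.array([1.0, 0, 0, 0, 0, 0])      # whole cohort starts susceptible
for year in range(70):                   # propagate Pi_{t+1} = Pi_t @ P_t
    pi = pi @ P                          # P held fixed here for simplicity
print(pi)                                # expected state proportions after 70 years
```

Holding P fixed is a simplification; the next part of the text explains how, in the actual model, the diagonal entries must be readjusted every year as the mortality term p_{5,t} changes.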
Model

States 3 and 4 in this model are aggregated states from more detailed Markov models described below. This common model structure for both SOC and POC screening policies takes P_t of the form

$$
P_t=\left[\begin{array}{cccccc}
p_{11,t} & p_{12,t} & p_{13,t} & 0 & p_{15,t} & 0\\
0 & p_{22,t} & 0 & 0 & p_{25,t} & 0\\
0 & 0 & p_{33,t} & p_{34,t} & p_{35,t} & p_{36,t}\\
0 & 0 & 0 & p_{44,t} & p_{45,t} & p_{46,t}\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{array}\right]
=\left[\begin{array}{cccccc}
p_{11,t} & \boldsymbol{p}_{12} & p_{13} & 0 & p_{5,t} & 0\\
0 & p_{22,t} & 0 & 0 & p_{5,t} & 0\\
0 & 0 & p_{33,t} & \boldsymbol{p}_{34} & p_{5,t} & p_{36}\\
0 & 0 & 0 & p_{44,t} & p_{5,t} & p_{46}\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{array}\right],
$$

where p_{12,t} = p_{12} and p_{34,t} = p_{34} are constant transition probabilities chosen depending on the screening strategy at hand. The constant transition probability p_{13} is presumed independent of the screening policies, while p_{36} and p_{46} are results of the aggregation procedure outlined below. Time-variation of the state transition matrix P_t is caused solely by the varying propensity for death, p_{5,t}, as age advances, which is modeled through linear inter- and extrapolation of annual mortality rates for individuals in the USA [10]. However, even though all time variations are induced by variations in p_{5,t}, notice that the changing mortality rates affect the first four transition probabilities on the diagonal of the state transition matrix, being the probabilities to remain in the respective non-absorbing states. For illustration, whenever p_{5,t} increases at some time, p_{22,t} has to decrease by just as much to ensure that the sum of all transition probabilities from State 2 is one at all times t. That is, we require p_{22,t} = 1 − p_{5,t} for all times t. These time-adjustments have to be made for all rows of the state transition matrix corresponding to non-absorbing states (i.e., the first four rows of P_t). Special cases comprise the third and fourth rows of P_t, which capture transitions emerging from the two aggregated states corresponding to HBV disease progression in untreated and treated forms, respectively. We next discuss aggregation of a more complex model capturing the natural history of chronic HBV to estimate the transition probabilities corresponding to States 3 and 4 of the aggregated model in Fig. 1.
To form the aggregation resulting in State 3, untreated infection, we captured the disease progression of HBV without treatment using a different Markov model with its own states and admissible transitions as depicted in Fig. 2. Transition probabilities in this model are based on a literature review and subsequent weighting of the annual probabilities reported in the references using the GRADE criteria [11] for assessing the quality of each study. The resulting transition probabilities are summarized in Table 1, where HCC connotes hepatocellular carcinoma (liver cancer). We refer to the auxiliary model corresponding to Fig. 2 equipped with annual transition probabilities summarized in Table 1 as the disease model, while we refer to the more compact six-state model described above as the aggregated model.

Fig. 2 Disease model for untreated chronic HBV. State transition diagram of disease model for untreated chronic HBV, expanding State 3 in aggregated Markov model. Connections illustrate feasible transitions per step. Absorption to States 4–6 in aggregated Markov model via green box

Table 1 Annual transition probabilities for untreated chronic HBV infection

Absorption in the disease model means transition to the union of the States 4 (Infection under treatment), 5 (death, HBV-unrelated) and 6 (death, HBV-related) of the aggregated model, which can be reached from any state in the disease model. Initiation of treatment occurs via p_{34,t} = p_{34} defined for the aggregated model above. Death unrelated to HBV in the disease model is assumed time-invariant with annual probability of 0.1%, corresponding to an individual of age 25–34 years [10]. HBV-related death follows the probabilities listed in Table 1. The total absorption probability at any state in the disease model is then the sum of the three aforementioned component probabilities. Having the purpose of aggregation in mind, we do not need to distinguish the absorbing states in the disease model any further.

The technical tool used to aggregate the disease model into State 3 of the aggregated model is the fundamental matrix N = (I − Q)^{−1} of the disease model, where Q is the matrix obtained by extracting all rows and columns of the state transition matrix corresponding to transient states. The fundamental matrix N allows a number of useful deductions about the Markov chain, such as expected numbers of occupancies in transient states until absorption and corresponding variances. A particularly useful property of the fundamental matrix is that element n_{ij} equals the expected number of years spent at state j of the disease model when starting in state i [12]. That is, row i of N accumulates the expected numbers of years in each disease state given the process is initiated in state i. Given a sufficiently large number of patients, the normalized version of this row-vector can be interpreted as the average fraction of time spent at each state until absorption, where normalization refers to scaling the vector such that its components have sum one. We can now estimate the probability of death caused by untreated HBV by forming this normalized vector from the first row of the fundamental matrix and using its components to obtain a weighted sum of the probabilities for HBV-related death in Table 1. This weighted sum is used for p_{36}, concluding the aggregation of the disease model into State 3 of the aggregated model.
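The aggregation step can be reproduced mechanically once the disease-model probabilities are fixed. The sketch below is only an illustration: it uses a hypothetical three-state disease model with placeholder probabilities (Table 1 is not reproduced here), but it shows the mechanics of computing the fundamental matrix, the normalized occupancy vector, and the weighted HBV-death probability used for p_{36}.

```python
import numpy as np

# Transient disease states (hypothetical): 0 chronic infection, 1 cirrhosis, 2 HCC
# Q holds the annual probabilities of remaining in / moving between transient states.
Q = np.array([
    [0.80, 0.03, 0.01],   # chronic -> chronic / cirrhosis / HCC
    [0.00, 0.78, 0.04],   # cirrhosis -> cirrhosis / HCC
    [0.00, 0.00, 0.60],   # HCC -> HCC
])
# Each row's missing mass is absorption: treatment uptake (p34),
# HBV-unrelated death, or HBV-related death.
hbv_death = np.array([0.005, 0.03, 0.25])   # placeholder HBV-related death rates

N = np.linalg.inv(np.eye(3) - Q)            # expected years in state j, starting from i
occupancy = N[0] / N[0].sum()               # fraction of pre-absorption time per state
p36 = float(occupancy @ hbv_death)          # weighted annual HBV-related death rate
print(N[0], occupancy, p36)                 # ~1.6% here; the paper reports 1.35% with Table 1
```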
Assuming probabilities p_{34} = 15% of initiating medical treatment and 0.1% for HBV-unrelated death in the disease model, this procedure yields the estimate p_{36} = 1.35%. Notice that this probability depends implicitly on the screening policy employed via variation of p_{34}. State 4, Infection under treatment, of the aggregated model can be viewed as an aggregation of the same states used to form the disease model (Fig. 2), although with annual transition probabilities differing from those summarized in Table 1 to reflect effects of treatment. Moreover, the probability of absorption from this disease model under treatment would be decreased by the amount of p_{34}. The effect of these transition probabilities for the disease model under treatment is a value for the HBV-related mortality rate under treatment in the aggregated model, p_{46}. To obtain this transition probability, we correct the probability for absorption in the disease model used above by p_{34}, but keep using the values in Table 1. To adjust for the favorable effects of medical intervention, we introduce a scaling parameter α ∈ (0, 1) and estimate p_{46} = α p_{46}^{*}, where p_{46}^{*} denotes the probability obtained after aggregation with the values in Table 1. For instance, scaling factor α = 0.25 and fixed annual probability 0.1% for HBV-unrelated death result in the estimate p_{46} = 0.54%. This scaling approach is chosen as we focus on effects of screening policies rather than treatment options.

Summarizing the modeling and aggregation procedure, the state transition matrix for the aggregated models takes on the structure

$$
P_t=\left[\begin{array}{cccccc}
1-(\boldsymbol{p}_{12}+p_{13}+p_{5,t}) & \boldsymbol{p}_{12} & p_{13} & 0 & p_{5,t} & 0\\
0 & 1-p_{5,t} & 0 & 0 & p_{5,t} & 0\\
0 & 0 & 1-(\boldsymbol{p}_{34}+p_{5,t}+p_{36}) & \boldsymbol{p}_{34} & p_{5,t} & p_{36}\\
0 & 0 & 0 & 1-(p_{5,t}+\alpha p_{46}^{*}) & p_{5,t} & \alpha p_{46}^{*}\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{array}\right],
$$

with HBV-unrelated mortality rates p_{5,t} from the literature [10] and the transition probabilities in the third and fourth rows depending on the aggregation procedure outlined above. In the following, we use this transition matrix structure for simulation and corresponding sensitivity analyses based on a number of constants in the state transition matrix, namely the transition probabilities p_{12}, p_{13}, p_{34} and the scaling parameter α. As mentioned above, the effects of SOC and POC screening strategies are compared using altered transition probabilities p_{12} and p_{34} in the aggregated model, which in turn also affects the state aggregation yielding p_{36} as well as the respective transition probabilities on the diagonal of P_t. Higher utilization of POC screening with subsequent immunization in uninfected cases and initiation of medical treatment in infected cases, respectively, is anticipated to increase both p_{12} and p_{34}, albeit to different degrees.
To model these changes, we take p_{12} → p_{12}^{SOC} and p_{34} → p_{34}^{SOC} in the SOC case. In the POC case, we take p_{12} → p_{12}^{POC} = β p_{12}^{SOC} and p_{34} → p_{34}^{POC} = γ p_{34}^{SOC}, employing additional scaling parameters β and γ, each greater than one.

The approach taken to analyze POC/SOC utilization effectiveness using the quantified aggregated model is to model a population of a large number of individuals starting with an initial probability distribution over the six states and then to propagate the Markov chain until the collective probability of the death states is nearly one. We presume the population comprises 100,000 initially uninoculated and uninfected 10-year-olds, and the evolution of the Markov chain over time yields the anticipated proportions of the aging population in each state. The assessment of various performance measures such as mortality rates, years under treatment or life expectancy under SOC and POC screening policies is then tracked via evolution of the probability vector Π_t, which now admits the interpretation as the proportions of the population occupying each disease state, since the population is presumed large.

As mentioned above, we use an annual probability of 0.1% for HBV-unrelated death in the disease model. The remaining transition probabilities in the disease model are according to Table 1. The time-varying probabilities for HBV-unrelated death in the aggregated model, p_{5,t}, are obtained via inter- and extrapolation of mortality data for individuals in the U.S. [10]. The remaining simulation parameters to be chosen are the scaling constants α, β and γ as well as the transition probabilities p_{13}, p_{12}^{SOC} and p_{34}^{SOC}. In the following, we use the nominal parameter values p_{12}^{SOC} = 0.2%, p_{34}^{SOC} = 15%, p_{13} = 0.15%, α = 0.25, β = 5 and γ = 2 unless otherwise specified. Sensitivity analyses around the nominal parameter values specified above are displayed in Table 2, one for the inoculation-related parameters p_{12}^{SOC} and β, one for the treatment-related parameters p_{34}^{SOC} and γ and one for the screening-unrelated parameters p_{13} and α. Each sensitivity analysis uses nominal values for the remaining parameters.

Table 2 Sensitivity data for simulation parameters

Simulation results based on our cohort of 100,000 initially uninfected and uninoculated 10-year-olds and the nominal simulation parameters specified above are displayed in Fig. 3. As anticipated, utilization of POC screenings reduces the numbers of untreated infections and HBV-related mortalities at all times. This holds for all setups of the simulation parameters, as long as the scaling parameters are confined to their respective ranges, that is 0 < α < 1, β > 1 and γ > 1. The number of infections under treatment using POC screening is initially higher but lower on average than with SOC screening. This trend is a result of both the increased inoculation rate p_{12}^{POC} > p_{12}^{SOC} and the increased rate for initiation of treatment p_{34}^{POC} > p_{34}^{SOC}. Initially, about the same number of people get infected under each screening strategy, while more of those infected individuals are linked to medical treatment in the POC case. As the population ages, a larger fraction of the cohort has been inoculated in the POC case, which results in a decreased number of new infections. This in turn leads to a smaller number of patients with untreated infections that can potentially be linked to care, resulting in lower average numbers of patients receiving medical treatment under the POC screening setup.
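A compact sketch of this cohort simulation is given below. It is not the authors' code: p5(age) is a crude placeholder for the interpolated U.S. mortality table [10], capped so that every row of P_t remains a valid probability distribution, while the remaining constants follow the nominal values and worked examples quoted in the text (p_{13} = 0.15%, p_{36} = 1.35%, p_{46} = 0.54%).

```python
import numpy as np

def p5(age):
    # Placeholder for the inter-/extrapolated US mortality rates [10];
    # capped at 0.65 so all rows of P_t stay valid probability distributions.
    return min(0.65, 0.0005 * np.exp(0.085 * (age - 10)))

def transition_matrix(age, p12, p34, p13=0.0015, p36=0.0135, p46=0.0054):
    m = p5(age)
    P = np.zeros((6, 6))
    P[0] = [1 - (p12 + p13 + m), p12, p13, 0, m, 0]   # susceptible
    P[1] = [0, 1 - m, 0, 0, m, 0]                     # immune
    P[2] = [0, 0, 1 - (p34 + m + p36), p34, m, p36]   # untreated infection
    P[3] = [0, 0, 0, 1 - (m + p46), m, p46]           # infection under treatment
    P[4, 4] = P[5, 5] = 1.0                           # absorbing death states
    assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)
    return P

def simulate(p12, p34, p13=0.0015, p46=0.0054, cohort=100_000, start_age=10, years=100):
    pi = np.array([1.0, 0, 0, 0, 0, 0])               # everyone starts susceptible
    person_years = 0.0
    for k in range(years):
        person_years += 1 - pi[4] - pi[5]             # fraction still alive this year
        pi = pi @ transition_matrix(start_age + k, p12, p34, p13=p13, p46=p46)
    hbv_deaths = cohort * pi[5]
    life_expectancy = start_age + person_years        # crude estimate
    return hbv_deaths, life_expectancy

soc = simulate(p12=0.002, p34=0.15)                   # nominal SOC parameters
poc = simulate(p12=5 * 0.002, p34=2 * 0.15)           # beta = 5, gamma = 2
print("SOC (HBV deaths, life expectancy):", soc)
print("POC (HBV deaths, life expectancy):", poc)
```

Any numbers produced by this sketch depend heavily on the placeholder mortality curve and are meant only to show how the state occupancies and summary measures can be tracked, not to reproduce the study's results.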
However, the ratio of people linked to care over those infected without treatment is significantly higher in the POC case. The results from our modeling serve to quantify and bound these effects, which are a logical consequence of the model's structure.

Fig. 3 Expected occupancies of States 3, 4 and 6 in the aggregated Markov model. Solid lines = standard of care (SOC); dashed lines = point of care (POC); initial cohort age of 10 years; parameter values: chance of immunization, SOC, p_{12}^{SOC} = 0.2%; chance of treatment initiation, SOC, p_{34}^{SOC} = 15%; chance of infection, p_{13} = 0.15%; treatment effectiveness factor, α = 0.25; immunization factor, β = 5; treatment initiation factor, γ = 2

The data displayed in Fig. 3 are based on the particular set of nominal parameter values specified above, with the resultant findings extended to different parameter combinations. Sensitivity analyses around the nominal parameter values specified above are displayed in Table 2 for the following varied parameter combinations: the inoculation-related parameters p_{12}^{SOC} and β; the treatment-related parameters p_{34}^{SOC} and γ; and the screening-unrelated parameters p_{13} and α. For presentation purposes, each sensitivity analysis presumes nominal values for the remaining simulation parameters. However, the trends summarized in the following paragraphs extend to combined sensitivity analysis. The indicators listed to evaluate the simulations are the total numbers of HBV-related mortalities and life expectancies under POC and SOC screening policies as well as relative improvements gained by implementing POC screenings. For instance, the nominal parameter values result in improvements of 27.8% in HBV-related mortality numbers and 0.18% in life expectancy, respectively. Sensitivity is interpreted as variation in these two relative measures of the benefit of POC screening utilization. The reason for the seemingly low changes in life expectancy is that only a fraction of the population ever gets infected with HBV, while the change in life expectancy for infected individuals is larger.

In the first sensitivity analysis, only the treatment effectiveness factor α and the infection rate p_{13} vary from their nominal values. These are the simulation parameters presumed independent of the screening policies employed. As we can see, each of the tested combinations of these two parameters yields improvements of at least 25.1% in total HBV-related death numbers and 0.12% in life expectancy in the POC screening case. In general, we observe trends for larger improvements in HBV-related mortality numbers towards higher treatment effectiveness (i.e., lower value for α). The infection rate p_{13} has only minor influence on the improvement in HBV-related death totals, while increasing the gains in life expectancy at higher infection rates. The comparatively small influence of p_{13} on the relative improvements in mortality numbers is not surprising, as p_{13} only changes the proportion of the population ever to become infected, but not the course of the disease for patients once infected. The treatment effectiveness factor α, however, is strongly linked to potential gains in POC screening by the improved linkage to care and thus affects relative improvements in mortality numbers to a greater extent.
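Each sensitivity analysis amounts to repeating the SOC/POC comparison over a grid of parameter values and recording the two relative improvement measures. A hypothetical sketch of such a sweep for the screening-unrelated parameters α and p_{13}, reusing the simulate() helper from the cohort sketch above (again, not the authors' code):

```python
import itertools

# Hypothetical sweep mirroring the first sensitivity analysis (alpha and p13 varied,
# all other parameters nominal). Relies on simulate() from the earlier sketch.
# p46* = 2.16% is implied by p46 = alpha * p46* with alpha = 0.25 and p46 = 0.54%.

def improvements(p12_soc=0.002, p34_soc=0.15, beta=5.0, gamma=2.0, **kw):
    deaths_soc, le_soc = simulate(p12=p12_soc, p34=p34_soc, **kw)
    deaths_poc, le_poc = simulate(p12=beta * p12_soc, p34=gamma * p34_soc, **kw)
    mortality_gain = (deaths_soc - deaths_poc) / deaths_soc
    life_exp_gain = (le_poc - le_soc) / le_soc
    return mortality_gain, life_exp_gain

for alpha, p13 in itertools.product([0.1, 0.25, 0.5], [0.001, 0.0015, 0.003]):
    d_mort, d_le = improvements(p13=p13, p46=alpha * 0.0216)
    print(f"alpha={alpha}, p13={p13}: mortality -{d_mort:.1%}, life expectancy +{d_le:.2%}")
```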
The reason for the strong sensitivity of the gains in life expectancy to the infection rate is that if a larger fraction of the population becomes infected, the relative weight of the improvements for this fraction on the entire population grows. The second sensitivity study focuses on variations of the inoculation-related simulation parameters p_{12}^{SOC} and β. Using the parameter values in Table 2, we gain improvements of at least 21.98% in mortality numbers and at least 0.17% in life expectancies when implementing POC screenings. As expected, both scaling factor β and base inoculation rate p_{12}^{SOC} have significant influence on the two measures of improvement obtainable using POC tests. However, even for low scaling factors β and high base inoculation rates, notable benefits of POC test utilization are apparent. In the third sensitivity study, the treatment-related simulation parameters p_{34}^{SOC} and γ are varied. Improvements are at least 22.51% in HBV-related mortality numbers and 0.13% in life expectancy, while both parameters appear to have similar influence on the two measures.

Chronic HBV is a worldwide problem, with millions of new people infected each year and a large population of chronically infected patients facing health care consequences both short- and long-term. However, many chronic HBV patients remain asymptomatic and millions worldwide are unaware of their infections. The importance of early detection via HBV screening of high-risk individuals hinges on the ability to implement effective antiviral therapy to prevent progression of liver disease leading to complications such as cirrhosis and hepatocellular carcinoma. While commercially available serologic immunoassays are widely used for HBV screening, the availability and access to these testing tools for resource-limited regions or marginalized populations such as the homeless and immigrants are suboptimal. Furthermore, the effort associated with following up on SOC test results, patient call-back and counseling can be considerable and create further hurdles for implementing effective screening programs. Recent development of POC tests for HBV holds promise, and previous studies have reported satisfactory sensitivity and specificity of POC testing when compared with SOC testing. However, few studies have used a modeling approach that not only takes into account the performance characteristics of POC testing, but also the natural history of untreated HBV infection to evaluate accurately the added benefit of POC testing over SOC testing. Given the significantly lower cost and more rapid turnaround time associated with POC testing for HBV, the replacement of SOC testing by POC testing has the potential to improve HBV screening programs by promoting greater access and improving linkage to care. Using Markov modeling based on a comprehensive literature review, our current study demonstrates that POC testing is associated with significantly lower HBV-related mortality and greater life expectancy when compared with SOC testing.

In conclusion, the simulation results under various parameter selections indicate that a significant improvement is obtainable via replacement of SOC screening by new POC tests. The clinical impact of POC testing may be even greater in resource-limited regions and among marginalized populations where health care access and follow-up after testing are obstacles to the effective implementation of HBV screening programs.
In a future study, additional measures such as morbidity and expected cost of treatment will be analyzed based on additional data regarding cost and effectiveness of medical treatment as well as costs of POC and SOC screening implementation.

HBV: Hepatitis B virus
POC: Point-of-care
SOC: Standard-of-care

Lavanchy D. Hepatitis B virus epidemiology, disease burden, treatment, and current and emerging prevention and control measures. J Viral Hepat. 2004;11:97–107. Gish RG, Gadano AC. Chronic hepatitis B: current epidemiology in the Americas and implications for management. J Viral Hepat. 2006;13:787–98. Kowdley KV, Wang CC, Welch S, Roberts H, Brosgart CL. Prevalence of chronic hepatitis B among foreign-born persons living in the United States by country of origin. Hepatology. 2012;56:422–33. Gish RG, Gutierrez JA, Navarro-Cazarez N, Giang K, Adler D, Tran B, et al. A simple and inexpensive point-of-care test for hepatitis B surface antigen detection: serological and molecular evaluation. J Viral Hepat. 2014;21:905–8. Shivkumar S, Peeling R, Jafari Y, Joseph L, Pai NP. Rapid point-of-care first-line screening tests for hepatitis B infection: a meta-analysis of diagnostic accuracy (1980–2010). Am J Gastroenterol. 2012;107:1306–13. Owolabi KM. Numerical solution of diffusive HBV model in a fractional medium. Springerplus. 2016;5:1643. Shlomai A, Schwartz RE, Ramanan V, Bhatta A, De Jong YP, Bhatia SN, et al. Modeling host interactions with hepatitis B virus using primary and induced pluripotent stem cell-derived hepatocellular systems. Proc Natl Acad Sci U S A. 2014;111:12193–8. Cheng L, Li F, Bility MT, Murphy CM, Su L. Modeling hepatitis B virus infection, immunopathology and therapy in mice. Antiviral Res. 2015;121:1–8. Geue C, Wu O, Xin Y, Heggie R, Hutchinson S, Martin NK, et al. Cost-Effectiveness of HBV and HCV Screening Strategies--A Systematic Review of Existing Modelling Techniques. PLoS One. 2015;10:e0145022. Centers for Disease Control and Prevention NCFHS 2013. Compressed Mortality File 1999–2010 on CDC WONDER Online Database. http://wonder.cdc.gov/cmf-icd10.html. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. Br Med J. 2008;336:924–6. Kemeny JG, Snell JL. Finite Markov chains. New York: Springer; 1976. Eckman MH, Kaiser TE, Sherman KE. The Cost-effectiveness of Screening for Chronic Hepatitis B Infection in the United States. Clin Infect Dis. 2011;52:1294–306. Toy M, Veldhuijzen IK, De Man RA, Richardus JH, Schalm SW. Potential impact of long-term nucleoside therapy on the mortality and morbidity of active chronic hepatitis B. Hepatology. 2009;50:743–51. Toy M, Salomon JA, Jiang H, Gui HL, Wang H, Wang JS, et al. Population health impact and cost-effectiveness of monitoring inactive chronic hepatitis B and treating eligible patients in Shanghai, China. Hepatology. 2014;60:46–55. Veldhuijzen IK, Toy M, Hahne SJM, De Wit GA, Schalm SW, De Man RA, et al. Screening and early treatment of migrants for chronic hepatitis B virus infection is cost-effective. Gastroenterology. 2010;138:522–30. Wong WWL, Woo G, Heathcote EJ, Krahn M. Cost effectiveness of screening immigrants for hepatitis B. Liver Int. 2011;31:1179–90. Lacey LF, Gane E. The cost-effectiveness of long-term antiviral therapy in the management of HBeAg-positive and HBeAg-negative chronic hepatitis B in Singapore. J Viral Hepat. 2007;14:751–66. Chen JD, Yang HI, Iloeje UH, You SL, Lu SN, Wang LY, et al.
Carriers of Inactive Hepatitis B Virus Are Still at Risk for Hepatocellular Carcinoma and Liver-Related Death. Gastroenterology. 2010;138:1738–47. Lok A, Lai C-L, Wu P-C, Leung E. Spontaneous hepatitis B e antigen to antibody seroconversion and reversion in Chinese patients with chronic hepatitis B virus infection. Gastroenterology. 1987;92:1839–43. Yuen MF, Wong DKH, Fung J, Ip P, But D, Hung I, et al. HBsAg seroclearance in chronic hepatitis B in Asian patients: Replicative level and risk of hepatocellular carcinoma. Gastroenterology. 2008;135:1192–9. Arase Y, Ikeda K, Suzuki F, Suzuki Y, Saitoh S, Kobayashi M, et al. Long-term outcome after hepatitis B surface antigen seroclearance in patients with chronic hepatitis B. Am J Med. 2006;119:71e9–16. Chen YC, Sheen IS, Chu CM, Liaw YF. Prognosis following spontaneous HBsAg seroclearance in chronic hepatitis B patients with or without concurrent infection. Gastroenterology. 2002;123:1084–9. Liu J, Yang HI, Lee MH, Lu SN, Jen CL, Batrla-Utermann R, et al. Spontaneous seroclearance of hepatitis B seromarkers and subsequent risk of hepatocellular carcinoma. Gut. 2014;63:1648–57. Aggarwal R, Ghoshal UC, Naik SR. Assessment of cost-effectiveness of universal hepatitis B immunization in a low-income country with intermediate endemicity using a Markov model. J Hepatol. 2003;38:215–22. Hutton DW, So SK, Brandeau ML. Cost-effectiveness of nationwide hepatitis B catch-up vaccination among children and adolescents in China. Hepatology. 2010;51:405–14. Hutton DW, Tan D, So SK, Brandeau ML. Cost-effectiveness of screening and vaccinating Asian and Pacific Islander adults for hepatitis B. Ann Intern Med. 2007;147:460–9. Writing and editing support was provided by independent medical editor Lark Lands, Ph.D., and funded by Robert Gish, M.D. The authors would like to express their gratitude for her invaluable assistance in preparing the manuscript for publication. Funds were provided through a special projects fund at the University of California, San Diego. All data not explicitly included in the article is simulation data, which the authors will make available upon request. MAS contributed substantially to the conception and design of the study, the interpretation of data, both the drafting and the critical revision of the manuscript for important intellectual content, and the final approval of the version to be published; he was the lead researcher who was responsible for modeling procedure, simulation, and sensitivity analysis; he agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved. KDJ contributed substantially to the conception and design of the study, the analysis and interpretation of data, both the drafting and the critical revision of the manuscript for important intellectual content, and the final approval of the version to be published; he contributed substantially to the comprehensive literature reviews used to quantify the models, and contributed to model procedure; he agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved. 
JMF contributed substantially to the conception and design of the study, both the drafting and the critical revision of the manuscript for important intellectual content, and the final approval of the version to be published; he contributed substantially to modeling procedure, simulation, and sensitivity analysis; he agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved. RJW contributed substantially to the conception and design of the study; analysis and interpretation of data, both the drafting and the critical revision of the manuscript for important intellectual content, and the final approval of the version to be published; he contributed substantially to the comprehensive literature reviews used to quantify the models and to all the medical discussion elements in the manuscript; he agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved. RRB contributed substantially to the conception and design of the study, the analysis and interpretation of data, both the drafting and the critical revision of the manuscript for important intellectual content, and the final approval of the version to be published; he contributed substantially to modeling procedure, simulation, and sensitivity analysis; he agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved. RGG contributed substantially to the concept and design of the study, both the drafting and the critical revision of the manuscript for important intellectual content, and the final approval of the version to be published; he contributed substantially to the manuscript's discussion of hepatitis B testing, including standard-of-care and point-of-care tests, hepatitis B epidemiology and disease progression, and approaches to hepatitis B screening and linkage to care; he agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the article are appropriately investigated and resolved. The authors list as possible competing interests the following: Robert G. Gish has had Grants/Research Support from Gilead Sciences, and Merck & Co.; Dr. Gish has performed as Consultant and/or Advisor to Akshaya Pharmaceuticals, Arbutus Biopharma Corporation, Arrowhead Research Corporation, Bristol-Myers Squibb, ContraVir Pharmaceuticals, Enyo Pharma, Gilead Sciences, HumAbs BioMed, Ionis Pharmaceuticals, Merck & Co., Nanogen Biopharmaceutical, and Novira Therapeutics; Dr. Gish has current activity with the scientific or clinical advisory boards of Arrowhead Research Corporation, Merck & Co., ContraVir Pharmaceuticals, Gilead Sciences, Isis Pharmaceuticals, Enyo Pharma, HumAbs BioMed, and Nanogen Biopharmaceutical; Dr. Gish is a member of the Speakers Bureau for Bristol-Myers Squibb, Gilead Sciences, and Merck & Co.; Dr. Gish has stock options with Arrowhead Research Corporation. All other authors have no competing interests. This study does not contain any individual person's data in any form. This study does not involve human participants, human data or human tissue. 
Department of Mechanical and Aerospace Engineering, University of California, San Diego, 9500 Gilman Drive, MS 0411, La Jolla, CA, 92093-0411, USA Martin A. Sehr & Robert R. Bitmead Midwestern University, Arizona College of Osteopathic Medicine, 19555 North 59th Avenue, Glendale, AZ, 85308, USA Kartik D. Joshi Department of Medicine, Division of General Internal Medicine, University of California, San Diego, 200 W. Arbor Drive #8415, San Diego, CA, 92103, USA John M. Fontanesi Division of Gastroenterology and Hepatology, Alameda Health System - Highland Hospital, 1411 East 31st Street, Highland Care Pavilion - 5th Floor Endoscopy Unit, Oakland, CA, 94602, USA Robert J. Wong Department of Medicine, Division of Gastroenterology and Hepatology, Stanford University, Alway Building, Room M211, 300 Pasteur Drive, MC: 5187, Stanford, CA, 94305-5187, USA Robert G. Gish National Viral Hepatitis Roundtable, 1612 K Street NW, Suite 1202, Washington, DC, 20006, USA Hepatitis B Foundation, 3805 Old Easton Road, Doylestown, PA, USA Martin A. Sehr Robert R. Bitmead Correspondence to Robert G. Gish. Sehr, M.A., Joshi, K.D., Fontanesi, J.M. et al. Markov modeling in hepatitis B screening and linkage to care. Theor Biol Med Model 14, 11 (2017). https://doi.org/10.1186/s12976-017-0057-6 Markov modeling
An efficient method for chitin production from crab shells by a natural deep eutectic solvent Wen-Can Huang1, Dandan Zhao1, Changhu Xue1,2 & Xiangzhao Mao1,2 Marine Life Science & Technology volume 4, pages 384–388 (2022)Cite this article Crab shells are an important feedstock for chitin production. However, their highly compact structure significantly limits their use for the production of chitin under mild conditions. Here, a green and efficient approach using a natural deep eutectic solvent (NADES) to produce chitin from crab shells was developed. Its effectiveness in isolating chitin was investigated. The results showed that most proteins and minerals were removed from crab shells and the relative crystallinity of the isolated chitin reached 76%. The quality of the obtained chitin was comparable to chitin isolated by the acid–alkali method. This is the first report on a green method for efficient chitin production from crab shells. This study is expected to open new avenues for green and efficient production of chitin from crab shells. The global shellfishery industry generates millions of metric tons of waste every year (Yan and Chen 2015). Crustacean shells are a primary source of this waste, causing serious environmental problems and a significant waste of resources. Crustacean shells contain a useful chemical—chitin (Chen et al. 2021). Chitin has received significant attention due to its beneficial characteristics including biocompatibility, renewability, biodegradability, sustainability, bioactivity, and non-toxicity (Duan et al. 2015; Hong et al. 2018; Zhang and Rolandi 2017). It can be applied in many industries, including biomedicine, agriculture, water treatment, and cosmetics (Krajewska 2004; Zhang and Rolandi 2017). In crustacean shells, the chitin nanofibrils are complex, containing proteins to form long chitin-protein fibers embedded in the mineral matrix (Raabe et al. 2005). Compared to shrimp shell structure, the structure of a crab shell is even more complex and compact because of its higher mineral content. Thus, it is extremely difficult to use biological methods, such as enzymatic reactions and microbial fermentation, to dissociate the chitin-protein-mineral complex to produce chitin from crab shells. Moreover, the conventional acid–alkali approach for chitin production is considered harmful to the environment. Hence, a green method for isolating chitin from crab shells is urgently needed. Natural deep eutectic solvents (NADESs) are being increasingly considered as green and sustainable solvents. NADESs consist of natural compounds, especially primary metabolites including organic acids, sugars, and amino acids (Dai et al. 2013; Guo et al. 2019; Paiva et al. 2014). NADESs are distinguished from conventional solvents due to their favorable properties including biocompatibility, sustainability, biodegradability, non-volatility, low cost, and simple preparation methods (Xin et al. 2017). Here, we developed a chitin isolation method using a NADES consisting of choline chloride and malic acid, to separate chitin from crab shells. In our previous work, we developed a NADES-based approach to isolate chitin from shrimp shells (Huang et al. 2018). To the best of our knowledge, this is the first report on a green chemical approach for chitin production from crab shells. The isolated chitin was characterized and the reusability of the NADES was evaluated. 
Preparation of the NADES The NADES used in this study was composed of choline chloride and malic acid, associated by hydrogen bonding to form a eutectic mixture. Chitin isolation from crab shells A schematic diagram of the NADES-based chitin isolation is presented in Fig. 1A. Chitin isolation by the NADES was evaluated with various crab shell/NADES ratios (1:10, 1:20, and 1:30) and microwave irradiation times (3, 5, 7, 9, and 11 min). As seen in Fig. 1C, the demineralization and deproteinization efficiency increased with the crab shell/NADES ratio from 1:10 to 1:30. Time for microwave irradiation is another factor that can influence the chitin isolation. The demineralization and deproteinization efficiency increased when the irradiation time was increased. This result indicates that a larger amount of NADES and a longer irradiation time are beneficial for the removal of minerals and proteins from crab shells. The maximal demineralization and deproteinization efficiency reached 99.8% and 92.3%, respectively, at the crab shell/NADES ratio of 1:30 after 11 min of microwave irradiation. A Schematic diagram of the chitin isolation from crab shells by the NADES. B SEM images of the crab shells, chitin isolated using the acid–alkali, and chitin isolated using the NADES. C Demineralization and deproteinization efficiency at crab shell/NADES ratios of 1:10, 1:20, and 1:30. D Reusability of the NADES. E FT-IR spectra of the crab shells, chitin isolated by acid–alkali, and chitin isolated by the NADES; XRD curves of the crab shells, CaCO3, chitin isolated by acid–alkali, and chitin isolated by the NADES; TG curves of the crab shells, chitin isolated by acid–alkali, and chitin isolated by the NADES Scanning electron microscopy The morphologies of the isolated chitin obtained by the NADES and acid–alkali, and the raw crab shells were observed by SEM. As seen in Fig. 1B, remarkable morphological changes were observed between the isolated chitin and raw crab shell. The morphologies of the chitin isolated by the NADES and acid–alkali were similar. The surface of the raw crab shells was rough with pores observed. In contrast, in the chitin isolated by NADES treatment, pores were clearly observed among the microfibrils due to the absence of proteins and minerals. Fourier transform infrared spectroscopy Fourier transform infrared (FT-IR) spectra of the crab shells, and the chitin extracted using the NADES and acid–alkali are presented in Fig. 1E. The peaks of isolated chitin at 3447 cm−1 and 3268 cm−1 can be attributed to symmetric stretching vibration of NH2 and OH groups, respectively (Zhu et al. 2017). The amide I band split at 1660 cm−1 and 1626 cm−1 can be assigned to the existence of the intermolecular (–CO··HN–) and the intramolecular hydrogen bond (–CO··HOCH2–) (Rinaudo 2006). The splitting of the amide-I band was not observed in the crab shells because the amide peaks of protein overlapped with the chitin amide-I bands, indicating isolated chitin was free from protein. These absorbance peaks are the typical feature of α-chitin. No apparent differences were observed between the spectra of the chitin obtained from the NADES and acid–alkali method, suggesting chitin isolated by the NADES and acid–alkali possessed the same chemical structure as the chitin isolated by acid–alkali. X-ray diffraction (XRD) was used to investigate the crystal structure of the crab shells, calcium carbonate, and isolated chitin. As shown in Fig. 
1E, the chitin isolated by the NADES and acid–alkali presented diffraction peaks at approximately at 9.20°, 12.76°, 19.29°, and 26.26°, which confirmed the crystal type of the α-chitin (Zhu et al. 2017). This result indicated that the crystal structure of chitin was not changed by the NADES treatment. The diffraction peak of calcium carbonate at 29.55° was not shown in the samples isolated by the NADES and acid–alkali, indicating NADES can remove calcium carbonate. The XRD pattern of the chitin isolated by the NADES was similar to that of the chitin isolated by the acid–alkali method, suggesting complete removal of calcium carbonate. The CrI was calculated using Segal's method. The CrI of the chitin isolated by the NADES was 75.55%, while that of the crab shells was 33.13%. The increase of CrI indicated that the NADES-based method can effectively remove minerals and proteins from crab shells. Thermogravimetric analysis The results of thermogravimetric analysis (TGA) of crab shells before and after treatment are presented in Fig. 1E. The initial decomposition temperature range of raw crab shells was approximately from 150 to 300 °C, which can be assigned to the presence of proteins. Compared with the crab shells, the chitin extracted by the NADES did not show mass loss between 150 and 330 °C, suggesting proteins were not present along with chitin after treatment with NADES. The absence of mass loss between 600 and 750 °C indicates the isolated chitin was free from calcium carbonate. Additionally, the thermal stability of the chitin isolated by the NADES was similar to that of chitin obtained by acid–alkali. Reuse of the NADES When the NADES, consisting of choline chloride and malic acid, is used in chitin isolation, some malic acid is consumed in removal of calcium carbonate. Hence, after each cycle, a certain amount of malic acid needs to be added to maintain the ratio of choline chloride/malic acid. As present in Fig. 1D, the NADES could be used three times without a remarkable decrease in the demineralization and deproteinization efficiency. After three times, the NADES was too viscous to be further used because proteins isolated from crab shells increased the viscosity of the NADES. Chitin isolation from crab shells using a green approach is extremely difficult because crab shells have a much higher mineral content than other crustacean shells, such as shrimp shells. Because the minerals in crab shells are mostly in the form of crystalline calcium carbonate, when the NADES was applied to the crab shells, minerals were removed by malic acid, leaving chitin and proteins. In crab exoskeletons, the minerals are deposited within the chitin–protein matrix and form a well-defined hierarchical organization (Chen et al. 2008). Thus, the strong internal structure of crab shells was weakened after the removal of minerals. NADESs are capable of breaking hydrogen bonds and have a very high ionic strength (Sharma et al. 2013). The strong hydrogen-bond network between chitin and proteins was weakened due to competing hydrogen bond formed between the chloride ions of the NADES and the hydroxyl groups, and the proteins were removed by dissolution because of the hydrogen-bond interaction with the NADES (Li et al. 2013; Sharma et al. 2013). As a result, chitin was isolated from the crab shells. Choline chloride was purchased from Yuanye Bio-Technology Co., Ltd. Malic acid, HCl, and NaOH were purchased from Sinopharm Chemical Reagent Co., Ltd. 
Choline chloride and malic acids were mixed with a molar ratio of 1:1 at 80 °C until formation of a homogeneous liquid. Isolation of chitin from crab shells Chitin isolation from crab shells by the NADES was performed as follows. The ground crab shells and NADES were mixed at various shell/NADES ratios (1:10, 1:20, and 1:30). Next, the shell/NADES mixture was heated by microwave irradiation at 700 W. The isolated chitin was then separated from the mixture by centrifugation. The sample was then rinsed with distilled water followed by drying in a vacuum oven. For comparison with the NADES-based chitin isolation, acid–alkali-based chitin isolation from crab shells was conducted. Demineralization was performed by treating the crab shells with a 5% (w/v) HCl solution with a HCl solution-to-shell ratio of 10 ml/g at room temperature for 1 h. Next, the demineralized sample was collected by centrifugation followed by deproteinization with 10% (w/v) NaOH with a NaOH solution-to-shell ratio of 10 ml/g at 95 °C for 1 h. The isolated chitin was then rinsed with distilled water followed by drying in a vacuum oven. The mineral content was measured by heating the samples (1–2 g) at 525 °C to a constant weight in a muffle furnace. The demineralization efficiency was calculated using the following equation: $$\mathrm{Demineralization }\left(\mathrm{\%}\right)=\frac{{M}_{0}-M}{{M}_{0}}\times 100\%$$ where M0 and M are the mineral contents of the crab shells and isolated chitin, respectively. The protein content was determined by the Bradford method (Bradford 1976). Briefly, Coomassie Brilliant Blue G-250 (100 mg) was dissolved in ethanol (95%, 50 ml). Then, phosphoric acid (85%, 100 ml) was added, and the total volume of the solution was adjusted to 1 L with distilled water. Next, a 0.5 g sample was added to 5% NaOH, and the mixture was stirred at 95 °C for 1 h, followed by filtering and dilution. Then, the resulting sample was added to the solution, and the absorbance was measured at 595 nm after 2 min. The deproteinization efficiency was calculated using the following equation: $$\mathrm{Deproteinization }\left(\mathrm{\%}\right)=\frac{{P}_{0}-P}{{P}_{0}}\times 100\%$$ where P0 and P are the protein contents of the crab shells and isolated chitin, respectively. To evaluate reusability, the NADES was recycled three times without purification after chitin isolation. The surface morphologies of the crab shells and isolated chitin were observed by SEM (JEOL JSM-840) with an acceleration voltage of 20 kV. Before observation, all the samples were coated with platinum by vacuum sputtering. The FT-IR spectra were recorded on a FT-IR spectrometry (Thermo Scientific Nicolet iS10) over the frequency range of 4000–500 cm−1 with a resolution of 4 cm−1. The XRD patterns were recorded on an X-ray diffractometer (Rigaku Miniflex 600) using Cu Kα radiation at 40 kV. The diffraction data were collected at a scanning rate of 5° min−1 from 2θ = 5 − 60°. The relative crystallinity index (CrI) was calculated by the Segal method: $$\mathrm{CrI }\left(\mathrm{\%}\right)= \frac{{I}_{110}-{I}_{\mathrm{am}}}{{I}_{110}} \times 100\%$$ where I110 is the peak intensity of the diffraction at 2θ ≈ 20°, which represents both the crystalline and amorphous region material, and Iam is the diffraction intensity of the amorphous fraction at 2θ ≈ 18°. TGA was performed under a nitrogen atmosphere at a heating rate of 10 °C min−1 by a thermogravimetric analyzer (NETZSCH TG 209 F3). 
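As a worked illustration of the demineralization, deproteinization and crystallinity formulas defined above, the short Python sketch below uses made-up measurement values, chosen only to be of the same order as the results reported in this study, not the actual measured contents or XRD intensities.

```python
# Worked example of the efficiency and CrI formulas with hypothetical values.

def removal_efficiency(before, after):
    """Demineralization or deproteinization efficiency, in percent."""
    return (before - after) / before * 100

mineral_shell, mineral_chitin = 60.0, 0.12   # hypothetical mineral contents (%)
protein_shell, protein_chitin = 20.0, 1.54   # hypothetical protein contents (%)

print(removal_efficiency(mineral_shell, mineral_chitin))   # demineralization (%), ~99.8
print(removal_efficiency(protein_shell, protein_chitin))   # deproteinization (%), ~92.3

# Segal crystallinity index from XRD intensities:
I_110, I_am = 1800.0, 440.0                  # hypothetical counts at 2-theta ~20 and ~18 deg
CrI = (I_110 - I_am) / I_110 * 100
print(CrI)                                   # ~75.6%, of the order reported for the isolated chitin
```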
In the present study, we developed a NADES-based method for chitin production from crab shells. The experimental results indicated that most protein and minerals were removed, and the quality of the isolated chitin was superior to that isolated by other methods. In addition, the NADES can be reused three times without purification. This method was proved to be green, efficient, facile, and sustainable. The chitin produced by the proposed method was comparable to the chitin isolated by the acid–alkali method. This study provides a method for the sustainable production of chitin from crab shells. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72:248–254 Chen P-Y, Lin AY-M, McKittrick J, Meyers MA (2008) Structure and mechanical properties of crab exoskeletons. Acta Biomater 4:587–596 Chen R, Huang W-C, Wang W, Mao X (2021) Characterization of TEMPO-oxidized chitin nanofbers with various oxidation times and its application as an enzyme immobilization support. Mar Life Sci Technol 3:85–93 Dai Y, van Spronsen J, Witkamp G-J, Verpoorte R, Choi YH (2013) Natural deep eutectic solvents as new potential media for green technology. Anal Chim Acta 766:61–68 Duan B, Zheng X, Xia Z, Fan X, Guo L, Liu J, Wang Y, Ye Q, Zhang L (2015) Highly biocompatible nanofibrous microspheres self-assembled from chitin in NaOH/urea aqueous solution as cell carriers. Angew Chem Int Ed 54:5152–5156 Guo N, Ping K, Jiang Y-W, Wang L-T, Niu L-J, Liu Z-M, Fu Y-J (2019) Natural deep eutectic solvents couple with integrative extraction technique as an effective approach for mulberry anthocyanin extraction. Food Chem 296:78–85 Hong M-S, Choi G-M, Kim J, Jang J, Choi B, Kim J-K, Jeong S, Leem S, Kwon H-Y, Hwang H-B, Im H-G, Park J-U, Bae B-S, Jin J (2018) Biomimetic chitin-silk hybrids: an optically transparent structural platform for wearable devices and advanced electronics. Adv Funct Mater 28:1705480 Huang W-C, Zhao D, Guo N, Xue C, Mao X (2018) Green and facile production of chitin from crustacean shells using a natural deep eutectic solvent. J Agric Food Chem 66:11897–11901 Krajewska B (2004) Application of chitin- and chitosan-based materials for enzyme immobilizations: a review. Enzyme Microb Technol 35:126–139 Li C, Li D, Zou S, Li Z, Yin J, Wang A, Cui Y, Yao Z, Zhao Q (2013) Extraction desulfurization process of fuels with ammonium-based deep eutectic solvents. Green Chem 15:2793–2799 Paiva A, Craveiro R, Aroso I, Martins M, Reis RL, Duarte ARC (2014) Natural deep eutectic solvents—solvents for the 21st century. ACS Sustain Chem Eng 2:1063–1071 Raabe D, Sachs C, Romano P (2005) The crustacean exoskeleton as an example of a structurally and mechanically graded biological nanocomposite material. Acta Mater 53:4281–4292 Rinaudo M (2006) Chitin and chitosan: properties and applications. Prog Polym Sci 31:603–632 Sharma M, Mukesh C, Mondal D, Prasad K (2013) Dissolution of alpha-chitin in deep eutectic solvents. RSC Adv 3:18149–18155 Xin R, Qi S, Zeng C, Khan FI, Yang B, Wang Y (2017) A functional natural deep eutectic solvent based on trehalose: Structural and physicochemical properties. Food Chem 217:560–567 Yan N, Chen X (2015) Don't waste seafood waste. Nature 524:155–157 Zhang X, Rolandi M (2017) Engineering strategies for chitin nanofibers. 
J Mater Chem B 5:2547–2559 Zhu P, Gu Z, Hong S, Lian H (2017) One-pot production of chitin with high purity from lobster shells using choline chloride-malonic acid deep eutectic solvent. Carbohydr Polym 177:217–223 This work was supported by China Agriculture Research System (CARS-48), Taishan Scholar Project of Shandong Province (tsqn201812020), Fundamental Research Funds for the Central Universities (201941002). College of Food Science and Engineering, Ocean University of China, Qingdao, 266003, China Wen-Can Huang, Dandan Zhao, Changhu Xue & Xiangzhao Mao Laboratory for Marine Drugs and Bioproducts of Qingdao, Pilot National Laboratory for Marine Science and Technology (Qingdao), Qingdao, 266200, China Changhu Xue & Xiangzhao Mao Wen-Can Huang Dandan Zhao Changhu Xue Xiangzhao Mao WCH conceived the idea and wrote the manuscript. DZ performed the experiments. CX and XM supervised the work. WCH and DZ contributed equally to this work. Correspondence to Xiangzhao Mao. Animal and human rights statement This article does not contain any studies with human participants or animals performed by any of the authors. Edited by Xin Yu. Huang, WC., Zhao, D., Xue, C. et al. An efficient method for chitin production from crab shells by a natural deep eutectic solvent. Mar Life Sci Technol 4, 384–388 (2022). https://doi.org/10.1007/s42995-022-00129-y Issue Date: August 2022 Chitin Food wastes Natural deep eutectic solvents Crab shells
Income statement

An income statement (US English) or profit and loss account (UK English)[1] (also referred to as a profit and loss statement (P&L), statement of profit or loss, revenue statement, statement of financial performance, earnings statement, operating statement, or statement of operations)[2] is one of the financial statements of a company and shows the company's revenues and expenses during a particular period.[1] It indicates how the revenues (money received from the sale of products and services before expenses are taken out, also known as the "top line") are transformed into the net income (the result after all revenues and expenses have been accounted for, also known as "net profit" or the "bottom line"). It displays the revenues recognized for a specific period, and the cost and expenses charged against these revenues, including write-offs (e.g., depreciation and amortization of various assets) and taxes.[2] The purpose of the income statement is to show managers and investors whether the company made or lost money during the period being reported.

One important thing to remember about an income statement is that it represents a period of time, like the cash flow statement. This contrasts with the balance sheet, which represents a single moment in time.

Charitable organizations that are required to publish financial statements do not produce an income statement. Instead, they produce a similar statement that reflects funding sources compared against program expenses, administrative costs, and other operating commitments. This statement is commonly referred to as the statement of activities. Revenues and expenses are further categorized in the statement of activities by the donor restrictions on the funds received and expended.

The income statement can be prepared using one of two methods.[3] The Single Step income statement takes a simpler approach, totaling revenues and subtracting expenses to find the bottom line. The more complex Multi-Step income statement (as the name implies) takes several steps to find the bottom line, starting with the gross profit. It then calculates operating expenses and, when deducted from the gross profit, yields income from operations. Adding to income from operations is the difference of other revenues and other expenses. When combined with income from operations, this yields income before taxes. The final step is to deduct taxes, which finally produces the net income for the period measured.

Usefulness and limitations of income statement

Income statements should help investors and creditors determine the past financial performance of the enterprise, predict future performance, and assess the capability of generating future cash flows through the reporting of income and expenses.
Usefulness and limitations of income statement

Income statements should help investors and creditors determine the past financial performance of the enterprise, predict future performance, and assess the capability of generating future cash flows through the reported income and expenses. However, information of an income statement has several limitations:
- Items that might be relevant but cannot be reliably measured are not reported (e.g., brand recognition and loyalty).
- Some numbers depend on accounting methods used (e.g., using FIFO or LIFO accounting to measure inventory level).
- Some numbers depend on judgments and estimates (e.g., depreciation expense depends on estimated useful life and salvage value).

INCOME STATEMENT - GREENHARBOR LLC - For the year ended DECEMBER 31, 2010 (amounts in €)
GROSS REVENUES (including INTEREST income)      296,397
ADVERTISING                                       6,300
BANK & CREDIT CARD FEES                             144
BOOKKEEPING                                       2,350
SUBCONTRACTORS                                   88,000
ENTERTAINMENT                                     5,550
INSURANCE                                           750
LEGAL & PROFESSIONAL SERVICES                     1,575
LICENSES                                            632
PRINTING, POSTAGE & STATIONERY                      320
RENT                                             13,000
MATERIALS                                        74,400
TELEPHONE                                         1,000
UTILITIES                                         1,491
TOTAL EXPENSES                                 (195,513)
NET INCOME                                      100,885

Guidelines for statements of comprehensive income and income statements of business entities are formulated by the FASB in the U.S. Names and usage of different accounts in the income statement depend on the type of organization, industry practices and the requirements of different jurisdictions. If applicable to the business, summary values for the following items should be included in the income statement:[4]

Operating section
Revenue - Cash inflows or other enhancements of assets of an entity during a period from delivering or producing goods, rendering services, or other activities that constitute the entity's ongoing major operations. It is usually presented as sales minus sales discounts, returns, and allowances. Every time a business sells a product or performs a service, it obtains revenue. This often is referred to as gross revenue or sales revenue.[5]
Expenses - Cash outflows or other using-up of assets or incurrence of liabilities during a period from delivering or producing goods, rendering services, or carrying out other activities that constitute the entity's ongoing major operations.
Cost of Goods Sold (COGS) / Cost of Sales - represents the direct costs attributable to goods produced and sold by a business (manufacturing or merchandizing). It includes material costs, direct labour, and overhead costs (as in absorption costing), and excludes operating costs (period costs) such as selling, administrative, advertising or R&D, etc.
Selling, General and Administrative expenses (SG&A or SGA) - consist of the combined payroll costs. SG&A is usually understood as a major portion of non-production related costs, in contrast to production costs such as direct labour.
Selling expenses - represent expenses needed to sell products (e.g., salaries of sales people, commissions and travel expenses, advertising, freight, shipping, depreciation of sales store buildings and equipment, etc.).
General and Administrative (G&A) expenses - represent expenses to manage the business (salaries of officers / executives, legal and professional fees, utilities, insurance, depreciation of office building and equipment, office rents, office supplies, etc.).
Depreciation / Amortization - the charge with respect to fixed assets / intangible assets that have been capitalised on the balance sheet for a specific (accounting) period. It is a systematic and rational allocation of cost rather than the recognition of market value decrement.
Research & Development (R&D) expenses - represent expenses included in research and development.
Expenses recognised in the income statement should be analysed either by nature (raw materials, transport costs, staffing costs, depreciation, employee benefit etc.) or by function (cost of sales, selling, administrative, etc.). (IAS 1.99) If an entity categorises by function, then additional information on the nature of expenses (at least depreciation, amortisation and employee benefits expense) must be disclosed. (IAS 1.104) The major expenses, exclusive of costs of goods sold, are classified as operating expenses. These represent the resources expended, except for inventory purchases, in generating the revenue for the period. Expenses often are divided into two broad subclassifications: selling expenses and administrative expenses.[5]

Non-operating section
Other revenues or gains - revenues and gains from other than primary business activities (e.g., rent, income from patents, goodwill). It also includes gains that are either unusual or infrequent, but not both (e.g., gain from sale of securities or gain from disposal of fixed assets).
Other expenses or losses - expenses or losses not related to primary business operations (e.g., foreign exchange loss).
Finance costs - costs of borrowing from various creditors (e.g., interest expenses, bank charges).
Income tax expense - sum of the amount of tax payable to tax authorities in the current reporting period (current tax liabilities / tax payable) and the amount of deferred tax liabilities (or assets).

Irregular items
They are reported separately because this way users can better predict future cash flows - irregular items most likely will not recur. These are reported net of taxes.
Discontinued operations is the most common type of irregular item. Shifting business location(s), stopping production temporarily, or changes due to technological improvement do not qualify as discontinued operations. Discontinued operations must be shown separately.
Cumulative effect of changes in accounting policies (principles) is the difference between the book value of the affected assets (or liabilities) under the old policy (principle) and what the book value would have been if the new principle had been applied in the prior periods. For example, valuation of inventories using LIFO instead of weighted average method. The changes should be applied retrospectively and shown as adjustments to the beginning balance of affected components in Equity. All comparative financial statements should be restated. (IAS 8) However, changes in estimates (e.g., estimated useful life of a fixed asset) only require prospective changes. (IAS 8)
No items may be presented in the income statement as extraordinary items under IFRS regulations, but they are permissible under US GAAP. (IAS 1.87) Extraordinary items are both unusual (abnormal) and infrequent, for example, unexpected natural disaster, expropriation, prohibitions under new regulations. [Note: natural disaster might not qualify depending on location (e.g., frost damage would not qualify in Canada but would in the tropics).]
Additional items may be needed to fairly present the entity's results of operations. (IAS 1.85)
Certain items must be disclosed separately in the notes (or the statement of comprehensive income), if material, including:[4] (IAS 1.98)
- Write-downs of inventories to net realisable value or of property, plant and equipment to recoverable amount, as well as reversals of such write-downs
- Restructurings of the activities of an entity and reversals of any provisions for the costs of restructuring
- Disposals of items of property, plant and equipment
- Disposals of investments
- Discontinued operations
- Litigation settlements
- Other reversals of provisions

Because of its importance, earnings per share (EPS) are required to be disclosed on the face of the income statement. A company which reports any of the irregular items must also report EPS for these items either in the statement or in the notes.

Earnings per share = (Net income - Preferred stock dividends) / (Weighted average of common stock shares outstanding)

There are two forms of EPS reported:
- Basic: in this case "weighted average of shares outstanding" includes only actual stocks outstanding.
- Diluted: in this case "weighted average of shares outstanding" is calculated as if all stock options, warrants, convertible bonds, and other securities that could be transformed into shares are transformed. This increases the number of shares and so EPS decreases. Diluted EPS is considered to be a more reliable way to measure EPS.

Sample income statement

The following income statement is a very brief example prepared in accordance with IFRS. It does not show all possible kinds of accounts, but it shows the most usual ones. Please note the difference between IFRS and US GAAP when interpreting the following sample income statements.

Fitness Equipment Limited
Year Ended March 31,                      2009        2008        2007
Revenue                             $ 14,580.2  $ 11,900.4   $ 8,290.3
Cost of sales                        (6,740.2)   (5,650.1)   (4,524.2)
Gross profit                           7,840.0     6,250.3     3,766.1
SGA expenses                         (3,624.6)   (3,296.3)   (3,034.0)
Operating profit                     $ 4,215.4   $ 2,954.0     $ 732.1
Gains from disposal of fixed assets       46.3           -           -
Interest expense                       (119.7)     (124.1)     (142.8)
Profit before tax                      4,142.0     2,829.9       589.3
Income tax expense                   (1,656.8)   (1,132.0)     (235.7)
Profit (or loss) for the year        $ 2,485.2   $ 1,697.9     $ 353.6
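Returning to the earnings-per-share formula above, the short sketch below computes basic and diluted EPS; the amounts and share counts are invented for illustration and are not taken from the sample statements.

```python
def earnings_per_share(net_income, preferred_dividends, weighted_avg_shares):
    # EPS = (net income - preferred stock dividends) / weighted average shares outstanding
    return (net_income - preferred_dividends) / weighted_avg_shares

net_income = 2_485.2          # invented profit for the year
preferred_dividends = 85.2
basic_shares = 1_000.0        # weighted average of actual shares outstanding
dilutive_securities = 200.0   # options, warrants, convertibles treated as if converted

basic_eps = earnings_per_share(net_income, preferred_dividends, basic_shares)
diluted_eps = earnings_per_share(net_income, preferred_dividends,
                                 basic_shares + dilutive_securities)
print(round(basic_eps, 2), round(diluted_eps, 2))  # diluted EPS is the lower figure
```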
DEXTERITY INC. AND SUBSIDIARIES
Year Ended December 31,                                        2009         2008         2007
Revenue                                                  $ 36,525.9   $ 29,827.6   $ 21,186.8
Cost of sales                                            (18,545.8)   (15,858.8)   (11,745.5)
Gross profit                                               17,980.1     13,968.8      9,441.3
Selling, general and administrative expenses              (4,142.1)    (3,732.3)    (3,498.6)
Depreciation                                                 (602.4)      (584.5)      (562.3)
Amortization                                                 (209.9)      (141.9)      (111.8)
Impairment loss                                           (17,997.1)           —            —
Total operating expenses                                  (22,951.5)    (4,458.7)    (4,172.7)
Operating profit (or loss)                               $ (4,971.4)   $ 9,510.1    $ 5,268.6
Interest income                                                 25.3         11.7         12.0
Interest expense                                             (718.9)      (742.9)      (799.1)
Profit (or loss) from continuing operations before tax,
share of profit (or loss) from associates
and non-controlling interest                             $ (5,665.0)   $ 8,778.9    $ 4,481.5
Income tax expense                                         (1,678.6)    (3,510.5)    (1,789.9)
Profit (or loss) from associates, net of tax                  (20.8)          0.1       (37.3)
Profit (or loss) from non-controlling interest, net of tax     (5.1)        (4.7)        (3.3)
Profit (or loss) from continuing operations              $ (7,369.5)   $ 5,263.8    $ 2,651.0
Profit (or loss) from discontinued operations, net of tax  (1,090.3)      (802.4)       164.6
Profit (or loss) for the year                            $ (8,459.8)   $ 4,461.4    $ 2,815.6

"Bottom line" is the net income that is calculated after subtracting the expenses from revenue. Since this forms the last line of the income statement, it is informally called the "bottom line." It is important to investors as it represents the profit for the year attributable to the shareholders. After the revision to IAS 1 in 2003, the Standard now uses "profit or loss for the year" rather than "net profit or loss" or "net income" as the descriptive term for the bottom line of the income statement.

Requirements of IFRS

On 6 September 2007, the International Accounting Standards Board issued a revised IAS 1: Presentation of Financial Statements, which is effective for annual periods beginning on or after 1 January 2009. A business entity adopting IFRS must include:
- a statement of comprehensive income, or
- two separate statements comprising: an income statement displaying components of profit or loss, and a statement of comprehensive income that begins with profit or loss (bottom line of the income statement) and displays the items of other comprehensive income for the reporting period. (IAS 1.81)

All non-owner changes in equity (i.e., comprehensive income) shall be presented either in the statement of comprehensive income or in a separate income statement and a statement of comprehensive income. Components of comprehensive income may not be presented in the statement of changes in equity. Comprehensive income for a period includes profit or loss (net income) for that period and other comprehensive income recognised in that period. All items of income and expense recognised in a period must be included in profit or loss unless a Standard or an Interpretation requires otherwise. (IAS 1.88) Some IFRSs require or permit some components to be excluded from profit or loss and instead to be included in other comprehensive income. (IAS 1.89)
Items and disclosures

The statement of comprehensive income should include:[4] (IAS 1.82)
- Finance costs (including interest expenses)
- Share of the profit or loss of associates and joint ventures accounted for using the equity method
- A single amount comprising the total of (1) the post-tax profit or loss of discontinued operations and (2) the post-tax gain or loss recognised on the disposal of the assets or disposal group(s) constituting the discontinued operation
- Profit or loss
- Each component of other comprehensive income classified by nature
- Share of the other comprehensive income of associates and joint ventures accounted for using the equity method
- Total comprehensive income

The following items must also be disclosed in the statement of comprehensive income as allocations for the period: (IAS 1.83)
- Profit or loss for the period attributable to non-controlling interests and owners of the parent
- Total comprehensive income attributable to non-controlling interests and owners of the parent

No items may be presented in the statement of comprehensive income (or in the income statement, if separately presented) or in the notes as extraordinary items.

See also: Trading statement; Statement of retained earnings (statement of changes in equity); Model audit; International Financial Reporting Standards (and their requirements); PnL Explained - Profit and loss explained report

References
[1] Professional English in Use - Finance, Cambridge University Press, p. 10
[2]
[3] Warren, Carl (2008). Survey of Accounting. Cincinnati: South-Western College Pub. pp. 128–132.
[4] "Presentation of Financial Statements". International Accounting Standards Board. Accessed 17 July 2010.
[5] http://www.economywatch.info/2011/06/income-statement.html

Further reading:
Harry I. Wolk, James L. Dodd, Michael G. Tearney. Accounting Theory: Conceptual Issues in a Political and Economic Environment (2004). ISBN 0-324-18623-1.
Angelico A. Groppelli, Ehsan Nikbakht. Finance (2000). ISBN 0-7641-1275-9.
Barry J. Epstein, Eva K. Jermakowicz. Interpretation and Application of International Financial Reporting Standards (2007). ISBN 978-0-471-79823-1.
Jan R. Williams, Susan F. Haka, Mark S. Bettner, Joseph V. Carcello. Financial & Managerial Accounting (2008). ISBN 978-0-07-299650-0.
Understanding The Income Statement - article from Investopedia
Matrix multiplication and tensorial summation convention I'm reading this introduction to tensors: https://arxiv.org/abs/math/0403252, specifically rules concerning summation convention (ref. page 13): Rule 1. In correctly written tensorial formulas free indices are written on the same level (upper or lower) in both sides of the equality. Each free index has only one entry in each side of the equality. Rule 2. In correctly written tensorial formulas each summation index should have exactly two entries: one upper entry and one lower entry. Rule 3. For any double indexed array with indices on the same level (both upper or both lower) the first index is a row number, while the second index is a column number. If indices are on different levels (one upper and one lower), then the upper index is a row number, while lower one is a column number. I have a doubt on applying these rules to matrix multiplication though. Let $A$ and $B$ be matrices and let's represent their elements as $A_{ij}$ and $B^{jk}$. If $C=AB$, then $$C_i^k = A_{ij}B^{jk}$$ where $j$ is summed over. But on the LHS, $k$ clearly represents the column index, and $i$ the row index. You can even check it yourself by considering $A$ and $B$ as $2\times 2$ matrices. According to rule 3 though, it's supposed to be the opposite since $i$ is the subscript and $k$ the superscript. Is there a way to resolve the inconsistency between the $3$ rules here, or am I missing something? Because if this convention doesn't even apply to something as simple as matrix multiplication, then it seems pretty useless. tensor-calculus conventions notation linear-algebra covariance Shirish KulhariShirish Kulhari Matrices are not tensors. A matrix is just a rectangular block of numbers. By contrast, tensors are geometrical objects; you can specify a tensor by taking a coordinate system and giving its components, but the tensor exists independently of those components. A tensor is to a matrix like a triangle is to a list of the coordinates of its points. However, for tensors of low rank, it's possible to write tensor manipulations in terms of familiar matrix operations on their components. Because of this, some sources even go so far as to say a tensor is the same thing as a matrix, though I think this is misguided. Specifically, a tensor of rank $(r, s)$, in components, would have $r$ indices up and $s$ indices down. It can be thought of as a linear map which takes in $s$ vectors and returns $r$ vectors. Therefore, a rank $(1, 0)$ tensor $v^i$ is a vector, since it takes in nothing and returns a vector. Its components can be thought of as forming a column vector. a rank $(0, 1)$ tensor $w_i$ is a covector, since it takes in a vector and returns a number. Its components can be thought of as forming a row vector. a rank $(1, 1)$ tensor $A^i_{\ \ j}$ of rank $(1, 1)$ is a linear map from vectors to vectors, so its components can be thought of as a matrix. For example, a covector can act on a vector to return a number, and in components that is $$w(v) = w_i v^i$$ which is the familiar multiplication of a row vector and column vector. Also, a rank $(1, 1)$ tensor can act on a vector to return a vector, which in components is $$(A(v))^i = A^i_{\ \ j} v^j.$$ Finally, the composition of two rank $(1, 1)$ tensors is $$C^i_{\ \ k} = A^i_{\ \ j} B^j_{\ \ k}.$$ Thus, for these three types of tensors only, tensor operations can be written in terms of matrix multiplication. The general rule is that a "column" index is associated with an upper tensor index. 
It's useful to be able to use both notations when they both work, but tensor index manipulations are more general, since the matrix picture completely breaks down for higher-rank tensors. The other problem with the matrix picture for tensors is that, even when it works, very few sources will line up the indices as I did. For example, if you're reading a computer science or typical intro linear algebra textbook, the indices will not line up. And that's fine, because they are just considering rectangles of numbers; their matrices have no tensorial meaning whatsoever, so there's no reason to adhere to the tensor conventions. Also, if you're working in Euclidean space, you can always raise and lower the indices on tensors without changing the values of the components. Hence books about applied physics (e.g. fluid dynamics, engineering) will not line up the tensor indices because it makes no difference in computing the result.

– knzhou

You said that the general rule is that a "column" index is associated with an upper tensor index. Does that mean rule $3$ as given in the linked notes is wrong? Because if that's the case, even for normal matrix multiplication, rules 1 and 2, together with the revised rule 3 (superscript column and subscript row) become consistent. – Shirish Kulhari Jun 10 '18 at 9:18

@ShirishKulhari I've given a set of rules that is self-consistent, but the issue is that, as I said in my last paragraph, lots of people use the index notation in different ways. I suspect that the "rule 3" in your source is not meant to be a nice, self-consistent rule, but just meant to describe what some people do. You can't go wrong if you follow my rules, which are the standard in theoretical physics, but "rule 3" might help you decipher what other people are trying to say. – knzhou Jun 10 '18 at 9:26

@ShirishKulhari Under my conventions, even the first part of rule 3 is not correct. A tensor with both indices up or both indices down should not be thought of as a matrix at all, since it is not a linear transformation acting on vectors. So already at that point your source is being a bit more flexible. – knzhou Jun 10 '18 at 9:30

Ignore rule 3. As you noticed it creates contradictions, so it has to be replaced by a better rule. First of all, you shouldn't write $C_i^k$. Here lies the problem. Indexes have an order, and it's easy to see it in tensors like $A_{ij}$ or $B^{jk}$. It should be seen in $C$ too. So the correct notation is: $$C_i\,^k = A_{ij} B^{jk}$$ In matrix representation the first index always represents rows and the second always columns. This representation works well for order 1 and 2 tensors, but begins to be ugly for bigger orders.

– GRB
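As a quick numerical check of the convention discussed in the answers (first index = row, second index = column, summation over the repeated index j), here is a small sketch; the array values are arbitrary toy data.

```python
import numpy as np

# Arbitrary 2x2 component arrays for A_{ij} and B^{jk}, stored with the first index as the row.
A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[5., 6.],
              [7., 8.]])

# C_i^k = A_{ij} B^{jk}: contract over the shared index j.
C_einsum = np.einsum('ij,jk->ik', A, B)

# With the row/column convention this is exactly ordinary matrix multiplication.
C_matmul = A @ B

print(np.allclose(C_einsum, C_matmul))  # True
```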
Universal Style Transfer via Feature Transforms

When viewing an image, whether it is a photograph or a painting, two types of mutually exclusive data are present. First, there is the content of the image, such as a person in a portrait. However, the content does not uniquely define the image. Consider a case where multiple artists paint a portrait of an identical subject: the results would vary despite the content being invariant. The cause of the variance is rooted in the style of each particular artist. Therefore, style transfer between two images results in the content being unaffected but the style being copied. Style transfer is an important image editing task which enables the creation of new artistic works. Typically one image is termed the content/reference image, whose style is discarded. The other image is called the style image, whose style, but not the content, is copied to the content image. Deep learning techniques have been shown to be effective methods for implementing style transfer. Previous methods have been successful but with several key limitations and often trade off between generalization, quality and efficiency. Either they are fast, but have very few styles that can be transferred, or they can handle arbitrary styles but are no longer efficient. The presented paper establishes a compromise between these two extremes by using only whitening and coloring transforms (WCT) to transfer a style within a feedforward image reconstruction architecture. No training of the underlying deep network is required per style.

Gatys et al. developed a new method for generating textures from sample images in 2015 [1] and extended their approach to style transfer by 2016 [2]. They proposed the use of a pre-trained convolutional neural network (CNN) to separate content and style of input images. Having proven successful, a number of improvements quickly developed, reducing computational time, increasing the diversity of transferable styles, and improving the quality of the results. Central to these approaches and of the present paper is the use of a CNN. In 2017, Mechrez et al. [12] proposed an approach that takes as input a stylized image and makes it more photorealistic. Their approach relied on the Screened Poisson Equation, maintaining the fidelity of the stylized image while constraining the gradients to those of the original input image. The method they proposed was fast, simple, fully automatic and showed positive progress in making a stylized image photorealistic. Alternative attempts, by using a single network to transfer multiple styles, include models conditioned on binary selection units [13], a network that learns a set of new filters for every new style [15], and a novel conditional normalization layer that learns normalization parameters for each style [3].

How Content and Style are Extracted using CNNs

A CNN was chosen due to its ability to extract high-level features from images. These features can be interpreted in two ways. Within layer [math] l [/math] there are [math] N_l [/math] feature maps of size [math] M_l [/math].
With a particular input image, the feature maps are given by [math] F_{i,j}^l [/math] where [math] i [/math] and [math] j [/math] locate the map within the layer. Starting with a white noise image and a reference (content) image, the features can be transferred by minimizing [math] \mathcal{L}_{content} = \frac{1}{2} \sum_{i,j} \left( F_{i,j}^l - P_{i,j}^l \right)^2 [/math] where [math] P_{i,j} [/math] denotes the feature map output caused by the white noise image. Therefore this loss function preserves the content of the reference image. The style is described using a Gram matrix given by [math] G_{i,j}^l = \sum_k F_{i,k}^l F_{j,k}^l [/math] The Gram matrix $G$ of a set of vectors $v_1,\dots,v_n$ is the matrix of all possible inner products whose entries are given by $G_{ij}=v_i^Tv_j$. The loss function that describes a difference in style between two images is equal to: [math] \mathcal{L}_{style} = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left(G_{i,j}^l - A_{i,j}^l \right)^2 [/math] where [math] A_{i,j}^l [/math] and [math] G_{i,j}^l [/math] are the Gram matrices of the generated image and style image respectively. Therefore three images are required: a style image, a content image and an initial white noise image. Iterative optimization is then used to add content from one image to the white noise image, and style from the other. An additional parameter is used to balance the ratio of these loss functions.

The 19-layer ImageNet trained VGG network was chosen by Gatys et al. VGG-19 is still commonly used in more recent works, as will be shown in the presented paper, although training datasets vary. Such CNNs are typically used in classification problems by finalizing their output through a series of fully connected layers. For content and style extraction it is the convolutional layers that are required. The method of Gatys et al. is style independent, since the CNN does not need to be trained for each style image. However, the process of iterative optimization to generate the output image is computationally expensive.

Other methods avoid the inefficiency of iterative optimization by training a network/networks on a set of styles. The network then directly transfers the style from the style image to the content image without solving the iterative optimization problem. V. Dumoulin et al. trained a single network on $N$ styles [3]. This improved upon previous work where a network was required per style [4]. The stylized output image was generated by simply running a feedforward pass of the network on the content image. While efficiency is high, the method is no longer able to apply an arbitrary style without retraining. Li et al. have proposed a novel method for generating the stylized image. A CNN is still used as in Gatys et al. to extract content and style. However, the stylized image is not generated through iterative optimization or a feed-forward pass as required by previous methods. Instead, whitening and colour transforms are used.

Image Reconstruction

Training a single decoder. X denotes the layer of the VGG encoder that the decoder receives as input.

An auto-encoder network is used to first encode an input image into a set of feature maps, and then decode it back to an image as shown in the adjacent figure. The encoder network used is VGG-19. This network is responsible for obtaining feature maps (similar to Gatys et al.). The output of each of the first five layers is then fed into a corresponding decoder network, which is a mirrored version of VGG-19.
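Before moving on to the decoder training, the content and style losses of Gatys et al. quoted above can be written out in a few lines of numpy; the feature-map shapes and values below are arbitrary toy data, not actual VGG-19 activations.

```python
import numpy as np

def content_loss(F, P):
    # L_content = 1/2 * sum_{i,j} (F_ij - P_ij)^2
    return 0.5 * np.sum((F - P) ** 2)

def gram(F):
    # G_ij = sum_k F_ik F_jk, with F of shape (N_l feature maps, M_l positions)
    return F @ F.T

def style_loss(F, S):
    N_l, M_l = F.shape
    G, A = gram(F), gram(S)
    # L_style = 1/(4 N_l^2 M_l^2) * sum_{i,j} (G_ij - A_ij)^2
    return np.sum((G - A) ** 2) / (4.0 * N_l ** 2 * M_l ** 2)

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 64))   # toy feature maps of the generated image
P = rng.normal(size=(8, 64))   # toy feature maps of the content image
S = rng.normal(size=(8, 64))   # toy feature maps of the style image
print(content_loss(F, P), style_loss(F, S))
```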
Each decoder network then decodes the feature maps of the $l$th layer producing an output image. A mechanism for transferring style will be implemented by manipulating the feature maps between the encoder and decoder networks. First, the auto-encoder network needs to be trained. The following loss function is used [math] \mathcal{L} = || I_{output} - I_{input} ||_2^2 + \lambda || \Phi(I_{output}) - \Phi(I_{input})||_2^2 [/math] where $I_{input}$ and $I_{output}$ are the input and output images of the auto-encoder. $\Phi$ is the VGG encoder. The first term of the loss is the pixel reconstruction loss, while the second term is feature loss. Recall from "Related Work" that the feature maps correspond to the content of the image. Therefore the second term can also be seen as penalising for content differences that arise due to the encoder network. The network was trained using the Microsoft COCO dataset. They use whitening and coloring transforms to directly transform the $f_c$ (VGG feature map of content image at a certain layer) to match the covariance matrix of $f_s$ (VGG feature map of style image). This process is consisted of two steps, i.e., whitening and coloring transform. Note that the decoder will reconstruct the original content image if $f_c$ is directly fed into it. Whitening Transform Whitening first requires that the covariance of the data is a diagonal matrix. This is done by solving for the covariance matrix's eigenvalues and eigenvector matrices. Whitening then forces the diagonal elements of the eigenvalue matrix to be the same. This is achieved for a feature map from VGG through the following steps. The feature map $f_c$ is extracted from a layer of the encoder network after activation on the content image. This is the data to be whitened. $f_c$ is centered by subtracting its mean vector $m_c$. Then, the eigenvectors $E_c$ and eigenvalues $D_c$ are found for the covariance matrix of $f_c$. The whitened feature map is then given by $\hat{f}_c = E_c D_c^{-1/2} E_c^T f_c$. If interested, the derivation of the whitening equation can be seen in [5]. Li et al. found that whitening removed styles from the image. Colour Transform However, whitening does not transfer style from the style image. It only uses feature maps from the content image. The colour transform uses both $\hat{f}_c$ from above and $f_s$, the feature map from the style image. $f_s$ is centered by subtracting its mean vector $m_s$. Then, the eigenvectors $E_s$ and eigenvalues $D_s$ are calculated for the covariance matrix of $f_s$. The colour transform is given by $\hat{f}_{cs} = E_s D_s^{1/2} E_s^T \hat{f}_c$. Recenter $\hat{f}_{cs}$ using $m_s$. Intuitively, colouring results in a correlation between the $\hat{f}_c$ and $f_s$ feature maps. This is where the style transfer takes place. Content/Style Balance Using just $\hat{f}_{cs}$ as the input to the decoder may create a result that is too extreme in style. To balance content and style a new parameter $\alpha$ is defined. [math] \hat{f}_{cs} = \alpha \hat{f}_{cs} + (1 - \alpha) f_c [/math] Authors use $\alpha$ = 0.6 in the style transfer experiments. Using Multiple Layers It has been previously mentioned that multiple decoders were trained, one for each of the first five layers of the encoder network. Each layer of a CNN perceives features at different levels. Levels close to the input image will detect lower level local features such as edges. Those levels deeper into the network will detect more complex global features. 
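The whitening and colouring steps described in this section can be summarised in the short numpy sketch below (toy feature maps, not the authors' implementation; a small epsilon is added to the eigenvalues for numerical stability, which the summary does not mention).

```python
import numpy as np

def wct(fc, fs, alpha=0.6, eps=1e-8):
    # fc, fs: feature maps of shape (channels, positions) for content and style.
    fc_c = fc - fc.mean(axis=1, keepdims=True)   # centre content features
    fs_c = fs - fs.mean(axis=1, keepdims=True)   # centre style features

    # Whitening: f_hat_c = Ec Dc^(-1/2) Ec^T fc
    Dc, Ec = np.linalg.eigh(np.cov(fc_c))
    f_hat_c = Ec @ np.diag(1.0 / np.sqrt(Dc + eps)) @ Ec.T @ fc_c

    # Colouring: f_hat_cs = Es Ds^(1/2) Es^T f_hat_c, then re-centre with the style mean
    Ds, Es = np.linalg.eigh(np.cov(fs_c))
    f_hat_cs = Es @ np.diag(np.sqrt(Ds + eps)) @ Es.T @ f_hat_c
    f_hat_cs += fs.mean(axis=1, keepdims=True)

    # Blend stylised and original content features (content/style balance)
    return alpha * f_hat_cs + (1 - alpha) * fc

rng = np.random.default_rng(1)
fc = rng.normal(size=(16, 256))
fs = rng.normal(size=(16, 256))
print(wct(fc, fs).shape)  # (16, 256)
```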
The style transfer algorithm is applied at each of these levels, which yields the question as to which results, as shown below, to use. Results of style transfer from each of the first five layers of the encoder network. Ideally, the results of each layer should be used to build the final output image. This captures the entire range of features detected by the encoder network. First, one full pass of the network is performed. Then the stylised image from the deepest layer (Relu_5_1 in this case) is taken and used as the content image for another iteration of the algorithm, where then the next layer (Relu_4_1) is used as the output. These steps are repeated until the final image is produced from the shallowest layer. This process is summarised in the figure below. The content (C) and style (S) are fed to the VGG encoding network. The output image (I) after a whitening and colour transform (WCT) is taken from the deepest level's decoder. The process is iteratively repeated until the most shallow layer is reached. The authors note that the transformations must be applied first at the highest level (most abstract) layers, which capture complicated local structures and pass this transformed image to lower layers, which improve on details. They observe that reversing this order (lowest to highest) leads to images with low visual quality, as low-level information cannot be preserved after manipulating high level features. (a)-(c) Output from intermediate layers. (d) Reversed transformation order. The success of style transfer might appear hard to quantify as it relies on qualitative judgement. However, the extremes of transferring no style, or transferring only style can be considered as performing poorly. Consistent transfer of style throughout the entire image is another parameter of success. Ideally, the viewer can recognize the content of the image, while seeing it expressed in an alternative style. Quantitatively, the quality of the style transfer can be calculated by taking the covariance matrix difference $L_s$ between the resulting image and the original style. The results of the presented paper also need to be considered within the contexts of generality, efficiency and training requirements. A number of style transfer examples are presented relative to other works. A: See [6]. B: See [7]. C: See [8]. D: Gatys et al. iterative optimization, see [2]. E: This paper's results. Li et al. then obtained the average $L_s$ using 10 random content images across 40 style images. They had the lowest average $log(L_s)$ of all referenced works at 6.3. Next lowest was Gatys et al. [2] with $log(L_s) = 6.7$. It should be noted that while $L_s$ quantitatively calculates the success of the style transfer, results are still subject to the viewer's impression. Reviewing the transfer results, rows five and six for Gatys et al.'s method shows local minimization issues. However, their method still achieves a competitive $L_s$ score. Transfer Efficiency It was hypothesized by Li et al. that using WCT would enable faster run-times than [2] while still supporting arbitrary style transfer. For a 256x256 image, using a 12GB TITAN X, they achieved a transfer time of 1.5 seconds. Gatys et al.'s method [2] required 21.2 seconds. The pure feed-forward approaches [7], and [8] had times equal to or less than 0.2 seconds. [6] had a time comparable to the presented paper's method. However, [6,7,8] do not generalize well to multiple styles as training is required. 
Therefore this paper obtained a near 15x speed up for a style agnostic transfer algorithm when compared to leading previous work. The authors also note that WCT was done using the CPU. They intend to port WCT to the GPU and expect to see the computational time be further reduced. Li et al.'s method can also be used for texture synthesis. This was the original work of Gatys et. al. before they applied their algorithm to style transfer problems. Texture synthesis takes a reference texture/image and creates new textures from it. With proper boundary conditions enforced these synthesized textures can be tileable. Alternatively, higher resolution textures can be generated. Texture synthesis has applications in areas such as computer graphics, allowing for large surfaces to be texture mapped. The content image is set as white noise, similar to how [2] initializes their output image. Then the reference texture/image is set as the style image. Since the content image is initially random white noise, then the features generated by the encoder of this image are also random. Li et al. state that this increases the diversity of the resulting output textures. A: Reference image/texture. B: Result from [8]. C: Result of present paper. Reviewing the examples from the above figure, it can be observed that the method from this paper repeats fewer local features from the image than a competing feed forward network method [8]. While the analysis is qualitative, the authors claim that their method produces "more visually pleasing results". Only a couple years ago were CNNs first used to stylize images. Today, a host of improvements have been developed, optimizing the original work of Gatys et al. for a number of different situations. Using additional training per style image, computational efficiency and image quality can be increased. However, the trained network then depends on that specific style image, or in some cases such as in [3], a set of style images. Till now, limited work has taken place in improving Gatys et al.'s method for arbitrary style images. The authors of this paper developed and evaluated a novel method for arbitrary style transfer. Their method and Gatys et al.'s method share the use of a VGG-19 CNN as the initial processing step. However, the authors replaced iterative optimization with whitening and colour transforms, which can be applied in a single step. This yields a decrease in computational time while maintaining generality with respect to the style image. After their CNN auto-encoder is initially trained no further training is required. This allows their method to be style agnostic. Their method also performs favourably, in terms of image quality, when compared to other current work. In the paper, the authors only experimented with layers of VGG19. Given that architectures such as ResNet and Xception perform better on image recognition tasks, it would be interesting to see how residual layers and/or Inception modules may be applied to the task of disentangling style and content and whether they would improve performance relative to the results presented in the current paper is the encoder used were to utilize layers from these alternative convolutional architectures. Additionally, it is worth exploring whether one can invent a probabilistic and/or generative version of the encoder-decoder architecture used in the paper. 
More precisely, is it possible to come up with something in the spirit of variational autoencoders, wherein we the bottleneck layer can be used to sample noise vectors, which can then be input into each of the decoder units to generate synthetic style and content images. Alternative attempts would also involve the study of generative adversarial networks with a perturbation threshold value. GANs can produce surreal images, where the underlying structure (content) is preserved ( in CNNs the filters learn the edges and surfaces and shape of the image), provided the Discriminator is trained for style classification ( training set consists of images pertaining the style that requires to be transferred). Additional Results and Figures [1] L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In NIPS, 2015. [2] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016. [3] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In ICLR, 2017. [4] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016 [5] R. Picard. MAS 622J/1.126J: Pattern Recognition and Analysis, Lecture 4. http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf [6] T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337, 2016. [7] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. arXiv preprint arXiv:1703.06868, 2017. [8] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016. [9] Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, A Neural Algorithm of Artistic Style, https://arxiv.org/abs/1508.06576 [10] Karen Simonyan et al. Very Deep Convolutional Networks for Large-Scale Image Recognition [11] VGG Architectures - More Details [12] Mechrez, R., Shechtman, E., & Zelnik-Manor, L. (2017). Photorealistic Style Transfer with Screened Poisson Equation. arXiv preprint arXiv:1709.09828. [13] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Diversified texture synthesis with feed-forward networks. In CVPR, 2017 [14] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit representation for neural image style transfer. In CVPR, 2017 Implementation Example: https://github.com/titu1994/Neural-Style-Transfer Retrieved from "http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Universal_Style_Transfer_via_Feature_Transforms&oldid=30856"
The microbial metabolic activity on carbohydrates and polymers impact the biodegradability of landfilled solid waste

Christian Brandstaetter (ORCID: orcid.org/0000-0003-2645-5188)1,2, Nora Fricko1, Mohammad J. Rahimi3, Johann Fellner1, Wolfgang Ecker-Lala2 & Irina S. Druzhinina3,4

Biodegradation (2021)

Biological waste degradation is the main driving factor for landfill emissions. In a 2-year laboratory experiment simulating different landfill in-situ aeration scenarios, the microbial degradation of solid waste under different oxygen conditions (treatments) was investigated. Nine landfill simulation reactors were operated in triplicate under three distinct treatments. Three were kept anaerobic, three were aerated for 706 days after an initial anaerobic phase and three were aerated for 244 days in between two anaerobic phases. In total, 36 solid and 36 leachate samples were taken. Biolog® EcoPlates™ were used to assess the functional diversity of the microbial community. It was possible to directly relate the functional diversity to the biodegradability of MSW (municipal solid waste), measured as RI4 (respiration index after 4 days). The differences between the treatments in RI4 as well as in carbon and polymer degradation potential were small. Initially, a RI4 of about 6.5 to 8 mg O2 kg−1 DW was reduced to less than 1 mg O2 kg−1 DW within 114 days of treatment. After the termination of aeration, an increase to 3 mg O2 kg−1 DW was observed. By calculating the integral of the Gompertz equation based on spline interpolation of the Biolog® EcoPlates™ results after 96 h, two substrate groups mainly contributing to the biodegradability were identified: carbohydrates and polymers. The microbial activity of the respective microbial consortium could thus be related to the biodegradability with a multilinear regression model.

Gaseous and liquid landfill emissions pose a significant threat to human health and to the environment. Landfilling strongly affects its surroundings by causing gaseous and liquid emissions (Kylefors et al. 2003). These emissions are driven by biological degradation processes of organic matter present in the landfilled waste (El-Fadel et al. 1997). The biodegradation processes in landfills depend on several factors, including the organic matter content, the landfill age, temperature, oxygen situation, heterogeneity of the deposition and the biodegradability of the organic matter. The latter is also of importance for the management of landfills, as the biodegradability of the waste is directly coupled with the methane generation potential (Komilis et al. 1999). Globally, in landfills containing untreated municipal solid waste (MSW), organic wastes form the main source of biodegradable carbon. Other potential carbon sources are plastics, cardboard and paper. Especially under anaerobic conditions these materials decay rather slowly. During degradation, the C:N ratio tends to narrow down, as easily degradable carbon is mainly converted to methane (CH4) and carbon dioxide (CO2). At the same time, both the carbon content and the biodegradability of the carbon decrease, and more recalcitrant material remains behind. In recent years, the so-called bioreactor approach has gained traction for reducing landfill emissions (Valencia et al. 2009); its goal is to reduce the long-term emissions of landfills by accelerating the biological degradation of the waste material.
Landfill in-situ aeration is a bioreactor-technique used to mitigate hazardous landfill emissions by introducing oxygen into the landfill body. This leads to a reduction of two harmful emissions: gaseous CH4 emissions and ammonia (NH4) emissions via leachate. For predicting the remaining emission potential of landfills and determining an end point for in-situ measures, the importance of carbon quality with regard to its biodegradability has been emphasized (Prantl et al. 2006; Brandstätter et al. 2015a). Currently, the forecasting of landfill emissions is typically based on a first-order decay model (Tabasaran and Rettenberger 1987) that was calibrated under laboratory conditions. This model is widely used by practitioners because the parameter assessment of more complex models is not practically feasible (Majdinasab et al. 2017). A deeper understanding of the underlying degradation process might contribute to improve the forecasting, both for anaerobic and aerobic landfill conditions. The degradation process of organic waste material also plays a pivotal role in composting of organic waste. A detailed view on the changing carbon quality during microbial waste degradation facilitates the understanding of the process on a biochemical level, thus contributing to a better understanding for landfill emission models as well as for estimating the aeration success. To this end, we applied the method of Biolog® EcoPlates™ to solid waste material under different levels of oxygenation in a 2-year laboratory experiment. The application of Biolog® EcoPlates™ rarely is considered in the context of solid waste. Examples are applications for sewage sludge analysis (Gryta et al. 2014) or, more recent, for analysis of composting microbial diversity (Zeng et al. 2018) or in lysimeter experiments (Dabrowska et al. 2019). Typical ways for Biolog® EcoPlates™ analysis are the calculation of the AWCD (average well color development) or ecological parameters, such as functional diversity (Garland et al. 2001). However, aggregating the results of Biolog® EcoPlates™ data activity data leads to a significant information loss. With the prevailing work we target to extract more meaningful information from the Biolog® EcoPlates™ method to directly link the microbial physiological diversity to the degradation process. For assessing biodegradability, we analyzed the respiration index after 4 days (RI4). Landfill material The waste material originated from a compartment of the Rautenweg landfill (48.26° N, 16.48° W) near Vienna (Austria) which was filled in the 1980s and the material was sampled (July 2017) during the drilling of new gas wells and then sieved to ≤ 20 mm. The waste material was collected and on site placed into 200 l drums and sealed. The material was stored under anaerobic conditions and then prior to the installation into the reactors piled and mixed with shovels. It consisted of a typical mix of landfilled waste, showing similar characteristics as the material used in Brandstätter et al. (2015a). Experimental setup The experiment involved the operation of nine landfill simulation reactors, similarly designed as those described in Brandstätter et al. (2015a). The reactors were made of PP (polypropylene) and regularly watered from above. Roughly 10 cm above the vessel bottom, a crate was installed to prevent the waste material from soaking. The total volume of each reactor was 60 l - corresponding to an initial waste material mass of 38.3 kg ± 1.2 kg. 
More details on the reactors are presented in the work of Fricko et al. (2021). After an initial anaerobic phase of 57 days, six reactors were aerated: three of them for the complete remaining period of 706 days (aerobic treatment), three others only for 244 days (mixed treatment), and three remained without aeration for the whole period (anaerobic treatment). An Argon-Oxygen mixture (79% Ar (Argon 5.0), 21% O2, Messer, Austria) was used as aeration gas. The gas influx into each reactor was recorded with a thermal massflow meter (FMA3103, OMEGA Engineering, Germany) for flow rates between 0 and 0.1 Nl/min and adjusted manually. The temperature of each reactor was recorded and individually controlled (set value 35 °C).

Solid samples were retrieved from each reactor at the start and end of the whole experiment. To keep anaerobic conditions intact, no sampling occurred during anaerobic operation periods. The aerated (aerobic and mixed) reactors were sampled five times in total. The conditions during sampling (aeration status) are shown in brackets together with the sample size. The samplings were linked to the following events due to changes in operation:
- initial sampling upon installation (also for anaerobic reactors; anaerobic n = 9)
- after an initial anaerobic phase, before start of aeration (57 days; still anaerobic conditions; anaerobic n = 6)
- after two months of aeration (114 days, aerobic n = 6)
- after one year of operation (358 days, aerobic n = 6), directly before terminating aeration for the mixed treatment
- at the termination time point, together with the anaerobic treatment (763 days, anaerobic n = 6, aerobic n = 3)

For the first four solid sampling campaigns (until day 358), the conditions prior to sampling were the same for both the aerobic and mixed treatment (n = 6). At the first sampling campaign upon installation, all the reactors were at the same (anaerobic) state (n = 9). Only at the final sampling campaign after 763 days, at the termination of the experiment, were the different treatments reflected in three solid samples per treatment. During sampling, the material was intensively mixed, subsequently randomly sampled and sieved (≤ 4 mm). Only this fraction was used for the ecophysiological profiling.

In general, leachate was sampled regularly every four weeks, with intensified intervals after changes in operation (2 weeks). However, the focus was put on the changes in the microbial community composition over time and differences or intersections between the solid and liquid phase microbiota. Hence, only leachate related to the sampling events was considered for the ecological phenotype microarrays analysis (in total also 36 samples). The resulting solid and liquid samples were processed as fast as possible. Nevertheless, in most of the cases cooled storage (6 °C) was required. Unfortunately, in one case (sampling campaign 3, day 114), the samples had to be frozen to −20 °C prior to the processing of the ecological phenotype microarrays.

Measurement methods

For conducting dynamic monitoring of the functional diversity of the respective samples over six days, the Biolog® EcoPlates™ method was used. Every EcoPlate consisted of 96 wells containing 31 carbon sources (see Table 1) plus a blank well, in three replications each. The carbon utilization rate was determined by reducing a tetrazolium violet redox dye, which changed from colorless to purple if the microorganisms used the respective carbon source.
More technically, the coloring is not directly caused by the substrate usage, but through respiration. For coloring, the microbes must be able to grow/breathe in the respective medium containing single carbon sources. For the analysis, the Biolog® EcoPlates™ substrates were subdivided into five groups (see Table 1): amines, amino acids, carbohydrates, carboxylic acids and polymers (Gryta et al. 2014; University of Toledo 2004). For the respective calculation of the utilization rate, the measured O.D.-values at 590 nm for each substrate, used as an indicator of microbial respiration due to dye color change (Mills and Garland 2002; Pinzari et al. 2017), were summed up to the according substrate groups. The Biolog® EcoPlates™ to be tested were prepared according to an extraction protocol modified from Hopkins et al. (1991), separating bacterial cells from soil particles. In brief, for the solid samples, 5 g of material was placed in a 250 ml flask containing 50 ml 10 mM sterile phosphate buffer (pH 7.0) supplemented with 0.1% tween 20 (PBS+TW20) and 30 glass beads. For the liquid samples (leachate), all the liquids were centrifuged at 10,000 r min−1 for 10 min, and the resulted pellets were washed three times with sterile ultrapure water. Using a spectrophotometer, the turbidity of each bacterial suspensions' samples was adjusted to 0.5 McFarland standard turbidity (0.1 O.D. at 595 nm wavelength). This turbidity is equal to approximately 106 of bacterial cells in 1 ml of the samples. For the analysis, the plates were placed into a plastic container and incubated at 28 °C for 7 days. The absorbance at both 590 and 750 nm was measured on a Biolog Microplate Reader (molecular devices) after 24, 48, 72, 96 and 144 h of incubation. The respiration index after 4 days (RI4) was measured according to the Austrian standard ON S 2027-4:2012-06-01. Table 1 Substrates Data preparation and statistical procedures All statistical analyses as well as data preparation were performed using R version 3.6.3 (R Core Team 2018). Prior to further calculations, the average blank values were subtracted and then the average measurement value of the technical replicates were calculates (three replicates per well). If certain measurement values were negative after the blank subtraction, they were set to 0. Concerning the handling of missing values for the solid samples, one out of three data points of the mixed treatment at 763 days was missing; for this missing replicate at each measurement the mean of the other two measurements was applied. Missing data points of the leachate samples were not treated (they remained excluded). For some samples, which were measured longer than 96 h, the measurement values were interpolated (spline-interpolation) and cut off at 96 h. For calculating the total substrate consumption a Gompertz equation was fitted by using the grofit package (Kahm et al. 2010). The Grofit-equation commonly used Zwietering et al. (1990) is derived as follows: $$\begin{aligned} y = A \exp {\left\{ -\exp {\left[ \frac{\mu e}{A} (\lambda -t) + 1 \right] }\right\} } \end{aligned}$$ with A being the asymptote (amplitude), μ being the linear slope (or growth rate) and \(\lambda\) the lag time. Prior to the calculation of the Gompertz-curve (Eq. 1), spline interpolation was conducted. For the interpolation, the total time period of 96 h was divided in 100 data points based on the measured data points. Based on that interpolation, a heuristic algorithm was applied to fit the best Gompertz equation. 
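The paper carries out this fit with the R package grofit; as a rough illustration of the same idea (cubic-spline interpolation of the 0-96 h readings, a modified-Gompertz fit, and integration of the fitted curve), here is a Python sketch with invented O.D. readings.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def gompertz(t, A, mu, lam):
    # Modified Gompertz model (Zwietering et al. 1990):
    # y = A * exp(-exp(mu*e/A * (lam - t) + 1))
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# Invented blank-corrected O.D. 590 nm readings at 24, 48, 72, 96 h for one substrate well.
t_obs = np.array([24.0, 48.0, 72.0, 96.0])
od_obs = np.array([0.05, 0.40, 0.90, 1.10])

# Spline-interpolate onto 100 points over 0-96 h, as described in the methods;
# negative interpolated values are clipped to 0, matching the handling of negative O.D.s.
t_fine = np.linspace(0.0, 96.0, 100)
od_fine = np.clip(CubicSpline(t_obs, od_obs)(t_fine), 0.0, None)

# Fit the Gompertz curve to the interpolated data and integrate over 96 h.
p0 = [od_fine.max(), 0.02, 10.0]   # rough starting values for A, mu, lambda
params, _ = curve_fit(gompertz, t_fine, od_fine, p0=p0, maxfev=10000)
integral = trapezoid(gompertz(t_fine, *params), t_fine)
print(params, integral)
```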
In those cases, where no Gompertz equation could be calculated (where the \(\lambda\)-value was negative), the spline interpolation was considered as base for integration over 96 h. This was also the case, when O.D. was 0 over the whole measurement. More detailed information on the application of the Gompertz calculation is given in Table 2. A successful Gompertz equation means in this context, that the numeric conditions for fitting the Gompertz equation were given and a Gompertz equation could be derived. As the Gompertz curve was fitted on the integral of the spline function, the difference of the integral between those two was considered negligible. For a more meaningful display of the temporal development of the respective substrate utilization throughout the experiment, scaling was applied (see Figs. 1, 2, 5 and 6). Thereto, the initial values (for the integrated substrate utilization) of the respective reactors were referred to the value of the first sampling (day 0). Hence, the respective initial value was set to 1 for each variable and reactor. For the data of the last sampling of the experiment (day 763), ANOVA followed by TukeyHSD-test was conducted between the three treatments (see Fig. 1) in order to check for statistical differences between the treatments. Prior to ANOVA Levene's test for homogeneity of variances as well as Shapiro test for normality were conducted to check for ANOVA requirements. With the exception of amines (where no significant differences were detected), the requirements for ANOVA were not violated. For the comparison of groups at different time points, Wilcoxon-range-test was applied (see Fig. 2), as not all subgroups were normally distributed. Before calculating a multilinear model to predict the RI4 (see Fig. 4, all values for each variable were normalized as follows: $$\begin{aligned} x_{norm} = \frac{x - {\rm{min}}(x)}{{\rm{max}}(x) - {\rm{min}}(x)} \end{aligned}$$ Application of the Gompertz equation Even if the successful application of the Gompertz application showed some variety over both time course and treatment, in total out of 1,085 measured curves, the Gompertz-fitting could be applied for 816 times, leading to a success rate of 75% (see Table 2). For further analyses, the integral over 96 h was considered. The values of the Gompertz-integral and the spline-integral showed a high degree of similarity, as spline interpolation was also applied for the Gompertz-fitting. For those cases, where the Gompertz-fitting failed, the integration of the spline interpolation over 96 h was considered. Table 2 Gompertz calculation success rate Microbial respiratory activity on different substrate groups The different substrate groups showed different responses over the experimental time course (see Fig. 1). On overall, the microbial metabolic activity and growth of the microbial consortium was lowest at the end of the experiment. This was to be expected in a batch reactor experiment: the more recalcitrant substrate fractions accumulate and easier degradable fractions would become less and less common. At the last sampling campaign, the anaerobic treatment showed higher activity in all substrate groups compared to the aerobic one. The microbial growth on amines generally was high in variation and showed the highest values in the midterm of the experiment (5-fold increase in comparison to the initial value). The microbial community was able to show highest activity on amino acids in the first half of the experiment. 
However, the increases in activity on amino acids (relative to the initial value) were moderate in comparison to the amines. The overall growth/activity on carbohydrates was highest initially and decreased over time to less than 5% of the initial value. For the carboxylic acids, there was a slight increase in the first half of the experiment, followed by a decrease. Polymers showed a pattern similar to the amines, with a pronounced increase until day 114 and a decrease from there onwards. Significant differences among the three treatment groups (aerobic, anaerobic and mixed) at the final sampling campaign were only found for carboxylic acids and for polymers, with the anaerobic treatment showing the highest utilization capability.

Fig. 1 Dynamics of microbial respiratory activity in solid samples depending on the biochemical group of the carbon source and the landfill type. Mixed treatment started anaerobically, then was switched to aerobic (1 year), then to anaerobic again. Different letters indicate significant differences according to the Tukey HSD test (p = 0.05). At the beginning and the end all reactors (n = 9) were sampled. From day 57 to 358 only aerobic reactors were sampled (n = 6). Values were scaled according to the values from T = 0. Error bars indicate standard error (day 0: n = 9; day 763: n = 3; all others: n = 6). Aeration status represents the conditions during sampling. aer aerobic, ana anaerobic, O.D. optical density

Sampling strategies

Comparing the different sampling strategies (solid and liquid samples), significant differences were found for all substrate groups (see Fig. 2). For the amines, the solids showed a stronger initial increase than the liquids. At the sampling campaign of day 114, the solid samples showed higher relative values (compared to the initial value), while at the last sampling campaign this trend was reversed, with the leachate samples showing higher values. The variation at day 114 was also relatively high. Both sampling strategies revealed similar trends over time.

Fig. 2 Influence of the sampling method on the microbial respiratory activity (solids vs. leachate). The leachate samples stem directly from the reactor leachate, while the solid samples were derived through the elution of solid sample material. Stars indicate significant differences according to the Wilcoxon rank-sum test. At the beginning and the end all reactors were sampled (n = 9). From day 57 to 358 only aerobic reactors were sampled (n = 6). Values were scaled according to the values from T = 0. Error bars indicate standard error. O.D. optical density. ***\(p<0.001\); **\(p<0.01\); *\(p<0.05\)

Relationship between the microbial activity and biodegradability

For measuring the biodegradability of municipal solid or organic waste, the RI4 is often applied as a proxy measure due to its reproducibility and standardization (Binner et al. 2012; Binner and Zach 1999). In waste management, knowledge of the biological degradability of the waste material is important, as it is a driving factor of future emissions. In Austria and Germany, a low RI4 is required before waste material can be landfilled (for Austrian landfills, the threshold value was set to 7 mg O2 kg−1 DW). Generally, respiration indices are used to determine the microbial activity of waste samples containing organic fractions, and they can be measured via CO2 production or O2 consumption under standardized conditions (Barrena Gómez et al. 2006). High values of RI4 indicate a high potential for microbial respiratory activity.
Here, we observed a strong decrease for the aerobic and mixed treatments at the beginning, from initially about 6.5–8 mg O2 kg−1 DW to less than 1 mg O2 kg−1 DW within 114 days of treatment (Fig. 3). A decrease was also observable in the anaerobic case, but it was less pronounced over time (from 6.5 to 2.6 mg O2 kg−1 DW after 763 days of treatment). Notably, after the aeration was stopped, the RI4 of the mixed treatment increased again to 3 mg O2 kg−1 DW, which is, surprisingly, even higher than the RI4 observed for the anaerobic treatment at the end of the experiment.

Fig. 3 RI4 of landfilled waste in each treatment during the experiment

Since the biodegradability of municipal solid waste is of high interest for landfill practitioners as well as for the assessment of environmental impacts, an increased understanding of its driving forces is important. The relationship between the degradation capability of the microbial consortium and the RI4 is displayed in Fig. 4. A strong linear relationship can be observed between carbohydrates and RI4, and a weaker, negative relationship between RI4 and polymer degradation. The relationships of amines, amino acids and carboxylic acids with RI4 are less pronounced. Thus, we considered the carbohydrates and polymers for a multilinear model to predict RI4. The resulting model can be described as follows:

$$\begin{aligned} RI_4 = 0.26 + 0.68*Carbohydrates - 0.5*Polymers \end{aligned}$$

with Carbohydrates and Polymers being the normalized integral of the O.D. at 590 nm over all substrates of the respective subgroup (see Table 1). All the model parameters (including the intercept) showed a p-value < 0.001, and the adjusted R2 value was 0.69, based on 36 samples. Since these two substrate groups are highly associated with the short-term biodegradability of municipal solid waste, their individual components are shown in Figs. 5 and 6. Similarly to the RI4 (Fig. 3), the individual carbohydrate fractions (Fig. 5) show a rapid decrease early in the experiment. Exceptions were D-cellobiose, the repeating disaccharide unit of cellulose, for which the decrease was slower, and i-erythritol, which showed a peak at 114 days. The treatment did not have a big impact on the capability of the microbial community to utilize the different carbon sources at the end of the experiment. For the polymers (Fig. 6), the time course of the degradation capability showed an inconsistent picture. Both Tween fractions showed an initial increase, followed by a strong decrease for the aerated treatments and only a slight change in the anaerobic treatment. The time course of α-cyclodextrin revealed an increase for the aerated treatments, with high variation in between, while glycogen showed an increase until day 114 and a decrease from there onwards. The degradation capability for glycogen was highest for the (strictly) aerobic treatment. Glycogen is an important storage molecule for numerous microorganisms. Its buildup or decomposition is controlled by environmental factors, such as the glucose concentration (Wilson et al. 2010), and in comparison to the other polymers in the substrate group it is less recalcitrant.

Fig. 4 RI4 vs substrate group. The shaded area shows the CI at the 95% level. O.D. optical density

Fig. 5 Timeline carbohydrates. Mixed treatment started anaerobically, then was switched to aerobic (1 year), then to anaerobic again. At the beginning all reactors were sampled. From day 57 to 358 only aerobic reactors were sampled. Values were scaled according to the values from T = 0. O.D. optical density

Fig. 6 Timeline polymers.
Mixed treatment started anaerobically, then was switched to aerobic (1 year), then to anaerobic again. At the beginning all reactors were sampled. From day 57 to 358 only aerobic reactors were sampled. Values were scaled according to values from T = 0. O.D. optical density Validation of methodological approach The common usage of average well color development (AWCD) may lead to a loss of the ecological signature of a sample (Miki et al. 2018). It was noted by the authors, that the integration of the signal showed a higher statistical power than using other approaches, like min or max values. Like others (Sofo and Ricciuti 2019), we observed that the detailed description of the usage of Biolog® EcoPlates™ data is often lacking. It is the aim of this work, to increase the level of standardization of the method. This is not only important for natural soils, but also of high relevance in solid landfill samples, showing already highly heterogeneous properties. For the paper at hand, it was not possible to calculate the AWCD, as the measurements at T = 0 were not performed. This also might have lead to a slight signal underestimation in the chosen approach of integrating a fitted curve, since T = 0 was considered as 0. For future approaches, it is important to also measure at the initial setup. The main motivation behind using the Gompertz equation was that it describes a natural process, namely biological growth. This makes the interpretation more meaningful, instead of directly using the measured values for integration. The results showed, at least for carbohydrates and polymers, surprisingly low variation between the treatments as well as between time points (see Figs. 5 and 6). For amino acids and amines the variation was much higher (data not shown). This might be explained by the composition of old landfilled waste. The total carbon content of the material is 5- to 10-fold higher than the total nitrogen concentration (Brandstätter et al. 2015a, b). Thus, more abundant carbon fractions show lower error values than less abundant nitrogen-fractions. Out of those samples/substrates, where a Gompertz-fitting could not be performed (n = 269), 116 showed a integrated value lower than 96 (1 O.D./h). This shows, that a big fraction of samples, where the Gompertz equation could not be fitted, were rather low in respiration activity. For those measured entities, where both spline interpolation and Gompertz-fitting were applied, the correlation coefficient between those two was 0.999 (data not shown). This is a direct result of the fitting method, as the Gompertz fitting was based on the spline function. Decomposition of individual substrate groups: experimental influence The initial sampling occurred during the reactor construction. The waste material was kept for roughly two years at room temperature under anaerobic conditions. Before sampling at the first time point, the material got thoroughly mixed and during this procedure also oxygenated to some extent. Thus, we created an ecological disturbance event of mostly (strictly and facultative) anaerobic microorganisms. This could have affected the high utilization rates of carbohydrates (see Fig. 1) at the very early stage of the experiment. For the substrate group carbohydrates, nearly all of the individual carbohydrate substrates showed initially a very high utilization rate, followed by a strong decrease (see Fig. 5). 
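As an aside, the multilinear RI4 model reported above (RI4 predicted from the normalized carbohydrate and polymer signals) can be fitted with lm() in R. A minimal sketch; the data frame and its values are simulated placeholders, not the study data:

```r
# Minimal sketch of the multilinear RI4 model; 'waste' and its columns are
# illustrative placeholders, simulated here so the example is self-contained.
set.seed(7)
n <- 36
waste <- data.frame(
  Carbohydrates = runif(n),               # normalized integrated O.D. (0-1)
  Polymers      = runif(n)
)
waste$RI4 <- 0.26 + 0.68 * waste$Carbohydrates - 0.5 * waste$Polymers + rnorm(n, sd = 0.1)

fit <- lm(RI4 ~ Carbohydrates + Polymers, data = waste)
summary(fit)$coefficients          # estimates and p-values for all parameters
summary(fit)$adj.r.squared         # adjusted R^2
```

Returning to the initial peak in carbohydrate utilization: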
Other reasons for this initial high peak might be an accumulation of carbohydrates during this 2 years of anaerobic incubation, that were not accessible under anaerobic conditions. With the mixing and oxygenation there presumably was a sudden abundance of readily available carbohydrates, also possibly impacted by the death of strictly anaerobic microorganisms. During the degradation experiment, after the initial anaerobic setup with leachate recirculation and heating, at one point oxygen got introduced to the anaerobic system for the mixed and aerobic treatment. This lead to another ecological disturbance and the introduction of oxygen allowed for previously inaccessible microbial substrate utilization. Through aeration, the microbial CO2 respiration rate drastically increased as it was the case in similar experiments (Brandstätter et al. 2015a; Prantl et al. 2006). Thus, the microbial consortium had to adapt to a richer environment. For most substrate groups (except for carbohydrates and, by a small margin, amino acids), the measured activity was highest at time point three (at 114 days), the first sample after aeration start (and after roughly 2 months of aeration). At the end of the experiment (day 763) the higher utilization rates of the anaerobic treatment in comparison to the other two treatments for all substrate groups might have been caused by two effects: firstly, the aerated samples were more biologically stabilized and thus resource depleted. Thus the microbiological consortium adapted to a more scarce environment. And secondly, similarly to the initial setup, the sudden oxygenation of previously anaerobic conditions (and the substrate mixing) might have created access to previously not accessible resources. The experimental design did not allow for in-between sampling of the anaerobic treatment. Thus, for the anaerobic treatment only one sample campaign could be conducted after 2 years. A direct comparison with the other treatments (aerated and mixed) therefore is difficult. The anaerobic treatment was considered as a default case for comparing the in-situ aeration treatment with traditional landfilling, as was done in previous studies (e. g. Prantl et al. 2006; Ritzkowski and Stegmann 2013). For a more direct comparison between the anaerobic and aerobic cases, more anaerobic reactors could be included and destroyed at different time points. However, since the experiment targets to simulate to some extend landfill behavior, landfill simulation reactors should not be too small, making their creation and operation rather costly. Other ways to gain samples from such anaerobic reactors and retain anaerobic conditions would be either a complete anaerobic chamber (also not really feasible) or solid sampling entrance points directly integrated in the reactors (but this would not allow for sample homogenization). In the here presented setup, potential anaerobic effects strongly were influenced by batch reactor resource depletion because of the long time span of 763 days between the initial and the final sampling. Solid sampling of landfills is often costly and the heterogeneity of the landfilled material heavily influences the results of the analysis (Sormunen et al. 2008; Östman et al. 2006). Thus, it is more common to analyze landfill leachate samples, which are considered to give a better representation of the overall properties of the landfill body. However, through preferential flow paths the leachate is not passing the whole landfill in a similar way (e.g. Huber et al. 
2004), and until the leachate sample is collected, it may remain some time in the leachate tank and/or change its properties (e.g. temperature, oxygen) until collection and analysis. In the prevailing case of landfill reactors, preferential flow paths are less pronounced than in the field, as the included material was sieved and mixed. Also the irrigation system was set up to ensure a rather uniform water distribution. During the sampling campaigns there was no evidence of dry patches in the material, which would be a clear sign of heterogeneous water flow. As can be seen in Fig. 2, the variation for both sampling strategies (solid and liquid sampling) was generally of a similar magnitude. For the solid samples, there was visibly higher variation in amines and amino acids and polymers in comparison to the leachate samples, for carbohydrates the opposite was the case. The amines showed a rather high variation in the solid samples. One reason for this might be that this substrate group contained the minimal number of substrates: two (phenylethylamine and putrescine). The dominant pathway for the biological production of amines is the decarboxylation of amino acids (Halász et al. 1994). Influenced by the before mentioned heterogeneity of the waste material, a low occurrence of the raw material (amino acid concentrations, data not shown) might be a driving force in altering the ability of the microbial consortium for amine consumption. Amino acids are a valuable resource for many cellular activities and other uses might have been preferred over amine production, thus reducing the ability of the microbial community for amine consumption. At the final sampling point, the microbial degradation potential was generally lower for the solid samples. This might be attributed to a few reasons: first the leachate is getting a more integrated sample of the overall community, and the solid samples are more prone to sampling biases. Second, it might be the case, that in the leachate at the bottom of the reactors a more diverse microbial flora emerged, thus enriching the leachate sample in microbial diversity. In a similar experiment, denitrification was considered to mainly occur in externalized leachate tanks (Brandstätter et al. 2015b), as these leachate reservoirs got charged with carbohydrates from leachate recirculation and provided anaerobic conditions. Furthermore, bacteria tend to form biofilms and attach to solids (e. g. Cai et al. 2019). Therefore the analysis of leachate does not address the complete biological functionality present in the samples. In this study, we attempted the extraction of microbial cells from solid particles and considered the samples from the solids as more trustworthy, as the solid dataset was more complete and the samples would better represent real landfill conditions. We generally ensured a profound mixing procedure to reduce variability as much as possible. Insights on biodegradability The here investigated system contains landfill bioreactors under different oxygen conditions. This means, that next to classical anaerobic biodegradation of landfilled waste, also the aerobic degradation was investigated. In waste management, the composting process investigates aerobic degradation of organic waste materials. There, substrate quality is of high relevance for the process assessment (Komilis 2015; Meng et al. 2019). Compost stability was technically defined as a measure of the resistance against further microbial decomposition. 
During the degradation of organic matter in a batch system, the material naturally gets more and more recalcitrant, as readily degradable substances are getting oxygenated into CO2 or, in the anaerobic case, also reduced to CH4. More recalcitrant components of MSW are wood and rubber (Patil et al. 2017). These are characterized by bigger and more complex compounds, which are represented in the substrate group polymers (see Figs. 4 and 6 ). The here investigated waste material was taken from a rather old landfill, thus it is not surprising, that polymer degradation was found to be a relevant predictor for the respiration index. The RI4 is of high relevance for authorities to determine waste reactivity (Binner et al. 2012). As it describes the oxygen consumption of a subsample of material under a rather short amount of time, the influence of rather readily degradable carbohydrates is considered plausible (see Fig. 5). The differences between the treatments in RI4 as well as in carbon and polymer degradation potential were surprisingly small. A conclusion from that could be, that for this old waste material, the laboratory conditions, namely mixing, irrigation and heating were of higher relevance for the formation of the microbial consortia than the differences in oxygen addition. To determine an end-point for the remediation technique landfill in-situ aeration proves rather challenging (Ritzkowski and Stegmann 2012; Brandstätter et al. 2020). It is known, that the waste stability is increased right after the aeration treatment (Prantl et al. 2006; Ritzkowski and Stegmann 2013). But what happens after the aeration is terminated and the landfill would fall anaerobic again? This is especially interesting for the most relevant nitrogen species in landfills (NH4), but here we found in the laboratory an increase in RI4 one year after the termination of aeration (last sampling mixed treatment, see 3). We hypothesize, that with the loss of oxygen, after a microbial death event of strictly aerobic microbiota, an accumulation of byproducts occur, that are not fully degradable under anaerobic conditions. By oxygenating the anaerobic material during the RI4 testing, these stored compounds could then get subjected to aerobic respiration. It is to be investigated further, whether this observed phenomenon is an artifact under test conditions or if it also would pose difficulties in the full-scale application of in-situ aeration after its termination. By investigating MSW-waste degradation of old landfilled waste with Biolog® EcoPlates™ , it was possible to link the metabolic activity of the microbial consortium with the reactivity of the material. Namely, the potential for growth and respiration on carbohydrates (positively) and the potential for utilizing polymers (negatively) both impacted the RI4. We also could observe an increase of the RI4 1 year after the termination of aeration. This needs to be investigated further, as under field conditions, uncontrolled carbon release or punctual temperature increases might occur after the termination of the measures. The raw measured data are published on Zenodo (Brandstätter et al. 2021). Code availability In an accompanying master thesis (Brandstätter 2021) the most important R-scripts as well as calculated data are included. Barrena GR, Vázquez LF, Sánchez FA (2006) The use of respiration indices in the composting process: a review. Waste Manag Res 24(1):37–47. 
https://doi.org/10.1177/0734242X06062385 Binner E, Zach A (1999) Laboratory tests describing the biological reactivity of pretreated residual wastes. In ORBIT Symposium 1999 Binner E, Böhm K, Lechner P (2012) Large scale study on measurement of respiration activity (AT4) by sapromat and oxitop. Waste Manag 32(10):1752–1759. https://doi.org/10.1016/j.wasman.2012.05.024 Brandstätter C (2021) Modeling of the microbial functional diversity during waste degradation. University of Applied Sciences, Wiener Neustadt Brandstätter C, Laner D, Fellner J (2015) Carbon pools and flows during lab-scale degradation of old landfilled waste under different oxygen and water regimes. Waste Manag 40:100–111. https://doi.org/10.1016/j.wasman.2015.03.011 Article PubMed CAS Google Scholar Brandstätter C, Laner D, Fellner J (2015) Nitrogen pools and flows during lab-scale degradation of old landfilled waste under different oxygen and water regimes. Biodegradation 26(5):399–414. https://doi.org/10.1007/s10532-015-9742-5 Brandstätter C, Prantl R, Fellner J (2020) Performance assessment of landfill in-situ aeration - a case study. Waste Manag 101:231–240. https://doi.org/10.1016/j.wasman.2019.10.022 Brandstätter C, Fricko N, Fellner J, Rahimi MJ, Druzhinina IS (2021) Mintox - dataset ecoplates. https://zenodo.org/record/4698853 Dąbrowska D, Sołtysiak M, Biniecka P, Michalska J, Wasilkowski D, Nowak A, Nourani V (2019) Application of hydrogeological and biological research for the lysimeter experiment performance under simulated municipal landfill condition. J Mater Cycles Waste Manag 21(6):1477. https://doi.org/10.1007/s10163-019-00900-x Fricko N, Brandstätter C, Fellner J (2021) Enduring reduction of carbon and nitrogen emissions from landfills due to aeration? Waste Manag 135:457–466 Garland JL, Mills AL, Young JS (2001) Relative effectiveness of kinetic analysis vs single point readings for classifying environmental samples based on community-level physiological profiles (CLPP). Soil Biol Biochem 33:1059–1066. https://doi.org/10.1016/S0038-0717(01)00011-6 Gryta A, Frąc M, Oszust K (2014) The application of the Biolog EcoPlate approach in ecotoxicological evaluation of dairy sewage sludge. Appl Biochem Biotechnol 174(4):1434–1443. https://doi.org/10.1007/s12010-014-1131-8 Article PubMed PubMed Central CAS Google Scholar Halász A, Baráth Á, Simon-Sarkadi L, Holzapfel W (1994) Biogenic amines and their production by microorganisms in food. Trends Food Sci Technol 5(2):42–49. https://doi.org/10.1016/0924-2244(94)90070-1 Hopkins DW, Macnaughton SJ, O'Donnell AG (1991) A dispersion and differential centrifugation technique for representatively sampling microorganisms from soil. Soil Biol Biochem 23(3):217–225. https://doi.org/10.1016/0038-0717(91)90055-O Huber R, Fellner J, Doeberl G, Brunner PH (2004) Water flows of MSW landfills and implications for long-term emissions. J Environ Sci Health 39(4):885–900. https://doi.org/10.1081/ESE-120028400 Kahm M, Hasenbrink G, Lichtenberg-Fraté H, Ludwig J, Kschischo M (2010) Grofit: fitting biological growth curves with R. J Stat Softw 33(7):1–21. https://doi.org/10.1038/npre.2010.4508.1 Komilis DP (2015) Compost quality: is research still needed to assess it or do we have enough knowledge? Waste Manag 38(1):1–2. https://doi.org/10.1016/j.wasman.2015.01.023 Komilis DP, Ham RK, Stegmann R (1999) The effect of landfill design and operation practices on waste degradation behavior: a review. Waste Manag Res 17(1):20–26. 
https://doi.org/10.1177/0734242X9901700104 Kylefors K, Andreas L, Lagerkvist A (2003) A comparsion of small-scale, pilot scale and large-scale tests for predicting leaching behaviour of landfilled wastes. Waste Manag 23:45–59. https://doi.org/10.1016/S0956-053X(02)00112-5 Majdinasab A, Zhang Z, Yuan Q (2017) Modelling of landfill gas generation: a review. Rev Environ Sci Biotechnol 16(2):361–380. https://doi.org/10.1007/s11157-017-9425-2 Meng X, Liu B, Zhang H, Jingwei W, Yuan X, Cui Z (2019) Co-composting of the biogas residues and spent mushroom substrate: physicochemical properties and maturity assessment. Bioresour Technol 276:281–287. https://doi.org/10.1016/j.biortech.2018.12.097 Miki T, Yokokawa T, Ke PJ, Hsieh IF, Hsieh CH, Kume T, Yoneya K, Matsui K (2018) Statistical recipe for quantifying microbial functional diversity from EcoPlate metabolic profiling. Ecol Res 33(1):249–260. https://doi.org/10.1007/s11284-017-1554-0 Mills AL, Garland JL (2002) Application of physiological profiles to assessment of community properties. In: Hurst CJ, Crawford RL, Garland JL, Lipson DA (eds) Manual of environmental microbiology. ASM Press, Washington, pp 135–146 Mutasem E-F, Findikakis AN, Leckie JO (1997) Environmental impacts of solid waste landfilling. J Environ Manag 50(1):1–25. https://doi.org/10.1006/jema.1995.0131 ON S 2027-4:2012-06-01 (2012) Evaluation of waste from mechanical-biological treatment - Part 4: Stability parameters - Respiration activity (AT4) Östman M, Wahlberg O, Ågren S, Mårtensson AM (2005) Metal and organic matter contents in a combined household and industrial landfill. Waste Manag 26(1):29–40. https://doi.org/10.1016/j.wasman.2005.01.012 Patil BS, Singh DN (2017) Simulation of municipal solid waste degradation in aerobic and anaerobic bioreactor landfills. Waste Manag Res 35(3):301–312. https://doi.org/10.1177/0734242X16679258 Peng C, Sun Xiaojie W, Yichao GC, Monika M, Holden Patricia A, Marc R-G, Qiaoyun H (2019) Soil biofilms: microbial interactions, challenges, and advanced techniques for ex-situ characterization. Soil Ecol Lett 1(3–4):85–93. https://doi.org/10.1007/s42832-019-0017-7 Pinzari F, Maggi O, Lunghini D, Di Lonardo DP, Persiani AM (2017) A simple method for measuring fungal metabolic quotient and comparing carbon use efficiency of different isolates: application to Mediterranean leaf litter fungi. Plant Biosyst 151(2):371–376. https://doi.org/10.1080/11263504.2017.1284166 Prantl R, Tesar M, Huber-Humer M, Lechner P (2006) Changes in carbon and nitrogen pool during in-situ aeration of old landfills under varying conditions. Waste Manag 26(4):373–380. https://doi.org/10.1016/j.wasman.2005.11.010 R Core Team (2018) A language and environment for statistical computing. http://www.r-project.org Ritzkowski M, Stegmann R (2012) Landfill aeration worldwide: concepts, indications and findings. Waste Manag 32(7):1411–1419. https://doi.org/10.1016/j.wasman.2012.02.020 Ritzkowski M, Stegmann R (2013) Landfill aeration within the scope of post-closure care and its completion. Waste Manag 33(10):2074–2082. https://doi.org/10.1016/J.WASMAN.2013.02.004 Sofo A, Ricciuti P (2019) A standardized method for estimating the functional diversity of soil bacterial community by Biolog® EcoPlatesTM assay-the case study of a sustainable olive orchard. Appl Sci (Switzerland) 9(19):1–9. https://doi.org/10.3390/app9194035 Sormunen K, Ettala M, Rintala J (2008) Detailed internal characterisation of two Finnish landfills by waste sampling. Waste Manage 28(1):151–163. 
https://doi.org/10.1016/j.wasman.2007.01.003 Tabasaran O, Rettenberger G (1987) Grundlagen zur Planung von Entgasungsanlagen. Hösel, Schenkel, Schurer (Publisher). Müll-Handbuch. E. Schmidt, Berlin University of Toledo. Community level physiological profiling (CLPP) Background information, 2004. https://www.biolog.com/wp-content/uploads/2020/04/Sigler_Von_Sigler_LEPR_Protocols_files_CLPP.pdf Valencia R, van der Zon WH, Woelders H, Lubberding HJ, Gijzen HJ (2009) Achieving "Final Storage Quality" of municipal solid waste in pilot scale bioreactor landfills. Waste Manag 29(1):78–85. https://doi.org/10.1016/j.wasman.2008.02.008 Wilson WA, Roach PJ, Montero M, Baroja-Fernández E, Muñoz FJ, Eydallin G, Viale AM, Pozueta-Romero J (2010) Regulation of glycogen metabolism in yeast and bacteria. FEMS Microbiol Rev 34(6):952–985. https://doi.org/10.1111/j.1574-6976.2010.00220.x Zeng Z, Guo X, Piao X, Xiao R, Huang D, Gong X, Cheng M, Yi H, Li T, Zeng G (2018) Responses of microbial carbon metabolism and function diversity induced by complex fungal enzymes in lignocellulosic waste composting. Sci Total Environ 643:539–547. https://doi.org/10.1016/j.scitotenv.2018.06.102 Zwietering MH, Jongenburger I, Rombouts FM, Riet K (1990) Modeling of the bacterial growth curve. Appl Environ Microbiol 56(6):1875–81. https://doi.org/10.1128/aem.56.6.1875-1881.1990 Big thanks to Philipp Aschenbrenner for tremendous laboratory support and to Christian Derntl (ICEBE, TU Wien, Vienna, Austria) for the support for laboratory work during pandemic lockdown. This research was funded by the Austrian Science Fund FWF - Project number P 29168 to Johann Fellner. Open access funding provided by TU Wien (TUW). Research Unit Waste and Resource Management, Institute for Water Quality and Resource Management, TU Wien, Karlsplatz 13/226.2, 1040, Vienna, Austria Christian Brandstaetter, Nora Fricko & Johann Fellner Institute of Computer Science, University of Applied Sciences Wiener Neustadt, Johannes-Gutenberg-Straße 3, 2700, Wiener Neustadt, Austria Christian Brandstaetter & Wolfgang Ecker-Lala Institute of Chemical, Environmental and Bioscience Engineering (ICEBE), TU Wien, Gumpendorferstrasse 1a, 1060, Vienna, Austria Mohammad J. Rahimi & Irina S. Druzhinina Key Laboratory of Plant Immunity, Fungal Genomics Laboratory (FungiG), Nanjing Agricultural University, Weigang No. 1, Nanjing, 210095, People's Republic of China Irina S. Druzhinina All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by CB, NF, MJR, JF and ISD. WE-L contributed to and supervised the statistical analysis. The first draft of the manuscript was written by CB and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Correspondence to Christian Brandstaetter. There is no known conflict of interest of any author. The submitted manuscript is used as a base for a master thesis of Christian Brandstaetter in data science at FH Wiener Neustadt, Austria (Brandstätter 2021). Brandstaetter, C., Fricko, N., Rahimi, M.J. et al. The microbial metabolic activity on carbohydrates and polymers impact the biodegradability of landfilled solid waste. Biodegradation (2021).
https://doi.org/10.1007/s10532-021-09967-6

Keywords: Biolog® EcoPlates™, Gompertz equation, In-situ aeration
Redundancy in canonical correlation analysis

Posted on 27.December.2019 by A. M. Winkler

In canonical correlation analysis (CCA; Hotelling, 1936), the absolute value of a correlation is not always that helpful. For example, large canonical correlations may arise simply due to a large number of variables being investigated using a relatively small sample size; high correlations may arise simply because there are too many opportunities for finding mixtures in both sides that are highly correlated one with another. Motivated by this perceived difficulty in the interpretation of results, Stewart and Love (1968) proposed the computation of what has been termed a redundancy index. It works as follows.

Let \(\mathbf{X}\) and \(\mathbf{Y}\) be two sets of variables over which CCA is computed. We find canonical coefficients \(\mathbf{A}\) and \(\mathbf{B}\), such that the canonical variables \(\mathbf{U} = \mathbf{X}\mathbf{A}\) and \(\mathbf{V} = \mathbf{Y}\mathbf{B}\) have maximal, diagonal correlation structure; this diagonal contains the ordered canonical correlations \(r_k\). Now that CCA has been computed, we can find the correlations between the original variables and the canonical variables. Let \(\tilde{\mathbf{A}} = \mathrm{corr}(\mathbf{X},\mathbf{U})\) and \(\tilde{\mathbf{B}} = \mathrm{corr}(\mathbf{Y},\mathbf{V})\) be such correlations, which are termed canonical loadings or structure coefficients. Now compute the mean square for each of the columns of \(\tilde{\mathbf{A}}\) and \(\tilde{\mathbf{B}}\), i.e., the variance extracted by the \(k\)-th canonical variable; for the left side, \(V_{u_k} = \frac{1}{P}\sum_{p=1}^{P} \tilde{a}_{pk}^2\), where \(P\) is the number of variables in \(\mathbf{X}\) (and likewise \(V_{v_k}\) for the right side). These quantities represent the mean variance extracted from the original variables by each of the canonical variables (in each side). Compute now the proportion of variance of one canonical variable (say, \(u_k\)) explained by the corresponding canonical variable in the other side (say, \(v_k\)). This is given simply by \(r_k^2\), the usual coefficient of determination. The redundancy index for each canonical variable is then the product of \(V_{u_k}\) and \(r_k^2\) for the left side of CCA, and the product of \(V_{v_k}\) and \(r_k^2\) for the right side. That is, the index is not symmetric. It measures the proportion of variance in one of the two sets of variables explained by the correlation between the \(k\)-th pair of canonical variables. The sum of the redundancies for all canonical variables in one side or another forms a global redundancy metric, which indicates the proportion of the variance in a given side explained by the variance in the other. This global redundancy can be scaled to unity, such that the redundancies for each of the canonical variables in a given side can be interpreted as proportions of the total redundancy. If you follow the original paper by Stewart and Love (1968), \(V_{u_k}\) and \(V_{v_k}\) correspond to column III of Table 2, the redundancy of each canonical variable for each side corresponds to column IV, and the proportion of total redundancy is in column V.

Another reference on the same topic that is worth looking at is Muller (1981). In it, the author discusses that redundancy is somewhere in between CCA itself (fully symmetric) and multiple regression (fully asymmetric).

Hotelling H. Relations between two sets of variates. Biometrika. 1936;28(3/4):321–77. Muller KE. Relationships between redundancy analysis, canonical correlation, and multivariate regression. Psychometrika. 1981;46(2):139–42. Stewart D, Love W. A general canonical correlation index. Psychological Bulletin. 1968;70(3, Pt.1):160–3. (unfortunately, the paper is paywalled; write to APA to complain).

Posted in Uncategorized | Tagged cca, multivariate, statistics

A fresh Octave for PALM

Posted on 08.October.2019 by A. M. Winkler

PALM — Permutation Analysis of Linear Models — uses either MATLAB or Octave behind the scenes.
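Before moving on: to make the redundancy index just described concrete, here is a minimal R sketch using simulated data and base R's cancor(); all names, sizes and values are illustrative only.

```r
# Minimal sketch of the Stewart-Love redundancy index using base R.
# X and Y are simulated; all names are illustrative.
set.seed(42)
n <- 200
X <- matrix(rnorm(n * 4), n, 4)
Y <- matrix(0.5 * X[, 1:3] + rnorm(n * 3), n, 3)

cc <- cancor(X, Y)              # canonical correlations and coefficients
k  <- length(cc$cor)            # number of canonical variables

# Canonical variables (scores)
U <- scale(X, center = cc$xcenter, scale = FALSE) %*% cc$xcoef[, 1:k]
V <- scale(Y, center = cc$ycenter, scale = FALSE) %*% cc$ycoef[, 1:k]

# Loadings (structure coefficients): correlations between the original
# variables and the canonical variables of the same side
Ax <- cor(X, U)
Ay <- cor(Y, V)

# Mean squared loadings = variance extracted by each canonical variable
Vx <- colMeans(Ax^2)
Vy <- colMeans(Ay^2)

# Redundancy indices (asymmetric), and proportion of total redundancy
Rx <- Vx * cc$cor^2        # variance in X explained via each canonical pair
Ry <- Vy * cc$cor^2        # variance in Y explained via each canonical pair
prop_Rx <- Rx / sum(Rx)
prop_Ry <- Ry / sum(Ry)
```

Back to PALM: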
It can be executed from within either of these environments, or from the shell, in which case either of these is invoked, depending on how PALM was configured. For users who do not want or cannot spend thousands of dollars in MATLAB licenses, Octave comes for free, and offers quite much the same benefits. However, for Octave, some functionalities in PALM require version 4.4.1 or higher. However, stable Linux distributions such as Red Hat Enterprise Linux and related (such as CentOS and Scientific Linux) still offer only 3.8.2 in the official repositories at the time of this writing, leaving the user with the task of finding an unofficial package or compiling from the source. The latter task, however, can be daunting: Octave is notoriously difficult to compile, with a myriad of dependencies. A much simpler approach is to use Flatpak or Snappy. These are systems for distribution of Linux applications. Snappy is sponsored by Canonical (that maintains Ubuntu), whereas Flatpak appears to be the preferred tool for Fedora and openSUSE. Using either system is quite simple, and here the focus is on Flatpak. To have a working installation of Octave, all that needs be done is: 1) Make sure Flatpak is installed: On a RHEL/CentOS system, use (as root): yum install flatpak For openSUSE, use (as root): zypper install flatpak For Ubuntu and other Debian-based systems: sudo apt install flatpak 2) Add the Flathub repository: flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo 3) Install Octave: flatpak install flathub org.octave.Octave 4) Run it! flatpak run org.octave.Octave Only the installation of Flatpak needs be done as root. Once it has been installed, repositories and applications (such as Octave, among many others) can be installed at the user level. These can also be installed and made available system-wide (if run as root). Configuring PALM From version alpha117 onwards, the executable file 'palm' (not to be confused with 'palm.m') will include a variable named "OCTAVEBIN", which specifies how Octave should be called. Change it from the default: OCTAVEBIN=/usr/bin/octave so that it invokes the version installed with Flatpak: OCTAVEBIN="/usr/bin/flatpak run org.octave.Octave" After making the above edits, it should be possible to run PALM directly from the system shell using the version installed via Flatpak. Alternatively, it should also be possible to invoke Octave (as in step 4 above), then use the command "addpath" to specify the location of palm.m, and then call PALM from the Octave prompt. Octave packages Handling of packages when Octave is installed via Flatpak is the same as usual, that is, via the command 'pkg' run from within Octave. More details here. Posted in Scripts, Statistics | Tagged fsl, octave, PALM | 4 Replies Posted in Linux, Neuroinformatics, Scripts | Leave a reply How do we measure thickness, area, and volume of the cerebral cortex? Posted on 23.January.2018 by A. M. Winkler There are various ways one could estimate morphometric parameters of the cortex, such as its thickness, area, and volume. For example, it is possible to use voxelwise partial volume effects using volume-based representations of the brain, such as in voxel-based morphometry (VBM), in which estimates per voxel become available. Volume-based representations also allow for estimates of thickness, as suggested, for example, by Hutton et al. 
(2004), or from a surface representation of the cortex, in which it can be measured as a form of distance between the mesh that represents the pia mater (the pial surface) and the mesh that represents the interface between gray and white matter (the white surface). Here we focus on the surface-based representation, as that offers advantages over volume-based representations (Van Essen et al., 1998). Software such as FreeSurfer uses magnetic resonance images to initially construct the white surface. Once that surface has been produced, a copy of it can be offset outwards until the tissue contrast in the magnetic resonance image is maximal, which indicates the location of the pial surface. This procedure ensures that both white and pial surfaces have the same topology, with each face and each vertex of the white surface having their matching pair in the pial. This convenience facilitates the computations indicated below.

Cortical surface area

For a triangular face of the surface representation, with vertex coordinates \(\mathbf{a}\), \(\mathbf{b}\) and \(\mathbf{c}\), the area is \(\frac{1}{2}\,\|(\mathbf{b}-\mathbf{a})\times(\mathbf{c}-\mathbf{a})\|\), where \(\times\) represents the cross product and the bars represent the vector norm. Even though such area per face (i.e., facewise) can be used in subsequent steps, most software packages can only deal with values assigned to each vertex (i.e., vertexwise). Conversion from facewise to vertexwise is achieved by assigning to each vertex one-third of the sum of the areas of all faces that meet at that vertex (Winkler et al., 2012).

Cortical thickness

The thickness at each vertex is computed as the average of two distances (Fischl and Dale, 2000; Greve and Fischl, 2018): the first is the distance from each white surface vertex to its corresponding closest point on the pial surface (not necessarily at a pial vertex); the second is the distance from the corresponding pial vertex to the closest point on the white surface (again, not necessarily at a vertex). Other methods are possible, however; see the table below (adapted from Lerch and Evans, 2005):

Distance solved using the Laplace equation. Jones et al. (2000)
Distance between corresponding vertices. MacDonald et al. (2000)
Distance to the nearest point in the other surface. MacDonald et al. (2000)
Distance to the nearest point in the other surface, computed for both surfaces, then averaged. Fischl and Dale (2000)
Distance along the normal. MacDonald et al. (2000)
Distance along the iteratively computed normal. Lerch and Evans (2005)

Cortical volume

Product method

If the area of either of these surfaces is known, or if the area of a mid-surface, i.e., the surface running at half-distance between the pial and white surfaces, is known, an estimate of the volume can be obtained by multiplying, at each vertex, area by thickness. This procedure is still problematic in that it underestimates the volume of tissue that is external to the convexity of the surface, and overestimates the volume that is internal to it; both cases are undesirable, and cannot be solved by merely resorting to an intermediate surface as the mid-surface.

Figure 1: A diagram in two dimensions of the problem of measuring the cortical volume. If volume is computed using the product method (a), a considerable amount of tissue is left unmeasured in the gyri, or measured repeatedly in sulci. The problem is minimised, but not solved, with the use of the mid-surface. In the analytic method (b), vertex coordinates are used to compute the volume of tissue between matching faces of white and pial surfaces, leaving no tissue under- or over-represented.
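A minimal R sketch of the facewise area computation and of the facewise-to-vertexwise conversion just described; the tiny two-triangle mesh is made up purely for illustration.

```r
# Facewise area via the cross product, then redistribution of one third of each
# face's area to each of its three vertices. The toy mesh below is illustrative.
vtx <- rbind(c(0, 0, 0),   # vertex coordinates (one row per vertex)
             c(1, 0, 0),
             c(0, 1, 0),
             c(1, 1, 1))
fac <- rbind(c(1, 2, 3),   # faces (one row per triangle, vertex indices)
             c(2, 4, 3))

cross3 <- function(u, v) c(u[2]*v[3] - u[3]*v[2],
                           u[3]*v[1] - u[1]*v[3],
                           u[1]*v[2] - u[2]*v[1])

face_area <- apply(fac, 1, function(f) {
  p1 <- vtx[f[1], ]; p2 <- vtx[f[2], ]; p3 <- vtx[f[3], ]
  0.5 * sqrt(sum(cross3(p2 - p1, p3 - p1)^2))
})

# Vertexwise area: each vertex receives 1/3 of the area of every face it belongs to
vert_area <- numeric(nrow(vtx))
for (i in seq_len(nrow(fac)))
  vert_area[fac[i, ]] <- vert_area[fac[i, ]] + face_area[i] / 3

all.equal(sum(face_area), sum(vert_area))   # total area is preserved
```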
Analytic method

In Winkler et al. (2018) we propose a different approach to measure volume. Instead of computing the product of thickness and area, we note that any pair of matching faces can be used to define an irregular polyhedron, of which all six vertex coordinates are known from the surface geometry. This polyhedron is an oblique truncated triangular pyramid, which can be perfectly divided into three irregular tetrahedra, which do not overlap, nor leave gaps.

Figure 2: A 3D diagram with the proposed solution to measure the cortical volume. In the surface representation, the cortex is limited internally by the white and externally by the pial surface (a). These two surfaces have matching vertices that can be used to delineate an oblique truncated triangular pyramid (b) and (c). The six vertices of this pyramid can be used to define three tetrahedra, the volumes of which are computed analytically (d).

From the coordinates of the vertices of these tetrahedra, their volumes can be computed analytically, then added together, viz.:

For a given face in the white surface, and its corresponding face in the pial surface, define an oblique truncated triangular pyramid.
Split this truncated pyramid into three tetrahedra, which do not overlap, nor leave gaps.
For each such tetrahedron, let \(\mathbf{a}\), \(\mathbf{b}\), \(\mathbf{c}\) and \(\mathbf{d}\) represent its four vertices in terms of coordinates \((x, y, z)\). Compute the volume as \(\frac{1}{6}\left|\mathbf{u}\cdot(\mathbf{v}\times\mathbf{w})\right|\), where \(\mathbf{u}=\mathbf{a}-\mathbf{d}\), \(\mathbf{v}=\mathbf{b}-\mathbf{d}\), \(\mathbf{w}=\mathbf{c}-\mathbf{d}\), the symbol \(\times\) represents the cross product, \(\cdot\) represents the dot product, and the bars represent the absolute value.

No error other than what is intrinsic to the placement of these surfaces is introduced. The resulting volume can be assigned to each vertex in a way similar to the conversion from facewise to vertexwise area. The above method is the default in FreeSurfer 6.0.0.

Is volume at all useful?

Given that the volume of the cortex is, ultimately, determined by area and thickness, and that these are known to be influenced in general by different factors (Panizzon et al., 2009; Winkler et al., 2010), why would anyone bother measuring volume at all? The answer is that not all factors that can affect the cortex will affect exclusively thickness or area. For example, an infectious process, or the development of a tumor, has the potential to affect both. Volume is a way to assess the effects of such non-specific factors on the cortex. However, even in that case there are better alternatives available, namely, the non-parametric combination (NPC) of thickness and area. This use of NPC will be discussed in a future post here in the blog.

Fischl B, Dale AM. Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proc Natl Acad Sci U S A. 2000 Sep 26;97(20):11050–5. Greve DN, Fischl B. False positive rates in surface-based anatomical analysis. Neuroimage. 2018 Dec 26;171(1 May 2018):6–14. Hutton C, De Vita E, Ashburner J, Deichmann R, Turner R. Voxel-based cortical thickness measurements in MRI. Neuroimage. 2008 May 1;40(4):1701–10. Jones SE, Buchbinder BR, Aharon I. Three-dimensional mapping of cortical thickness using Laplace's equation. Hum Brain Mapp. 2000 Sep;11(1):12–32. Lerch JP, Evans AC. Cortical thickness analysis examined through power analysis and a population simulation. Neuroimage. 2005;24(1):163–73. MacDonald D, Kabani NJ, Avis D, Evans AC. Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI. Neuroimage. 2000 Sep;12(3):340–56. Panizzon MS, Fennema-Notestine C, Eyler LT, Jernigan TL, Prom-Wormley E, Neale M, et al. Distinct genetic influences on cortical surface area and cortical thickness.
Cereb Cortex. 2009 Nov;19(11):2728–35. Van Essen DC, Drury HA, Joshi S, Miller MI. Functional and structural mapping of human cerebral cortex: solutions are in the surfaces. Proc Natl Acad Sci U S A. 1998 Feb 3;95(3):788–95. Winkler AM, Kochunov P, Blangero J, Almasy L, Zilles K, Fox PT, et al. Cortical thickness or grey matter volume? The importance of selecting the phenotype for imaging genetics studies. Neuroimage. 2010 Nov 15;53(3):1135–46. Winkler AM, Sabuncu MR, Yeo BTT, Fischl B, Greve DN, Kochunov P, et al. Measuring and comparing brain cortical surface area and other areal quantities. Neuroimage. 2012 Jul 15;61(4):1428–43. Winkler AM, Greve DN, Bjuland KJ, Nichols TE, Sabuncu MR, Ha Berg AK, et al. Joint analysis of cortical area and thickness as a replacement for the analysis of the volume of the cerebral cortex. Cereb Cortex. 2018 Feb 1;28(2):738–49. Posted in freesurfer, Surface models | 2 Replies The "Group" indicator in FSL In FSL, when we create a design using the graphical interface in FEAT, or with the command Glm, we are given the opportunity to define, at the higher-level, the "Group" to which each observation belongs. When the design is saved, the information from this setting is stored in a text file named something as "design.grp". This file, and thus the group setting, takes different roles depending whether the analysis is used in FEAT itself, in PALM, or in randomise. What can be confusing sometimes is that, in all three cases, the "Group" indicator does not refer to experimental or observational group of any sort. Instead, it refers to variance groups (VG) in FEAT, to exchangeability blocks (EB) in randomise, and to either VG or EB in PALM, depending on whether the file is supplied with the options -vg or -eb. In FEAT, unless there is reason to suspect (or assume) that the variances for different observations are not equal, all subjects should belong to group "1". If variance groups are defined, then these are taken into account when the variances are estimated. This is only possible if the design matrix is "separable", that is, it must be such that, if the observations are sorted by group, the design can be constructed by direct sum (i.e., block-diagonal concatenation) of the design matrices for each group separately. A design is not separable if any explanatory variable (EV) present in the model crosses the group borders (see figure below). Contrasts, however, can encompass variables that are defined across multiple VGs. The variance groups not necessarily must match the experimental observational groups that may exist in the design (for example, in a comparison of patients and controls, the variance groups may be formed based on the sex of the subjects, or another discrete variable, as opposed to the diagnostic category). Moreover, the variance groups can be defined even if all variables in the model are continuous. In randomise, the same "Group" setting can be supplied with the option -e design.grp, thus defining exchangeability blocks. Observations within a block can only be permuted with other observations within that same block. If the option --permuteBlocks is also supplied, then the EBs must be of the same size, and the blocks as a whole are instead then permuted. Randomise does not use the concept of variance group, and all observations are always members of the same single VG. In PALM, using -eb design.grp has the same effect that -e design.grp has in randomise. Further using the option -whole is equivalent to using --permuteBlocks in randomise. 
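As an aside, the notion of a "separable" design mentioned above is easy to illustrate: after sorting the observations by group, the overall design matrix must be expressible as a direct sum (block-diagonal concatenation) of the per-group designs. A minimal R sketch with made-up group sizes and regressors:

```r
# Separable design: block-diagonal (direct sum) concatenation of the per-group
# design matrices; group sizes and regressors below are made up for illustration.
set.seed(3)
X1 <- cbind(intercept = 1, age = rnorm(5))   # design for variance group 1 (5 obs)
X2 <- cbind(intercept = 1, age = rnorm(4))   # design for variance group 2 (4 obs)

direct_sum <- function(A, B) {
  out <- matrix(0, nrow(A) + nrow(B), ncol(A) + ncol(B))
  out[seq_len(nrow(A)), seq_len(ncol(A))] <- A
  out[nrow(A) + seq_len(nrow(B)), ncol(A) + seq_len(ncol(B))] <- B
  out
}

X  <- direct_sum(X1, X2)      # no EV crosses the group border, hence separable
vg <- rep(c(1, 2), c(5, 4))   # the "Group" indicator, one value per observation
```

Back to the options of randomise and PALM: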
It is also possible to use -whole and -within together, meaning that the blocks as a whole are shuffled and, further, observations within each block are also shuffled. In PALM, the file supplied with the option -eb can have multiple columns, indicating multi-level exchangeability blocks, which are useful in designs with more complex dependence between observations. Using -vg design.grp causes PALM to use the v- or G-statistic, which are replacements for the t- and F-statistics, respectively, for the cases of heterogeneous variances. Although VG and EB are not the same thing, and may not always match each other, the VGs can be defined from the EBs, as exchangeability implies that some observations must have the same variance, otherwise permutations are not possible. The option -vg auto defines the variance groups from the EBs, even for quite complicated cases. In both FEAT and PALM, defining VGs will only make a difference if such variance groups are not balanced, i.e., do not have the same number of observations, since heteroscedasticity (different variances) only matters in these cases. If the groups have the same size, all subjects can be allocated to a single VG (e.g., all "1").

Posted in FSL, Statistics | Tagged feat, fsl, PALM, randomise

Why the maximum statistic?

In brain imaging, each voxel (or vertex, or face, or edge) constitutes a single statistical test. Because thousands of such voxels are present in an image, a single experiment results in thousands of statistical tests being performed. The p-value is the probability of finding a test statistic at least as large as the one observed in a given voxel, provided that no effect is present. A p-value of 0.05 indicates that, if an experiment is repeated 20 times and there are no effects, on average one of these repetitions will be considered significant. If thousands of tests are performed, the chance of obtaining a spuriously significant result in at least one voxel increases: if there are 1000 voxels, all at the same test level \(\alpha = 0.05\), we expect, on average, to find 50 significant tests, even in the absence of any effect. This is known as the multiple testing problem. A review of the topic for brain imaging is provided in Nichols and Hayasaka (2003) [see references at the end]. To take the multiple testing problem into account, either the test level (\(\alpha\)), or the p-values, can be adjusted, such that instead of controlling the error rate at each individual test, the error rate is controlled for the whole set (family) of tests. Controlling such a family-wise error rate (FWER) ensures that the chance of finding a significant result anywhere in the image is expected to be within a certain predefined level. For example, if there are 1000 voxels, and the FWER-adjusted test level is 0.05, we expect that, if the experiment is repeated for all the voxels 20 times, then on average in one of these repetitions there will be an error somewhere in the image. The adjustment of the p-values or of the test level is done using the distribution of the maximum statistic, something that most readers of this blog are certainly well aware of, as that has permeated most of the imaging literature since the early 1990s. Have you ever wondered why? What is so special about the distribution of the maximum that makes it useful to correct the error rate when there are multiple tests?

Definitions first

Say we have a set of \(V\) voxels in an image.
For a given voxel \(v\), \(v \in \{1, \ldots, V\}\), with test statistic \(T_v\), the probability that \(T_v\) is larger than some cutoff \(t\) is denoted by \(P(T_v > t) = 1 - F_v(t)\), where \(F_v(t)\) is the cumulative distribution function (cdf) of the test statistic. If the cutoff \(t\) is used to accept or reject a hypothesis, then we say that we have a false positive if an observed \(T_v\) is larger than \(t\) when there is no actual true effect. A false positive is also known as a type I error (in this post, the only type of error discussed is of type I). For an image (or any other set of tests), if there is an error anywhere, we say that a family-wise error has occurred. We can therefore define a "family-wise null hypothesis" that there is no signal anywhere; to reject this hypothesis, it suffices to have a single, lonely voxel in which \(T_v > t\). With many voxels, the chances of this happening increase, even if no effect is present. We can, however, adjust our cutoff to some other, more stringent value, so that the probability of rejecting such a family-wise null hypothesis remains within a certain level, say \(\alpha_{FWER}\).

Union-intersection tests

The "family-wise null hypothesis" is effectively a joint null hypothesis that there is no effect anywhere. That is, it corresponds to a union-intersection test (UIT; Roy, 1953). This joint hypothesis is retained if all tests have statistics that are below the significance cutoff. What is the probability of this happening? From the above we know that \(P(T_v \leq t) = F_v(t)\). The probability of the same happening for all voxels simultaneously is, therefore, simply the product of such probabilities, assuming of course that the voxels are all independent: \(\prod_{v=1}^{V} F_v(t)\). Thus, the probability that any voxel has a significant result, which would lead to the occurrence of a family-wise error, is \(1 - \prod_{v=1}^{V} F_v(t)\). If all voxels have an identical distribution under the null, then the same is stated as \(1 - \left(F(t)\right)^V\).

Distribution of the maximum

Consider the maximum of the set of \(V\) voxels, that is, \(T_{\max} = \max_v \{T_v\}\). The random variable \(T_{\max}\) is only smaller than or equal to some cutoff \(t\) if all values \(T_v\) are smaller than or equal to \(t\). If the voxels are independent, this enables us to derive the cdf of \(T_{\max}\): \(F_{\max}(t) = P(T_{\max} \leq t) = \prod_{v=1}^{V} F_v(t)\). Thus, the probability that \(T_{\max}\) is larger than some threshold \(t\) is \(1 - \prod_{v=1}^{V} F_v(t)\). If all voxels have an identical distribution under the null, then the same is stated as \(1 - \left(F(t)\right)^V\). These results, lo and behold, are the same as those used for the UIT above, hence how the distribution of the maximum can be used to control the family-wise error rate (if the distribution of the maximum is computed via permutations, independence is not required).

The above is not the only way in which we can see why the distribution of the maximum allows the control of the family-wise error rate. The work by Marcus, Peritz and Gabriel (1976) showed that, in the context of multiple testing, the null hypothesis for a particular test \(v\) can be rejected provided that all possible joint (multivariate) tests done within the set and including \(v\) are also significant, and doing so controls the family-wise error rate. For example, if there are four tests, \(v \in \{1, 2, 3, 4\}\), the test for \(v = 1\) is considered significant if the joint tests using (1,2,3,4), (1,2,3), (1,2,4), (1,3,4), (1,2), (1,3), (1,4) and (1) are all significant (that is, all those that include \(v = 1\)). Such a joint test can be virtually any valid test, including Hotelling's \(T^2\), MANOVA/MANCOVA, or NPC (Non-Parametric Combination), all of which are based on recomputing the test statistic from the original data, or others, based on the test statistics or p-values of each of the elementary tests, as in a meta-analysis.
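As an aside, the FWER control afforded by the distribution of the maximum is easy to verify numerically. A minimal R sketch, using independent Gaussian statistics under the null (purely illustrative, not tied to any imaging data):

```r
# Simulate V independent voxelwise statistics under the null, many times,
# build the distribution of the maximum, and verify that thresholding at its
# 95th percentile keeps the family-wise error rate near 0.05.
set.seed(1)
V    <- 1000    # number of voxels (tests)
nsim <- 5000    # number of simulated "experiments"

Tstat <- matrix(rnorm(nsim * V), nrow = nsim)   # null statistics
Tmax  <- apply(Tstat, 1, max)                   # distribution of the maximum

t_unc  <- qnorm(0.95)            # uncorrected threshold (alpha = 0.05)
t_fwer <- quantile(Tmax, 0.95)   # FWER threshold from the max distribution

mean(apply(Tstat, 1, function(x) any(x > t_unc)))   # close to 1: almost always a false positive
mean(Tmax > t_fwer)                                  # close to 0.05: FWER controlled

# FWER-adjusted p-value for an observed statistic at some voxel:
t_obs <- 3.5
mean(Tmax >= t_obs)
```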
Such a closed testing procedure (CTP) incurs an additional problem, though: the number of joint tests that needs to be done is \(2^V - 1\), which in imaging applications renders them unfeasible. However, there is one particular joint test that provides a direct algorithmic shortcut: using the maximum statistic, \(\max_v \{T_v\}\), as the test statistic for the joint test. The maximum across all tests is also the maximum for any subset of tests, such that these can be skipped altogether. This gives a vastly more efficient algorithmic shortcut to a CTP, as shown by Westfall and Young (1993).

Simple intuition

One does not need to chase the original papers cited above (although doing so cannot hurt). Broadly, the same can be concluded based solely on intuition: if the distribution of some test statistic that is not the distribution of the maximum within an image were used as the reference to compute the (FWER-adjusted) p-values at a given voxel \(v\), then the probability of finding a voxel with a test statistic larger than \(T_v\) anywhere could not be determined: there could always be some other voxel \(v'\), with an even larger statistic (i.e., \(T_{v'} > T_v\)), but the probability of that happening would not be captured by the distribution of a non-maximum. Hence the chance of finding a significant voxel anywhere in the image under the null hypothesis (the very definition of FWER) would not be controlled. Using the absolute maximum eliminates this logical leakage.

Marcus R, Peritz E, Gabriel KR. On closed testing procedures with special reference to ordered analysis of variance. Biometrika. 1976 Dec;63(3):655. Nichols T, Hayasaka S. Controlling the familywise error rate in functional neuroimaging: a comparative review. Stat Methods Med Res. 2003 Oct;12(5):419–46. Roy SN. On a heuristic method of test construction and its use in multivariate analysis. Ann Math Stat. 1953 Jun;24(2):220–38. Westfall PH, Young SS. Resampling-based multiple testing: examples and methods for p-value adjustment. New York, Wiley, 1993.

Posted in Statistics | Tagged closed testing procedure, distribution of the maximum, family-wise error rate, fwe, fwer, multiple comparisons problem, multiple testing problem

Better statistics, faster

Posted on 09.August.2016 by A. M. Winkler

Permutation tests are more robust and help to make scientific results more reproducible by depending on fewer assumptions. However, they are computationally intensive, as recomputing a model thousands of times can be slow. The purpose of this post is to briefly list some options available for speeding up permutation. Firstly, no speed-ups may be needed: for small sample sizes, or low resolutions, or small regions of interest, a permutation test can run in a matter of minutes. For larger data, however, accelerations may be of use. One option is acceleration through parallel processing or GPUs (for example applications of the latter, see Eklund et al., 2012, Eklund et al., 2013 and Hernández et al., 2013; references below), though this does require specialised implementation. Another option is to reduce the computational burden by exploiting the properties of the statistics and their distributions. A menu of options includes: Do few permutations (shorthand name: fewperms); the results remain valid on average, although the p-values will have higher variability. Keep permuting until a fixed number of permutations with a statistic larger than the unpermuted one is found (a.k.a. negative binomial; shorthand name: negbin).
- Do a few permutations, then approximate the tail of the permutation distribution by fitting a generalised Pareto distribution to its tail (shorthand name: tail).
- Approximate the permutation distribution with a gamma distribution, using simple properties of the test statistic itself, amazingly not requiring any permutations at all (shorthand name: noperm).
- Do a few permutations, then approximate the full permutation distribution by fitting a gamma distribution (shorthand name: gamma).
- Run permutations on only a few voxels, then fill the missing ones using low-rank matrix completion theory (shorthand name: lowrank).
These strategies allow accelerations >100x, yielding results nearly identical to those of the non-accelerated case. Some, such as the tail approximation, are generic enough to be used in nearly all of the most common scenarios, including univariate and multivariate tests, spatial statistics, and correction for multiple testing. In addition to accelerating permutation tests, some of these strategies, such as tail and noperm, allow continuous p-values to be found, and refine the p-values far into the tail of the distribution, thus avoiding the usual discreteness of p-values, which can be a problem in some applications if too few permutations are done. These methods are available in the tool PALM (Permutation Analysis of Linear Models), and the complete description, evaluation, and application to the re-analysis of a voxel-based morphometry study (Douaud et al., 2007) have just been published in Winkler et al., 2016 (for the Supplementary Material, click here). The paper includes a flow chart prescribing these various approaches for each case, reproduced below. The hope is that these accelerations will facilitate the use of permutation tests and, if used in combination with hardware and/or software improvements, can further expedite computation, leaving little reason not to use these tests.

Douaud G, Smith S, Jenkinson M, Behrens T, Johansen-Berg H, Vickers J, James S, Voets N, Watkins K, Matthews PM, James A. Anatomically related grey and white matter abnormalities in adolescent-onset schizophrenia. Brain. 2007 Sep;130(Pt 9):2375-86. Epub 2007 Aug 13.
Eklund A, Andersson M, Knutsson H. fMRI analysis on the GPU-possibilities and challenges. Comput Methods Programs Biomed. 2012 Feb;105(2):145-61.
Eklund A, Dufort P, Forsberg D, LaConte SM. Medical image processing on the GPU - Past, present and future. Med Image Anal. 2013;17(8):1073-94.
Hernández M, Guerrero GD, Cecilia JM, García JM, Inuggi A, Jbabdi S, et al. Accelerating fibre orientation estimation from diffusion weighted magnetic resonance imaging using GPUs. PLoS One. 2013 Jan;8(4):e61892.
Winkler AM, Ridgway GR, Webster MA, Smith SM, Nichols TE. Permutation inference for the general linear model. Neuroimage. 2014 May 15;92:381-97.
Winkler AM, Ridgway GR, Douaud G, Nichols TE, Smith SM. Faster permutation inference in brain imaging. Neuroimage. 2016 Jun 7;141:502-516.
Contributed to this post: Tom Nichols, Ged Ridgway.

Three HCP utilities
If you are working with data from the Human Connectome Project (HCP), perhaps these three small Octave/MATLAB utilities may be of some use:
- hcp2blocks.m: Takes the restricted file with information about kinship and zygosity and produces a multi-level exchangeability blocks file that can be used with PALM for permutation inference. It is fully described here.
- hcp2solar.m: Takes restricted and unrestricted files to produce a pedigree file that can be used with SOLAR for heritability and genome-wide association analyses.
- picktraits.m: Takes either restricted or unrestricted files, a list of traits and a list of subject IDs to produce tables with selected traits for the selected subjects. These can be used to, e.g., produce design matrices for subsequent analysis.
These functions need to parse relatively large CSV files, which is somewhat inefficient in MATLAB and Octave. Still, since these commands usually have to be executed only once for a particular analysis, a 1-2 minute wait seems acceptable. If downloaded directly from the above links, remember also to download the prerequisites: strcsvread.m and strcsvwrite.m. Alternatively, clone the full repository from GitHub. Other tools may be added in the future.

A fourth utility
For the HCP-S1200 release (March/2017), zygosity information is provided in the fields ZygositySR (self-reported zygosity) and ZygosityGT (zygosity determined by genetic methods for select subjects). If needed, these two fields can be merged into a new field named simply Zygosity. To do so, use a fourth utility, the command mergezyg.

Downsampling (decimating) a brain surface
Posted on 31.May.2016 by A. M. Winkler
Downsampled average cortical surfaces at different iterations (n), with the respective number of vertices (V), edges (E) and faces (F).
In the previous post, a method to display brain surfaces interactively in PDF documents was presented. While the method is already much more efficient than it was when it first appeared some years ago, the display of highly resolved meshes can be computationally intensive, and may make even the most enthusiastic readers give up on even opening the file. If the data being shown has low spatial frequency, an alternative way to display it, which generally preserves the amount of information, is to decimate the mesh, downsampling it to a lower resolution. Although in theory this can be done in the native (subject-level) geometry through retessellation (i.e., interpolation of coordinates), the interest in downsampling usually is at the group level, in which case the subjects have all been interpolated to a common grid, which in general is a geodesic sphere produced by recursively subdividing an icosahedron (see this earlier post). If, at each iteration, the vertices and faces are added in a certain order (such as in FreeSurfer's fsaverage or in the one generated with the platonic command), downsampling is relatively straightforward, whatever the type of data.

Vertexwise data
For vertexwise data, downsampling can be based on the fact that vertices are added (appended) in a certain order as the icosahedron is constructed (a short code sketch follows this list):
- Vertices 1-12 correspond to n = 0, i.e., no subdivision, or ico0.
- Vertices 13-42 correspond to the vertices that, once added to the ico0, make it ico1 (first iteration of subdivision, n = 1).
- Vertices 43-162 correspond to the vertices that, once added to ico1, make it ico2 (second iteration, n = 2).
- Vertices 163-642, likewise, make ico3.
- Vertices 643-2562 make ico4.
- Vertices 2563-10242 make ico5.
- Vertices 10243-40962 make ico6, etc.
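In other words, an order-n icosahedron has 10·4^n + 2 vertices and 20·4^n faces, and the data can be reduced by simple indexing. The sketch below (Octave/MATLAB; the function name and interface are for illustration only, and this is not the icodown.m linked further down) covers the vertexwise selection just described and the facewise collapse discussed next, assuming the fsaverage/platonic ordering:

% Downsample vertexwise (dpv) and facewise (dpf) data to icosahedron order n_target.
function [dpv_lo, dpf_lo] = icodown_sketch(dpv, dpf, n_target)
  nV = @(n) 10*4^n + 2;                                % vertices at order n
  n_orig = round(log((numel(dpv) - 2)/10) / log(4));   % infer current order from dpv
  % Vertexwise: keep only the vertices appended up to the target order.
  dpv_lo = dpv(1:nV(n_target));
  % Facewise: each subdivision replaces a face in situ by four, so collapse
  % groups of four consecutive faces (summing preserves areal quantities).
  dpf_lo = dpf(:);
  for n = n_orig:-1:(n_target + 1)
    dpf_lo = sum(reshape(dpf_lo, 4, []), 1)';
  end
end

For example, [thick3, area3] = icodown_sketch(thick7, area7, 3) would take ico7 data down to ico3 under these assumptions.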
Thus, if the data is vertexwise (also known as curvature data, such as cortical thickness or curvature indices proper), the above information is sufficient to downsample the data: to reduce down to an ico3, for instance, all one needs to do is pick the vertices 1 through 642, ignoring 643 onwards.

Facewise data
Data stored at each face (triangle) generally correspond to areal quantities that require mass conservation. For both fsaverage and platonic icosahedrons, the faces are added in a particular order such that, at each iteration of the subdivision, a given face index is replaced in situ by four other faces: one can simply collapse (via sum or average) the data of every four faces into a new one.

Surface geometry
If the objective is to decimate the surface geometry, i.e., the mesh itself, as opposed to quantities assigned to vertices or faces, one can use similar steps: Select the vertices from the first up to the last vertex of the icosahedron at the level needed. Iteratively downsample the face indices by selecting first those that are formed by three vertices that were appended for the current iteration, then those that have two vertices appended in the current iteration, then connecting the remaining three vertices to form a new, larger face.

Using downsampled data is useful not only to display meshes in PDF documents; some analyses may not require as high a resolution as the default mesh (ico7), particularly for processes that vary smoothly across the cortex, such as cortical thickness. Using a lower resolution mesh can be just as informative, while operating at a fraction of the computational cost.

A script
A script that does the tasks above using MATLAB/Octave is here: icodown.m. It is also available as part of the areal package described here, which also satisfies all its dependencies. Input and output formats are described here.

Interactive 3D brains in PDF documents
A screenshot from Acrobat Reader. The example file is here.
Would it not be helpful to be able to navigate through tri-dimensional, surface-based representations of the brain when reading a paper, without having to download separate datasets, or using external software? Since 2004, with the release of version 1.6 of the Portable Document Format (PDF), this has been possible. However, the means to generate the file were not easily available until about 2008, when Intel released a set of libraries and tools. This still did not help much to improve popularity, as in-document rendering of complex 3D models requires a lot of memory and processing, making its use difficult in practice at the time. The fact that Acrobat Reader was quite bloated did not help much either. Now, almost eight years later, things have become easier for users who want to open these documents. Newer versions of Acrobat are much lighter, and the capabilities of ordinary computers have increased. Yet, it seems the interest in this kind of visualisation has faded. The objective of this post is to show that it is remarkably simple to have interactive 3D objects in PDF documents, which can be used in any document published online, including theses, presentations, and papers: journals such as PNAS and the Journal of Neuroscience are at the forefront in accepting interactive manuscripts.
U3D Tools: Make sure you have the IDTFConverter utility, from the U3D tools, available on SourceForge as part of the MathGL library. A direct link to version 1.4.4 is here; an alternative link, of a repackaged version of the same, is here. Compiling instructions for Linux and Mac are in the "readme" file. There are some dependencies that must be satisfied, and they are described in the documentation. If you decide not to install the U3D tools, but only compile them, make sure the path of the executable is in both the $PATH and the $LD_LIBRARY_PATH. This can be done with:

cd /path/to/the/directory/of/IDTFConverter
export PATH=${PATH}:$(pwd)
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(pwd)

The ply2idtf function: Make sure you have the latest version of the areal package, which contains the MATLAB/Octave function ply2idtf.m used below.
Certain LaTeX packages: The packages movie15 or media9, which allow embedding the 3D object into the PDF using LaTeX. Either will work. Below it is assumed that the older movie15 package is used.

Step 1: Generate the PLY maps
Once you have a map of vertexwise cortical data that needs to be shown, follow the instructions from this earlier blog post that explains how to generate Stanford PLY files to display colour-coded vertexwise data. These PLY files will be used below.

Step 2: Convert the PLY to IDTF files
IDTF stands for Intermediate Data Text Format. As the name implies, it is a text, intermediate file, used as a step before the creation of the U3D files, the latter being what is embedded into the PDF. Use the function ply2idtf for this:

ply2idtf(...
 {'lh.pial.thickness.avg.ply','LEFT', eye(4);...
  'rh.pial.thickness.avg.ply','RIGHT',eye(4)},...
 'thickness.idtf');

The first argument is a cell array with 3 columns, and as many rows as PLY files being added to the IDTF file. The first column contains the file name, the second the label (or node name) for that file, and the third an affine matrix that maps the coordinates from the PLY file to the world coordinate system of the (to be created) U3D. The second (last) argument to the command is the name of the output file.

Step 3: Convert the IDTF to U3D files
From a terminal window (not MATLAB or Octave), run:

IDTFConverter -input thickness.idtf -output thickness.u3d

Step 4: Configure default views
Here we use the older movie15 LaTeX package; the same can be accomplished with the newer media9 package. Various viewing options are configurable, all of which are described in the documentation. These options can be saved in a text file with extension .vws, and later supplied in the LaTeX document. An example is below.

VIEW=Both Hemispheres
COO=0 -14 0,
C2C=-0.75 0.20 0.65
ROO=325
AAC=30
ROLL=-0.03
BGCOLOR=.5 .5 .5
RENDERMODE=Solid
LIGHTS=CAD
PART=LEFT
VISIBLE=true
PART=RIGHT
VIEW=Left Hemisphere
C2C=-1 0 0
VISIBLE=false
VIEW=Right Hemisphere
C2C=1 0 0
ROLL=0.03

Step 5: Add the U3D to the LaTeX source
Interactive 3D viewing is unfortunately not supported by most PDF readers. However, it is supported by the official Adobe Acrobat Reader since version 7.0, including the recent version DC. Thus, it is important to let the users/readers of the document know that they must open the file using a recent version of Acrobat. This can be done in the document itself, using a message placed with the option text of the \includemovie command of the movie15 package. A minimalistic LaTeX source is shown below (it can be downloaded here).
\documentclass{article}

% Relevant package:
\usepackage[3D]{movie15}

% pdfLaTeX and color links setup:
\usepackage{color}
\usepackage[pdftex]{hyperref}
\definecolor{colorlink}{rgb}{0, 0, .6} % dark blue
\hypersetup{colorlinks=true,citecolor=colorlink,filecolor=colorlink,linkcolor=colorlink,urlcolor=colorlink}

\title{Interactive 3D brains in PDF documents}
\author{}
\date{}

\begin{document}
\maketitle

\begin{figure}[!h]
\includemovie[
text=\fbox{\parbox[c][9cm][c]{9cm}{\centering {\footnotesize (Use \href{http://get.adobe.com/reader/}{Adobe Acrobat Reader 7.0+} \\to view the interactive content.)}}},
poster,label=Average,3Dviews2=pial.vws]{10cm}{10cm}{thickness.u3d}
\caption{An average 3D brain, showing colour-coded average thickness (for simplicity, colour scale not shown). Click to rotate. Right-click for a menu with various options. Details at \href{http://brainder.org}{http://brainder.org}.}
\end{figure}

\end{document}

Step 6: Generate the PDF
For LaTeX, use pdfLaTeX as usual:

pdflatex document.tex

After generating the PDF, the result of this example is shown here (a screenshot is at the top). It is possible to rotate in any direction, zoom, pan, change views to predefined modes, and alternate between orthogonal and perspective projections. It is also possible to change rendering modes (including transparency), and experiment with various lighting options. In Acrobat Reader, by right-clicking, a menu with various useful options is presented. A toolbar (as shown in the top image) can also be enabled through the menu. The same strategy also works with the Beamer class, such that interactive slides can be created and used in talks, and with XeTeX, allowing a richer variety of text fonts. Wikipedia has an article on U3D files. Alexandre Gramfort has developed a set of tools that covers much the same ground as above; they are freely available on MATLAB FileExchange. To display molecules interactively (including proteins), the steps are similar. Instructions for Jmol and PyMOL are available. Commercial products offering features that build on these resources are also available.
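As a final note on the workflow, Steps 2 and 3 can be chained from within Octave/MATLAB. The small wrapper below is only a hypothetical convenience (it is not part of the areal package) and assumes that IDTFConverter is on the $PATH as configured earlier:

% Convert a pair of hemispheric PLY files to a single U3D file.
function ply2u3d(plyL, plyR, outbase)
  ply2idtf({plyL, 'LEFT',  eye(4); ...
            plyR, 'RIGHT', eye(4)}, [outbase '.idtf']);
  status = system(['IDTFConverter -input ' outbase '.idtf -output ' outbase '.u3d']);
  if status ~= 0
    error('IDTFConverter failed; check that it is installed and on the PATH.');
  end
end

For the example above, ply2u3d('lh.pial.thickness.avg.ply', 'rh.pial.thickness.avg.ply', 'thickness') produces thickness.u3d in one go.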
CommonCrawl
Climate policy under uncertainty: a case for solar geoengineering Juan B. Moreno-Cruz1 & David W. Keith2 Climatic Change volume 121, pages 431–444 (2013)Cite this article Solar Radiation Management (SRM) has two characteristics that make it useful for managing climate risk: it is quick and it is cheap. SRM cannot, however, perfectly offset CO2-driven climate change, and its use introduces novel climate and environmental risks. We introduce SRM in a simple economic model of climate change that is designed to explore the interaction between uncertainty in the climate's response to CO2 and the risks of SRM in the face of carbon-cycle inertia. The fact that SRM can be implemented quickly, reducing the effects of inertia, makes it a valuable tool to manage climate risks even if it is relatively ineffective at compensating for CO2-driven climate change or if its costs are large compared to traditional abatement strategies. Uncertainty about SRM is high, and decision makers must decide whether or not to commit to research that might reduce this uncertainty. We find that even modest reductions in uncertainty about the side-effects of SRM can reduce the overall costs of climate change in the order of 10%. It appears to be technically feasible to engineer an increase in albedo, a planetary brightening, as a means to offset the warming caused by carbon dioxide (CO2) and other greenhouse gases through Solar Radiation Management (SRM) (Keith and Dowlatabadi 1992; Keith 2000; Crutzen 2006; Shepherd et al. 2009). However, the cooling produced by SRM does not exactly compensate for the warming caused by CO2-driven climate change; and any particular method of SRM will no doubt entail other risks and side-effects (e.g. Bala et al. 2008; Ricke et al. 2010). Nevertheless, SRM may be a useful tool to mange climate risks (Wigley 2006). In this paper we ask how optimal policy is affected by risk regarding the side-effects of SRM, in the face of uncertainty about the magnitude of the damages caused by CO2-driven climate change. To answer this question we construct a simple model that captures the following stylized facts about climate change and SRM: The carbon-climate system has inertia. There is a lag between the response of the climate system and the anthropogenic carbon emissions that cause climate change. The inertia of the carbon-climate system makes it impossible to quickly reduce climate risk by reducing emissions, as it is expected that 40% of the peak concentration of CO2 will remain in the atmosphere 1000 years after the peak is reached (Solomon et al. 2009). Climate change damages are uncertain. The amount of climate change resulting from a given emissions trajectory is uncertain, as are the resulting economic (or other) damages. Moreover, this uncertainty is irreducible over a timescale of decades during which we will make near-term decisions about emissions abatement (Morgan and Keith 1995; Zickfeld et al. 2010). SRM is fast. A reduction in the incoming radiation has relatively instantaneous effects on global temperature (Caldeira and Matthews 2007; Robock et al. 2008). Nature gives an example of how quickly temperature responds to changes in radiative forcing: after Mount Pinatubo's explosion around 20TgS were deposited in the stratosphere, global surface temperatures cooled about 0.5°C over the following year (Soden et al. 2002). SRM is inexpensive. 
At this stage, little is known about the technical costs of SRM, but some preliminary studies have suggested that SRM could offset the increase in global average temperature due to CO2 at a cost 10 to 1000 times lower than achieving the same outcome by cutting emissions (McClellan et al. 2010; Robock et al. 2009; Shepherd et al. 2009). SRM cannot eliminate carbon-climate risk. SRM technologies can intervene to restore the surface temperature by reducing the incoming solar radiation. This intervention, however, cannot eliminate all the damages caused by climate change. In particular, the temperature compensation has a different regional distribution, that leaves the poles under compensated while the equator is over compensated (Caldeira and Matthews 2007). Moreover, the accumulation of greenhouse gases has direct implications on the precipitation patterns (Allen and Ingram 2002); and, in the case of CO2, ocean acidification (Caldeira and Wickett 2003, 2005). SRM introduces damages. There is an increase in the risks of destruction of stratospheric ozone due to SRM implementation (Solomon 1996, 1999). Moreover, sulfuric acid deposition may create health and regional problems (Crutzen 2006); although recent literature suggests these effects are small (Kravitz et al. 2009). Also, recent numerical simulations show that SRM will affect precipitation patterns and volumes, possibly causing droughts in large regions of the planet (e.g. Ricke et al. 2010). Our goal is to explore the trade-offs between the advantages and disadvantages of SRM in a cost minimizing optimal-decision framework that we intend to be as simple as possible while still capturing all of these stylized facts. The advantages of SRM to manage climate risks are twofold. First, it is inexpensive compared to abatement, and second it allows rapid action avoiding some of the inertia of the carbon system. The corresponding disadvantages of SRM are that it imperfectly compensates for CO2 driven warming and it may introduce new environmental risks. In our model, the objective of the decision-maker is to minimize the expected total costs of managing climate change. The costs of climate change are the sum of the costs of abatement and SRM activities plus any economic damages. The costs of abatement and SRM are increasing and convex functions of their arguments, while economic damages are the sum of the damages arising from greenhouse gas concentrations—such as temperature changes and ocean acidification—and those arising from the side-effects of SRM. The damages from temperature are a quadratic function of the change in global surface temperature; which, in turn, is proportional to radiative forcing. The damages from ocean acidification arise due to the increase of CO2 concentrations in the oceans; which, in turn, affects marine life and the economic activities associated with it, i.e. fishing and tourism. The damages arising from the side-effects of SRM are assumed to be a quadratic function of the total level of SRM. As a simple way to capture climate-carbon inertia we use a two-stage decision framework in which the abatement decisions are made in the first period and SRM decisions are made in the second. In between periods, the decision maker learns the true sensitivity of the climate (Fig. 1). Because temperature depends on cumulative emissions, we assume emissions are irreversible (Matthews et al. 
2009) and in that sense, only the level of abatement implemented before learning about the sensitivity of the climate system can help reduce damages caused by temperature changes and ocean acidification. The climate system, however, responds quickly to changes in radiative forcing in the form of SRM. This quickness of response allows SRM to reduce temperature damages after learning about the sensitivity of the climate; hence, eliminating the inertia associated with other forms of climate intervention and abatement. Damages take place in the second period, after SRM decisions are made. Timing of decisions and information in the two-period model. The top schematic shows the timing when SRM is used as an insurance in Section 3. The bottom three schematics show the scenarios in Section 4. They vary depending on whether decisions are made before or after learning about the SRM damages. Decisions are represented by rectangles, while uncertain outcomes are represented by circles. SRM uncertainty is represented with blue circles. Learning is represented with red circles. Payoffs are represented by hexagons. The first schematic shows the timing of decisions when there is no learning (NL scenario). The second schematic describes the scenario when learning takes place before SRM decisions are made, but after abatement decisions are made (2L scenario). The third schematic describes the scenario when learning takes place before abatement and SRM decisions are made (1L scenario) The approach in this paper has proven to be useful for the economic analysis of climate change and we expect it to be equally insightful for the economic analysis of SRM [see Weitzman (2009) for a recent application of a two period model to analyze climate change policy, and Goulder and Mathai (2000) for an example of the use of a cost minimizing framework with increasing and convex costs to analyze climate policy]. Five caveats, however, are important for our analysis: The optimal policy assumes a centralized decision maker. In practice, many countries will decide how to implement SRM amongst themselves. The strategic interaction among countries may lead to under or over provision of either SRM or abatement (see Millar-Ball 2011; Moreno-Cruz 2011). A centralized decision maker minimizes changes in global mean temperature and other damages at a global scale. By making this assumption, we eliminate all considerations to regional inequalities that may arise from the implementation of SRM [see Moreno-Cruz et al. (2011) and Ricke et al. (2010) for a detailed treatment of the inequalities introduced by SRM]. Nonetheless, understanding the optimal policy is important as it serves as a benchmark with which other policies may be compared. Use of a static two-period model limits its application in two important aspects. First, it cannot be used to analyze time dependent optimal policies where SRM is introduced incrementally. Instead we concentrate on SRM as a tool to deal with low-probability high-consequence impacts that are colloquially referred to as "climate emergencies". Second, the model cannot address damages due to rapid temperature changes associated to the sudden interruption of an SRM program. By considering damages only in terms of reduction in economic output we are neglecting aspects of the problem that do not easily fit this framework such as non-monetary environmental values. This assumption, of course, neglects the ethical issues associated with the direct manipulation of the climate implied by SRM. 
We believe these ethical concerns are crucial for the analysis of SRM as are the issues of uncertainty and inertia that we treat here. The rest of the paper proceeds as follows. In Section 2 we introduce and calibrate the model. In Section 3 we introduce uncertainty on the climate system and analyze the role of SRM in dealing with high-impact, low-probability outcomes. In Section 4, we deal with the uncertainty attached to the damages from SRM and analyze the value of reducing this uncertainty. We draw conclusions in Section 5. A general description of the model Temperature, abatement and SRM When the concentration of greenhouse gases increases in the atmosphere it alters the balance between incoming solar radiation and outgoing terrestrial radiation, resulting in an increase in the mean global temperature of Earth. Radiative forcing describes how the radiation balance is altered by human activity. Radiative forcing, R, is a function of the concentration of CO2 in the atmosphere, S, relative to the preindustrial level, S 0: $$ R=\beta ~\ln\big(S/S_0\big) $$ where, according to the IPCC (2007), β = 5.35 watts-per-meter-squared [Wm − 2]. Abatement, which we denote by A, refers to measures that reduce the concentration level of CO2 in the atmosphere. In particular, assume that S = S BAU − A, where S BAU is the business as usual concentration of CO2 in the atmosphere measured in parts per million [ppm]. Changes in mean global temperature, ΔT—measured in °C—are defined as a linear function of radiative forcing, R: $$ \Delta T=\lambda R $$ where λ is the climate sensitivity parameter with units °C m2/W. When SRM is introduced in the model, the relation between CO2 concentrations and temperature is altered. We measure SRM, G, in terms of its radiative forcing potential and, since temperature change is a linear function of radiative forcing, Eq. 2 can be written as: $$ \Delta T(A,G)=\lambda \left(\beta ~{\rm Ln}\left(\frac{S_{BAU}-A}{S_0}\right)- G\right) $$ We represent total climate damages as the sum of impacts from three different sources: temperature, SRM and uncompensated CO2 damages (e.g. ocean acidification). Following Nordhaus (2008), we assume temperature damages are quadratic. Following Brander et al. (2009), damages from ocean acidification are also quadratic on the concentration of CO2. We assume that SRM damages are also a quadratic function of the total level of SRM.Footnote 1 To be able to compare the different sources of impacts, we express damages in terms of reductions in economic output. Thus, total damages are given by: $$D(A,G)=\eta_S(S_{BAU}-A)^2+\eta_T\lambda^2\left(\Delta T(A,G)\right)^2+\eta_GG^2 $$ where \(\eta_S(S_{BAU}-A)^2\) are the damages caused by ocean acidification and other uncompensated damages from CO2, \(\eta_T\lambda^2\left(\Delta T(A,G)\right)^2\) are damages caused by temperature changes, and \(\eta_GG^2\) are the damages caused by the side-effects of SRM. In Eq. 4, when A equals S BAU and G equals zero, damages are zero. However, when A is less that S BAU , damages are always positive, showing the inability of SRM to perfectly compensate for greenhouse gas driven climate change (see bottom panel in Fig. 2). That is, although technically SRM can reduce temperature changes to zero, it may do so at the expense of other economic damages. Optimal climate policy. The horizontal axis is the impacts of SRM expressed as a fraction of the business-as-usual climate damages. 
For example, when η G = 0.5 D T , the impacts of SRM are equivalent to 50% of the damages from CO2-driven climate change. The vertical axis is in units of radiative forcing (Wm − 2). The top panel shows the optimal policy measured in terms of radiative forcing potential (Wm − 2). The middle panel presents the temperature change measured in °C. The solid line shows the results with SRM, and the dashed line shows the results without SRM. The bottom panel shows the expected costs of implementing the optimal policy as a fraction of global GDP. The orange lines show the expected total costs with only temperature damages. The difference between the solid black line and the solid orange line is the fraction of costs that cannot be compensated using SRM Implementation costs We assume that abatement costs are increasing and convex. In particular, following Nordhaus (2008), we have: $$\Lambda(A)= K_A A^{\alpha} $$ where K A has units [$/ppm] and α = 2.8. Following Keith and Dowlatabadi (1992) we assume that SRM costs are linear and given by $$\Gamma(G)= K_G G $$ where K G has units [$/(Wm − 2)]. Total social costs are the sum of the implementation costs, given by Eqs. 5 and 6, and the economic damages given by Eq. 4. The optimal policy consist of the level of abatement and the level of SRM that minimize total social costs. We use the year 2100 as our planning horizon, a common target in the analysis of climate change policy.Footnote 2 To calibrate our model, we use the projected costs and damages in 2100 reported by the DICE-2007 model (Dynamic Integrated Model of Climate and the Economy) (Nordhaus 2008). We complete the information needed for our calibration using data from the IPCC (2007) and publications related to the costs of SRM. The information given below is, unless otherwise noted, from Nordhaus (2008). The assumptions and calibrated values are summarized in Table 1. Table 1 Calibration of model We calibrate costs and damages as percentages of global GDP, when we report dollar values we assume global GDP to be around $50 trillion per year (World Bank, World Development Indicators). Although not relevant for our study, incorporating discounting is simple. For example if we assume a discount rate of 1%, the yearly GDP value would be equivalent to $33 trillion. If we assume a discount rate of 7%, yearly GDP would be $7 trillion. Economic growth is equally easy to introduce. Introducing economic growth at a rate of 2.5% will yield a yearly GDP value of $200 trillion. Considering a discount rate close to the rate of economic growth would leave the yearly value of GDP at around $50 trillion. There is insufficient information to allow us to quantify the risks of SRM, η G , with any confidence, so we treat them parametrically. In Section 3 we analyze optimal policy as a function of η G and in Section 4 we introduce uncertainty and learning on η G . Climate sensitivity uncertainty: SRM as insurance In this section we analyze the role of SRM in dealing with the uncertainty surrounding the climate's response to changes in the atmospheric concentration of CO2. Specifically, we made the climate sensitivity parameter, λ, random. We define the random variable \(\widetilde{\lambda}\), to introduce the uncertainty of the response of the climate system. \(\widetilde{\lambda}\) follows a binomial distribution of the form: $$ \widetilde{\lambda}=\left\lbrace \begin{array}{c l} \lambda_H=2.3 & \text{with probability $p=0.1$}\\ \lambda_L=0.7 & \text{with probability $1-p=0.9$}. \end{array} \right. 
$$ Notice that the mean of this distribution is 0.86, which is consistent with recent estimates (IPCC 2007).Footnote 3 We choose this distribution of \(\widetilde{\lambda}\) to capture the idea of low probability-high impact events that are characteristic of fat-tail distributions commonly associated to climate sensitivity (Roe and Baker 2007; Weitzman 2009). This is of course a simple approximation that allows us to introduce risk in the climate system without increasing the complexity of the model. The qualitative results of our paper would remain the same if we introduce a continuous distribution with fat-tails. As we mentioned in the introduction, to capture climate-carbon inertia, decisions about abatement and SRM are made sequentially. Abatement decisions are made in the first period and SRM decisions are made in the second period. In between periods, the true climate sensitivity is revealed. Here SRM decisions are made under perfect information, but we will relax this assumption in Section 4 (see Fig. 1). We introduce the imperfection of SRM parametrically; that is, the optimal level of abatement and the optimal level of SRM are a function of the magnitude of the side effects of SRM, η G . We allow damages from SRM to be higher than those induced by CO2-driven climate change, so η G ∈ [0, 1.5 D T ] where \(D_{T}=\$11.4 \times 10^{12}/(\mathrm{Wm}^{-2})\). That is, when η G = D T , reducing temperature changes to zero using only SRM may create damages just as large as if temperature were equal to its business as usual level. By setting the upper limit at η G = 1.5 D T we try to highlight the role of SRM as an insurance. This limit, however, can be too high. The most commonly discussed direct impact of SRM (using stratospheric aerosols) is ozone loss. Estimates of the economic losses due to ozone depletion are in the order of US$1.1 trillion between 1987 and 2060. That is equivalent to 0.03% of global GDP, or (1/100) D T (Environment Canada 2007; Sunstein 2007). The top panel in Fig. 2 shows the optimal policy. As expected, SRM is a decreasing function of η G while abatement is increasing in η G . Thus, abatement and SRM are technical substitutes: if SRM is costly, then it is optimal to implement more abatement. Also, the optimal level of SRM is always higher in the high-sensitivity outcome (λ = λ H ) compared to the low-sensitivity outcome (λ = λ L ). This is the result of the assumption that SRM can be chosen after learning about the climate sensitivity of the system. Moreover, in the case of an unlucky outcome, SRM is used more than abatement, even if damages from SRM are higher than D T . The middle panel in Fig. 2 shows temperature with and without SRM. We can see that temperature change increases when the damages from SRM increase. Temperature increases because there is a reduction in the level of SRM that is less than compensated by the increase in abatement levels; which results from the fact that abatement costs are increasing and convex. The bottom panel in Fig. 2 shows the total costs of managing climate change as a function of the marginal damages from SRM, η G . As expected, total costs are higher when damages from SRM become larger. If SRM was harmless, that is η G = 0, the savings relative to the case of no SRM would be around 2% GDP or $1 trillion per year, which is equivalent to a reduction in the expected costs of climate change close to 85%. 
If on the other hand η G = D T , the cost reduction due to the introduction of SRM is around 1.1%GDP or $550 billion per year, which is equivalent to a reduction in the expected costs of climate change close to 50%. To illustrate the role that the uncompensated damages from CO2 play in the model, we set η S = 0 (orange lines in lower panel of Fig. 2). The difference between the black and orange lines are due to costs such as ocean acidification that cannot be compensated by SRM even if there are no damages from SRM, (η G = 0). We find that it is still optimal to implement high levels of SRM even if the marginal damages from SRM are higher than those of climate change (η G = 1.5 D T ). This counter-intuitive result arises because SRM can be implemented after the uncertainty about climate sensitivity is resolved. The signal advantage of SRM is its quick response: even if damages from SRM are substantially high, it is still valuable to have SRM available, as a complement to abatement measures, in case the climate sensitivity is high. Consider the counterfactual: without SRM it is difficult to bound climate damages in the face of climate sensitivity uncertainty and inertia as argued by Roe and Baker (2007) and Weitzman (2009). Uncertain SRM: assessing the value of learning about the side-effects In this section we explicitly introduce uncertainty about the damages from SRM. We examine the effect that reducing this uncertainty has on the optimal policy and the total costs of addressing climate change. Uncertainty about the risk and the effectiveness of SRM may be reduced by researching and engaging in the small scale implementation of SRM. We describe the reduction of uncertainty–achieved by research or otherwise—as learning. The implications of learning for the optimal policy depends strongly on when learning occurs in relation to decisions. We treat three scenarios (Fig. 1). The first scenario assumes no learning (NL), or equivalently that learning occurs after abatement and SRM are chosen. In the second we assume that learning occurs before SRM decisions are made, but after abatement is chosen; we refer to this as second stage learning—2L. In the third scenario, we assume that learning occurs before abatement and SRM decisions are made; we refer to this as first stage learning—1L. The effects of learning on the optimal levels of abatement and SRM, and its implications for total costs, as a function of the amount of learning, M. The second stage learning (2L) scenario is denoted by dashed lines, the first stage learning (1L) scenario is denoted by solid lines, and the No Learning scenario corresponds to M = 0. The top panel shows the effects of learning on the expected level of SRM and abatement. The blue line shows the expected level of SRM in the case where learning reveals that the SRM impacts are worse than expected, while the green line shows the converse. In red is the expected value of SRM when the probability of learning that the damages from SRM are larger or smaller than the damages from climate change is 0.5. The purple lines shows the optimal level of abatement, A. The purple dotted lines show the level of abatement in the 1L scenario. The middle panel shows the expected costs, with the same convention as the top panel. The bottom panel shows the total savings. 
Total savings are the difference between the total costs of the optimal policy when there is no learning and the corresponding learning scenario To introduce risk associated with SRM, we treat the damages due to SRM, η G , as a random variable \(\widetilde{\eta_G}\) that follows the distribution: $$\widetilde{\eta_G}=\left\lbrace \begin{array}{c l} \eta_G^H=D_T & \text{with probability $q=0.5$}\\ \eta_G^L=0 & \text{with probability $1-q=0.5$}. \end{array} \right. $$ which has an expected value of 0.5 D T . When q = 0.5, we have no information regarding whether damages from SRM are larger or smaller than those of climate change. In this case, and due to the linearity of the model imposed by our assumption of quadratic damages, the optimal policy is equal to the case of no uncertainty when η G = 0.5 D T . This is also true for other probability distributions that preserve the mean of the original distribution. The linearity of the model with respect to the choice of SRM implies that the decision maker is risk neutral. This very important characteristic allows us to concentrate on the value of learning that reduces uncertainty (Baker 2006). We assume that learning increases the spread of the original distribution by skewing the probability towards one of the two outcomes. Learning is equally likely to show that the damages from SRM are equal to the damages from climate change, η G = D T , or to show they are zero, η G = 0. That is, learning does not change the expected value of η G . In the case where, with probability 0.5, learning reveals that the impacts are more likely to be worse than expected, the distribution of \(\widetilde{\eta_G}\) takes the form: $$\widetilde{\eta_G}=\left\lbrace \begin{array}{c l} \eta_G^H=D_T & \text{with probability $q^H=0.5+M$}\\ \eta_G^L=0 & \text{with probability $1-q^H=0.5-M$}. \end{array} \right. $$ where M ∈ [0, 0.5] describes the amount of learning that occurs. On the other hand, if learning reveals that low impacts from SRM are more likely, then the distribution of \(\widetilde{\eta_G}\) takes the form: $$\widetilde{\eta_G}=\left\lbrace \begin{array}{c l} \eta_G^H=D_T & \text{with probability $q^L=0.5-M$}\\ \eta_G^L=0 & \text{with probability $1-q^L=0.5+M$}. \end{array} \right. $$ We present our analysis as a function of M, the amount of learning that occurs. When M = 0 no learning has occurred. Whereas when M = 0.5, learning has fully eliminated uncertainty. Figure 3 shows the effects of learning on the optimal policy (top panel), the expected costs of climate change (middle panel), and the net savings or expected value of information (bottom panel), as functions of the amount of learning, M. First stage learning (1L) is preferred to second stage learning (2L) for two related reasons. First, it allows better decisions in terms of SRM: SRM is lower when learning reveals high SRM damages and SRM is higher when learning reveals low SRM damages. This tendency is accentuated when learning is larger (M → 0.5). Second, the value of learning is an increasing function of the amount of learning and it is higher under first stage learning (1L). The top panel in Fig. 3 also shows that the expected level of abatement does not change significantly with early (1L) or late (2L) learning compared to the no learning (NL) scenario. This suggest that, at least for the optimal policy, learning about SRM do not affect the expected value of abatement. Of course, the realized—as opposed to expected—value of abatement does strongly depend on the outcome of learning. 
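To make the two-stage structure concrete, the following Octave/MATLAB sketch solves the model of Eqs. 3 to 6 by grid search: abatement is chosen before the climate sensitivity is revealed, and SRM is chosen afterwards for each possible outcome. All numerical values below are illustrative placeholders rather than the Table 1 calibration, and the temperature damage term is taken as quadratic in the temperature change:

% Two-stage sketch: abatement A chosen first, SRM G chosen after lambda is revealed.
beta = 5.35; S0 = 280; Sbau = 700;      % forcing constant and CO2 levels [ppm] (assumed)
lam  = [2.3, 0.7]; plam = [0.1, 0.9];   % climate sensitivity outcomes and probabilities
KA = 1e-5; alpha = 2.8; KG = 0.1;       % implementation cost parameters (assumed)
etaS = 1e-5; etaT = 0.3; etaG = 0.2;    % damage coefficients (assumed)

Agrid = 0:1:(Sbau - 1);                 % candidate abatement levels [ppm]
Ggrid = 0:0.01:12;                      % candidate SRM levels [W/m^2]
Ecost = zeros(size(Agrid));
for i = 1:numel(Agrid)
  A = Agrid(i);
  R = beta * log((Sbau - A) / S0);      % radiative forcing of the remaining CO2
  Ecost(i) = KA * A^alpha;              % first-stage abatement cost
  for k = 1:numel(lam)                  % second stage: G chosen knowing lambda
    dT    = lam(k) * (R - Ggrid);       % temperature change for each candidate G
    cost2 = KG*Ggrid + etaS*(Sbau - A)^2 + etaT*dT.^2 + etaG*Ggrid.^2;
    Ecost(i) = Ecost(i) + plam(k) * min(cost2);
  end
end
[c, i] = min(Ecost);
fprintf('Optimal abatement: %g ppm; expected total cost: %g\n', Agrid(i), c);

A search of this kind reproduces the qualitative pattern discussed above (more SRM in the high-sensitivity outcome, and more abatement as the side-effects of SRM grow), although the numbers depend entirely on the assumed parameters.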
We explore a simple model in which a decision maker chooses the level of emissions abatement and SRM that minimizes the costs of climate change in the face of uncertainty about the impacts of both emissions and SRM. We draw two main conclusions. First, imperfect SRM is an effective means to manage the uncertainty in the climate response because it can be implemented quickly after this uncertainty is resolved, providing a tool to manage the inertia in the carbon-climate decision problem. Without SRM, the existence of high-consequence low-probability climate impacts, combined with the irreversibility of emissions, may force very high levels of abatement and hence high costs. In our model, we find that SRM is used in the case of an unlucky (high-impact) outcome even if the damages from SRM exceed the expected damages from climate change. Under the same assumption about the damages from SRM, SRM is substantially reduced when climate impacts are relatively low. Second, we find that learning about SRM—that is the value of information associated with reducing the uncertainty about the side-effects of SRM—can reduce the overall costs of climate change in the order of 10%, depending on the amount of learning. Suppose learning about SRM reduced the expected cost of climate change by 5%. We can compare these savings, equivalent to 0.05% of world GDP, with the current spending on SRM research which is less than $10 million per year, or 0.00002% of GDP; though we cannot, of course, conclude that learning will be proportional to spending since we don't know how effective this research will be in reducing uncertainty about SRM. Moreover, this specific numerical result depends on the calibration of the model and on the assumptions about the prior probability distribution over the side-effects of SRM. The model is a highly simplified representation of the problem and its applicability is limited by the caveats presented in the introduction to this paper. Also, the model is afflicted by the same constraints attached to any model of climate policy that supposes a single decision maker; namely, no strategic interaction, no asymmetries and therefore, no distributional issues. We have used, however, a calibration of climate damages and abatement that is widely used and is representative of results derived in many complex models. Hence, the limitations of the model likely do not affect its main result; that is, SRM is valuable for managing climate risk, not because of its low cost, but because it can be implemented quickly if we discover that climate impacts are high, a "climate emergency." There is not evidence of how steep the damages from SRM are. By choosing quadratic damages we are assuming they have the same weight as other climate related damages. As suggested by the reviewers, we analyzed the results for different target years, 50 years from now and 150 years. All the qualitative results are the same. The probability distribution described in Eq. 7, albeit quite simple, captures the main characteristics of the climate distribution described by the IPCC. According to the IPCC (2007) "climate sensitivity is likely to be in a range of 2–4.5°C with a best estimate of about 3°C," while "values substantially higher than 4.5°C cannot be excluded." Using the simplified expression for radiative forcing obtained from the IPCC Third Assessment Report, we find that with probability 0.9 climate sensitivity will be 2.6°C and with probability 0.1 climate sensitivity will be 8.5°C. 
On average, climate sensitivity is 3.2°C. Allen MR, Ingram WJ (2002) Constraints on future changes in climate and the hydrologic cycle. Nature 419:224–232 Bala G, Duffy PB, Taylor KE (2008) Impacts of geoengineering schemes on the global hydrological cycle. Proc Natl Acad Sci 105:7664–7669 Baker E (2006) Increasing risk and increasing informativeness. Oper Res 54(1):26–36 Brander LM, Rehdanz K, Tol RSJ, van Beukering PJH (2009) The economic impact of ocean acidification on coral reefs. ESRI working paper 282 Caldeira K, Matthews HD (2007) Transient climate-carbon simulations of planetary geoengineering. Proc Natl Acad Sci 104:9949–9954 Caldeira K, Wickett ME (2003) Anthropogenic carbon and ocean pH: the coming centuries may see more ocean acidification than the past 300 million years. Nature 425:365 Caldeira K, Wickett ME (2005) Ocean model predictions of chemistry changes from carbon dioxide emissions to the atmosphere and ocean. J Geophys Res Oceans 110:C9 Core Writing Team, Pachauri RK, Reisinger A (eds) (2007) IPCC 2007: climate change 2007: synthesis report. Tech report, Intergovernmental Panel on Climate Change Crutzen PJ (2006) Albedo enhancement by stratospheric sulfur injections: a contribution to resolve a policy dilemma? Clim Change 77:211–219 Environment Canada (1997) Global costs and benefits of the Montreal protocol Goulder LH, Mathai K (2000) Optimal CO 2 abatement in the presence of induced technological change. J Environ Econ Manage 39:1–38 Keith DW (2000) Geoengineering the climate: history and prospect. Annu Rev Energy Environ 25:245–284 Keith DW, Dowlatabadi H (1992) A serious look at geoengineering. Eos Trans Am Geophys Union 73:289, 292–293 Kravitz B, Robock A, Oman L, Stenchikov G, Marquardt AB (2009) Sulfuric acid deposition from stratospheric geoengineering with sulfate aerosols. J Geophys Res 114:D14109 Matthews HD, Gillett NP, Scott PA, Zickfeld K (2009) The proportionality of global warming to cumulative carbon emissions. Nature 459:829–833 McClellan J, Sisco J, Suarez B, Keogh G (2010) Geoengineering cost analysis. Final report, Aurora Flight Sciences Corporation, Cambridge, Massachusetts Millar-Ball A (2012) The Tuvalu syndrome: can geoengineering solve climate's collective action problem? Clim Change 110:1047–1066 Moreno-Cruz J (2011) Mitigation and the geoengineering threat. Mimeo, Unviersity of Calgary Moreno-Cruz J, Ricke K, Keith D (2012) A simple model to account for regional inequalities in the effectiveness of solar radiation management. Clim Change 110:649–668 Morgan G, Keith DW (1995) Subjective judgments by climate experts. Environ Sci Technol 29(10) Nordhaus W (2008) A question of balance: weighing the options on global warming policies. Yale University Press, 234 pp Ricke K, Morgan G, Allen M (2010) Regional climate response to solar radiation management. Nature Geosci 3:537–541 Robock A, Oman L, Stenchikov G (2008) Regional climate responses to geoengineering with tropical and Arctic SO2 injections. J Geophys Res 113:D16101 Robock A, Marquardt A, Kravitz B, Stenchikov G (2009) Benefits, risks, and costs of stratospheric geoengineering. Geophys Res Lett 36:L19703 Roe GH, Baker MB (2007) Why is climate sensitivity so unpredictable? Science 318:629–632 Shepherd J, Caldeira K, Haigh J, Keith D, Launder B, Mace G, MacKerron G, Pyle J, Rayner S, Redgwell C, Watson A (2009) Geoengineering the climate: science, governance and uncertainty. 
The Royal Academy Soden BJ, Wetherald RT, Stenchikov GL, Robock A (2002) Global cooling after the eruption of Mount Pinatubo: a test of climate feedback by water vapor. Science 296:727–730 Solomon S, Portman RW, Garcia RR, Thomason LW, Poole LR, McCormick MP (1996) The role of aerosol variations in anthropogenic ozone depletion at northern midlatitudes. J Geophys Res 101:6713–6727 Solomon S (1999) Stratospheric ozone depletion: a review of concepts and history. Rev Geophys 37:275–316 Solomon S, Plattnera G-K, Knutti R, Friedlingstein P (2009) Irreversible climate change due to carbon dioxide emissions. Proc Natl Acad Sci 106:1704–1709 Sunstein CR (2007) Of Montreal and Kyoto: a tale of two protocols. Harv Environ Law Rev 31:1–65 Weitzman M (2009) On modeling and interpreting the economics of catastrophic climate change. Rev Econ Stat 91:1–19 Wigley TML (2006) A combined mitigation/geoengineering approach to climate stabilization. Science 314:452–454 Zickfeld K, Morgan MG, Frame DJ, Keith DW (2010) Expert judgments about transient climate response to alternative future trajectories of radiative forcing. PNAS 107:12451–12456 The authors want to thank four anonymous referees, Kate Ricke, Daniel Dutton, Sjak Smulders, Gregory Nemet and participants at WCERE 2010 for comments on an earlier version of this paper. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. School of Economics, Georgia Institute of Technology, 221 Bobby Dodd Way, Atlanta, GA, 30332, USA Juan B. Moreno-Cruz Kennedy School of Government and School of Engineering and Applied Sciences, Harvard University, 29 Oxford Street, Cambridge, MA, 02138, USA David W. Keith Correspondence to Juan B. Moreno-Cruz. Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Moreno-Cruz, J.B., Keith, D.W. Climate policy under uncertainty: a case for solar geoengineering. Climatic Change 121, 431–444 (2013). https://doi.org/10.1007/s10584-012-0487-4 Optimal Policy Abatement Cost Solar Radiation Management
CommonCrawl
A hybrid passive localization method under strong interference with a preliminary experimental demonstration Bo Lei1, Yixin Yang1, Kunde Yang1, Yong Wang1 & Yang Shi1 EURASIP Journal on Advances in Signal Processing volume 2016, Article number: 130 (2016) Cite this article Strong interference exists in many passive localization problems and may lead to the inefficacy of traditional localization methods. In this study, a hybrid passive localization method is proposed to address strong interference. This method combines generalized cross-correlation and interference cancellation for time-difference-of-arrival (TDOA) measurement, followed by a time-delay-based iterative localization method. The proposed method is applied to a preliminary experiment using three hydrophones. The TDOAs estimated by the proposed method are compared with those obtained by the particle filtering method. Results show that the positions are in agreement when the TDOAs are accurately obtained. Furthermore, the proposed method is more capable of localization in the presence of a strong moving jamming source. Passive source localization is a significant and important topic in signal processing because of its minimal impact on the environment and low susceptibility to the effects of clutter. This viable approach has certain advantages in navigation [1], speaker tracking [2], radar [3], and underwater acoustics [4,5]. The time-delay-based method is the most widely used localization strategy, which is a two-step scheme. In general, time-difference-of-arrival (TDOA) measurements of a passive signal on spatially separate receivers are first estimated, followed by the solution of nonlinear hyperbolic equations using the range-difference information obtained from the product of the measured time delays and the known propagation speed. Thus, the source position can then be determined based on the sensor array geometry. In localization, a straightforward TDOA estimation between a pair of receivers can be realized by determining the peak of the cross-correlation function. A generalized cross-correlation called the phase transform (PHAT) [6], which uses the normalized spectra of the signals, is commonly used in time-delay measurements [7–9]. The position of the source can be estimated through the intersection point of each pair of hyperbolic functions. However, since each pair can have zero, one, or two intersections, the logic to find the correct one is nontrivial. Also, determining the correct weighting is difficult. Solving the hyperbolic functions using nonlinear least squares has been considered as a possible approach [10], in which a Taylor-series expansion is used for linearization and the solution is determined iteratively. A two-step weighted least-squares algorithm proposed by Chan [11] could provide the final solution of the position coordinates by exploiting the known relation between the intermediate variable and the position coordinates. Young et al. [12] explored the use of cross-correlation-based TDOA methods for localization by a modified minimum-variance distortionless response technique. Lui [13] derived a semi-definite programming algorithm for source localization by integrating some available prior information. Friedlander [14] estimated the range and depth of an underwater source by measuring the propagation delay differences among multiple propagation paths on two vertically deployed receivers. Felisberto et al. 
[15] further developed a localization method that minimizes a time-delay objective function with respect to the depth and range with the use of a ray-based backpropagation algorithm. Particle filtering (PF) has also been used to automate detection and localization [16, 17]. Furthermore, a number of approaches have been proposed for multitarget localization. A PF-based algorithm was developed for the localization and tracking of multiple acoustic sources in reverberative environments [18]. Based on maximum-likelihood estimation, a technique using two omnidirectional passive sensors for the detection and estimation of a target in the presence of false measurements was developed [19], where the target motion parameters are obtained by directly maximizing a joint-likelihood function. A multitarget tracking formulation [20] was studied as an incomplete data problem, where a maximum-likelihood estimator was derived based on an expectation–maximization algorithm [21]. The likelihood estimator is maximized indirectly by iterating the expectation and maximization steps until certain appropriate convergence conditions are satisfied and is successfully applied to the state estimation of nonmaneuvering targets in a cluttered underwater environment. A nonlinear least-square technique was used to compute the motion parameters for each target by modeling the Gaussian mixture probability density functions of TDOA measurement errors [22]. Without loss of generality, the localization technique described in this paper is based on TDOA measurement between spatially separated hydrophones. However, a problem originates from the boat-noise target signal being totally polluted by strong interference. This contamination is common in a coastal environment and may be in the form of interference from moving merchant ships, artificial noise, and marine mammal bioacoustics, among others. If the signal-to-interference ratio is low, then localization may be inaccurate when the TDOAs of the two signals are close. Therefore, the interference has to be canceled before TDOA estimation. Accordingly, this paper presents a hybrid localization method integrated with interference cancellation. The method comprises three steps. First, PHAT processing is applied to the recorded data. Second, an interference cancellation method involving the Radon transform [23, 24] is exploited, so that the TDOAs on each pair of receivers are accurately acquired. Finally, iteration is performed to search for the source position. The proposed method is validated by a preliminary experiment, in which a moving jamming source is present. The remainder of the paper is organized as follows: Section 2 presents the framework of the hybrid localization method. In Section 3, an experimental demonstration with three spatially separated hydrophones in a lake is provided. Section 4 further discusses the proposed method, with a moving interference taken into consideration. Section 5 presents the conclusions of this study. Framework of the method In underwater localization, the target signals are contaminated by strong interference, and this contamination may result in inaccurate TDOA measurement. As a result, the efficacy of conventional localization methods is affected. The framework of the proposed localization method, which can cancel interference, comprises three processing steps, as shown in Fig. 1. 
PHAT processing is first applied to the received data from each pair of receivers, followed by a novel interference cancellation method involving the Radon transform, whereby the TDOAs of the target signal are obtained. The final process is to localize the target source based on the estimated TDOAs of all receiver pairs.

Fig. 1 Sketch of the proposed hybrid localization method, comprising three processing steps: PHAT processing, interference cancellation, and localization

PHAT processing

Let s(t) and p(t) represent the boat-noise target signal and the interference from a jamming source, respectively, and let \( x_1(t) \) and \( x_2(t) \) represent the signals received by two hydrophones located at distant locations and arranged in a known geometry. These two received signals are respectively expressed as

$$ \begin{array}{l} x_1(t)=s\left(t-D_1\right)+p\left(t-D_{\mathrm{i}1}\right)+n_1(t),\\ x_2(t)=s\left(t-D_2\right)+p\left(t-D_{\mathrm{i}2}\right)+n_2(t), \end{array} $$ (1)

where the unknown parameters \( D_1 \) and \( D_2 \) are the time delays of the target signal on the two hydrophones, \( D_{\mathrm{i}1} \) and \( D_{\mathrm{i}2} \) are the time delays of the interference, and \( n_1 \) and \( n_2 \) are additive noises on the two hydrophones. In general, the interference is uncorrelated with the target signal s(t). Thus, the TDOAs \( D_1-D_2 \) and \( D_{\mathrm{i}1}-D_{\mathrm{i}2} \) may be derived as two peaks of the common cross-correlation function between the two hydrophone signals. However, if the interference-to-signal ratio is strong, the peak of the target signal output may be buried in the interference. The PHAT method, which is a generalized cross-correlation processing method, has the capability to suppress the interference power and can be mathematically expressed as

$$ y(t)=\mathrm{IFFT}\left(\frac{X_1(f)X_2^{*}(f)}{\left|X_1(f)\right|\left|X_2(f)\right|}\right), $$ (2)

where * indicates complex conjugation and \( X_1(f) \) and \( X_2(f) \) are the spectra of \( x_1(t) \) and \( x_2(t) \), respectively. In the PHAT output y(t), two peaks corresponding to the interference and the target signal are present. When both TDOAs are close, the TDOA of the target signal is difficult to obtain accurately because the target-signal peak of the PHAT output is significantly obscured. Therefore, the interference should be suppressed before TDOA estimation. Even in cases in which the two peaks are totally separated, the cancellation process is beneficial for automatically determining the TDOA.

Interference cancellation

If a strong jamming source exists in the background, an obvious additional peak will appear in the PHAT output. In block processing, the sampled waveform of the target signal when the source is moving is divided into blocks, and PHAT processing is applied to each of the blocks. Once the PHAT outputs are organized into a cross-correlation matrix, in which each row represents a PHAT output, a false trajectory corresponding to the peaks may be present along the running time dimension. In most cases, the trajectory does not exhibit a straight-line behavior. If the PHAT outputs are rearranged to generate a line for the dominant interference component, the Radon transform, which is effective for line detection, can be exploited. On the basis of this intuition, a novel processing method is proposed for interference cancellation on the PHAT outputs. The procedure of this method is illustrated in Fig. 2 and described as follows:

Fig. 2 Interference cancellation method, whereby the Radon transform is executed on two aligned matrices selected from the PHAT outputs, and the interference is canceled by the inverse Radon transform on their difference

1. Successive blocks of the received signals on each receiver pair are processed using the PHAT technique to generate a cross-correlation matrix. Given that the Radon transform renders good line detection, all the peaks of the PHAT outputs, which correspond to the dominant interference component in the matrix, are aligned to generate a line along the running time axis. Thus, a new matrix is generated, as shown on the left of Fig. 2. In this way, an output matrix P with dimension N × M is produced, where N is the number of processed blocks and M is the length of the PHAT output. The output matrix has a nearly straight vertical line along the running time axis, and this line corresponds to the peaks of the interference cross-correlation. The offset of each PHAT peak in this alignment procedure is stored in memory for later use.

2. The first M rows of P are selected and form the block named \( P_1 \), which is of dimension M × M and, in this example, covers an event when the target signal and interference have the same or similar TDOAs. The second block following \( P_1 \), named \( P_2 \), is also of dimension M × M. The Radon transform is performed on both matrices:

$$ \begin{array}{l} P_{1\mathrm{R}}=\mathrm{RT}\left(P_1\right),\\ P_{2\mathrm{R}}=\mathrm{RT}\left(P_2\right), \end{array} $$ (3)

where RT(∙) denotes the Radon transform. The transformed matrix \( P_{1\mathrm{R}} \) contains the TDOA variations of both the interference and the target signal along the running time dimension, whereas the matrix \( P_{2\mathrm{R}} \) contains only information on the interference. If the target signal is partially contained in \( P_2 \), then a negative peak will appear in the interference cancellation result.

3. Let \( \varDelta P_{\mathrm{R}} = P_{1\mathrm{R}} - P_{2\mathrm{R}} \), so that the interference in \( P_{1\mathrm{R}} \) is canceled. The inverse Radon transform (IRT) is then applied to \( \varDelta P_{\mathrm{R}} \), yielding

$$ \tilde{P}=\mathrm{IRT}\left(\varDelta P_{\mathrm{R}}\right). $$ (4)

As shown in (4), the PHAT output of the target signal is retained, whereas that of the interference is canceled. The TDOA of the target signal can then be evaluated according to the peak position on the relative time axis by undoing the recorded offset compensation from step 1 above.

Theoretically, the parameter M is independent of the moving speed of the source. Only variations in the peak values of the PHAT results can degrade the jamming-signal cancellation performance. If the variation in the jamming signal is weak, then a large M value may be set, and vice versa. Once the TDOAs on the receiver pairs are determined, the position of the object is further assessed by estimating the intersection point of each pair of hyperbolic functions or by determining the true position values by some other method.
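To make the two processing stages above concrete, here is a minimal Python sketch of block-wise GCC-PHAT (Eq. (2)) and of the Radon-domain subtraction in Eqs. (3)-(4). It illustrates the idea only and is not the authors' implementation: the use of scikit-image's radon/iradon, the zero-padding, the angle grid, and the assumption that the matrix has already been aligned as in step 1 are all choices made for this sketch.

```python
import numpy as np
from skimage.transform import radon, iradon  # any Radon-transform implementation would do


def gcc_phat(x1, x2):
    """GCC-PHAT cross-correlation of two equal-length blocks (Eq. (2)), zero lag at the centre."""
    n = len(x1)
    X1 = np.fft.rfft(x1, n=2 * n)                 # zero-pad to avoid circular wrap-around
    X2 = np.fft.rfft(x2, n=2 * n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12                # phase transform: keep phase, drop magnitude
    return np.fft.fftshift(np.fft.irfft(cross))


def phat_matrix(sig1, sig2, block_len, hop):
    """Stack GCC-PHAT outputs of successive blocks into a cross-correlation matrix (one row per block)."""
    starts = range(0, min(len(sig1), len(sig2)) - block_len + 1, hop)
    return np.asarray([gcc_phat(sig1[s:s + block_len], sig2[s:s + block_len]) for s in starts])


def cancel_interference(P1, P2, theta=None):
    """Radon-domain subtraction of a (nearly vertical) interference line (Eqs. (3)-(4)).

    P1 and P2 are two consecutive M x M blocks of the *aligned* cross-correlation
    matrix: P1 contains interference plus target, P2 (ideally) interference only.
    """
    M = P1.shape[0]
    if theta is None:
        theta = np.linspace(0.0, 180.0, M, endpoint=False)
    delta_radon = radon(P1, theta=theta, circle=False) - radon(P2, theta=theta, circle=False)
    return iradon(delta_radon, theta=theta, circle=False, output_size=M)   # approximates Eq. (4)
```

Because the Radon transform is linear, subtracting the two sinograms before inverting is what removes the common (jammer) line while leaving the target correlation in place.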
Localization algorithm

N receivers are assumed to be located at positions \( (a_i, b_i) \), and the general formulation of the TDOA distance between Receiver i and Receiver j for a source position (X, Y) is mathematically described as

$$ \varDelta d_{ij}=f\left(X,Y;a_i,b_i,a_j,b_j\right),\kern0.5em 1\le i<j\le N. $$ (5)

The distance function f can be estimated using the geometry for the direct wave without boundary interaction or using a ray model for multipath propagation. The nonlinear Eq. (5) is overdetermined when N > 3, and the solution can then be derived by nonlinear least-squares estimation of (X, Y), given by

$$ \left(\widehat{X},\kern0.5em \widehat{Y}\right)= \arg \underset{\left(X,Y\right)}{\min }\ \sum_{i<j}^N{\left[\varDelta d_{ij}-f\left(X,Y;a_i,b_i,a_j,b_j\right)\right]}^2. $$ (6)

For simplification, Q = (X, Y) is used to represent the position of the object. Therefore, with the use of a least-squares criterion, the minimization problem in vector notation can be written as

$$ \hat{Q}= \arg \underset{Q}{\min}\left\{{\left[\varDelta d-f(Q)\right]}^{\mathrm{T}}R^{-1}\left[\varDelta d-f(Q)\right]\right\}, $$ (7)

where \( \varDelta d=\left(\varDelta d_{1,2},\cdots,\varDelta d_{N-1,N}\right)^{\mathrm{T}} \) and R is the covariance matrix of the TDOA measurements. The minimum-variance solution can be obtained using the stochastic gradient algorithm

$$ Q^{m+1}=Q^m-\mu^m f_Q^{\prime}\left(Q^m\right)\left[\varDelta d-f\left(Q^m\right)\right], $$ (8)

where \( f_Q^{\prime} \) indicates the derivative of the function f with respect to Q. A good choice of step size is formulated by the normalization

$$ \mu^m=\frac{\mu }{{\left\Vert f_Q^{\prime}\left(Q^m\right)\right\Vert}^2}. $$ (9)

The performance of the localization method depends on the accuracy of the TDOA measurement, Δd. For N hydrophones, the maximum number of intersection points for each pair of hyperbolic functions is

$$ \binom{\binom{N}{2}}{2}, $$ (10)

where \( \binom{p}{q}=\frac{p!}{q!\left(p-q\right)!} \) is the number of combinations of p things taken q at a time. Once the number of hydrophones exceeds 3, the problem is overdetermined. Given time-delay errors caused by noise, waveguide fluctuation, and interference, a good solution may not be achieved if a large error exists on some of the hydrophones. Therefore, three hydrophones is a good choice for a practical localization system.
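The position search in Eqs. (6)-(9) is simple enough to prototype directly for the direct-path geometry (no boundary interaction), with the covariance R taken as the identity for simplicity. The NumPy sketch below implements the normalized-step gradient iteration on the squared-error cost; the initial guess, step-size constant, iteration count, and the toy receiver layout are arbitrary choices for illustration, not values taken from the paper.

```python
import numpy as np


def tdoa_distances(q, receivers):
    """f(X, Y): direct-path range differences for all receiver pairs i < j (Eq. (5))."""
    r = np.linalg.norm(receivers - q, axis=1)
    i, j = np.triu_indices(len(receivers), k=1)
    return r[i] - r[j]


def jacobian(q, receivers):
    """Derivative of f with respect to the source position Q = (X, Y)."""
    diff = q - receivers
    unit = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    i, j = np.triu_indices(len(receivers), k=1)
    return unit[i] - unit[j]


def localize(delta_d, receivers, q0, mu=1.0, n_iter=2000, tol=1e-9):
    """Least-squares source position from measured range differences (Eqs. (6)-(9))."""
    q = np.asarray(q0, dtype=float)
    for _ in range(n_iter):
        residual = delta_d - tdoa_distances(q, receivers)
        J = jacobian(q, receivers)
        step = mu / (np.linalg.norm(J) ** 2 + 1e-12)   # normalized step size, Eq. (9)
        update = step * J.T @ residual                 # descent direction for the squared-error cost
        q = q + update
        if np.linalg.norm(update) < tol:
            break
    return q


# Toy check with three non-collinear receivers and noiseless range differences.
receivers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_q = np.array([12.0, 9.0])
delta_d = tdoa_distances(true_q, receivers)
print(localize(delta_d, receivers, q0=(5.0, 5.0)))     # should settle near (12, 9)
```

A collinear array such as the one used in the experiment adds a mirror ambiguity about the array axis, so in practice the side of the array would have to be fixed by prior knowledge or by the initial guess.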
Experimental demonstration

Experiment configuration

A preliminary experiment was conducted in a lake with a depth of 40 m, with the aim of verifying the localization method under strong interference with a limited number of hydrophones. The configuration is shown in Fig. 3. An omnidirectional broadband transmitter with a center frequency of 10 kHz was deployed as a jamming source at a depth of 10 m and a range of 1100 m. Owing to the limited conditions of this experiment, only three hydrophones were deployed at a depth of 10 m, with the #1 and #2 hydrophones being 5 m apart and the #2 and #3 hydrophones being 10 m apart. The hydrophone outputs were followed by a pre-filter with a pass-band of 4–16 kHz (to verify the proposed method under strong interference). A boat approximately 4 m in length was traveling at a speed of approximately 0.5 m/s, based on its global positioning system (GPS).

Fig. 3 Experimental configuration, showing three hydrophones at a depth of 10 m, a boat traveling at low speed, and a jamming source at a distance of approximately 1100 m from the receivers

The sound speed profile measured by CTD (conductivity–temperature–depth) is shown in Fig. 4a. The upper isovelocity volume exhibits a constant sound speed of approximately 1484 m/s. In the lower volume, the sound speed profile shows a negative gradient because of the temperature decrease. The sound speed was measured down to a depth of 33 m because exact information below that level was not available. In the lower volume, the sound speed was evaluated according to the negative gradient with respect to a reference value of 1445 m/s at a depth of 40 m. The bottom was assumed to be a half-space with a density of 1.6 g/cm³ and a sound speed of 1720 m/s. This assumption did not bear any impact on the propagation path.

Fig. 4 Measured sound speed profile and propagation paths from the position of the small boat. a Sound speed, showing an approximately constant value at depths of less than 18 m but a negative gradient below this depth. b Ray model outputs where the depth of the source (moving boat) was set to 0.5 m

The rays propagated from the moving boat in this environment were computed using the Bellhop ray model [25, 26], with the source located at a depth of 0.5 m, as shown in Fig. 4b. Most of the rays travel downward and are then reflected from the bottom. Direct-path waveforms were received at a depth of 10 m when the source range was less than 500 m, and bounces at the boundaries occurred more than once when the source distance exceeded 700 m. As the boat moved beyond the receiver array, a 5–15 kHz linear frequency modulation (LFM) signal was radiated from the jamming source with a duration of 0.1 s, repeated every 0.5 s. Both the LFM signal and the boat noise were simultaneously filtered and recorded. Even though the spectrum of the boat noise is concentrated at lower frequencies of only a few thousand hertz, it is the more significant signal for the interference cancellation study. A portion of the waveform recorded on the #1 hydrophone and its power spectrum are shown in Fig. 5. The plots show that the boat noise is approximately 25 dB lower than the LFM jamming signal and is therefore seriously contaminated.

Fig. 5 Characteristics of the signal received on the #1 hydrophone. a Received waveform. b Power spectral density, showing boat noise (dark line) significantly polluted by interference (light line)

Given that the jamming source was practically motionless, a line in the cross-correlation output matrix should exist. Therefore, the interference cancellation procedure can be simplified in subsequent processing because the PHAT peak offsets are not necessary.

Processing results and comparison

All three hydrophone outputs were used to compute the TDOAs, as described in (5). Consequently, three hyperbolic functions were generated. The PHAT results from the received data on the first pair of hydrophones (#1 and #2) are shown in Fig. 6a. The peaks of the cross-correlation output of the boat noise are evident, owing to the spectral normalization of the interference by PHAT processing. The proposed interference cancellation method was then applied to the PHAT output, with the parameter M = 400; pulses 1–400 were selected for \( P_1 \), whereas pulses 11–410 were selected for \( P_2 \). The interference cancellation results in Fig. 6b show that the interference was well suppressed throughout the entire running time, particularly at the crossing event. The TDOAs, corresponding to the time delays of the maximum values of the rows of the matrix \( \tilde{P} \), were finally determined, as shown in Fig. 7a. The result obtained by the PF method is shown in Fig. 7b for comparison. The PF method apparently tracked the wrong target at the crossing, whereas the proposed method provides a satisfactory assessment of the TDOAs.
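Reading the TDOAs off the cleaned matrix amounts to taking the per-row peak and converting its lag index to seconds. A small sketch, under the assumptions that the zero-lag bin sits at the centre of each fftshift-ed GCC-PHAT row and that the sampling rate is known (the 50 kHz value below is only an example, not the experiment's rate):

```python
import numpy as np


def tdoas_from_matrix(P_tilde, fs):
    """Per-block TDOA in seconds: peak lag of each row, with zero lag at the row centre."""
    centre = P_tilde.shape[1] // 2
    peak_idx = np.argmax(P_tilde, axis=1)
    return (peak_idx - centre) / fs


# Example with a synthetic matrix whose peak drifts by one sample per block.
fs = 50_000.0
rows, cols = 20, 801
P_demo = np.zeros((rows, cols))
P_demo[np.arange(rows), cols // 2 + np.arange(rows)] = 1.0
print(tdoas_from_matrix(P_demo, fs))   # 0, 1/fs, 2/fs, ...
```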
Processing results for data received on the first hydrophone pair at a running time from 40 to 60 s, showing that the TDOAs of the target signal component, before (a) and after (b) interference cancellation, are almost the same TDOA estimation and localization results based on PHAT processing of the first pair of signals. a TDOA proposed method. b TDOA PF method (unavailable at the crossing event). c Localization proposed method. d Localization PF method (only the portion with the correct TDOAs is shown) The localization process was then performed using the assessed TDOA for each of the three receiver pairs. The results for the proposed method throughout the entire running time are shown in Fig. 7c, where μ = 1, showing that the boat traveled approximately along a straight line. By contrast, a portion of the results obtained using PF method is shown in Fig. 7d. Both methods have nearly the same localization results, with a difference not exceeding 20 m along the y direction. Experiment on moving jamming source In Section 3, the jamming source is almost motionless and cooperative. In actual multitarget localization, however, the jamming source may be moving as well as strong. Strong moving jamming sources could include a merchantman or military vessel. In this scenario, the same problem prevails in the TDOA estimation. Given that the trajectory does not exhibit a line on the PHAT outputs, additional preprocessing is required for TDOA estimation used in localization. When PHAT processing is performed, the strongest peaks from the outputs should be aligned and the deviations stored in memory. Subsequently, the interference cancellation method is performed by block processing. Afterward, the TDOA of the target signal is connected using the recorded deviations. As an example, two moving sources are present in this experiment: one is the same boat whose trajectory is known based on its GPS, and the other is an unknown boat moving at a high speed. In processing, the second boat is considered the jamming source. The recorded waveforms are prefiltered the same as in Section 3 and processed in 0.5-s blocks. The two correlation outputs are displayed in Fig. 8a, corresponding to the jamming source (dark curve) and target source (light curve). The relative delays of the jamming source indicate that it moves in the opposite direction of the target source. At a running time of approximately 68 s, the two sources are at the same position and thus have the same TDOA. Given the relative time delays of the interference alignment, interference cancellation is performed on the PHAT output, yielding the result shown in Fig. 8b. The strong moving jamming source is eliminated, whereas the target source is retained. Some portions of the interference are not well isolated because of variations in the jamming source when it moves, as mentioned in Section 2.2. Moving jamming source suppression. a PHAT outputs for the first pair in 0.5-s blocks. b Processing results with the strong jamming source subtracted. c Localization results. d GPS measurements (solid line and dashed line represent the evaluated and measured ranges, respectively) The relative time delays of the target source on the receiver pairs can then be obtained directly, even at a running time close to the crossing event. Finally, the localization results are obtained, as shown in Fig. 8c, which agrees well with the GPS measurements in Fig. 8d. The Radon transform works well for line detection. 
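The bookkeeping just described (align each block's strongest peak, remember the shift, undo it after cancellation) fits in a few lines of NumPy. This is a sketch under the assumption that the jammer always produces the strongest peak in every block, as in this experiment; it is not the authors' code.

```python
import numpy as np


def align_strongest_peaks(P):
    """Roll each row so that its strongest peak (assumed to be the jammer) lands in the centre column.

    Returns the aligned matrix and the per-row shifts needed to undo the alignment later.
    """
    centre = P.shape[1] // 2
    shifts = centre - np.argmax(P, axis=1)
    aligned = np.empty_like(P)
    for k, s in enumerate(shifts):
        aligned[k] = np.roll(P[k], s)
    return aligned, shifts


def undo_alignment(lags, shifts):
    """Recover lag indices in the original frame from peaks found after alignment."""
    return lags - shifts


# Tiny demo: a jammer line that drifts one bin per block is pulled back to the centre column.
P = np.zeros((5, 11))
P[np.arange(5), 3 + np.arange(5)] = 1.0
aligned, shifts = align_strongest_peaks(P)
print(np.argmax(aligned, axis=1), shifts)   # peaks all at column 5; shifts record the drift
```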
When the signal-to-interference ratio is very weak, a weak variation in the PHAT output exists at the crossing event, such that the target signal will be eliminated as well during the interference cancellation. Consequently, the trajectory of the target signal will be interrupted at the crossing event, causing a gap in the estimated time delays. Given that the wideband jamming source (moving boat noise) has a good correlation function, the interference has only a slight influence on the TDOA estimation in the experiment. Nevertheless, this influence is sufficient to show the efficacy of the proposed method. If the jamming signal does not have a sharp correlation peak, this method may be more applicable. In passive localization, the target signal is significantly contaminated by strong interference. As a result, traditional localization methods may be ineffective. In this study, a hybrid method involving PHAT processing, interference cancellation, and position searching is proposed. By certain additional preprocessing of the PHAT outputs, the interference can be adequately suppressed, allowing for good localization results in the preliminary experiments. Although the experimental range is not the main concern of this study, the localization method can also achieve good performance at farther distances. A large system aperture is expected in that case, such as a long-baseline sensor array to achieve better localization. Furthermore, joint estimation is suggested for multiple localization systems when the number of receivers is more than three. One possible application of this method is the monitoring of multiple moving acoustic sources with fixed hydrophones. A factor that is likely to impact the performance of this method is strong variation in the interference. This problem may be solved by applying a constant strength to the PHAT output setting over an appropriate threshold. In the experimental investigation, only direct arrival signals are considered. However, multipath propagation may not be negligible at longer ranges. Thus, a ray model may be necessary for time-delay estimation. A possible method to address this issue is to replace the analytical partial derivatives by a numerical method; however, this may require substantial computational resources. JBY Tsui, Fundamentals of global positioning system receivers. (Wiley-Interscience, New York, 2000). doi:10.1002/0471200549 WK Ma, BN Vo, SS Singh, A Baddeley, Tracking an unknown time-varying number of speakers using TDOA measurements: a random finite set approach. IEEE Trans. Signal Process. 54(9), 3291–3304 (2006). doi:10.1109/TSP.2006.877658 NH Lehmann, AM Haimovich, RS Blum, L Cimini, Proc. Fortieth Asilomar Conference on Signals, Systems and Computers. High resolution capabilities of MIMO radar, 2006. doi:10.1109/ACSSC.2006.356576 M Bruno, KW Chung, H Salloum, A Sedunov, N Sedunov, A Sutin, H Graber, P Mallas, in Waterside Security Conference (WSS), 2010. doi:10.1109/WSSC.2010.5730229. Concurrent use of satellite imaging and passive acoustics for maritime domain awareness J Gebbie, M Siderius, R McCargar, JS Allen III, G Pusey, Localization of a noisy broadband surface target using time differences of multipath arrivals. J. Acoust. Soc. Am. 134(1), EL77–EL83 (2013). doi:10.1121/1.4809771 CH Knapp, GC Carter, The generalized correlation method for estimation of time delay. IEEE Trans. Acoust. Speech Signal Process. 24(4), 320–327 (1976). doi:10.1109/TASSP.1976.1162830 B Qin, H Zhang, Q Fu, Y Yan, Proc. 9th Int. Conf. 
on Signal Processing. Subsample time delay estimation via improved GCC PHAT algorithm, 2008. doi:10.1109/ICOSP.2008.4697676 MS Brandstein, HF Silverman, Proc. IEEE Int. Conf. Acoust. Speech, Signal Process. A robust method for speech signal time-delay estimation inreverberant rooms, 1997. doi:10.1109/ICASSP.1997.599651 J Chen, J Benesty, Y Huang, Time delay estimation in room acoustic environments: an overview. EURASIP J. Appl. Signal Processing. 170, (2006). doi:10.1155/ASP/2006/26503 DJ Torrieri, in Autonomous robot vehicles, ed. by IJ Cox, GT Wilfong (Springer, New York, 1990), pp. 151–166 YT Chan, KC Ho, A simple and efficient estimator for hyperbolic location. IEEE Trans. Signal Process. 42(8), 1905–1915 (1994). doi:10.1109/78.301830 DP Young, CM Keller, DW Bliss, KW Forsythe, Proc. 37th Asilomar Conf. Signals, Syst. Comput. Ultra-wideband (UWB) transmitter location using time difference of arrival (TDOA) techniques, 2003. doi:10.1109/ACSSC.2003.1292184 KW Lui, FKW Chan, HC So, Accurate time delay estimation based passive localization. Signal Processing 89(9), 1835–1838 (2009). doi:10.1016/j.sigpro.2009.03.009 B Friedlander, Accuracy of source localization using multipath delays. IEEE Trans. Aerosp. Electron. Syst. 24(4), 346–359 (1988). doi:10.1109/7.7176 P Felisberto, O Rodriguez, P Santos, E Ey, SM Jesus, Experimental results of underwater cooperative source localization using a single acoustic vector sensor. Sensors (Basel) 13(7), 8856–8878 (2013). doi:10.3390/s130708856 NY Ko, TG Kim, YS Moon, Proc. OCEANS Int. Conf. Particle filter approach for localization of an underwater robot using time difference of arrival, 2012. doi:10.1109/OCEANS-Yeosu.2012.6263573 J Gebbie, M Siderius, JS Allen III, A two-hydrophone range and bearing localization algorithm with performance analysis. J. Acoust. Soc. Am. 137(3), 1586–1597 (2015). doi:10.1121/1.4906835 F Antonacci, D Riva, D Saiu, A Sarti, M Tagliasacchi, S Tubaro, Proc. 14th European Signal Processing Conference, Tracking multiple acoustic sources using particle filtering, 2006 HM Shertukde, Y Bar-Shalom, Detection and estimation for multiple targets with two omnidirectional sensors in the presence of false measurements. IEEE Trans. Acoust. Speech Signal Process. 38(5), 749–763 (1990). doi:10.1109/29.56019 H Gauvrit, JP Le Cadre, C Jauffret, A formulation of multitarget tracking as an incomplete data problem. IEEE Trans. Aerosp. Electron. Syst. 33(4), 1242–1257 (1997). doi:10.1109/7.625121 A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B. 39(1), 1–38 (1977). D Carevic, Automatic estimation of multiple target positions and velocities using passive TDOA measurements of transients. IEEE Trans. Signal Process. 55(2), 424–436 (2007). doi:10.1109/TSP.2006.885745 SR Deans, The Radon transform and some of its applications (Dover Publication, New York, 2007) G Beylkin, Discrete radon transform. IEEE Trans. Acoust. Speech Signal Process. 35(2), 162–172 (1987). doi:10.1109/TASSP.1987.1165108 MB Porter, Gaussian beam tracing for computing ocean acoustic fields. J. Acoust. Soc. Am. 82(4), 1349 (1987). doi:10.1121/1.395269 M.B. Porter, The BELLHOP manual and user's guide: PRELIMINARY DRAFT. http://oalib.hlsresearch.com/Rays/HLS-2010-1.pdf. Accessed 1 Mar 2011. The authors gratefully acknowledge the support for this research by the National Natural Science Foundation of China (61571366) and Natural Science Basic Research Plan in Shaanxi Province of China (2015JQ5199). 
School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, 710072, China: Bo Lei, Yixin Yang, Kunde Yang, Yong Wang & Yang Shi
Correspondence to Bo Lei.
Lei, B., Yang, Y., Yang, K. et al. A hybrid passive localization method under strong interference with a preliminary experimental demonstration. EURASIP J. Adv. Signal Process. 2016, 130 (2016). doi:10.1186/s13634-016-0430-3
Keywords: Radon transform, cross-correlation, underwater acoustics
Simultaneous Uniformization Theorem
October 23, 2010 | Tags: geometry, topology

The other day, in the graduate student talks, Subhojoy was talking about the Simultaneous Uniformization Theorem. It was a nice treat because I used to be really into geometric topology (or at least as much as an undergrad with way too little background could.) The big reveal is QF(S) ≅ T(S) × T(S̄) (with S̄ being S with the opposite orientation), but most of the talk, naturally, goes into defining what those letters mean.

The Riemann Mapping Theorem says that any simply connected, proper open subset of the complex plane is conformally equivalent to the unit disc D. Conformal maps are angle-preserving; in practice, they're holomorphic functions with derivative everywhere nonzero. Conformal maps take round circles to round circles if the circles are small enough.

A Riemann surface is a topological surface with a conformal structure: a collection of charts to the complex plane such that the transition maps are conformal.

The first version of the Uniformization Theorem says that any simply-connected Riemann surface is conformally equivalent to the Riemann sphere, the complex plane, or the unit disc. The second, more general version of the Uniformization Theorem says that any Riemann surface of genus g ≥ 2 is conformally equivalent to H/Γ, where H is the hyperbolic plane and Γ is a discrete subgroup of PSL(2, R).

To understand this better, we should observe more about the universal cover of a Riemann surface. This is, of course, simply connected. Its deck transformations are conformal automorphisms of the disc. But it can be proven that conformal automorphisms of the disc are precisely the Mobius transformations preserving the disc, the functions of the form f(z) = e^{iθ}(z − a)/(1 − āz) with |a| < 1. This implies that the automorphism group of the disc is PSL(2, R).

Now observe that there's a model of the hyperbolic plane on the disc, by assigning the metric ds² = 4(dx² + dy²)/(1 − x² − y²)². And, if you were to check, it would turn out that conformal transformations on the disc preserve this metric. So it begins to make sense; Riemann surfaces are conformally equivalent to their universal covering space, modulo some group of relations, a subgroup of the group of deck transformations of the universal cover. Discrete subgroups of PSL(2, R) are called Fuchsian groups — these define which Riemann surface we're talking about, up to a conformal transformation.

Now we can define Fuchsian space as F(S) = {discrete, faithful representations ρ: π₁(S) → PSL(2, R)}; it's a set of maps from the fundamental group of a surface to PSL(2, R). And we can define Teichmuller space T(S) as the space of marked conformal structures on the surface S. This is less enormously huge than you might think, because we consider these up to an equivalence relation: if g and h are conformal structures, and there exists a conformal map (S, g) → (S, h) isotopic to the identity, then we consider g and h equivalent structures. In fact, Teichmuller space is not that enormously huge: T(S) ≅ R^{6g−6} for a closed surface of genus g ≥ 2. It turns out that Teichmuller space is completely determined by what happens to the boundary circles in a pair of pants decomposition of the surface.

Here's a picture of a pair of pants (aka a three-punctured sphere):

Here's a picture of a decomposition of a Riemann surface into pairs of pants:

(Here's a nice article demonstrating the fact. It's actually not as hard as it looks.)

Now we generalize to Quasi-Fuchsian spaces. For this, we'll be working with hyperbolic 3-space instead of 2-space. The isometries of hyperbolic 3-space happen to be PSL(2, C). Instead of a Poincare disc model, we have a ball model; again, PSL(2, C) acts by Mobius transformations, functions of the form z ↦ (az + b)/(cz + d). A quasiconformal function takes, infinitesimally, circles to ellipses. It's like a conformal map, but with a certain amount of distortion.
The Beltrami coefficient defines how much distortion: μ(z) = (∂f/∂z̄)/(∂f/∂z), and quasiconformality means sup |μ| < 1.

Quasi-Fuchsian space, QF(S), is the set of all quasiconformal deformations of a Fuchsian representation. In other words, this is the space of all representations to PSL(2, C) preserving a topological circle on the boundary sphere. Now, the Simultaneous Uniformization Theorem can be stated: the Quasi-Fuchsian space of a surface is isomorphic to the product of two Teichmuller spaces of the surface.

One application of this theorem is to hyperbolic 3-manifolds. If M is a hyperbolic 3-manifold, and if π₁(M) ≅ π₁(S), then M ≃ S × R. In other words, we can think of such a hyperbolic three-manifold as the real line, with a Riemann surface at each point — you can only sort of visualize this, as it's not embeddable in 3-space. The Simultaneous Uniformization Theorem implies that there is a hyperbolic metric on this 3-manifold for any choice of conformal structure at infinity. This contrasts with the Mostow Rigidity Theorem, which states that a closed 3-manifold has at most one hyperbolic structure. Together, these statements imply that any hyperbolic metric on S × R is determined uniquely by the choice of conformal structures at infinity.

Bulldog! Bulldog! Bow wow wow!
August 29, 2010 | Tags: geometry, grad life

Just a few notes as the week goes by:

1. I've got classes picked out now: modern algebra, measure theory and integration, algebraic topology, and harmonic analysis. And the new requirement, "ethical conduct of research" (which I assume means "don't plagiarize and don't torture monkeys.")

2. I love Ikea. I know everybody loves Ikea, but there's no reason to depart from convention here. Everything is beautiful and cheap and quite a lot of things come in orange.

3. The international students, apparently, were taken aside to learn Yale football chants and associated paraphernalia (including the "Bulldog! Bulldog! Bow wow wow!" cheer.) We Americans were never told anything of the kind. I guess this goes along with the tradition of immigrants knowing the Constitution better than we do.

4. Thanks to a fellow grad student, I'm remembering how much I like geometry and topology. Here's something I learned (at a pub night, no less!) A hyperbolic manifold is the hyperbolic plane modulo a discrete group of isometries. That means, if you look at the Poincare disc model, some of the points on the circular boundary are limit points of the orbit of some point in the interior under isometries in the group. Consider the set of such points. Apparently: if it's not the whole circle, it is a Cantor set, and its Hausdorff dimension determines the first eigenvalue of the Laplacian on the manifold. WHOA.

Convergence of the Discrete Laplace-Beltrami Operator
June 15, 2010 | Tags: geometry, laplace-beltrami operator

Via the Geomblog, here's a paper about the convergence of discrete approximations to the Laplace-Beltrami operator.

What is the Laplace-Beltrami operator? It's a generalization of the Laplacian for use on surfaces. Like the Laplacian, it's defined as the divergence of the gradient of a function. But what does this really mean on a surface? Given a vector field X, the divergence is defined as

div X = (1/√|g|) ∂_i (√|g| X^i)

in Einstein notation, where g is the metric tensor associated with the surface and |g| is its determinant. (Compare to the ordinary divergence of a vector field, which is the sum of its partial derivatives.) The gradient is defined as

(grad f)^i = g^{ij} ∂_j f.

Combining the definitions,

Δf = div(grad f) = (1/√|g|) ∂_i (√|g| g^{ij} ∂_j f).

Euclidean space is a special case where the metric tensor is just the Kronecker delta. Why do we care about this operator?
Well, you can analyze surfaces with it; on the applied end this means it's important for signal and image processing. (The Laplace-Beltrami operator is essential to diffusion maps, for instance.) The thing is, in computational applications we generally want a discrete version of the operator to converge to the real McCoy. Hence the importance of convergence results.

Taubin's discrete version of the operator is defined as an averaging operator over the neighbors. This is a weighted graph Laplacian. But this cannot be an approximation to the Laplace-Beltrami operator, because it goes to 0 as the mesh becomes finer. The authors use a different discretization. We assume a triangular mesh on the surface. The local tangential polygon at a point v is defined to be the polygon formed by the images of all the neighbors upon projection onto the tangent space at v. A function can be lifted locally to the local tangential polygon, by defining it at the points on the surface and making it piecewise linear in the natural way. After a few lines of manipulation of the Taylor series for a function on the plane, a discrete formula for the Laplacian of such a piecewise-linear function follows, and you can use this as a discrete Laplacian applied to the local lifting function.

The authors show this converges to the Laplace-Beltrami operator by using the exponential map from the tangent space into the surface. The Laplace-Beltrami operator can be computed from the second derivatives along any two perpendicular geodesics; let the geodesics be the images under the exponential map of the basis elements of the tangent space. We use the fact that the normal to a sufficiently fine triangular mesh approximates the normal to the surface, with an error controlled by r, the size of the mesh. This shows that the basis for the approximating tangent plane is close to the basis for the true tangent plane with error only O(r²). Calculating the approximate Laplacian then gives us that the error is only O(r). There are also some numerical simulations in the paper giving evidence that this approximation is faster and more accurate than a competing approximation.
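For readers who want to experiment, the "averaging operator over the neighbors" mentioned above is easy to prototype. The sketch below builds the uniform ("umbrella") graph Laplacian of a triangle mesh and applies it to a vertex function; this is the naive averaging operator of the kind the post contrasts with the paper's tangent-plane construction, not the authors' discretization, and the uniform weights are an assumption.

```python
import numpy as np


def umbrella_laplacian(vertices, faces, f):
    """Uniform graph Laplacian on a triangle mesh: (Lf)_i = mean over neighbours j of f_j, minus f_i."""
    n = len(vertices)
    neighbours = [set() for _ in range(n)]
    for a, b, c in faces:                      # collect the 1-ring of every vertex
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    lap = np.zeros(n)
    for i, nbrs in enumerate(neighbours):
        if nbrs:
            lap[i] = np.mean([f[j] for j in nbrs]) - f[i]
    return lap


# Tiny example: a unit square split into two triangles, with f = x + y on the vertices.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
faces = [(0, 1, 2), (0, 2, 3)]
f = vertices[:, 0] + vertices[:, 1]
print(umbrella_laplacian(vertices, faces, f))
```

Note that the output carries no 1/h² scaling, which is one way to see the post's remark that this kind of operator tends to 0 as the mesh is refined instead of converging to the Laplace-Beltrami operator.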
Vocab list 2

Abscond: Verb - run away, usually includes taking something or somebody along
Accosted: Verb - approached and spoke to in a bold way, confronted
Aperture: Noun - a man-made opening, usually small; a device that controls the amount of light admitted; a natural opening of something
Catacombs: Noun - a series of grave sites in an underground burial place; an underground cemetery
Clamored: Verb - demanded or complained noisily
Distill: Verb - give off (liquid); undergo condensation, change from a gaseous to a liquid state and fall in drops; remove impurities from; increase the concentration of
Fettered: Adj. - bound by chains fastened around the ankles
Flagon: Noun - a large metal or pottery vessel with a handle and spout, used to hold alcoholic beverages (usually wine)
Gesticulation: Noun - a deliberate and vigorous gesture or motion
Grotesque: Adj. - distorted and unnatural in shape or size; abnormal and hideous; ludicrously odd. Noun - art characterized by an incongruous mixture of parts of humans and animals interwoven with plants
Hearken: Verb - listen, usually in the imperative
Ignoramus: Noun - an ignorant person
Impunity: Noun - exemption from punishment or loss
Masonry: Noun - the craft of a mason; a structure built of stone or brick by a mason
Niche: Noun - the status of an organism within its environment and community (affecting its survival as a species); a position particularly well suited to the person who occupies it; an enclosure that is set back or indented; a small concavity
Orb: Noun - the ball-shaped capsule containing the vertebrate eye; an object with a spherical shape. Verb - move in an orbit
Perceive: Verb - become conscious of; become aware of through the senses
Precluded: Verb - made impossible; prevented the presence, existence, or occurrence of
Repose: Noun - a disposition free from stress or emotion; freedom from activity (work, strain, responsibility); the absence of mental stress or anxiety. Verb - put or confine something in a horizontal position; lean in a comfortable resting position; be inherent or innate in
Retribution: Noun - the act of correcting for your wrongdoing; a justly deserved penalty; the act of taking revenge (harming someone in retaliation for something harmful that they have done), especially in the next life
Rheum: Noun - a watery discharge from the mucous membranes (especially from the eyes or nose)
Sconce: Noun - a candlestick with a flat side to be hung on the wall; a forbidding stronghold
Trowel: Noun - a small hand tool with a handle and flat metal blade, used for scooping or spreading plaster or similar materials. Verb - use a trowel on, as in light garden work or plaster work
Vintage: Noun - the oldness of wines; a season's yield of wine from a vineyard
Recoil: Noun - a movement back from impact; the backward jerk of a gun when it is fired. Verb - spring back, as from a forceful thrust; spring away from an impact; draw back, as with fear or pain; come back to the originator of an action with undesired effects

From the list below, supply the words needed to complete the paragraph. Some words will not be used.

renown, forte, confute, brinkmanship, dynasty, recumbent, tribulation

Damian mounted his new _____ bicycle, but he immediately crashed into a light pole because he was not used to sitting back while riding a bike.
After a few minutes of _____, though, he was able to ride around in the parking lot without falling down. Damian's friends _____ his decision to spend a lot of money on what they called a novelty item, but Damian was _____ for wasting money on things that sat in the basement and collected dust when he tired of them. His credit card sprees would stop eventually. Damian was bound to lose his game of financial _____, in which he waited to pay his bills until he received threatening notices from the bank.

Study the entries and answer the questions that follow. The root fort means "strong." The root graph means "writing." The root gen means "born," "to produce," or "kind" (type). The prefix mono means "one." List all the words you can think of that contain the roots graph and gen.

From the list below, supply the words needed to complete the paragraph. Some words will not be used.

resign, genocide, foray, quintessential, faux, conjecture, manifesto

Few outsiders knew for sure the condition of the city in the days following the violent uprising, but most _____ portrayed a place of rampant looting and lawlessness after the rebels ______ into the capital city. Winston, the nearest correspondent, traveled to the city to report the situation, and what he found shocked him. Poor-quality copies of the revolutionaries' ______ hung on bullet-riddled walls. Orphaned children and distraught mothers roamed the streets as remnants of the near ______ that had occurred in the weeks leading to the uprising. Most of the combatants ______ themselves to simple survival, considering themselves lucky that food and water were in good supply for the moment.

From the list below, supply the words needed to complete the paragraph. Some words will not be used.

charisma, efface, advocate, mesmerize, gist, bandy, ogre

Joan, who _____ the cleanup of the James River, is always trying to gain supporters for her cause. The usual _____ of her speech focuses on the effects of the river's pollution on future generations. Her eloquent speech _____ audiences, and her ______ helps her to win the hearts of people who are not even affected by the pollution. Joan hopes someday to ______ the consequences of the irresponsible dumping practices that continue to foul the James River.