Research article | Open | Published: 28 February 2019 Development of a questionnaire to assess dietary restrictions runners use to mitigate gastrointestinal symptoms Jill A. Parnell (ORCID: orcid.org/0000-0002-2936-0558), Hailey Lafave, Kim Wagner–Jones, Robyn F. Madden & Kelly Anne Erdman
Exercise-induced gastrointestinal (GI) symptoms can plague athletes, especially runners. Sport nutrition recommendations are nutrient- rather than food-focused and do not adequately address strategies to reduce GI symptoms. The objective was to develop a valid and reliable questionnaire to evaluate pre-training and pre-racing voluntary food restrictions/choices, reasons for avoiding foods, and gastrointestinal symptoms in endurance runners. Validity testing occurred through four Registered Dietitians, three of whom possess Master's degrees, and a dietetic trainee who provided initial feedback. Additionally, one Registered Dietitian is a Board Certified Specialist in Sports Dietetics (CSSD), and another has an International Olympic Committee Diploma in Sports Nutrition. The second version was sent out to nine different experts who rated each question using a Likert scale and provided additional comments. For reliability testing, the questionnaire was administered to 39 participants in a test re-test format. Kappa statistics and the prevalence-adjusted bias-adjusted kappa (PABAK) were used to assess the reliability. All questions had an average Likert scale rating of 4/5 or greater. All test re-test results falling under basic information exhibited substantial agreement (kappa ≥0.61). All medical questions including food allergies and intolerances had moderate (kappa ≥0.41) or higher agreement. Responses were less consistent for food avoidances while training (5/28 outcomes below a kappa of 0.41) versus racing (0/28 outcomes). All reasons for avoiding foods were deemed reliable. Regarding symptoms, side stitch while training and gas while racing were the only flagged categories. Overall, the questionnaire is a valid and reliable tool to evaluate voluntary dietary restrictions among endurance runners. Future studies can use the questionnaire to assess dietary strategies runners employ to reduce GI distress and optimize performance.
An athlete's nutritional preparation prior to exercise plays a key role in optimizing performance, yet the details of this preparation remain largely unstudied. The amount of carbohydrate required pre-exercise has been extensively researched [1, 2]; however, information on the optimal foods to meet these requirements is lacking, as are recommendations regarding the amounts of other macronutrients. Pre-exercise nutrition should consider a multitude of factors including nutrient composition, the potential to promote gastrointestinal issues, and digestibility. Food and fluid intakes during exercise have been studied; however, less is known regarding food intolerances and preferences in pre-exercise nutrition. An estimated 30–90% of distance runners experience gastrointestinal (GI) symptoms while running, which has been found, anecdotally, to be an underlying cause of underperformance [3]. Commonly reported symptoms during exercise include flatulence, belching, diarrhea, urge to defecate, epigastric pain, reflux/heartburn, abdominal cramping, nausea, vomiting, and fecal blood loss [3,4,5]. The underlying factors promoting GI symptoms are believed to result from physiological, mechanical, psychological, and nutritional interactions [3, 5, 6].
GI symptoms are commonly observed in endurance athletes and are affected by the intensity and type of sport [5]. Proposed physiological causes are linked to mechanical irritation and reduction of splanchnic blood flow during exercise [7, 8]. Reduced blood flow can lead to gastrointestinal ischemia resulting in increased permeability, bacterial translocation and inflammation; ultimately presenting as increased GI distress in the athlete [9]. During exercise, gastric emptying is slowed and orocaecal transit time increases. Furthermore, there is evidence of nutrient malabsorption, and one or both of these effects may aggravate GI symptoms [10]. Environmental conditions also play a role, as symptoms are increased in warmer (30 °C) as opposed to temperate (22 °C) conditions [11]. Consequently, nutritional strategies are needed to moderate changes in gut physiology occurring with exercise, especially among runners. Many endurance athletes believe the consumption or avoidance of specific foods and/or fluids prior to exercise can reduce GI distress and optimize performance. For example, it was reported that 41% of non-celiac athletes followed a gluten-free diet at least 50% of the time to reduce GI symptoms during training/competition [12]. The same authors also found that a low fermentable oligosaccharides, disaccharides, monosaccharides and polyols (FODMAP) diet reduced GI symptoms in athletes [13]. Moreover, it has been suggested that foods high in fiber, fat, and fructose can trigger GI symptoms [14, 15]. Despite the importance of nutrition as it relates to exercise-induced GI symptoms, the current position paper on Nutrition and Athletic Performance does not make recommendations regarding specific foods/food groups in the pre-exercise period. General guidelines to avoid foods high in fat, fiber, and protein are provided with little specificity. The position paper recommends that athletes determine their own food intolerances and stick to a diet that optimizes performance [2]. An evaluation of the dietary restrictions endurance runners have developed by personal trial and error will provide an entry point into the investigation of foods/food groups that optimize performance while minimizing exercise-induced GI symptoms. The objective of the study was to develop a valid and reliable questionnaire to assess pre-training/pre-racing voluntary food restrictions, food choices, reasons for avoiding foods, and GI symptoms in endurance runners. The questionnaire will become a valuable tool for researchers to identify pre-exercise nutritional strategies used by endurance runners.
The study investigators, namely two Registered Dietitians with a Certified Specialist in Sports Dietetics (CSSD) credential and Master's degrees (one of whom was an Olympic cyclist and the other a competitive distance runner) and an academic (PhD) with sport nutrition expertise, developed a draft version of the questionnaire that included basic demographics, running experience and events, medical information, food allergies and intolerances, foods avoided/chosen prior to endurance running, GI symptoms experienced, and reasons for avoiding foods before running. Questions regarding sources of nutrition information were also included. The responses were provided by checking boxes or ranking, with the exception of foods chosen, which were open-ended questions. For the content validity testing, the draft version was sent to five experts in the field: four Registered Dietitians with their Master's degree and one dietetic intern.
Additionally, one Registered Dietitian is a CSSD, and another has an International Olympic Committee Diploma in Sports Nutrition. All experts provided written feedback, which was incorporated into the development of the second draft. The second draft was sent out to three different academics with doctorate degrees in nutrition, one Registered Dietitian, and five coaches, all of whom include running in their training programs. Two of the academics have extensive research experience in sports nutrition and one in the development of nutrition questionnaires. The Registered Dietitian specializes in gastrointestinal disorders. These experts provided written feedback and rated each question using a Likert rating scale with 1 = unacceptable, 3 = acceptable, and 5 = highly acceptable. Further amendments were made based on their comments to obtain a final draft. A copy of the questionnaire is available [16] and included as a supplemental file (see Additional file 1). The questionnaire was administered to endurance runners who were 18 years of age or older. It was estimated that thirty-one participants were required for the test re-test based on the null hypothesis of kappa equal to 0.4, true kappa of 0.9, a proportion of positive ratings of 30%, two-tailed significance value of 0.05, and power of 80% [17]. The athletes were recruited from running groups upon approval from the organizers. The Mount Royal University Human Research Ethics Board approved the study (ethics ID 2016–38). All participants provided voluntary, written, informed consent.
Test re-test protocol
Reliability was determined using the test re-test method. Participants completed the questionnaire twice, with a minimum of one week and a maximum of one month between the initial test and subsequent re-test. The purpose of the test re-test procedure was to investigate the reliability of the questions based on the agreement of participants' responses. The kappa statistic, using Cohen's method, was calculated for all categorical questions [18]. Questions where the participants were asked to rank their top sources of information and preferred sources of information were coded as "yes" or "no" responses for the kappa calculation. Age was evaluated using a Pearson correlation coefficient, as it is a continuous variable. Kappa is a measure of true agreement: it measures the proportion of agreement achieved beyond that expected by chance [19]. The range of possible kappa values is from −1 to 1, usually falling between 0 and 1. One represents 100% agreement, while 0 indicates that agreement is no better than that expected by chance. A negative kappa value indicates that the agreement is worse than that expected by chance [19]. When interpreting kappa values, 0.01–0.20 = slight agreement, 0.21–0.40 = fair, 0.41–0.60 = moderate, 0.61–0.80 = substantial, and 0.81–0.99 = almost perfect. The kappa value is determined using the observed agreement and the expected agreement [19]: $$ \kappa =\frac{\text{Observed agreement}-\text{Expected agreement}}{1-\text{Expected agreement}} $$ Prevalence and bias play a role in the determination of the kappa value; therefore, kappa can be adjusted to account for high or low prevalence. According to Sim & Wright [17], the adjusted kappa is referred to as the prevalence-adjusted bias-adjusted kappa (PABAK) and can be calculated as follows [20].
$$ \mathrm{PABAK}=\left(2\times \text{Observed proportional agreement}\right)-1 $$ The unadjusted kappa and the adjusted kappa (PABAK) values were calculated because the response prevalence for several items was skewed; thus, the unadjusted kappa values were not indicative of the true reliability of the question. For example, the unadjusted kappa value is zero when there is 100% agreement but only responses from one category (e.g. all "no" responses for celiac disease); the PABAK adjusts for the low prevalence in one response category and high prevalence in the other response category and presents a value of 1, indicating 100% agreement. We considered the PABAK value when the prevalence index was 0.8 or greater (i.e. when 80% or more of the sample responded in the same direction) or the bias index was greater than 0.15. The same cut-offs were used for the PABAK assessment as for the unadjusted kappa [19]. All statistical tests were conducted using SPSS statistical software version 23 (IBM, Armonk, New York, USA) and Stata/SE version 15 (StataCorp LLC, College Station, TX, USA).
For the validity testing, all questions had an average Likert scale rating of 4/5 or greater. With respect to reliability, thirty-nine participants (37% male) completed the initial and re-test questionnaire. The questionnaire took approximately 10 min to complete. The mean (SD) age of the group was 45 (14) years. The participants represented a range of performance levels, running experience, and race distances (Table 1). With respect to medical conditions, two reported inflammatory bowel disease (IBD), two reported irritable bowel syndrome (IBS), six reported heartburn, and one reported a hiatus hernia.
Table 1 Participant characteristics
Assessment of reliability for all test re-test results falling under demographic and running experience (gender, performance level, running hours per week, years running, and competition distance) exhibited kappa values above 0.61, demonstrating substantial agreement. Age had 100% agreement (r = 1.0). Test re-test results for medical information are presented in Table 2. All questions had a moderate agreement or greater.
Table 2 Test re-test results for medical information
Questions surrounding dietary restrictions, reasons for avoiding foods and symptoms while training are presented in Table 3. When asked about foods that were avoided pre-training, there were twelve flagged categories upon initial assessment. Importantly, however, gluten-free grains, water, hot cereal, nuts, fruit, almond milk, and coconut milk were all deemed reliable when the PABAK criteria were considered. All reasons for avoiding foods while training had at minimum a moderate agreement. With respect to symptoms experienced while training, only side ache/stitch had poor reliability with a kappa of 0.37.
Table 3 Dietary restrictions, reasons for avoiding foods and gastrointestinal symptoms while training
Results for dietary restrictions, reasons for avoiding foods, and symptoms experienced while racing are found in Table 4. All foods had substantial reliability, with the exception of chocolate and starchy vegetables, which had moderate reliability. All reasons for avoiding foods pre-racing had a kappa value of 0.67 or greater. Gas was the only symptom experienced while racing that did not meet the moderate threshold with a kappa of 0.37.
Table 4 Dietary restrictions, reasons for avoiding foods and gastrointestinal symptoms while racing
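The agreement statistics reported in these tables are straightforward to reproduce. The following is a minimal Python sketch of Cohen's kappa, the PABAK, and the prevalence and bias indices for a single dichotomous questionnaire item, following the statistical analysis described above; it is our own illustration, and the example responses at the end are hypothetical rather than taken from the study data.

```python
def agreement_statistics(test, retest):
    """Cohen's kappa, PABAK, prevalence index, and bias index for one
    dichotomous (yes/no) item answered twice by the same participants.

    `test` and `retest` are equal-length lists of booleans; entry i holds
    participant i's answer on the first and second administration."""
    n = len(test)
    a = sum(1 for t, r in zip(test, retest) if t and r)          # yes/yes
    b = sum(1 for t, r in zip(test, retest) if t and not r)      # yes/no
    c = sum(1 for t, r in zip(test, retest) if not t and r)      # no/yes
    d = sum(1 for t, r in zip(test, retest) if not t and not r)  # no/no

    p_observed = (a + d) / n
    # Chance agreement from the marginal yes/no proportions of both administrations.
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2

    # Kappa is reported as 0 when all answers fall in one category (cf. the
    # celiac disease example above), since the usual quotient is then undefined.
    kappa = (p_observed - p_expected) / (1 - p_expected) if p_expected != 1 else 0.0
    pabak = 2 * p_observed - 1          # prevalence- and bias-adjusted kappa
    prevalence_index = abs(a - d) / n   # PABAK is considered when this is >= 0.8 ...
    bias_index = abs(b - c) / n         # ... or when this exceeds 0.15
    return kappa, pabak, prevalence_index, bias_index


# Hypothetical item: only a few of 20 runners avoid the food, giving a skewed prevalence.
test = [True] * 3 + [False] * 17
retest = [True] * 2 + [False] * 18
print(agreement_statistics(test, retest))
```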
Questions regarding current sources of information and preferred sources of information asked the participants to rank their top five or top three options, respectively. Out of the seventeen sources of information listed, athletes, teammates, and physicians had a kappa below 0.41, indicating poor agreement. When asked if they had attended a workshop on nutrition, the kappa value was 0.83, and the kappa for the response regarding the importance of receiving information was 0.62. All preferred means of receiving information had at least moderate agreement with the exception of websites (kappa 0.39).
The pathophysiology of GI distress experienced by endurance athletes is of a heterogeneous nature. Although there are proposed hypotheses, including the mechanical nature of the exercise and physiological changes, the underlying causes remain poorly understood [3]. Clearly, however, nutrition has a key role in minimizing exercise-induced GI symptoms. In this context, it is important to explore voluntary pre-exercise food/fluid restrictions endurance athletes are using to mitigate GI symptoms. The objective of this study was to develop a questionnaire to evaluate food avoidances and choices used by endurance runners to minimize exercise-induced GI symptoms and then test it for validity and reliability. The present questionnaire can be deemed valid, as it underwent two rounds of content validity testing by a combination of nutrition academics, Registered Dietitians, and coaches. The inclusion of the Likert scale rating allows for quantification of the validity. Reliability testing was conducted using the test re-test method with 39 participants. Categories with a kappa statistic below moderate agreement (kappa < 0.41) were flagged as having low reliability. According to Lantz and Nebenzahl [21], the relevance of kappa values must take into consideration the issue of prevalence. The symmetrical distribution of agreement, reflected by kappa values, may be skewed in the presence of unbalanced prevalence. For instance, if a research design is investigating a particular trait, yet the majority of the population is without this trait, the agreement will be largely skewed due to the low prevalence. Although a balanced prevalence nullifies this effect, it is not always possible to incorporate into the research design [21]. As this study spanned a broad range of categories in order to determine specific food/fluid restrictions, the issue of low prevalence was expected. Bias refers to how much the raters disagree on what proportion of the cases are positive or negative. Kappa is higher when there is a large bias than when the bias is small [17]. The adjusted kappa (PABAK: prevalence-adjusted bias-adjusted kappa) was used to control for extreme prevalence and/or bias. All test re-test results falling under basic information exhibited substantial agreement and were not of concern. Test re-test results for medical information showed that milk intolerance had one of the lowest agreements. There is a common misunderstanding, among the general public and even some health care providers, regarding the difference between cow's milk protein allergy and lactose intolerance [22]. Lactose intolerance is characterized by a deficiency in the lactase enzyme, leaving undigested lactose in the GI tract and resulting in distress.
Conversely, a cow's milk protein allergy is characterized by an immunological response when these proteins are ingested. According to Baron [22], many people misinterpret the signs and symptoms of lactose maldigestion as an allergy. Further complicating the issue, not all people with lactase non-persistence will experience intolerance symptoms, there is a dose effect, and symptoms can be related to other digestive disorders [23]. Results for dietary restrictions pre-training exhibited five items with low reliability: grains, sports bar/gel, starchy vegetables, high fiber foods, and soy milk. The stem of the question was "When TRAINING are there any foods/fluids that you purposely AVOID in your pre-run MEAL or SNACK (0-4 hours before running TRAINING)? Please check all that apply". The remaining 23 items had good reliability, suggesting the inconsistency is due to the specific food category, not the wording of the question. Interestingly, although the question wording was similar pre-racing ("When RACING are there any types of food/fluid that you AVOID in your pre-race MEAL or SNACK (0-4 hours before running RACES/COMPETITIONS)? Please check all that apply") and the food choices were identical, all race options had at least moderate agreement and most had substantial agreement. Studies have shown that pre-performance routines have a direct influence on an athlete's mental and technical performance. Athletes are often consistent with their pre-competition routines in order to optimize performance; however, they may be more flexible when it comes to training, given its lower importance [24]. Logically, it is not surprising that the pre-training questions exhibited more flagged categories than the pre-racing questions. The disagreement may be due to the participants' tendency to be more lenient and flexible in their pre-exercise nutrition while training as compared to competing. Poor agreement was observed in training and proportionally lower agreement pre-racing for "avoiding starchy vegetables", suggesting confusion with respect to this food category. Starchy vegetables (such as potatoes, corn, and peas) have a higher amount of carbohydrates and fiber in comparison to non-starchy vegetables, thereby affecting their digestion. It would be important to consider providing the participants with more examples of starchy vegetables to increase clarity. Given the recent interest in the impact of FODMAP diets on GI distress [13], one could consider categorizing the fruits and vegetables in this respect; however, it is unlikely that the general population would know the FODMAP classification of a food. Grains pre-training also had poor agreement and may reflect confusion regarding gluten-free versus gluten-containing grains. Reasons for avoiding foods pre-training and pre-racing were reliable, and these questions will capture the athletes' thoughts regarding why they choose to restrict certain foods pre-exercise. The questionnaire was also designed to assess the symptoms that runners might experience while training or racing should they consume a food that they would typically avoid. Responses were consistent with the exceptions of side ache while training and gas while racing. It is possible that, as some of the symptoms are similar (e.g. gas and bloating), the participants are not able to distinguish between these categories, thus creating confusion. A consideration would be to group these categories in the analysis. As with food avoidances, the results were often more consistent with respect to racing versus training.
The difference could be due to the intensity of exercise, as it is reported that symptoms increase with increasing exercise intensity [3], suggesting athletes have a heightened awareness while competing. Additionally, if the athletes were more consistent in their pre-racing diet, it would follow that their symptoms would be more consistent while racing. A secondary objective was to assess sources of dietary advice and potential sources of information. The questions asked participants to rank a top number of options from a selection. In general, the reliability of these questions was at least moderate. The questions should, however, be reworded to ask the participants to "check all that apply" rather than rank, given that they were analyzed as "yes" or "no". Furthermore, this wording aligns with the wording in the other questions. The questionnaire is limited in that it does not ask the participant to indicate the reason for avoiding each food, simply their overall reasons. Although this information would be of interest, with 27 food options plus the open-ended "other" and five reason options plus the open-ended "other", it would have made the questionnaire too cumbersome. Conversely, the questionnaire assesses all food avoidances and all reasons for avoiding foods; thus, it can indicate reasons for avoiding foods in general. The test re-test would also have benefited from a larger sample size, especially for the questions with a low prevalence; however, to be transparent about the precision of reliability estimates based on the small sample size, the 95% confidence intervals and % observed agreement were provided. Finally, the test re-tests typically occurred two weeks apart; therefore, the questionnaire cannot be considered to provide an indication of the reliability of the responses over a longer timeframe and should be viewed as a cross-sectional tool.
The questionnaire is a valid and reliable tool to assess pre-training and pre-racing nutrition, as it relates to exercise-induced GI symptoms. Future research should focus on administering the questionnaire to runners in a fully powered study. Furthermore, the questionnaire can easily be adapted to other endurance sports and demographics. The information gained from administering this questionnaire will provide the foundation for the development of evidence-guided recommendations to optimize performance in endurance runners.
Abbreviations: BI: Bias index; GI: Gastrointestinal; IBD: Inflammatory bowel disease; IBS: Irritable bowel syndrome; PABAK: Prevalence-adjusted bias-adjusted kappa; PI: Prevalence index
References:
1. Burke LM, Hawley JA, Wong SHS, Jeukendrup AE. Carbohydrates for training and competition. J Sports Sci. 2011;29(Suppl 1):S17–27.
2. Thomas DT, Erdman KA, Burke LM. Position of the Academy of Nutrition and Dietetics, Dietitians of Canada, and the American College of Sports Medicine: nutrition and athletic performance. J Acad Nutr Diet. 2016;116:501–28.
3. de Oliveira EP, Burini RC, Jeukendrup A. Gastrointestinal complaints during exercise: prevalence, etiology, and nutritional recommendations. Sports Med. 2014;44(Suppl 1):S79–S85.
4. Stuempfle KJ, Hoffman MD. Gastrointestinal distress is common during a 161-km ultramarathon. J Sports Sci. 2015;33:1814–21.
5. Waterman JJ, Kapur R. Upper gastrointestinal issues in athletes. Curr Sports Med Rep. 2012;11:99–104.
6. Wilson PB. Perceived life stress and anxiety correlate with chronic gastrointestinal symptoms in runners. J Sports Sci. 2018;36:1713–9.
7. de Oliveira EP, Burini RC. The impact of physical exercise on the gastrointestinal tract. Curr Opin Clin Nutr Metab Care. 2009;12:533–8.
8. Rehrer NJ, Meijer GA. Biomechanical vibration of the abdominal region during running and bicycling. J Sport Med Phys Fit. 1991;31:231–4.
9. van Wijck K, Lenaerts K, Grootjans J, Wijnands KAP, Poeze M, van Loon LJ, Dejon CH, Buurman WA. Physiology and pathophysiology of splanchnic hypoperfusion and intestinal injury during exercise: strategies for evaluation and prevention. Am J Physiol Gastrointest Liver Physiol. 2012;303:G155–68.
10. Costa RJS, Snipe RMJ, Kitic CM, Gibson PR. Systematic review: exercise-induced gastrointestinal syndrome—implications for health and intestinal disease. Aliment Pharmacol Ther. 2017;46:246–65.
11. Snipe RMJ, Khoo A, Kitic CM, Gibson PR, Costa RJS. The impact of mild heat stress during prolonged running on gastrointestinal integrity, gastrointestinal symptoms, systemic endotoxin and cytokine profiles. Int J Sports Med. 2018;39:255–63.
12. Lis D, Stellingwerff T, Shing CM, Ahuja KDK, Fell JW. Exploring the popularity, experiences, and beliefs surrounding gluten-free diets in nonceliac athletes. Int J Sport Nutr Exerc Metab. 2015;25:37–45.
13. Lis DM, Stellingwerff T, Kitic CM, Fell JW, Ahuja KDK. Low FODMAP: a preliminary strategy to reduce gastrointestinal distress in athletes. Med Sci Sports Exerc. 2018;50:116–23.
14. de Oliveira EP, Burini RC. Carbohydrate-dependent, exercise-induced gastrointestinal distress. Nutrients. 2014;6:4191–9.
15. Lis D, Ahuja KDK, Stellingwerff T, Kitic CM, Fell J. Case study: utilizing a low FODMAP diet to combat exercise-induced gastrointestinal symptoms. Int J Sport Nutr Exerc Metab. 2016;26:481–7.
16. Parnell JA, Erdman KA, Wagner-Jones K. Food restriction in running questionnaire. 2018. https://drive.google.com/open?id=1uePlzVW_wPyC4PAgx1AMxdDLWS92L0dS. Accessed 6 February 2019.
17. Sim J, Wright CC. The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Phys Ther. 2005;85:257–68.
18. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–46.
19. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37:360–3.
20. Chen G, Faris P, Hemmelgarn B, Walker RL, Quan H. Measuring agreement of administrative data with chart data using prevalence unadjusted and adjusted kappa. BMC Med Res Methodol. 2009;9:1–8.
21. Lantz CA, Nebenzahl E. Behavior and interpretation of the K statistic: resolution of the two paradoxes. J Clin Epidemiol. 1996;49:431–4.
22. Baron ML. Assisting families in making appropriate feeding choices: cow's milk protein allergy versus lactose intolerance. Pediatr Nurs. 2000;26:516–20.
23. Deng Y, Misselwitz B, Dai N, Fox M. Lactose intolerance in adults: biological mechanism and dietary management. Nutrients. 2015;7:8020–35.
24. Dömötör Z, Ruíz-Barquín R, Szabo A. Superstitious behavior in sport: a literature review. Scand J Psychol. 2016;57:368–82.
The authors would like to thank Jodi Siever for her assistance with the statistical analysis. A Mount Royal University Innovation Grant provided funding for the project. Mount Royal University did not have a role in the design of the study, collection, analysis, interpretation of the data or manuscript preparation. The dataset used during the study is available from the corresponding author on reasonable request. The questionnaire is available as a supplemental file (see Additional file 1).
Author affiliations: Jill A. Parnell, Department of Health and Physical Education, Mount Royal University, 4825 Mount Royal Gate SW, Calgary, Alberta, T3E 6K6, Canada. Hailey Lafave, Department of Biology, Mount Royal University, 4825 Mount Royal Gate SW, Calgary, Alberta, T3E 6K6, Canada. Kim Wagner–Jones, Helios Wellness Centres, Teaching, Research, Wellness Building, Suite 402, 3280 Hospital Drive NW, Calgary, Alberta, T2N 4Z6, Canada. Robyn F. Madden, Faculty of Kinesiology, University of Calgary, 2500 University Drive NW, Calgary, Alberta, T2N 1N4, Canada. Kelly Anne Erdman, Sport Medicine Centre, University of Calgary, 2500 University Drive NW, Calgary, Alberta, T2N 1N4, Canada.
The study was designed by JAP, KAE, and KWJ. All authors contributed to the data collection and entry. Data were analyzed by JAP. Data interpretation and manuscript preparation were undertaken by HL and JAP. All authors read and approved the final manuscript. Correspondence to Jill A. Parnell. The Mount Royal University Human Research Ethics Board approved the study (ethics ID 2016–38). All participants provided voluntary, written, informed consent.
Additional file 1: Food Restriction in Running Questionnaire. (PDF 116 kb)
Keywords: Exercise-induced gastrointestinal symptoms; Pre-exercise nutrition; Reliability and validity; Endurance running
All modular forms of weight 2 can be expressed by Eisenstein series Martin Raum & Jiacheng Xia Research in Number Theory volume 6, Article number: 32 (2020)
We show that every elliptic modular form of integral weight greater than 1 can be expressed as a linear combination of products of at most two cusp expansions of Eisenstein series. This removes the obstruction of nonvanishing central \(\mathrm{L}\)-values present in all previous work. For weights greater than 2, we refine our result further, showing that linear combinations of products of exactly two cusp expansions of Eisenstein series suffice.
Kohnen–Zagier proved in their work on periods of modular forms [18] that every modular form of level 1 can be written as a linear combination of products of at most two Eisenstein series. Their insight provides a precise connection between the resulting expressions for cuspidal Hecke eigenforms and the special values of the associated \({\mathrm {L}}\)-functions. This connection also appeared in subsequent work by Borisov–Gunnells [4,5,6], who investigated specific modular forms associated with toric varieties, Kohnen–Martin [17], the first named author [23], and Dickson–Neururer [11], who investigated the case of higher levels. The nonvanishing of specific \({\mathrm {L}}\)-values was crucial in all cases. For levels that are square-free away from at most two primes, Dickson–Neururer obtain a characterization of weight 2 newforms that can be expressed as a linear combination of products of at most two Eisenstein series for the congruence subgroup \(\Gamma _1(N)\). These are exactly those newforms whose central \({\mathrm {L}}\)-values do not vanish. In particular, results for newforms of weight 2 whose central \({\mathrm {L}}\)-values vanish are not included in any of the cited papers. The condition on the central \({\mathrm {L}}\)-value for weight 2 newforms is a severe restriction in light of the Birch–Swinnerton–Dyer Conjecture, which relates it to the rank of the Mordell–Weil groups of elliptic curves. For instance, if a newform f of weight 2 has rational Fourier coefficients and negative Atkin–Lehner eigenvalue, it corresponds to an elliptic curve over \({{{\mathbb {Q}}}}\) with Mordell–Weil rank at least 1 by work of Gross–Zagier [14]; see [2, 3, 13] for a discussion of and results on distributions of ranks of elliptic curves. However, the case of vanishing central \({\mathrm {L}}\)-values of weight 2 newforms is excluded from all available statements on products of Eisenstein series. In the present paper we close this gap; see Eq. (1.2) in Theorem 1 and compare with the previously available assertion in Eq. (1.1). Given positive integers k and N, we denote by \({{{\mathcal {E}}}}_k(N)_\infty \) the space of functions spanned by Fourier expansions at \(\infty \) of all Eisenstein series of weight k and level N, i.e., for \({\Gamma }_1(N)\). The space of Fourier expansions at any cusp of all Eisenstein series of weight k and level N is denoted by \({{{\mathcal {E}}}}_k(N)\). As opposed to \({{{\mathcal {E}}}}_k(N)_\infty \), it contains Fourier expansions that feature fractional exponents. Write \({\mathrm {M}}_k({\Gamma })\) for the space of weight k modular forms for a group \({\Gamma }\subseteq {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) and \({\mathrm {S}}^{\mathrm {new}}_k({\Gamma })\) for the subspace of newforms.
The results of Dickson–Neururer, which hold if N is the product of two prime powers and a square-free integer, can be formulated as follows: $$\begin{aligned}&\mathrm{M}_k({\Gamma }_0(N)) \subseteq {{{\mathcal {E}}}}_{k}(N)_\infty \,+\, \sum _{l =1}^{k-1} {{{\mathcal {E}}}}_{k-l}(N)_\infty \,\cdot \, {{{\mathcal {E}}}}_l(N)_\infty ,\quad \text {if }k > 2;\nonumber \\&\quad {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\big \{\, f \in {\mathrm {S}}^{\mathrm {new}}_2({\Gamma }_0(N)) \,:\, {\mathrm {L}}(f,1) \ne 0 \,\big \} \subseteq {{{\mathcal {E}}}}_2(N)_\infty \,+\, {{{\mathcal {E}}}}_1(N)_\infty \,\cdot \, {{{\mathcal {E}}}}_1(N)_\infty . \end{aligned}$$ The main theorem of the present paper improves significantly on the second statement and drops completely the condition on N. It also provides a variant of the first statement by suppressing the sum over weights l, again without any condition on N. One novel aspect of our main theorem is that we can omit Eisenstein series \({{{\mathcal {E}}}}_{k+l}(N)\) from the right hand side of Eq. (1.3), which holds for \(k + l \ge 3\). Another one is that both k and l are fixed in Theorem 1. Theorem 1 Let k, l, and N be positive integers. Then there is a positive integer \(N_0\) such that $$\begin{aligned} {\mathrm {M}}_{k+l}({\Gamma }(N)) \;\subseteq \; {{{\mathcal {E}}}}_{k+l}(N) \,+\, {{{\mathcal {E}}}}_k(N_0) \,\cdot \, {{{\mathcal {E}}}}_l(N_0). \end{aligned}$$ Moreover, if \(k + l \ge 3\), then a suitable \(N_0\) is explicitly specified in Theorem 4.4 on p. 9, and there is a positive integer \(N_1\)—specified explicitly in Theorem 5.2 on p. 12—such that $$\begin{aligned} {\mathrm {M}}_{k+l}({\Gamma }(N)) \;\subseteq \; {{{\mathcal {E}}}}_k({\mathrm {lcm}}(N_0,N N_1)) \,\cdot \, {{{\mathcal {E}}}}_l({\mathrm {lcm}}(N_0,N_1)) . \end{aligned}$$ Theorem 1 is a consequence of Theorems 4.4 and 5.2 in conjunction with Section 3.1, which revisits the connection between vector-valued and classical modular forms. Besides the case of principal congruence subgroups, Theorem 4.4 and Theorem 5.2 also cover the cases of modular forms for \({\Gamma }_1(N)\), for \({\Gamma }_0(N)\), and for Dirichlet characters \(\chi \), and, most generally, of vector-valued modular forms. Kamal Khuri–Makdisi informed us that similar results can be inferred from his work on the moduli interpretation of Eisenstein series [16]. Theorems 4.4 and 5.2 contain precise statements about which subspaces of the right hand sides of (1.2) and (1.3) equal which modular forms. For example, the space \({\mathrm {M}}_{k+l}(\chi )\) of modular forms for a Dirichlet character \(\chi \) modulo N equals the following space of \({\Gamma }_0(N)\)-invariants: $$\begin{aligned} {\mathrm {M}}_{k+l}(\chi ) \;=\; \Big ( \big ( {{{\mathcal {E}}}}_{k+l}(N) \,+\, {{{\mathcal {E}}}}_k(N_0) \,\cdot \, {{{\mathcal {E}}}}_l(N_0) \big ) \otimes \chi \Big )^{{\Gamma }_0(N)} , \end{aligned}$$ where \(\chi \) stands for the \({\Gamma }_0(N)\) right representation \(\left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) {\mapsto }\overline{\chi }(d)\) and \({\Gamma }_0(N)\) acts on the spaces \({{{\mathcal {E}}}}_{k+l}(N)\), \({{{\mathcal {E}}}}_k(N_0)\), and \({{{\mathcal {E}}}}_l(N_0)\) from the right via the usual slash actions \(|_{k+l}\), \(|_k\), and \(|_l\). An explicit bound for \(N_0\) in the case of \(k + l = 2\) could be obtained from an effective bound on gaps in the Fourier expansion of weight \(\frac{3}{2}\) modular forms. 
Computer experiments for small N suggest that the second part of Theorem 1 also holds true if \(k + l = 2\). The first named author suggested in [23] that a statement like the one in (1.2), expressing modular forms in terms of Eisenstein series, can be employed to compute cusp expansions of modular forms of levels that are not square-free. Observe that algorithms rooted in modular symbols, which currently are the primary methods to compute elliptic modular forms, only reveal Fourier expansions at cusps mapped to \(\infty \) by Atkin-Lehner involutions. If the level is not square-free, this is a proper subset of the cusps. Cohen has implemented products of Eisenstein series in Pari/GP [1, 9], and indeed uses them to compute cusp expansions. He built on the results of Borisov-Gunnells, who restricted themselves to weights greater than 2. Theorem 1 in this paper allows us to perform a similar computation of Fourier expansions of weight 2 modular forms. More precisely, since the action of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) on the right hand side of (1.2) and (1.3) is known explicitly, Theorem 1 makes it possible to determine Fourier expansions of modular forms at all cusps. Tobias Magnusson and the first named author are preparing an implementation of this. We now explain the three key differences of the present paper compared to previous work [4,5,6, 11, 17, 18]. The first key difference is the appearance of \({{{\mathcal {E}}}}_k(N)\) as opposed to \({{{\mathcal {E}}}}_k(N)_\infty \). A less general version of this was already used in [23]. Indeed, the vector-valued Hecke operator \({\mathrm {T}}_M\) in [23] produces from the Eisenstein series \(E_k\) of level 1 the expansions at all cusps of the associated oldform \(E_k(M \,\cdot \,)\). The space spanned by the cusp expansion of a modular form at infinity, in general, does not carry an action of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\), but the space spanned by cusp expansions at all cusps does. Passing from \({{{\mathcal {E}}}}_k(N)_\infty \) to \({{{\mathcal {E}}}}_k(N)\) allows us to employ representation theoretic machinery and the theory of vector-valued Hecke operators developed in [23]. The second key difference is that both k and l are fixed in Theorem 1, while l must run in (1.1). This directly impacts the strategy of proof, since the varying weight l gives access to almost the complete period polynomial as opposed to a single special \({\mathrm {L}}\)-value. In the case of weight 2 modular forms, however, the approach of [18, 21] merely reveals parts of the period polynomial, excluding the central \({\mathrm {L}}\)-value. Given a modular form, the vanishing of its central \({\mathrm {L}}\)-value is not strong enough to imply the vanishing of the modular form. Among the innovations of [23] was to fix the weights of Eisenstein series, but vary their levels. This yields a relation to the nonvanishing problem for families of special \({\mathrm {L}}\)-values, which can also be solved for weight 2 modular forms. The third key difference is that the right hand side of (1.3) displays only products of two Eisenstein series, omitting the additional space of weight \((k+l)\) Eisenstein series. This yields a statement about the constant terms of products of Eisenstein series. If k and l are greater than 2, it can be derived without difficulty by multiplying suitable Eisenstein series of level N. The cases of \(k \le 2\) or \(l \le 2\), however, require a more detailed analysis. We are leaving one open end in this context.
The case of \(k = l = 1\) hinges on a precise understanding of tensor products of certain Weil representations, which we were not able to obtain here. The proof of (1.2) in Theorem 4.4 extends ideas in [23]. In particular, we have refined the argument at some places in order to obtain the explicit bound \(N_0\) for the level of Eisenstein series that appears on the right hand side of (1.2). Our approach is based on a combination of the theory of vector-valued Hecke operators [23] and the Rankin–Selberg method [18, 21]. The proof of (1.3) in Theorem 5.2 builds up on the statement of Theorem 4.4. The methods that we employ are quite different, however. Specifically, we examine the spaces \({{{\mathcal {E}}}}_{k}(N)\), \({{{\mathcal {E}}}}_{l}(N)\), and \({{{\mathcal {E}}}}_{k+l}(N)\) as \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\)-representations. Their subspaces of vectors fixed by the action of \(T = \left( {\begin{matrix} 1 &{} 1 \\ 0 &{} 1 \end{matrix}}\right) \) are related to the spaces spanned by constant terms of Eisenstein series. Then Theorem 5.2 follows from an argument from representation theory. We write \({{\mathbb {H}}}\) for the Poincaré upper half plane, which carries an action of \({\mathrm {SL}_{2}}({{{\mathbb {R}}}})\) by Möbius transformations. We fix the notation \(T = \left( {\begin{matrix} 1 &{} 1 \\ 0 &{} 1 \end{matrix}}\right) \in {\mathrm {SL}_{2}}({{{\mathbb {R}}}})\) for the transformation acting on \({{\mathbb {H}}}\) as a translation by 1. We write \({\Gamma }_\infty ^+ \subset {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) for the subgroup generated by T. Arithmetic types An arithmetic type is a finite dimensional, complex representation of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\). The representation space of an arithmetic type \(\rho \) is denoted by \(V(\rho )\). We call an arithmetic type a congruence type if its kernel is a congruence subgroup. The level of a congruence type is the level of its kernel. We record that all congruence types \(\rho \) are unitarizable, i.e., the representation space \(V(\rho )\) admits an \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\)-invariant scalar product. The trivial, one-dimensional, complex representation of a subgroup \({\Gamma }\subseteq {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) will be denoted by \({\mathbb {1}}_{\Gamma }\). Usually, \({\Gamma }\) is clear from the context and we abbreviate \({\mathbb {1}}_{\Gamma }\) by \({\mathbb {1}}\). The induction of arithmetic types is explained in detail in [23] using a choice of representatives. We set $$\begin{aligned} \rho ^\times _N \;:=\; {\mathrm {Ind}}_{{\Gamma }_1(N)}^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})}\, {\mathbb {1}}. \end{aligned}$$ Recall from, for example, [8] that $$\begin{aligned} \rho ^\times _N \;\cong \; \bigoplus _\chi {\mathrm {Ind}}_{{\Gamma }_0(N)}^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})}\,\chi , \end{aligned}$$ where \(\chi \) runs through Dirichlet characters mod N considered as representations of \({\Gamma }_0(N)\) via the assignment \(\left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) {\mapsto }\chi (d)\). We write \(\rho _\chi \) for its induction to \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\). Observe that by Frobenius reciprocity, we have $$\begin{aligned} {\mathop {\mathrm {Hom}}}_{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} \big ( {\mathbb {1}}, \rho ^\times _N \big ) \;\cong \; {\mathop {\mathrm {Hom}}}_{{\Gamma }_{1}(N)} ( {\mathbb {1}}, {\mathbb {1}}) , \end{aligned}$$ which is one-dimensional. 
In particular, \(\rho ^\times _N\) contains a unique copy of the trivial representation. We write $$\begin{aligned} \rho ^\times _N \ominus {\mathbb {1}}\end{aligned}$$ for its orthogonal complement. For this purpose, we choose any unitary structure of the representation \(\rho \), since the result is independent of it. We say that an arithmetic type \(\rho \) has T-fixed vectors, if there is a nonzero vector \(v \in V(\rho )\) such that \(\rho (T) v = v\). The subrepresentation of \(\rho \) on which \(\left( {\begin{matrix} -1 &{} 0 \\ 0 &{} -1 \end{matrix}}\right) \) in the center of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) acts by \(\pm 1\) is denoted by \(\rho ^\pm \). Write \(\rho ^T\) for the space of \({\Gamma }_\infty ^+\)-invariants in \(V(\rho )\). The intersection of \(\rho ^T\) with \(\rho ^\pm \) is denoted by \(\rho ^{T\,\pm }\). We let \({\mathrm {par}}(k) = \pm \) be the parity of k, so that \(\rho ^{{\mathrm {par}}(k)}\) denotes the subrepresentation of \(\rho \) on which \(\left( {\begin{matrix} -1 &{} 0 \\ 0 &{} -1 \end{matrix}}\right) \) acts by \((-1)^k\). The classical slash actions for \(k \in {{{\mathbb {Z}}}}\), $$\begin{aligned} \big ( f \big |_k\, \left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) \big )(\tau ) := (c \tau + d)^{-k}\, f\Big (\frac{a\tau + b}{c \tau + d} \Big ) , \end{aligned}$$ extend to vector-valued slash actions $$\begin{aligned} \big ( f \big |_{k,\rho }\, \left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) \big )(\tau ) := (c \tau + d)^{-k}\, \rho \big ( \left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) ^{-1} \big ) f\Big (\frac{a\tau + b}{c \tau + d} \Big ) . \end{aligned}$$ The space of classical modular forms \({\mathrm {M}}_k({\Gamma })\) for a subgroup \({\Gamma }\subseteq {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) is the space of holomorphic functions \(f :\, {{\mathbb {H}}}{\rightarrow }{{{\mathbb {C}}}}\) such that (i) \(f \big |_k\,{\gamma }= f\) for all \({\gamma }\in {\Gamma }\) and (ii) \((f \big |_k\,{\gamma })(\tau )\) is bounded as \(\tau {\rightarrow }i \infty \) for all \({\gamma }\in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\). The subspace \({\mathrm {S}}_k({\Gamma })\) of cusp forms is defined as the space of modular forms that satisfy the stronger second condition \((f \big |_k\,{\gamma })(\tau ) {\rightarrow }0\) as \(\tau {\rightarrow }i \infty \) for all \({\gamma }\in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\). Recall that we set \(\chi (\left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) ) = \chi (d)\) for \(\left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) \in {\Gamma }_0(N)\) and a Dirichlet character \(\chi \) modulo N. The space \({\mathrm {M}}_k(\chi )\) is defined as the space of holomorphic functions \(f :\, {{\mathbb {H}}}{\rightarrow }{{{\mathbb {C}}}}\) such that (i) \(f \big |_k\,{\gamma }= \chi ({\gamma }) f\) for all \({\gamma }\in {\Gamma }_0(N)\) and (ii) \((f \big |_k\,{\gamma })(\tau )\) is bounded as \(\tau {\rightarrow }i \infty \) for all \({\gamma }\in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\). 
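The scalar slash action just recalled is easy to experiment with numerically. The following Python sketch is our own illustration, not code from the paper, of the weight-k action \(f \mapsto f\big |_k\,\gamma \); the test function at the end is a hypothetical example showing that slashing by \(T\) acts as translation by 1.

```python
def slash(f, k, gamma):
    """Return the function f|_k gamma, where gamma = (a, b, c, d) encodes an
    integer matrix of determinant 1 and f is a function on the upper half plane."""
    a, b, c, d = gamma

    def g(tau):
        j = c * tau + d                      # automorphy factor c*tau + d
        return j ** (-k) * f((a * tau + b) / j)

    return g


# Hypothetical test: for T = (1 1; 0 1) the slash action is translation by 1.
f = lambda tau: tau ** 2
g = slash(f, 4, (1, 1, 0, 1))
print(g(1j), f(1j + 1))                      # identical values
```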
Similarly, the spaces of vector-valued modular forms \({\mathrm {M}}_k(\rho )\) and cusp forms \({\mathrm {S}}_k(\rho )\) for an arithmetic type \(\rho \) are defined as the spaces of holomorphic functions \(f :\, {{\mathbb {H}}}{\rightarrow }V(\rho )\) such that (i) \(f \big |_{k,\rho }\,{\gamma }= f\) for all \({\gamma }\in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) and (ii) \(f(\tau )\) is bounded (with respect to some norm on \(V(\rho )\)) as \(\tau {\rightarrow }i \infty \) if \(f \in {\mathrm {M}}_k(\rho )\) or \(f(\tau ) {\rightarrow }0\) as \(\tau {\rightarrow }i \infty \) if \(f \in {\mathrm {S}}_k(\rho )\). If \({\Gamma }\) is a congruence subgroup, we write \({\mathrm {S}}^{\mathrm {old}}_k({\Gamma }) \subseteq {\mathrm {S}}_k({\Gamma })\) for the space of oldforms and \({\mathrm {S}}^{\mathrm {new}}_k({\Gamma }) \subseteq {\mathrm {S}}_k({\Gamma })\) for the set of (normalized) newforms. As a Sturm bound for modular forms in \({\mathrm {M}}_k(\chi )\), where \(\chi \) is a Dirichlet character modulo N, we use $$\begin{aligned} B(k, N) \;:=\; \left\lceil \frac{k}{12}\, N \prod _{\begin{array}{c} p {\mathop {\mid }}N \\ p \,\text {prime} \end{array}}\big ( 1 + \tfrac{1}{p} \big ) \right\rceil . \end{aligned}$$ There are natural bases for \(\rho ^\times _N\), \(\rho _\chi \), and their dual representations that are indexed by (a choice of representatives for) the cosets \({\Gamma }_1(N) \backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) and \({\Gamma }_0(N) \backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\). Specifically, the representation spaces of \(\rho ^\times _N\) and \(\rho _\chi \) are $$\begin{aligned}&V(\rho ^\times _N) \;=\; {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\,\big \{ {\mathfrak {e}}_{\gamma }\,:\, {\gamma }\in {\Gamma }_1(N) \backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}}) \big \} \quad \text {and}\quad \nonumber \\&V(\rho _\chi ) \;=\; {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\,\big \{ {\mathfrak {e}}_{\gamma }\,:\, {\gamma }\in {\Gamma }_0(N) \backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}}) \big \} . \end{aligned}$$ Given a modular form for any of these types, we refer to the component associated with the trivial coset as the component at infinity. The vector-valued Hecke operators \({\mathrm {T}}_M\), including their basic properties, were introduced in [23]. Given a representation \(\rho \), they yield a representation on the vector space $$\begin{aligned}&V(\rho ) \,\otimes \, {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\,\big \{ {\mathfrak {e}}_m \,:\, m \in \Delta _M \big \} ,\nonumber \\&\Delta _M := \big \{ m = \left( {\begin{matrix} a &{} b \\ 0 &{} d \end{matrix}}\right) \,:\, a,b,d \in {{{\mathbb {Z}}}},\, 0< a, d,\, 0 \le b < d,\, ad = M \big \} . \end{aligned}$$ More precisely, we have $$\begin{aligned} \big ({\mathrm {T}}_M\,\rho \big )({\gamma }) (v \otimes {\mathfrak {e}}_m) := \rho \big ( I^{-1}_m({\gamma }^{-1}) \big ) v \otimes {\mathfrak {e}}_{\overline{m {\gamma }^{-1}}} , \end{aligned}$$ where \(I_m({\gamma }) \in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) for \(m \in \Delta _M\), \({\gamma }\in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) is defined by \(m {\gamma }= I_m({\gamma }) \overline{m {\gamma }}\) with \(\overline{m {\gamma }} \in \Delta _M\). We will give more precise references to the properties of vector-valued Hecke operators when we employ them. At some point in the proof of Theorem 4.4, we will use \({\mathrm {T}}_M\) to denote the classical Hecke operator, but otherwise it is the vector-valued one.
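The combinatorial objects appearing above lend themselves to short computations. The sketch below, written in Python for illustration and not taken from the paper, evaluates the Sturm bound B(k, N) of (2.2), enumerates the set \(\Delta _M\), and, for \(m \in \Delta _M\) and \(\gamma \in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\), recovers the factorization \(m {\gamma }= I_m({\gamma }) \overline{m {\gamma }}\) by integer row reduction; the example values at the end are hypothetical.

```python
def prime_divisors(n):
    """Distinct prime divisors of a positive integer n."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def gamma0_index(N):
    """Index of Gamma_0(N) in SL_2(Z), that is N * prod_{p | N} (1 + 1/p)."""
    index = N
    for p in prime_divisors(N):
        index = index // p * (p + 1)
    return index

def sturm_bound(k, N):
    """B(k, N) = ceil(k/12 * N * prod_{p | N} (1 + 1/p)), cf. (2.2)."""
    return (k * gamma0_index(N) + 11) // 12

def delta(M):
    """Delta_M, encoded as triples (a, b, d) for the matrices (a b; 0 d)."""
    return [(a, b, M // a) for a in range(1, M + 1) if M % a == 0
            for b in range(M // a)]

def egcd(a, b):
    """Extended gcd: returns (g, x, y) with x*a + y*b = g = gcd(a, b) >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def mat_mul(X, Y):
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def hecke_decomposition(m, gamma):
    """For m = (a, b, d) in Delta_M and gamma = (a', b', c', d') in SL_2(Z),
    return (I, mbar) with m*gamma = I*mbar, I in SL_2(Z) and mbar in Delta_M."""
    a, b, d = m
    A = mat_mul([[a, b], [0, d]], [[gamma[0], gamma[1]], [gamma[2], gamma[3]]])
    g, x, y = egcd(A[0][0], A[1][0])
    V = [[x, y], [-A[1][0] // g, A[0][0] // g]]       # V in SL_2(Z) clears the lower left entry
    B = mat_mul(V, A)
    q = B[0][1] // B[1][1]                            # shift the upper right entry into [0, d')
    W = mat_mul([[1, -q], [0, 1]], V)
    B = mat_mul(W, A)
    I = [[W[1][1], -W[0][1]], [-W[1][0], W[0][0]]]    # I = W^{-1}, since det W = 1
    return I, (B[0][0], B[0][1], B[1][1])

# Hypothetical examples:
print(sturm_bound(2, 38))                             # Sturm bound for weight 2, level 38
print(len(delta(6)))                                  # sigma_1(6) = 12 matrices in Delta_6
print(hecke_decomposition((2, 1, 3), (0, -1, 1, 0)))  # m = (2 1; 0 3), gamma = S
```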
Vector-valued Hecke operators subsume, for instance, twists by characters via the following map \(\iota _{\mathrm {twist}}\) from (2.19) of [23] (cf. Proposition 2.19 of [23]). Given Dirichlet characters \(\chi \) modulo N and \(\epsilon \) modulo M, we let \(\chi '\) denote the Dirichlet character modulo \(N M^2\) defined by \(\chi \epsilon ^2\). Write $$\begin{aligned} G(\epsilon , e(b /M)) := \sum _{a \,{\;(\mathrm {mod}\, M)}} \epsilon (a) e(a b /M) \end{aligned}$$ for the Gauss sum. Then we have the inclusion $$\begin{aligned}&\iota _{\mathrm {twist}} :\, \rho _{\chi '} {\longrightarrow }{\mathrm {T}}_{M^2}\,\rho _\chi ,\\&{\mathfrak {e}}_{\gamma }{\longmapsto }\frac{1}{M} \sum _{b \,{\;(\mathrm {mod}\, M)}} G\big ( \epsilon , e(-b /M) \big )\, \overline{\chi }\big ( I_{I_2}(I_{m_{M,b}}({\gamma })) \big )\, {\mathfrak {e}}_{I_{m_{M,b}}({\gamma })} \otimes {\mathfrak {e}}_{m_{M,b}{\gamma }} , \end{aligned}$$ where \(m_{M,b} = \left( {\begin{matrix} M &{} b \\ 0 &{} M \end{matrix}}\right) \), \(I_2 = \left( {\begin{matrix} 1 &{} 0 \\ 0 &{} 1 \end{matrix}}\right) \). We write \(\pi _{\mathrm {twist}}\) for the projection adjoint to \(\iota _{\mathrm {twist}}\) with respect to the natural scalar product on \(V({\mathrm {T}}_{M^2}\,\rho )\) inherited from \(V(\rho )\) (cf. Lemma 2.4 of [23]). We also have the projection \(\pi _{\mathrm {adj}}\) defined in (2.10) of [23], where \(m^\# = \left( {\begin{matrix} d &{} -b \\ -c &{} a \end{matrix}}\right) \) for \(m = \left( {\begin{matrix} a &{} b \\ c &{} d \end{matrix}}\right) \) denotes the adjoint matrix. Its properties with respect to Hecke operators are given in Proposition 2.10 of [23].
Congruence types and their modular forms
We start this section with a characterization of congruence types that are generated by their T-fixed vectors. Let \(\rho \) be an irreducible congruence type of level N and assume that \(\rho \) has T-fixed vectors. Then there is an embedding of \(\rho \) into \(\rho ^\times _N = {\mathrm {Ind}}_{{\Gamma }_1(N)}^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})}\, {\mathbb {1}}\). This is a straightforward application of Frobenius reciprocity. Observe that \(T \in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) generates \({\Gamma }_\infty ^+\) and that \({\Gamma }_1(N)\) is generated by \({\Gamma }_\infty ^+\) and \({\Gamma }(N) \subseteq {\mathop {\mathrm {ker}}}(\rho )\). The assumption that \(\rho \) be generated by its T-fixed vectors can thus be rephrased as $$\begin{aligned} 0 \;\ne \; {\mathop {\mathrm {Hom}}}_{{\Gamma }_\infty ^+}\big ( {\mathbb {1}},\, {\mathrm {Res}}_{{\Gamma }_\infty ^+}\,\rho \big ) \;\cong \; {\mathop {\mathrm {Hom}}}_{{\Gamma }_1(N)}\big ( {\mathbb {1}},\, {\mathrm {Res}}_{{\Gamma }_1(N)}\,\rho \big ) \;\cong \; {\mathop {\mathrm {Hom}}}_{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})}\big ( {\mathrm {Ind}}_{{\Gamma }_1(N)} {\mathbb {1}},\, \rho \big ) . \end{aligned}$$ Since \({\mathrm {Ind}}_{{\Gamma }_1(N)} {\mathbb {1}}\) is unitarizable, this implies that \(\rho \) occurs among its direct summands. \(\square \) The next lemma later allows us to focus on congruence types that are generated by their T-fixed vectors, so that we can invoke Lemma 3.1. Let \(\rho \) be a congruence type of level N. Then, for every positive integer M that is divisible by N, there is a subrepresentation \(\rho ' \subseteq {\mathrm {T}}_M \rho \) that is generated by its T-fixed vectors and that satisfies \(\rho \subseteq {\mathrm {T}}_M\, \rho '\). Vector-valued Hecke operators intertwine up to isomorphism with direct sums of arithmetic types by the following computation.
Given two arithmetic types \(\rho _1\) and \(\rho _2\), we have a natural isomorphism of representation spaces $$\begin{aligned}&V\big ( \rho _1 \oplus \rho _2 \big ) \,\otimes \, {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\,\big \{ {\mathfrak {e}}_m \,:\, {\gamma }\in \Delta _M \big \} {\longrightarrow }V(\rho _1) \otimes {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\,\big \{ {\mathfrak {e}}_m \,:\, {\gamma }\in \Delta _M \big \} \,\oplus \, V(\rho _2)\\&\quad \otimes {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\,\big \{ {\mathfrak {e}}_m \,:\, {\gamma }\in \Delta _M \big \} . \end{aligned}$$ The action of \({\gamma }\in {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) on \({\mathfrak {e}}_m \otimes (v_1,v_2)\) on the left hand side yields $$\begin{aligned}&(\rho _1 \oplus \rho _2)\big ( I_m^{-1}({\gamma }^{-1}) \big ) (v_1,v_2) \otimes {\mathfrak {e}}_{m {\gamma }^{-1}} {\longmapsto }\Big ( \rho _1\big ( I_m^{-1}({\gamma }^{-1}) \big ) v_1 \\&\quad \otimes {\mathfrak {e}}_{m {\gamma }^{-1}} ,\, \rho _2\big ( I_m^{-1}({\gamma }^{-1}) \big ) v_2 \otimes {\mathfrak {e}}_{m {\gamma }^{-1}} \Big ) . \end{aligned}$$ Hence we can and will assume that \(\rho \) is irreducible. By the assumptions \(N {\mathop {\mid }}M\), and therefore for any \(v \in V(\rho )\), we have the invariance $$\begin{aligned} \big ( {\mathrm {T}}_M\,\rho \big )(T) \Big ( v \otimes {\mathfrak {e}}_{\left( {\begin{matrix} M &{} 0 \\ 0 &{} 1 \end{matrix}}\right) } \Big ) = \big ( \rho (T^M) v \big ) \otimes {\mathfrak {e}}_{\left( {\begin{matrix} M &{} 0 \\ 0 &{} 1 \end{matrix}}\right) } = v \otimes {\mathfrak {e}}_{\left( {\begin{matrix} M &{} 0 \\ 0 &{} 1 \end{matrix}}\right) } . \end{aligned}$$ In particular, \({\mathrm {T}}_M\,\rho \) contains nonzero T-fixed vectors. We record that there is an injection \(\rho {\hookrightarrow }{\mathrm {T}}_M ({\mathrm {T}}_M\, \rho )\) by Proposition 2.10 of [23]. Consider an arbitrary irreducible subrepresentation \(\rho '\) of \({\mathrm {T}}_M\, \rho \). Since \({\mathrm {T}}_M\, \rho \) is unitarizable by Lemma 2.4 of [23], we conclude that \(\rho '\) is also a quotient of \({\mathrm {T}}_M\, \rho \). Specifically, there is a surjective homomorphism \(\phi :\, {\mathrm {T}}_M\, \rho {\twoheadrightarrow }\rho '\). We will need the concrete shape of \(\phi \). Given \(m \in \Delta _M\), where \(\Delta _M\) is as in (2.4), there is a linear map \(\phi _m : V(\rho ) {\rightarrow }V(\rho ')\) such that \(\phi (v \otimes {\mathfrak {e}}_m) = \phi _m(v)\) for every \(v \in V(\rho )\). Since \(\phi \) is nonzero, there is at least one m such that \(\phi _m\) is not zero. We claim that \(\rho \subseteq {\mathrm {T}}_M\, \rho '\). For a proof, we consider the following composition of homomorphisms, where the first and second one arise from Proposition 2.10 and Proposition 2.5 of [23]: We have to demonstrate that this composition is not zero. To this end, we let \(v \in V(\rho )\) and inspect to what element of \(V({\mathrm {T}}_M\,\rho ')\) it is mapped. We write \(m^\#\) for the adjugate of \(m \in \Delta _M\) and obtain $$\begin{aligned} v {\longmapsto }\sum _{m \in \Delta _M} v \otimes {\mathfrak {e}}_m \otimes {\mathfrak {e}}_{m^\#} {\longmapsto }\sum _{m \in \Delta _M} \phi _m(v) \otimes {\mathfrak {e}}_{m^\#} , \end{aligned}$$ which does not vanish for all v, since \(\phi _m\) is nonzero for some m. To finish the proof, it suffices to choose some irreducible \(\rho ' \subseteq {\mathrm {T}}_M\,\rho \) that contains a nonzero T-fixed vector. 
Since \(\rho '\) is irreducible, \(\rho '\) is generated by this vector. \(\square \)
Passing from vector-valued to classical modular forms
There is a connection between vector-valued modular forms and classical modular forms via induction. For instance, \({\mathrm {M}}_k(\chi )\) for a Dirichlet character \(\chi \) and \({\mathrm {M}}_k(\rho _\chi )\) are related to each other by Proposition 1.5 of [23]. The present section likewise connects Theorem 1 to Theorems 4.4 and 5.2. Throughout, we fix an integer k. Given a Dirichlet character \(\chi \), recall the map $$\begin{aligned} {\mathrm {Ind}}:\, {\mathrm {M}}_k(\chi ) {\longrightarrow }{\mathrm {M}}_k(\rho _\chi ) ,\quad f {\longmapsto }\sum _{{\gamma }\in {\Gamma }_0(N) \backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} (f |_k\,{\gamma }) {\mathfrak {e}}_{\gamma }\end{aligned}$$ in (1.3) of [23] and Proposition 1.5 of [23], asserting that \({\mathrm {Ind}}\) is an isomorphism. The argument extends to the map $$\begin{aligned} {\mathrm {Ind}}:\, {\mathrm {M}}_k({\Gamma }) {\longrightarrow }{\mathrm {M}}_k\big ( {\mathrm {Ind}}_{\Gamma }^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})}\,{\mathbb {1}}\big ) ,\quad f {\longmapsto }\sum _{{\gamma }\in {\Gamma }\backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} (f |_k\,{\gamma }) {\mathfrak {e}}_{\gamma }\end{aligned}$$ for any finite index subgroup \({\Gamma }\subseteq {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\). Let \({{{\mathcal {F}}}}\) be a finite dimensional space of functions on \({{\mathbb {H}}}\) on which \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) acts by the slash action of weight k. Then the induction map yields isomorphisms $$\begin{aligned} \big ( {{{\mathcal {F}}}}\otimes \chi \big )^{{\Gamma }_0(N)} {\longrightarrow }\big ( {{{\mathcal {F}}}}\otimes \rho _\chi \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} \quad \text {and}\quad {{{\mathcal {F}}}}^{\Gamma }{\longrightarrow }\big ( {{{\mathcal {F}}}}\otimes {\mathrm {Ind}}_{{\Gamma }}^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})}\,{\mathbb {1}}\big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} , \end{aligned}$$ where \(\chi \) and \({\Gamma }\) are as before. Their inverse maps are the projections to the component at infinity. The proof is analogous to the one of Proposition 1.5 of [23].
Components of Eisenstein series
Given an integer k and an arithmetic type \(\rho \), recall the space of vector-valued Eisenstein series \({\mathrm {E}}_{k}(\rho )\) of weight k and type \(\rho \) from (3.2) of [23]. We write $$\begin{aligned} {{{\mathcal {E}}}}_k(\rho ) \;=\; \big \{ v \circ f \,:\, f \in {\mathrm {E}}_k(\rho ),\, v \in V(\rho )^\vee \big \} \end{aligned}$$ for the space of their components, matching our definition from the introduction. We infer from (3.2) that \({{{\mathcal {E}}}}_k(N) := {{{\mathcal {E}}}}_k(\rho ^\times _N)\). The next lemma is a variation of Proposition 3.4 of [23]. In its proof, we use the following fact, which can be inferred along the lines of Lemma 7.1.4 of [19]: $$\begin{aligned}&{{{\mathcal {E}}}}_k(N) = {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\big \{ G_{k,N,c,d}(\tau ,0) \,:\, c,d \in {{{\mathbb {Z}}}},\, \gcd (c,d,N) = 1 \big \} ,\quad \text {where}\\&G_{k,N,c',d'}(\tau , s):= \sum _{\begin{array}{c} (c,d) \in {{{\mathbb {Z}}}}^2 \setminus (0,0) \\ c \equiv c' {\;(\mathrm {mod}\, N)} \\ d \equiv d' {\;(\mathrm {mod}\, N)} \end{array}} (c \tau + d)^{-k} |c \tau + d|^{-2s} . \end{aligned}$$ Let \(N, M \ge 1\).
Then for every \(k \ge 1\) we have $$\begin{aligned} {\mathop {\mathrm {span}}}{{{\mathbb {C}}}}\, \big \{ f \big |_k m \,:\, f \in {{{\mathcal {E}}}}_k(N), m \in \Delta _M \big \} \subseteq {{{\mathcal {E}}}}_k(M N) . \end{aligned}$$ Given any pair of integers \((c', d')\) and any nonnegative integers \(\alpha ,\beta ,\delta \) with \(\alpha \delta = M\), we see that, for all \(s \in {{{\mathbb {C}}}}\) satisfying \(k + 2 {\mathrm {Re}}(s) > 2\), $$\begin{aligned}&G_{k,N,c',d'}(\tau , s) \big |_k\, \left( {\begin{matrix} \alpha &{} \beta \\ 0 &{} \delta \end{matrix}}\right) = \sum _{\begin{array}{c} (c,d) \in {{{\mathbb {Z}}}}^2 \setminus (0,0) \\ c \equiv c' {\;(\mathrm {mod}\, N)} \\ d \equiv d' {\;(\mathrm {mod}\, N)} \end{array}} (c \tau + d)^{-k} |c \tau + d|^{-2s} \big |_k\, \left( {\begin{matrix} \alpha &{} \beta \\ 0 &{} \delta \end{matrix}}\right) \\&= \delta ^{2s} \sum _{\begin{array}{c} (c,d) \in {{{\mathbb {Z}}}}^2 \setminus (0,0)\\ c \equiv c' {\;(\mathrm {mod}\, N)} \\ d \equiv d' {\;(\mathrm {mod}\, N)} \end{array}} (\alpha c \tau + \delta d + \beta c)^{-k} |\alpha c \tau + \delta d + \beta c|^{-2s}\\&= \delta ^{2s} \sum _{\begin{array}{c} c'', d'' {\;(\mathrm {mod}\, M N)} \\ c'' \equiv \alpha c' {\;(\mathrm {mod}\, \alpha N)} \\ d'' \equiv \delta d' + \beta c' {\;(\mathrm {mod}\, \gcd (\beta ,\delta )N)} \end{array}}\; \sum _{\begin{array}{c} c,d \in {{{\mathbb {Z}}}}^2 \setminus (0,0) \\ c \equiv c'' {\;(\mathrm {mod}\, M N)} \\ d \equiv d'' {\;(\mathrm {mod}\, M N)} \end{array}} (c \tau + d)^{-k} |c \tau + d|^{-2s} . \end{aligned}$$ If \(k \ne 2\) the right hand side is analytic at \(s = 0\) and yields a linear combination of holomorphic Eisenstein series \(G_{k,MN,c'',d''}(\tau )\). We thus obtain the statement directly. In the case of \(k = 2\), a linear combination of the \(G_{2,N,c',d'}(\tau ,0)\) lies in \({{{\mathcal {E}}}}_2(N)\) if and only if its image under the \(\xi _2\)-operator of [7] vanishes. Since the \(\xi \)-operator intertwines with the action of \({\mathrm {SL}_{2}}({{{\mathbb {R}}}})\), this finishes the proof. \(\square \) Cusp forms The first lemma of this section is a variant of, for instance, Propositions 3.7 and 4.1 in [23] or Corollary 4.2 in [11]. It relates products of Eisenstein series to special values of \({\mathrm {L}}\)-functions, generalizing results by Rankin [21] and various other authors. We omit the proof, which is mutatis mutandis the one of Corollary 4.2 of [11]. Fix integers \(k, l \ge 1\) and a Dirichlet character \(\chi \) mod N for some positive integer N. Let \(\psi \) be a primitive Dirichlet character of modulus dividing N satisfying \(\psi (-1) = (-1)^k\). If \(k = 2\) assume that \(\psi \ne {\mathbb {1}}\), and if \(l \le 2\) assume that \(\psi \ne \chi \). Then there is $$\begin{aligned} g \;\in \; \big ( {{{\mathcal {E}}}}_k(N)_\infty \otimes {{{\mathcal {E}}}}_l(N)_\infty \otimes \chi \big )^{{\Gamma }_0(N)} \;\subseteq \; {\mathrm {M}}_{k+l}(\chi ) \end{aligned}$$ such that for every \(f \in {\mathrm {S}}_{k+l}(\chi )\), we have $$\begin{aligned} \big \langle g,\, f \big \rangle \;=\; c \cdot {\mathrm {L}}\big ( f^{\mathrm {c}}, k+l-1 \big )\, {\mathrm {L}}\big ( f^{\mathrm {c}}\otimes \psi , l \big ) \end{aligned}$$ for a nonzero constant c. Here \(f^{\mathrm {c}}(\tau ) := \overline{f(-\overline{\tau })}\) denotes the modular form whose Fourier coefficients are the complex conjugate of those of f. 
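For orientation, we sketch the classical special case, suppressing normalizations: take \(N = 1\), \(\chi = \psi = {\mathbb {1}}\) and \(k, l \ge 4\) even. Then g is a multiple of \(E_k E_l\), and for a normalized Hecke eigenform \(f \in {\mathrm {S}}_{k+l}\big ({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\big )\) the statement reduces to Rankin's identity $$\begin{aligned} \big \langle E_k\, E_l ,\, f \big \rangle \;=\; c \cdot {\mathrm {L}}\big ( f, k+l-1 \big )\, {\mathrm {L}}\big ( f, l \big ) , \end{aligned}$$ where c is a nonzero constant depending only on k and l; this is the result of Rankin [21] that Lemma 4.1 generalizes. 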
While Lemma 4.1 accounts solely for the analytic machinery required for the proof of Theorem 4.4, we split the representation theoretic instrumentation into several separate statements. We will employ the next lemma to discard the space of oldforms from our considerations, so that we can focus on the span of newforms. Let \(M, N, N'\) be positive integers and \(f \in {\mathrm {M}}^{\mathrm {old}}_k({\Gamma }_1(N))\) an oldform. Assume that there is a modular form g of level \(N'\) such that \(f(\tau ) = g(M \tau )\). Then we have $$\begin{aligned} {\mathrm {Ind}}(f) \;\in \; {\mathrm {T}}_M\, {\mathrm {M}}_k\big ( \rho ^\times _{N'} \big ) . \end{aligned}$$ By Proposition 1.5 of [23], we have \({\mathrm {Ind}}(g) \in {\mathrm {M}}_k(\rho ^\times _{N'})\). Now the lemma is a special case of Proposition 2.17 of [23]. \(\square \) A key difference between [23] and [11] is that the latter employs the products of Eisenstein series of varying weight, while the weight of Eisenstein series is fixed in [23] and the level of Eisenstein series varies. We follow [23] in this paper. The argument in [11] builds on the vanishing of (twisted) period polynomials, which is a strategy that goes back to [18, 21]. This reasoning was replaced in [23] by an inspection of twists of one special \({\mathrm {L}}\)-value, whose simultaneous vanishing can be controlled by means of results by Waldspurger [22]. If \(k + l = 2\), we need a result of Ono and Skinner [20], which builds on these and on work of Friedberg-Hoffstein [12]. The following lemma provides the required relation between vector-valued Hecke operators and twists of modular forms, which yield twisted \({\mathrm {L}}\)-values. Let M, N be positive integers, \(f \in {\mathrm {M}}_k({\Gamma }_1(N))\), and \(\chi \) a Dirichlet character mod M. Then we have $$\begin{aligned} {\mathrm {Ind}}( f \otimes \chi ) \;=\; \phi \big ( {\mathrm {T}}_{M^2}\, {\mathrm {Ind}}(f) \big ) \end{aligned}$$ for a homomorphism \(\phi :\, {\mathrm {T}}_{M^2}\, \rho ^\times _N {\rightarrow }\rho ^\times _{M^2 N}\) of arithmetic types. This is a special case of Proposition 2.19 of [23]. Specifically, the map \(\phi \) arises from \(\pi _{\mathrm {twist}}\) in that proposition after decomposing \(\rho ^\times _N\) and \(\rho ^\times _{M^2 N}\) as in (2.1). \(\square \) Theorem 4.4 Let \(k,l \ge 1\) and \(\rho \) be a congruence type of level N. Then we have $$\begin{aligned} {\mathrm {S}}_{k+l}(\rho ) \;\subseteq \; \big ( {{{\mathcal {E}}}}_k(N_0) \otimes {{{\mathcal {E}}}}_l(N_0) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} \end{aligned}$$ for \(N_0\) chosen as follows in terms of the Sturm bound in (2.2), if \(\rho \) is generated by its T-fixed vectors: $$\begin{aligned} N_0&\;=\; N B(k+l, N),\quad \text {if}\, k \ne l, k \,\text {even;}\\ N_0&\;=\; 16 N B(k+l, 16 N),\quad \text {if }\,k \ne l, k \,\text {odd;}\\ N_0&\;=\; N (16 B(k+\tfrac{1}{2}, N))^4 B\big (k+l, (16 B(k+\tfrac{1}{2}, N))^4 \big ),\quad \text {if }\,k = l \ge 2. \end{aligned}$$ For general \(\rho \), the above bounds for \(N_0\) are multiplied by an additional factor N: $$\begin{aligned} N_0&\;=\; N^2 B(k+l, N) ,\quad \text {if }\,k \ne l, k \,\text {even;}\\ N_0&\;=\; 16 N^2 B(k+l, 16 N),\quad \text {if} \,k \ne l, k \,\text {odd;}\\ N_0&\;=\; N^2 (16 B(k+\tfrac{1}{2}, N))^4 B\big (k+l, (16 B(k+\tfrac{1}{2}, N))^4 \big ),\quad \text {if}\, k = l \ge 2. \end{aligned}$$ If \(k = l = 1\), there exists some \(N_0\) such that (4.2) holds. 
Remark 4.5 The factor of \(B(k+\frac{1}{2},N)^4\) in the case of \(k = l\) arises (twice) because we restrict in the proof to twists of \({\mathrm {L}}\)-values by Kronecker symbols associated with imaginary quadratic fields. Explicit non-vanishing bounds for central values of the families \({\mathrm {L}}(f \otimes \psi , s)\), f a fixed newform and \(\psi \) a Dirichlet character can be used to improve this bound. Suppose that the statement holds for all \(\rho \) that are generated by their T-fixed vectors. Then the general case follows by applying Lemma 3.2 in conjunction with Lemma 3.3 in this paper and Theorem 2.8 of [23]. In particular, we can and will assume that \(\rho \) is generated by its T-fixed vectors. Specifically, it suffices to treat all subrepresentations of \(\rho ^\times _N = {\mathrm {Ind}}_{{\Gamma }_1(N)}\,{\mathbb {1}}\) by Lemma 3.1. Recall that we have \(\rho ^\times _N = \oplus _\chi \rho _\chi \), where \(\chi \) runs through all Dirichlet characters mod N. For this reason, we can further restrict to the subrepresentations of \(\rho _\chi = {\mathrm {Ind}}_{{\Gamma }_0(N)}\,\chi \), where \(\chi \) is an arbitrary Dirichlet character mod N. Using induction on N, we can use Lemma 4.2 to obtain the inclusion $$\begin{aligned} {\mathrm {Ind}}\big ( {\mathrm {S}}^{\mathrm {old}}_{k+l}(\chi ) \big ) \subseteq \big ( {{{\mathcal {E}}}}_k(N_0) \otimes {{{\mathcal {E}}}}_l(N_0) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} . \end{aligned}$$ Hence we can and will restrict to newforms in the remainder of the proof. Consider the subset (as opposed to subspace) \({{{\mathcal {S}}}}^{\mathrm {new}}_{k+l}(\chi ) \subseteq {\mathrm {S}}_{k+l}(\chi )\) of newforms. Since the right hand side is a vector space, the statement of the theorem follows, if we show that $$\begin{aligned} {\mathrm {Ind}}\big ( {{{\mathcal {S}}}}^{\mathrm {new}}_{k+l}(\chi ) \big ) \subseteq \big ( {{{\mathcal {E}}}}_k(N_0) \otimes {{{\mathcal {E}}}}_l(N_0) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} . \end{aligned}$$ We next inspect the Petersson scalar products of newforms for the Dirichlet character \(\chi \) and the component at infinity \(g_\infty \) of elements g of \(({{{\mathcal {E}}}}_k(N') \otimes {{{\mathcal {E}}}}_l(N') \otimes \rho )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})}\) for some positive integer \(N'\). By the isomorphisms (3.3), the choice of g is equivalent to choosing a classical modular form \(g_\infty \in ({{{\mathcal {E}}}}_k(N') \otimes {{{\mathcal {E}}}}_l(N') \otimes \chi )^{{\Gamma }_0(N)}\). Consider an arbitrary newform \(f \in {{{\mathcal {S}}}}^{\mathrm {new}}_{k+l}(\chi )\) and assume that \(\langle g_\infty , f \rangle \ne 0\) for some \(g_\infty \in ({{{\mathcal {E}}}}_k(N') \otimes {{{\mathcal {E}}}}_l(N') \otimes \chi )^{{\Gamma }_0(N)}\). Decomposing \(g_\infty \) as a sum of Hecke eigenforms, we find that there is a linear combination \(\sum c(M) {\mathrm {T}}_M\), M running through a suitable range of positive integers, of classical Hecke operators \({\mathrm {T}}_M\) such that $$\begin{aligned} f \;=\; g_\infty \Big |_{k+l} \sum c(M) {\mathrm {T}}_M . 
\end{aligned}$$ By virtue of the Sturm bound for modular forms of weight \(k+l\) and level \(N'\), we can further assume that \(c(M) = 0\) if M is larger than \(B(k+l,N')\) in (2.2) We summarize that, given a positive integer \(N'\) divisible by N such that there is \(g_\infty \in ({{{\mathcal {E}}}}_k(N') \otimes {{{\mathcal {E}}}}_l(N') \otimes \chi )^{{\Gamma }_0(N)}\) with \(\langle g_\infty ,\, f \rangle \ne 0\), we have $$\begin{aligned} {\mathrm {Ind}}(f) \in \sum _{M=1}^{B(k+l,N')} {\mathrm {T}}_M \big ( {{{\mathcal {E}}}}_k(N') \otimes {{{\mathcal {E}}}}_l(N') \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} . \end{aligned}$$ Proposition 2.5 of [23] and Lemma 3.3 imply the inclusion $$\begin{aligned} {\mathrm {Ind}}(f) \in \big ( {{{\mathcal {E}}}}_k(N' B(k+l,N')) \otimes {{{\mathcal {E}}}}_l(N' B(k+l,N')) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} . \end{aligned}$$ To finish the proof, we have to show the existence of \(g_\infty \) for \(N'\) suitable to match the statement of Theorem 4.4. By symmetry of the assertion in k and l, we can and will assume that \(l \ge k\). We next utilize Lemma 4.1. Observe that the complex conjugate \(f^{\mathrm {c}}\) that appears in Lemma 4.1 lies in \({\mathrm {S}}_{k+l}(\overline{\chi })\). The special \({\mathrm {L}}\)-values \({\mathrm {L}}(f^{\mathrm {c}}, s)\) and \({\mathrm {L}}(f^{\mathrm {c}}\otimes \psi , s)\) do not vanish if \(s > (1 + k+l) /2\), since they can be expressed in terms of a convergent Euler product (using Deligne's bound [10]). If \(s = (1 + k+l) /2\), they lie on the abscissa of convergence of \({\mathrm {L}}(f^{\mathrm {c}}, s)\) and \({\mathrm {L}}(f^{\mathrm {c}}\otimes \psi , s)\), respectively. We deduce that they do not vanish from Lemma 5.9 of [15] in combination with Deligne's assertion of the Ramanujan Conjecture for holomorphic elliptic modular forms [10]. If \(l \ge k + 1\), then both \({\mathrm {L}}(f^{\mathrm {c}}, k+l-1)\) and \({\mathrm {L}}(f^{\mathrm {c}}\otimes \psi , l)\) in Lemma 4.1 do not vanish by our observations in the preceding paragraph. If k is even, we can choose the trivial Dirichlet character for \(\psi \). If k is odd, we can choose an odd Dirichlet character of modulus 16 and regard f as a modular form of level 16N. As a result, we obtain the desired element \(g_\infty \in ( {{{\mathcal {E}}}}_k(N') \otimes {{{\mathcal {E}}}}_l(N') \otimes \chi )^{{\Gamma }_0(N')}\) with \(N' = N\) if k is even and \(N' = 16 N\) if k is odd. This finishes the proof if \(l \ge k + 1\). It remains to treat the setting of \(k = l\), in which case \({\mathrm {L}}(f^{\mathrm {c}}\otimes \psi , l)\), and if \(k = l = 1\) also \({\mathrm {L}}(f^{\mathrm {c}}, k+l-1)\), in Lemma 4.1 are central \({\mathrm {L}}\)-values. By Waldspurger [22], there is a nonzero modular form of weight \(k + 1 /2\) whose \((-1)^k D\)-th Fourier coefficient, for a fundamental discriminant D with \((-1)^k D > 0\), is a nonzero multiple of \({\mathrm {L}}(f^{\mathrm {c}}\otimes \epsilon _D, l)^{1 /2}\), where \(\epsilon _D\) is the Kronecker symbol associated with D. The Sturm bound for half-integral weight modular forms implies that \({\mathrm {L}}(f^{\mathrm {c}}\otimes \epsilon _D, l)^{1 /2} \ne 0\) for some \(|D| < B(k + \frac{1}{2},N)\). Consider the case of \(k = l \ge 2\) and fix some D as in the previous paragraph. As \(k + l - 1 > (1 + k+l) /2\), we have verified that both \({\mathrm {L}}(f^{\mathrm {c}}, k+l-1)\) and \({\mathrm {L}}(f^{\mathrm {c}}\otimes \epsilon _D\psi , k+l-1)\) do not vanish. 
Our choice of D guarantees that \({\mathrm {L}}(f^{\mathrm {c}}\otimes \epsilon _D, l) \ne 0\) and therefore \({\mathrm {L}}(f^{\mathrm {c}}\otimes \epsilon _D \psi ^2, l) \ne 0\). We can apply Lemma 4.1 with \(f \leadsto f \otimes \epsilon _D\psi \) for a suitable Dirichlet character \(\psi \) mod 16. This yields an element \(g_{D\,\infty }\) of \({{{\mathcal {E}}}}_k((16D)^2 N) \otimes {{{\mathcal {E}}}}_l((16D)^2 N)\) such that \(\langle g_{D\,\infty },\, f \otimes \epsilon _D \psi \rangle \ne 0\). By the following computation, we can choose $$\begin{aligned} g_\infty = \Big ( \pi _{\mathrm {adj}}\big ( {\mathrm {T}}_{(16D)^2}\, \iota _{\mathrm {twist}}\big ( {\mathrm {Ind}}(g_{D\,\infty }) \big ) \big ) \Big )_\infty \in {{{\mathcal {E}}}}_k((16D)^4 N) \otimes {{{\mathcal {E}}}}_l((16D)^4 N) . \end{aligned}$$ Indeed, using the maps \(\pi _{\mathrm {twist}}\) and \(\iota _{\mathrm {twist}}\) in conjunction with \(\pi _{\mathrm {adj}}\), we find the relation $$\begin{aligned} \big \langle g_{D\,\infty },\, f \otimes \epsilon _D\psi \big \rangle= & {} \big \langle {\mathrm {Ind}}(g_{D\,\infty }),\, {\mathrm {Ind}}(f \otimes \epsilon _D\psi ) \big \rangle \nonumber \\= & {} \Big \langle {\mathrm {Ind}}(g_{D\,\infty }),\, \pi _{\mathrm {twist}}\, \big ({\mathrm {T}}_{(16D)^2}\, {\mathrm {Ind}}(f) \big ) \Big \rangle \nonumber \\= & {} \Big \langle \iota _{\mathrm {twist}}\big ( {\mathrm {Ind}}(g_{D\,\infty }) \big ),\, {\mathrm {T}}_{(16D)^2}\, {\mathrm {Ind}}(f) \Big \rangle \nonumber \\= & {} \Big \langle \pi _{\mathrm {adj}}\big ( {\mathrm {T}}_{(16D)^2}\, \iota _{\mathrm {twist}}\big ( {\mathrm {Ind}}(g_{D\,\infty }) \big ) \big ),\, {\mathrm {Ind}}(f) \Big \rangle \nonumber \\= & {} \Big \langle \Big ( \pi _{\mathrm {adj}}\big ( {\mathrm {T}}_{(16D)^2}\, \iota _{\mathrm {twist}}\big ( {\mathrm {Ind}}(g_{D\,\infty }) \big ) \big ) \Big )_\infty ,\, f \Big \rangle . \end{aligned}$$ We are left with the case \(k = l = 1\). By Corollary 3 of Ono–Skinner [20], strengthening results of Waldspurger, there are infinitely many fundamental discriminants \(D < 0\) co-prime to the level N of f such that \(L(f^{\mathrm {c}}\otimes \epsilon _D, 1) \ne 0\). We fix one such D. Inspecting its Fourier expansion, we find that \(f \otimes \epsilon _D\) coincides with a newform of level at most \(N D^2\). This allows us to apply Ono–Skinner [20] a second time, and find another fundamental discriminant \(D' < 0\) such that \(L(f^{\mathrm {c}}\otimes \epsilon _D \otimes \epsilon _{D'}, 1) \ne 0\). Since \(\epsilon _{D'}(-1) = -1 = (-1)^k\), we can set \(\psi = \epsilon _{D'}\) and invoke Lemma 4.1 with \(f \leadsto f \otimes \epsilon _D\psi \). The calculation in (4.5) extends, showing that there is \(g_\infty \in {{{\mathcal {E}}}}_k((DD')^2 N) \otimes {{{\mathcal {E}}}}_l((D D')^2 N)\) such that \(\langle g_\infty , f \rangle \ne 0\). This concludes the proof. \(\square \) Eisenstein series Given a weight \(k \ge 2\) and a vector \(v \in V(\rho )\) for an arithmetic type \(\rho \), recall from (3.1) of [23] the vector valued Eisenstein series \(E_{k,v}\). If v is fixed under the action of T, we have the formula $$\begin{aligned} E_{k,v}(\tau )&{}= \frac{1}{2} \sum _{{\Gamma }_\infty ^+ \backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} v \big |_k\,{\gamma },\quad \text {for }\, k > 2,\\ E_{2,v}(\tau )&{}= \frac{1}{2} \lim _{s {\rightarrow }0} \sum _{{\Gamma }_\infty ^+ \backslash {\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} {\mathrm {Im}}(\tau )^s v \big |_2\,{\gamma }. 
\end{aligned}$$ By analytic continuation, we can define Eisenstein series \(E_{1,v}\) of weight 1 in complete analogy (cf. [19]). In preparation for the proof of this section's main theorem, we determine the space of cusp expansions of Eisenstein series. Fix an irreducible congruence type \(\rho \) with T-fixed vectors and an integer k such that \(\rho = \rho ^{{\mathrm {par}}(k)}\). Then we have the inclusions of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\)-representations \(\rho {\hookrightarrow }{{{\mathcal {E}}}}_k(\rho )\) if \(k \ge 3\). If \(\rho \ne {\mathbb {1}}\) and \(k = 2\), we also have \(\rho {\hookrightarrow }{{{\mathcal {E}}}}_2(\rho )\). Moreover, \({{{\mathcal {E}}}}_1(N)\) is nonzero if \(N \ge 3\). The condition \(\rho = \rho ^{{\mathrm {par}}(k)}\) ensures that the center of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\) acts trivially via the slash action \(|_{k,\rho }\). Consider the case \(k > 2\). Since \(\rho \) is unitarizable, the associated vector-valued Eisenstein series of weight \(k > 2\) converge absolutely and locally uniformly. We obtain a nonzero map If \(k = 2\), the usual procedure of analytic continuation yields a real-analytic Eisenstein series \(E_{2,v}\) whose image under the \(\xi _2\)-operator of [7] is modular of arithmetic type \(\overline{\rho }\). In particular, if \(\rho \ne {\mathbb {1}}\), then \(\xi _2\) annihilates \(E_{2,v}\). In other words, \(E_{2,v}\) is holomorphic, and we obtain a nonzero map as in (5.1). In the case of \(k = 1\), the nonvanishing of \({{{\mathcal {E}}}}_1(N)\) follows from a standard computation following Sect. 6 of [19]. \(\square \) Complementing Theorem 4.4, which deals with cusp forms that are expressed as products of Eisenstein series, the next theorem is concerned with Eisenstein series. We achieve complete results except if \(k = l = 1\) (see Remark 5.3). Fix integers \(k \ge 2\), \(l \ge 1\), and a congruence type \(\rho \) of level N. Let \(N_0\) be as in Theorem 4.4; then we have $$\begin{aligned} {\mathrm {E}}_{k+l}(\rho ) \;\subseteq \; \big ( {{{\mathcal {E}}}}_k({\mathrm {lcm}}(N_0, N N_1)) \otimes {{{\mathcal {E}}}}_l({\mathrm {lcm}}(N_0, N_1)) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} . \end{aligned}$$ for \(N_1\) co-prime to N chosen such that $$\begin{aligned} N_1&\;\ge \; 1,\quad \text {if }\,k> 2\, \text {and}\, l \,\text {even;}\\ N_1&\;\ge \; 2,\quad \text {if }\,k> 2\, \text {and}\, l> 1 \,\text {odd;}\\ N_1&\;\ge \; 2,\quad \text {if }\, k = 2 \,\text {and}\, l > 1;\\ N_1&\;\ge \; 3,\quad \text {if}\, l = 1; \end{aligned}$$ By symmetry, an analogous statement for \(k = 1\) and \(l \ge 2\) can be deduced from Theorem 5.2. The case \(k = l = 1\), however, is not included. Computer-based experiments for small N suggest that Theorem 5.2 should also hold in this case, if \(N_1\) is sufficiently large. Given a space \({{{\mathcal {F}}}}\) of possibly vector-valued functions that are representable as Puiseux series, e.g., \({{{\mathcal {F}}}}= {{{\mathcal {E}}}}_k(N)\) or \({{{\mathcal {F}}}}= {\mathrm {E}}_k(\rho )\), denote by \(c({{{\mathcal {F}}}},0)\) the space of its constant coefficients. 
We will show that $$\begin{aligned} c\big ( {\mathrm {E}}_{k+l}(\rho ),\, 0 \big ) \;=\; c\Big ( \big ( {{{\mathcal {E}}}}_k({\mathrm {lcm}}(N_0, N N_1)) \otimes {{{\mathcal {E}}}}_l({\mathrm {lcm}}(N_0, N_1)) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})},\, 0 \Big ) \;\subseteq \; V(\rho ) .\nonumber \\ \end{aligned}$$ Suppose that (5.3) is true, and let \(f \in {\mathrm {E}}_{k+l}(\rho )\). Then there exists an element $$\begin{aligned} g \;\in \; \big ( {{{\mathcal {E}}}}_k({\mathrm {lcm}}(N_0, N N_1)) \otimes {{{\mathcal {E}}}}_l({\mathrm {lcm}}(N_0, N_1)) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} \end{aligned}$$ such that the constant term of \(f - g\) vanishes. In other words, the difference \(f - g\) is a cusp form. Therefore, by Theorem 4.4 and our choice of \(N_0\), we can conclude that f is contained in the right hand side of (5.2). Thus we finish the proof once we have established (5.3). Observe that we have $$\begin{aligned} {\mathrm {M}}_{k+l}(\rho ) \supseteq \big ( {{{\mathcal {E}}}}_k({\mathrm {lcm}}(N_0, N N_1)) \otimes {{{\mathcal {E}}}}_l({\mathrm {lcm}}(N_0, N_1)) \otimes \rho \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})} , \end{aligned}$$ so that it follows in a straightforward way that the right hand side of (5.3) is contained in the left hand side. It remains to show that its left hand side is contained in the right hand side. Equality (5.3) follows if it holds for all irreducible \(\rho \). If \(\rho \) has no T-fixed vectors, the space of Eisenstein series \({\mathrm {E}}_{k+l}(\rho )\) is zero by definition, and for this reason (5.3) holds. If \(\rho \) is irreducible and has T-fixed vectors, then it embeds into \(\rho ^\times _N\) by Lemma 3.1. We conclude that for the remainder of the proof, we may assume that \(\rho = \rho ^\times _N\). Given positive integers \(N' {\mathop {\mid }}N''\), we have \({{{\mathcal {E}}}}_k(N') \subseteq {{{\mathcal {E}}}}_k(N'')\) and \({{{\mathcal {E}}}}_l(N') \subseteq {{{\mathcal {E}}}}_l(N'')\). To establish (5.3), it therefore suffices to show that $$\begin{aligned} c\big ( {\mathrm {E}}_{k+l}(\rho ^\times _N),\, 0 \big ) \subseteq c\Big ( \big ( {{{\mathcal {E}}}}_k(N N_1) \otimes {{{\mathcal {E}}}}_l(N_1) \otimes \rho ^\times _N \big )^{{\mathrm {SL}_{2}}({{{\mathbb {Z}}}})},\, 0 \Big ) . \end{aligned}$$ Since Eisenstein series of weight \(k + l > 2\) converge absolutely, we can obtain every T-fixed vector as a constant term of an Eisenstein series. This implies that $$\begin{aligned} c\big ( E_{k+l}(\rho ^\times _N), 0 \big ) = \rho ^{\times \,T{\mathrm {par}}(k+l)}_N . \end{aligned}$$ In the case of weight 2, we can employ the same argument as in the proof of Lemma 5.1 to find that $$\begin{aligned} c\big ( E_2(\rho ^\times _N), 0 \big ) = \big ( \rho ^\times _N \ominus {\mathbb {1}}\big )^{T+} . \end{aligned}$$ Since N and \(N_1\) are co-prime by assumption, we can decompose \(\rho ^\times _{N N_1}\) as the tensor product \(\rho ^\times _N \otimes \rho ^\times _{N_1}\). Decomposing further by the action of the center of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\), we obtain the embedding In order to accommodate the case of \(k = 2\) and even l, we refine (5.7) as Any irreducible constituent of the left hand side of (5.7) and (5.8) is a tensor product of irreducible constituents of the tensor factors, since N and \(N_1\) are co-prime. 
Vice versa, we infer from (5.5) and (5.6) that the tensor product of irreducible constituents of the left hand side of (5.7) and (5.8) embeds into \({{{\mathcal {E}}}}_k(N N_1)\) by Lemma 5.1. Our assumption on \(N_1\) guarantees by Lemma 5.1 that there is an irreducible \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\)-representation \(\sigma \) that embeds into \({{{\mathcal {E}}}}_l(N_1)\). If \(k = 2\) and l is even, then \(N_1\) is constrained in such a way that we can and will assume that \(\sigma \) is not the trivial representation. In particular, \(\rho ' \otimes \sigma \) embeds into the left hand side of (5.8) for any irreducible constituent \(\rho '\) of \(\rho ^{\times \,{\mathrm {par}}(k+l)}_N\). Fix a T-fixed vector \(w_1 \in V(\sigma )\) and complete it to an orthogonal basis \(w_j\), \(1 \le j \le \dim (\sigma )\) of \(V(\sigma )\). In the case of \(l \ge 2\), the Eisenstein series \(\tilde{E}_{l,\sigma ^\vee ,w_1^\vee } := E_{l,\sigma ^\vee ,w_1^\vee }\) has constant term \(w_1^\vee \). If \(l = 1\), we can choose \(w_1\) in such a way that \(w_1^\vee \) is the constant term of an Eisenstein series \(\tilde{E}_{l,\sigma ^\vee ,w_1^\vee }\) (in general, \(\tilde{E}_{l,\sigma ^\vee ,w_1^\vee } \ne E_{l,\sigma ^\vee ,w_1^\vee }\) for \(l = 1\)). This provides an embedding of \(\sigma \) into \({{{\mathcal {E}}}}_l(\sigma ^\vee ) \subseteq {{{\mathcal {E}}}}_l(N_1)\) via \(\iota _\sigma :\, w {\mapsto }w \circ \tilde{E}_{l,\sigma ^\vee ,w_1^\vee }\). In addition, we obtain the embedding Fix an irreducible, arbitrary constituent Observe that \(v_1 \otimes w_1\) is a T-fixed vector in \(V(\rho ' \otimes \sigma ) \subseteq V(\rho ^\times _{N N_1})\). Since \(k > 2\) or \(\sigma \not \cong {\mathbb {1}}\), the Eisenstein series \(E_{k, v_1 \otimes w_1}\) exists. It allows us to define the embedding Combining all the above maps we obtain the following embedding of \({\mathrm {SL}_{2}}({{{\mathbb {Z}}}})\)-representations: Complete \(v_1\) to an orthonormal basis \(v_i\), \(1 \le i \le \dim (\rho ')\) of \(V(\rho ')\). Evaluating the composition of 5.10, we obtain $$\begin{aligned}&\sum _{i = 1}^{\dim (\rho ')} v_i^\vee \otimes v_i \;{\longmapsto }\; \sum _{i = 1}^{\dim (\rho ')} \sum _{j = 1}^{\dim (\sigma )} v_i^\vee \otimes w_j^\vee \otimes \big ( w_j \circ \tilde{E}_{l,\sigma ^\vee ,w_1^\vee } \big ) \otimes v_i\\&\quad \;{\longmapsto }\; \sum _{i = 1}^{\dim (\rho ')} \sum _{j = 1}^{\dim (\sigma )} \big ( (v_i^\vee \otimes w_j^\vee ) \circ E_{k, v_1 \otimes w_1} \big ) \otimes \big ( w_j \circ \tilde{E}_{l,\sigma ^\vee ,w_1^\vee } \big ) \otimes v_i . \end{aligned}$$ Recall that \(\iota _\sigma :\, w {\mapsto }w \circ \tilde{E}_{l,\sigma ^\vee ,w_1^\vee }\). In order to determine the constant term of the image, observe that the constant term of \(\iota _\sigma (w_j)\) equals 1, if \(j = 1\), and 0, otherwise, since the \(w_j\) are mutually orthogonal. Similarly, the constant term of \(\iota _{\rho ' \otimes \sigma }(v_i^\vee w_j^\vee )\) equals 1 if \(i = j = 1\), and 0, otherwise. As a result, we directly see that the constant term of the right hand side equals \(v_1\). Since \(v_1\) and the embedding of \(\rho '\) were arbitrary, this confirms (5.4) and finishes the proof. \(\square \) Belabas, K., Cohen, H.: Modular forms in Pari/GP. Res. Math. Sci. 5(3), 1–19 (2018) Bhargava, M., Shankar, A.: Binary quartic forms having bounded invariants, and the boundedness of the average rank of elliptic curves. Ann. Math. 
(2) 181(1), 191–242 (2015) Bhargava, M., Skinner, C.: A positive proportion of elliptic curves over \({\mathbb{Q}}\) have rank one. J. Ramanujan Math. Soc. 29(2), 221–242 (2014) Borisov, L.A., Gunnells, P.E.: Toric modular forms and nonvanishing of \(L\)-functions. J. Reine Angew. Math. 539, 149–165 (2001) Borisov, L.A., Gunnells, P.E.: Toric varieties and modular forms. Invent. Math. 144(2), 297–325 (2001) Borisov, L.A., Gunnells, P.E.: Toric modular forms of higher weight. J. Reine Angew. Math. 560, 43–64 (2003) Bruinier, J.H., Funke, J.: On two geometric theta lifts. Duke Math. J. 125(1), 45–90 (2004) Carnahan, S.: Generalized moonshine, II: Borcherds products. Duke Math. J. 161(5), 893–950 (2012) Cohen, H.: Expansions at cusps and Petersson products in Pari/GP. Elliptic integrals, elliptic functions and modular forms in quantum field theory. Texts Monogr. Symbol. Comput. Springer, Cham (2019) Deligne, P.: La conjecture de Weil. II. Inst. Hautes Études Sci. Publ. Math. 52, 137–252 (1980) Dickson, M., Neururer, M.: Products of Eisenstein series and Fourier expansions of modular forms at cusps. J. Number Theory 188, 137–264 (2018) Friedberg, S., Hoffstein, J.: Nonvanishing theorems for automorphic \(L\)-functions on GL(2). Ann. Math. (2) 142(2), 385–423 (1995) Goldfeld, D.: Conjectures on elliptic curves over quadratic fields. In: Number theory, Carbondale 1979 (Proc. Southern Illinois Conf., Southern Illinois Univ., Carbondale, Ill., 1979), vol. 751. Lecture Notes in Math. Springer, Berlin (1979) Gross, B.H., Zagier, D.B.: Heegner points and derivatives of \(L\)-series. Invent. Math. 84(2), 225–320 (1986) Iwaniec, H., Kowalski, E.: Analytic number theory. In: American Mathematical Society Colloquium Publications, vol. 53. American Mathematical Society, Providence (2004) Khuri-Makdisi, K.: Moduli interpretation of Eisenstein series. Int. J. Number Theory 8(3), 715–748 (2012) Kohnen, W., Martin, Y.: Products of two Eisenstein series and spaces of cusp forms of prime level. J. Ramanujan Math. Soc. 23(4), 337–356 (2008) Kohnen, W., Zagier, D.B.: Modular forms with rational periods. Modular forms (Durham, 1983). Ellis Horwood Ser. Math. Appl.: Statist. Oper. Res. Horwood, Chichester (1984) Miyake, T.: Modular Forms. Translated from the Japanese by Yoshitaka Maeda. Springer, Berlin (1989) Ono, K., Skinner, C.: Non-vanishing of quadratic twists of modular \(L\)-functions. Invent. Math. 134(3), 651–660 (1998) Rankin, R.A.: The scalar product of modular forms. Proc. Lond. Math. Soc. 3(2), 198–217 (1952) Waldspurger, J.-L.: Sur les coefficients de Fourier des formes modulaires de poids demi-entier. J. Math. Pures Appl. (9) 60(4), 375–484 (1981) Westerholt-Raum, M.: Products of vector valued Eisenstein series. Forum Math. 29(1), 157–186 (2017) Open access funding provided by Chalmers University of Technology. Institutionen för Matematiska vetenskaper, Chalmers tekniska högskola och Göteborgs Universitet, 412 96, Göteborg, Sweden. Martin Raum & Jiacheng Xia. Correspondence to Martin Raum. The first author was partially supported by Vetenskapsrådet Grant 2015-04139. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Raum, M., Xia, J. All modular forms of weight 2 can be expressed by Eisenstein series. Res. number theory 6, 32 (2020). https://doi.org/10.1007/s40993-020-00207-z Central values of \({\mathrm {L}}\)-functions Vector-valued Hecke operators Products of Eisenstein series Mathematics Subject Classification Primary 11F11 Secondary 11F67
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지), Asian Australasian Association of Animal Production Societies (아세아태평양축산학회). Cloning of OLR1 Gene in Pig Adipose Tissue and Preliminary Study on Its Lipid-accumulating Effect. Sun, Chao (College of Animal Science and Technology, Northwest A&F University); Liu, Chun-wei (College of Animal Science and Technology, Northwest A&F University); Zhang, Zhong-pin (College of Animal Science and Technology, Northwest A&F University). https://doi.org/10.5713/ajas.2009.90121 In this study we cloned and characterized a novel lipid-accumulating gene, the oxidized low-density lipoprotein receptor 1 (OLR1), which is associated with lipogenesis. We analyzed the gene structure and measured mRNA expression levels in pig adipose tissue at different months of age (MA) and in different economic types (lean type and obese type) using real-time fluorescence quantitative PCR. The OLR1 expression profile across pig tissues was also analyzed. Finally, we studied the correlation between OLR1 and lipid metabolism-related genes, including peroxisome proliferator-activated receptor γ2 (PPARγ2), fatty acid synthetase (FAS), triacylglycerol hydrolase (TGH), CCAAT/enhancer binding protein α (C/EBPα) and sterol regulatory element binding protein-1c (SREBP-1c). Results indicated that the pig OLR1 gene exhibited the highest homology with cattle (84%) and the lowest with the mouse (27%). The signal peptide was located at amino acids 38 to 60, and the domain from amino acids 144 to 256 was shared with the C-type lectin family. The expression level of OLR1 in pig lung was markedly higher than in the other tested tissues (p<0.01). In pig adipose tissue, the expression level of OLR1 mRNA increased significantly with growth (p<0.01). The expression level of OLR1 mRNA in obese-type pigs was significantly higher than that of lean-type pigs of the same age in months (p<0.05). In adipose tissue, the expression of OLR1 correlated with PPARγ2, FAS and SREBP-1c, but not with TGH or C/EBPα. In conclusion, OLR1 was highly associated with fat deposition, and its transcription, as suggested by the high correlations, was possibly regulated by PPARγ2 and SREBP-1c. Keywords: OLR1; Pig; Lipid Metabolism; Adipose Tissue
Why does capacitor voltage lag current? So far I've established the following: Current is the movement of charge over time, measured in coulombs/second. Charge is a property carried by particles such as electrons, measured in coulombs. Voltage is the potential difference between two points and the energy per unit of charge. Still, I don't understand why this happens: How is it possible that at time t=0 the current is present in an RC circuit without the potential difference? What caused the charge to flow in the first place? voltage capacitor current Shady Programmer \$\begingroup\$ How is it possible that at t=0 current is present without voltage? Well, remember that what is plotted is the voltage across the capacitor, not the voltage across the resistor. In fact, there is voltage across the resistor! For a resistor, current can only be present if voltage is simultaneously across the resistor; for a capacitor, this isn't always true. You can have current without voltage, positive current with positive voltage, or even positive current with negative voltage (depending, of course, on what the capacitor is connected to). \$\endgroup\$ – Zulu May 5 '15 at 17:47 \$\begingroup\$ So in the beginning the voltage is basically everywhere in the circuit except for the capacitor? That seems too sketchy for my liking \$\endgroup\$ – Shady Programmer May 5 '15 at 22:38 \$\begingroup\$ is that so hard to believe? Imagine, if you will, that you initially have 0V on the capacitor, 0V on the voltage source, and 0V on the resistor. Suddenly the voltage source pops up to 1V, and proceeds to oscillate as a cosine. For a moment, right at the start, there was (and must have been) 0V across the capacitor, because its voltage couldn't change instantaneously (doing so would require infinite current). Therefore, for that moment, there was 1V across the resistor. So, yes, for that moment, there's voltage everywhere except the capacitor. \$\endgroup\$ – Zulu May 5 '15 at 22:49 The picture in your question assumes that the voltage waveform started some time earlier and that the transient of it beginning is no longer affecting things. Basically Q=CV, and this translates to I = C dv/dt; if you apply a sinewave, the derivative of that sinewave voltage gives rise to the cosine wave of current. But of course at t=0 things are a little different; for a start, you can't suddenly start a sinewave from rest - that would imply infinite bandwidth. Given this fact, there is a small finite time in which the current rapidly ramps up to the starting value in your picture. From then on it pretty much follows the equation given above. EDIT section, mechanical analogy A mechanical analogy is a flywheel, i.e. a rotating mass. The force applied to the flywheel will accelerate the speed at which the flywheel rotates, but when the flywheel (assumed lossless) is at constant speed, no force is needed. You can imagine the flywheel speed like voltage; the flywheel has charged up to speed n and there is no longer any force needed to keep it charged at that speed. Just like a capacitor, once charged to a constant voltage there is no current needed to keep a perfect capacitor at that voltage. 
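(A quick numerical check of the \$I = C\,dv/dt\$ relation above - this sketch and its component values are illustrative assumptions of mine, not something from the answer; the flywheel analogy continues right below it. Differentiating a sine voltage numerically gives a cosine current, i.e. a current that peaks a quarter cycle before the voltage does.)

```python
import numpy as np

# Assumed, illustrative values: 1 uF capacitor, 50 Hz sine with 1 V peak,
# already in sinusoidal steady state.
C = 1e-6                      # farads
f = 50.0                      # hertz
w = 2 * np.pi * f             # angular frequency in rad/s

t = np.linspace(0.0, 2 / f, 2001)    # two full cycles
v = np.sin(w * t)                    # capacitor voltage
i = C * np.gradient(v, t)            # i = C * dv/dt, numerical derivative

print("v at t=0:", v[0], "V")                          # 0 V
print("i at t=0:", i[0], "A")                          # ~ C*w, the maximum
print("theoretical peak current C*w*Vpeak:", C * w, "A")
# The current is (very nearly) a cosine: it peaks where the voltage crosses
# zero, so the current leads the voltage by 90 degrees.
```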
However, if you apply a constant force to decelerate the flywheel, the speed decreases linearly, and if the force is a true constant force, the flywheel speed will pass through n=0 and start rotating in the opposite direction after a little while. Force is -X and speed ramps down linearly. Ditto with the capacitor: if you draw a constant current from the capacitor, the voltage falls linearly, eventually becomes negative, and the capacitor charges up to a negative voltage. Andy aka \$\begingroup\$ The current, as I explained in my answer, is C dv/dt. Slightly before or after t=0 the voltage will have values that imply a ramp and this ramp in voltage gets differentiated to a near-constant value that is the current. \$\endgroup\$ – Andy aka May 5 '15 at 13:38 \$\begingroup\$ @ShadyProgrammer the terms leading and lagging are just conveniences; one doesn't come before the other but it's convenient sometimes to think this way. There is no theory of leading and lagging. \$\endgroup\$ – Andy aka May 5 '15 at 14:17 \$\begingroup\$ @ShadyProgrammer, the instantaneous voltage across a capacitor is not dependent on the current through it at that instant but, rather, on the history of the current through it. Also, it is important to distinguish between AC analysis (sinusoidal steady state) and transient analysis. Only in AC analysis can we say the voltage lags the current by 90 degrees. \$\endgroup\$ – Alfred Centauri May 5 '15 at 14:18 \$\begingroup\$ @ShadyProgrammer, You asked "does a capacitor in some weird way track history of what is going on through it ?". Yes, but it's not weird. Just look at the integral form of the capacitor equation. \$V(t) = \frac{1}{C}\int_{-\infty}^{t}I(t') \mathrm{d}t'\$. \$\endgroup\$ – The Photon May 5 '15 at 15:59 \$\begingroup\$ @ShadyProgrammer, as an analogy, think of the pressure associated with a balloon filling with air. The pressure at any instant does not depend on the flow of air into (or out of) the balloon at that instant but, rather, on the amount of air in the balloon at that instant. But the amount of air in the balloon at any instant depends on the history of the flow of air. Similarly, the voltage across a capacitor depends on the charge Q separated on the plates. But the separated charge Q depends on the history of the flow of charge (current). \$\endgroup\$ – Alfred Centauri May 5 '15 at 16:47 First, note that your waveform shows what happens in the sinusoidal steady state. This implies that the voltage and current have been stable sinusoids for all time. So there's no "in the first place" in your graph. The reason there's a current at t = 0 is that the voltage is changing at t = 0. To get the voltage to start rising, you need to be pumping charge onto the plates of the capacitor. I think you're trying to apply DC thinking to an AC circuit. The voltage might be zero at t = 0, but its first derivative is not. That derivative has physical significance! It's what really matters to the capacitor. Adam Haun I asked myself such questions in the late 70's when I was studying the subject of Theoretical Electrical Engineering... where they unsuccessfully tried to "explain" this phenomenon to me through strict definitions. I remember that what I could not imagine was why, as the current went down, the voltage on the capacitor kept rising. Many years later, in an interesting conversation with my former students and followers… and with the help of the hydraulic analogy, I finally managed to figure out what was really going on... 
After the main question of why the voltage across a capacitor lags the current through it, another logical question arises: "And why is this lag exactly 90 deg for a single capacitor and less than 90 deg in an RC circuit?" Here are possible intuitive explanations (such as I would have liked to hear years ago). 1. Single capacitor. I have come to the conclusion that textbooks fail to explain the phase shift between the current and voltage since they consider the case of a voltage-supplied capacitor. But this arrangement (an AC voltage source directly drives a capacitor) is fundamentally incorrect (like the case where a voltage source directly drives a diode)... although it is still used to make a voltage-to-current differentiator. But what is more important to us is that this arrangement is not suitable for an intuitive explanation of what is happening. The dual arrangement - a current-supplied capacitor - can help us easily explain why voltage lags the current by exactly 90 deg. In this arrangement, an AC current source drives the capacitor, which now acts as a current-to-voltage integrator. "Current source" means that it produces and passes a sinusoidal current through the capacitor regardless of everything else. No matter what the voltage across the capacitor is - zero (empty capacitor), positive (charged capacitor) or even negative (reverse charged capacitor), our current source will pass the desired current with the desired direction through the capacitor. So the voltage across the capacitor does not impede the current (it tries... but the current source compensates for it by increasing its internal voltage). As long as the input current is positive (imagine the positive half-sine wave), it charges the capacitor and its positive voltage continuously increases, regardless of the current's magnitude. The strangest thing here is that even when the current decreases to zero the voltage continues to increase to its maximum (my amazement in the past). Then the current changes its direction, and during the negative half-sine wave it charges the capacitor with the opposite polarity... and the magnitude of its negative voltage continuously increases regardless of the decreasing current magnitude. So, in this arrangement, the phase shift is constant and exactly 90 deg because of the ideal input current source that compensates for the voltage drop across the capacitor. Hydraulic analogy. The popular "water vessel analogy" ("electrical current - water flow" and "voltage - water level") can help us fully understand the phase-shift idea in an intuitive way. First half wave (0 - 180 deg): Imagine you fill a vessel with water and picture this process graphically. Choose half of the maximum water height as the zero level (ground) and begin gradually, in a sinusoidal manner, opening (in the interval 0 - 90 deg) and then closing (90 - 180 deg) the supply faucet. Note that even while you are closing the faucet (in the interval 90 - 180 deg), the water level will continue rising. It is strange that you are closing the faucet but the water continues rising. Finally, you have completely closed the faucet (zero current), but the level of the water will be at its maximum (maximum positive voltage). Second half wave (180 - 360 deg): At this point, you have to change the flow (current) direction to make the water level decrease. For this purpose, you can begin gradually opening and then closing another faucet at the bottom to draw the water (i.e., you draw current from the capacitor). But again, even while you are closing the faucet, the water level will continue falling. 
It is strange that you are closing the faucet but the water continues falling. Finally, you have completely closed the faucet (zero current), but the level of the water will be at its maximum negative (maximum negative voltage). So, the basic idea behind all kinds of such storing elements (named integrators) is: The sign of the output pressure-like quantity (voltage, water level, air pressure, etc.) can be changed only by changing the direction of the input flow-like quantity (current, water flow, air flow, etc.); it cannot be changed by changing the magnitude of the flow-like quantity. At the final point, the current is zero but the voltage is at its maximum; this gives the 90 deg phase shift on the graph. 2. RC circuit (voltage-to-voltage integrator). We have already realized that it is incorrect to drive a capacitor directly by a voltage source; it is better to drive it by a current source. For this purpose, let's connect a resistor between the voltage source and the capacitor to convert the input voltage to current; so, the resistor acts as a voltage-to-current converter. Thus we have built a current source from the input voltage source and the resistor. Let's now consider the circuit operation (I will do it electrically, but the hydraulic analogy of communicating vessels is an impressive way to do it as well). Imagine how the input voltage VIN changes in a sinusoidal manner. In the beginning, the voltage rapidly increases and the current I = (VIN - VC)/R flows from the input source through the resistor and enters the capacitor; the output voltage begins increasing lazily. After some time, the input voltage approaches the sine peak and then begins decreasing. But as long as the input voltage is higher than the voltage across the capacitor, the current continues flowing in the same direction. As above, it is strange that the input voltage decreases but the capacitor voltage continues increasing. Figuratively speaking, the two voltages "move" against each other... and finally meet. At this instant, the two voltages become equal; the current is zero and the capacitor voltage is at its maximum. The input voltage continues decreasing and becomes less than the capacitor voltage. The current changes its direction and begins flowing from the capacitor through the resistor into the input voltage source. It is very interesting that the capacitor acts as a voltage source that "pushes" current into the input voltage source acting as a load. Before, the source was a source and the capacitor was a load; now, the source is a load and the capacitor is a source… The moment when the two voltages become equal and the current changes its direction is the moment of the maximum output voltage. Note that it depends on the rate of change (the frequency) of the input voltage: the higher the frequency, the lower the maximum voltage across the capacitor... the later that moment occurs... and the bigger the phase shift between the two voltages. In the limit of very high frequency, the voltage across the capacitor cannot move away from ground... and the moment of current direction change is when the input voltage crosses zero (the situation is similar to the case of the current-supplied capacitor). So, in this arrangement, the phase shift varies from zero to 90 deg as the frequency varies from zero to infinity. This is because of the imperfect input current source, which cannot neutralize the voltage drop across the capacitor. 
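(Another illustrative sketch, not part of the answer above: the frequency dependence just described can be reproduced with a crude simulation. The values R = 1 kOhm and C = 1 uF, the forward-Euler integration and the phase-estimation trick are all assumptions of mine; the analytic result for this RC divider is a lag of arctan(wRC), approaching 90 deg at high frequency.)

```python
import numpy as np

R, C = 1e3, 1e-6                      # assumed values: 1 kOhm, 1 uF -> RC = 1 ms
for f in (10.0, 159.0, 10_000.0):     # hertz; the corner 1/(2*pi*RC) is ~159 Hz
    w = 2 * np.pi * f
    dt = 1.0 / (2000 * f)             # small step compared with the period
    t = np.arange(0.0, 20 / f, dt)    # 20 cycles, so the start-up transient dies out
    vin = np.sin(w * t)
    vc = np.zeros_like(t)
    for n in range(1, len(t)):        # forward Euler for dVc/dt = (Vin - Vc)/(R*C)
        vc[n] = vc[n - 1] + dt * (vin[n - 1] - vc[n - 1]) / (R * C)

    last = t >= t[-1] - 5 / f         # look only at the last ~5 cycles (steady state)
    phase = lambda x: np.angle(np.sum(x[last] * np.exp(-1j * w * t[last])))
    lag = np.degrees(phase(vin) - phase(vc))
    print(f"{f:8.0f} Hz: Vc lags Vin by {lag:5.1f} deg "
          f"(arctan(wRC) = {np.degrees(np.arctan(w * R * C)):5.1f} deg)")
```

At low frequency the capacitor has time to follow the source, so the lag is small; at high frequency it barely moves and the lag approaches the single-capacitor value of 90 deg.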
If we want the phase shift between current and voltage in the RC circuit to be exactly 90 deg regardless of frequency (as in the case of a single capacitor), we should somehow compensate for the voltage across the capacitor. This is done by the operational amplifier in the circuit of the op-amp inverting integrator. It makes its output voltage equal to the voltage drop across the capacitor and adds it in series. The result is zero voltage (the so-called virtual ground). Circuit fantasist \$\begingroup\$ +1 for 4.5 years delay \$\endgroup\$ – muyustan Dec 1 '19 at 15:45 \$\begingroup\$ Such fundamental ideas are eternal and it's never too late to find an explanation for them... \$\endgroup\$ – Circuit fantasist Dec 1 '19 at 19:25 \$\begingroup\$ don't get me wrong please, I am just joking \$\endgroup\$ – muyustan Dec 1 '19 at 20:20 To answer your question, let's start with a simple DC source e.g. a battery. Just when you turn on the circuit, the schematic appears like this: Capacitor is like a hungry child and someone is serving him cookies on a plate. You are trying to measure his eating speed by monitoring his plate, which is a wrong plan because initially when the child is very hungry, you will see an empty plate. But as his stomach gets full, his eating speed will become zero and you will see a full plate. That's the case with the capacitor. Initially, there will be a large current through the capacitor, essentially making it equivalent to a short circuit. Assuming the wire to be of negligible resistance, you are essentially putting your probes together, which will give you a zero voltage reading. Now let the circuit sit for a while till the capacitor gets charged. Now the equivalent circuit looks somewhat like: It's an open circuit now with zero current flow (ideally). Now you will be able to measure the true charging voltage (5V). Now coming to your doubt, initially at t = 0, there was a potential source which made the electrons move. However, the current was moving so rapidly through the capacitor that you were unable to measure a potential drop across it. At this point you might think, where did that potential go? Well, say you are using a 5V battery along with an ideal capacitor of zero resistance. The potential drop will occur across the internal resistance of the battery, giving you this scenario: Well, again you are putting the probes together at t = 0 and hence you will get zero voltage. You simply can't measure any voltage this way at t = 0. So, how can anyone measure it: 1) Impossible way - Split the battery into two components - an ideal battery and a resistor equivalent to the internal resistance - and put the probes across the resistor. This will give you the battery potential at t= 0. 2) Possible way - Usually internal resistance is small. Take a bigger resistor and put it in series with the capacitor and measure the voltage across that resistor. At t= 0, this will give you almost the battery potential. Almost, because some potential drop is across the internal resistor as well. After a long time though, the current will diminish to zero and the circuit will essentially be open. In an open circuit no current flows, so the resistors drop no voltage, and hence the circuit becomes equivalent to the initial charged circuit where you can measure all the battery potential across the capacitor. Whiskeyjack \$\begingroup\$ "Now coming to your doubt, initially at t = 0, there was a potential source which made the electrons move. 
However, the current was moving so rapidly through the capacitor that you were unable to measure a potential drop across it." From that I've just got this epiphany that initially there is no voltage on the capacitor because the amount of charge leaving the capacitor on one plate is the same as the amount of charge arriving on the other plate, and then the voltage starts building up when there is more and more charge on one plate in comparison to the other one. Am I right or am I right? – Shady Programmer May 6 '15 at 13:30

@Shady - I think you are right, but this case holds for the capacitor only, I guess. Considering a resistor, an equal number of charges reach one end and leave the other end. Going by your logic, there shouldn't be a potential drop across a resistor. However, this isn't true. We both know that. – Whiskeyjack May 6 '15 at 18:04

I've researched, I've thought about it long and hard, and found an answer to that. Voltage is a broad term that describes a couple of things: it is an electromotive force, a potential difference, an electrical potential energy per unit of charge. What we've done is we've talked about the potential difference between capacitor plates, measured in volts. Considering a resistor, we're now talking about the electrical potential energy of EACH CHARGE unit that is "LOST" (converted to heat), also measured in volts. – Shady Programmer May 8 '15 at 20:24

It is because of this broad terminology that we can talk about an excess of charge on one plate of a capacitor in comparison to the other plate and call it "VOLTAGE over capacitor", and also talk about the transfer of energy (in joules) that each coulomb dissipates in a resistor (V = J/C) and call it "VOLTAGE over resistor". Now can a brother get an AMEN or are my findings flawed? – Shady Programmer May 8 '15 at 20:54

I think the main point here is that the notion of voltage lagging current by 90 deg is a theoretical best case, and in practice the lag will be slightly less. In reality the connecting leads have some resistance, so the point at which the capacitor's voltage is zero will occur slightly later in time than the point at which the AC generator's output is zero. Hence there is a PD driving the current.

IanR

If you look back at the graph, the equation \$I = C\,dV/dt\$ does not equate a level to a level with a phase shift. It equates a level to a rate of change, or a slope. It requires current to change the voltage, and that's exactly what's happening in the graph. Instantaneous points are weird. Now that we've got the math out of the way, I'll also mention that you will never get that graph in real life. Real capacitors also have some inductance, which will smooth out the sharp transition at the beginning, assuming \$V = I = 0\$ to start.

AaronD

A capacitor needs current to develop voltage, so there must first be current before there is voltage. Current leads voltage (no pun intended); voltage lags current. Just trying to visualize it intuitively.

chintu

Your answer is like saying "It happens this way because that's the way it works" – Shady Programmer May 8 '15 at 20:59

To me, the answer to this question is very intuitive. Notwithstanding the math, it is really very simple if reduced to what happens with a capacitor in a DC circuit. If you connect a battery to a capacitor, current must flow into the capacitor to charge it up.
If the capacitor is not charged, then the voltage across the capacitor is zero before it is connected to the battery. The instant (and when I say instant, I mean an infinitely small point in time) the battery is connected to the capacitor, the battery begins to charge the capacitor, but the capacitor does not charge up to the battery's voltage instantly. No matter the value of the capacitor, it takes some time for this to happen. It is very quick for a small value of capacitance and longer for a large value of capacitance, but no matter the size of the capacitor, it takes some amount of time. The current is initially large, but as the voltage across the capacitor approaches the battery voltage, the current falls, until such time as the capacitor is fully charged. Thus, the voltage is behind (lagging) the current. When the capacitor is charged to the battery's voltage, for a perfect capacitor the current is zero; for a real-world capacitor in good working order, the current is extremely small.

Think about what would happen if you connected a 100,000 µF capacitor across a 12 volt power source. If you do that, you had better connect it through a resistor to limit the current to a safe value, or have a power source with a very large power capacity. When first connected, the capacitor would be almost a dead short, and the current would be limited only by the value of the resistor. When the capacitor is charged to 12 volts, the current will become almost zero for a good-quality capacitor. That is why large broadcast transmitters charge their oil-filled rectifier power-supply capacitors through a resistor of appropriate value, which is shorted out by a contactor once the capacitor is fully charged (typically about 1 second after the power supply is turned on).

This is more complicated to visualize in an AC circuit, but it works exactly the same way; the math just becomes more complicated. But this is why a capacitor bank, shunted across an AC power line, can provide reactive power for voltage support when the line has inductive loads. Right after the sine wave begins to move back toward zero, the capacitor voltage, still building up almost 90 degrees behind the power line's waveform, begins to discharge its energy to support the power line's voltage. Without such capacitor banks, our power system would be very inefficient. Hope this helps your understanding.

Eric

I like to think of it as follows: capacitors are basically two plates isolated by a dielectric. To have a voltage between the two plates you should charge them first, and to charge them you must source current, so the voltage between the plates is like a response to the current you supplied. Inductors, on the other hand, behave according to Lenz's law. Voltage is tied to the electric field, so when you apply a voltage (electric field) to a winding you make a current flow in it, which leads us to think that the current in an inductor is like a response to the applied voltage.

Pedro Quadros

Capacitors - okay, that seems fair enough and straightforward, but why is there a delay in the response to the current? As soon as charge flows onto one plate of the capacitor, the potential between the plates changes AS SOON as each particle arrives on the plate, so where does the lag come from? – Shady Programmer May 6 '15 at 10:54

Like you, I don't know what really happens inside it. All I'm doing is guessing based on physics theory.
I suppose the delay is due to the time the other plate takes to react. When you start charging a capacitor, you first charge one plate with V+. The other side, in response (due to the electric field), is charged with V-, but I suppose it's not instantaneous, due to the dielectric properties. – Pedro Quadros May 6 '15 at 13:21

Now, supposing it really has a delay, let's see what happens with AC. First one side (plate 1) is charged with V+. Plate 2, at the first instant, is neutral; after the delay it is V- (and so we have the maximum voltage between them). When plate 1 is V-, it again takes some time for plate 2 to become V+ (maximum voltage on the capacitor). Again, as I said, all I'm doing is supposition, and I hope, like you, to get it clarified by someone who properly knows the subject. – Pedro Quadros May 6 '15 at 13:28

tf.uni-kiel.de/matwis/amat/elmat_en/kap_3/backbone/r3_7_2.html As shown by the link, the greater the dielectric constant, the greater the refractive index of the material. We also know that the greater the refractive index, the lower the speed of a wave in it. So, since the dielectric between the plates has a higher refractive index than copper, we expect a delay when a wave crosses it. – Pedro Quadros May 6 '15 at 13:42
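To make the DC charging story in the answers above concrete, here is a small sketch (my own illustration, with arbitrary component values) of the classic transient: charging a capacitor from a battery through a series resistor, the capacitor voltage rises as V(t) = V0*(1 - e^(-t/RC)) while the current I(t) = (V0 - V(t))/R starts large and decays toward zero, which is exactly the "current first, voltage later" behaviour being described.

```python
import numpy as np

# Battery charging a capacitor through a series resistor (values are arbitrary):
# V0 = 12 V, R = 100 ohm, C = 1000 uF  ->  time constant RC = 0.1 s.
V0, R, C = 12.0, 100.0, 1000e-6
tau = R * C

for t in (0.0, 0.5 * tau, tau, 2 * tau, 5 * tau):
    v_cap = V0 * (1 - np.exp(-t / tau))   # capacitor voltage builds up gradually
    i = (V0 - v_cap) / R                  # current is largest at t = 0, then decays
    print(f"t = {t:5.3f} s   Vc = {v_cap:6.3f} V   I = {1000 * i:7.2f} mA")
```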
Plenary Wed-1: Laplace Lecture (Tony Cai)
Plenary Wed-2: Public Lecture (Young-Han Kim)
Plenary Wed-3: Wald Lecture 2 (Martin Barlow)
Plenary Wed-4: IMS Medallion Lecture (Gerard Ben Arous)
Invited 05: Recent Advances in Shape Constrained Inference (Organizer: Bodhisattva Sen)
Invited 06: Optimization in Statistical Learning (Organizer: Garvesh Raskutti)
Invited 01: Conformal Invariance and Related Topics (Organizer: Hao Wu)
Invited 14: Optimal Transport (Organizer: Philippe Rigollet)
Invited 21: Probabilistic Theory of Mean Field Games (Organizer: Xin Guo)
Invited 35: Stochastic Analysis in Mathematical Finance and Insurance (Organizer: Marie Kratz)
Invited 40: KSS Invited Session: Nonparametric and Semi-parametric Approaches in Survival Analysis (Organizer: Woncheol Jang)
Invited 03: Potential Theory for Non-local Operators and Jump Processes (Organizer: Panki Kim)
Invited 10: Change-point Problems for Complex Data (Organizer: Claudia Kirch)
Invited 12: Statistics for Data with Geometric Structure (Organizer: Sungkyu Jung)
Invited 25: Random Graphs (Organizer: Christina Goldschmidt)
Invited 36: Problems and Approaches in Multi-Armed Bandits (Organizer: Vianney Perchet)
Organized 09: Random Matrices and Infinite Particle Systems (Organizer: Hirofumi Osada)
Organized 18: Advanced Learning Methods for Complex Data Analysis (Organizer: Xinlei Wang)
Organized 27: Bayesian Inference for Complex Models (Organizer: Joungyoun Kim)
Organized 28: Recent advances in Time Series Analysis (Organizer: Changryoung Baek)
Organized 03: Gaussian Processes (Organizer: Naomi Feldheim)
Organized 20: Theories and Applications for Complex Data Analysis (Organizer: Arlene K.H. Kim)
Organized 29: Sequential Analysis and Applications (Organizer: Alexander Tartakovsky)
Contributed 29: Spatial Data Analysis
Contributed 13: Random Structures
Contributed 20: Copula Modeling
Contributed 26: Multivariate Data Analysis
Contributed 31: Statistical Prediction
Contributed 03: Numerical Study of Stochastic Processes / Stochastic Interacting Systems
Contributed 08: Study of Various Distributions
Contributed 12: Optimal Transport
Contributed 27: Machine Learning / Structural Equation
Poster II-1: Poster Session II-1
Poster II-2: Poster Session II-2

Plenary Wed-1: Laplace Lecture (Tony Cai)
Jul 20 Tue, 8:00 PM — 9:00 PM EDT

Transfer Learning: Optimality and adaptive algorithms
Tony Cai (University of Pennsylvania)

Human learners have the natural ability to use knowledge gained in one setting for learning in a different but related setting. This ability to transfer knowledge from one task to another is essential for effective learning. In this talk, we consider statistical transfer learning in various settings with a focus on nonparametric classification based on observations from different distributions under the posterior drift model, which is a general framework and arises in many practical problems. We first establish the minimax rate of convergence and construct a rate-optimal weighted K-NN classifier. The results characterize precisely the contribution of the observations from the source distribution to the classification task under the target distribution. A data-driven adaptive classifier is then proposed and is shown to simultaneously attain within a logarithmic factor of the optimal rate over a large collection of parameter spaces.
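For intuition only, here is a toy sketch of how a weighted K-NN rule might pool labeled source and target samples; the specific weighting (a single constant w_src for source neighbours) is a made-up illustration, not the rate-optimal weighting analyzed in the talk.

```python
import numpy as np

def weighted_knn_predict(x, X_src, y_src, X_tgt, y_tgt, k=15, w_src=0.3):
    """Vote over the k nearest neighbours of x, counting target-sample
    neighbours with weight 1 and source-sample neighbours with weight w_src.
    Labels are assumed to be 0/1; the weighting scheme is purely illustrative."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.concatenate([np.full(len(y_src), w_src), np.ones(len(y_tgt))])
    nn = np.argsort(np.linalg.norm(X - x, axis=1))[:k]   # indices of k nearest neighbours
    return int(np.sum(w[nn] * (2 * y[nn] - 1)) > 0)      # weighted majority vote

# Toy data: source and target samples share the decision boundary x1 > 0.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(200, 2)); y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(size=(40, 2));  y_tgt = (X_tgt[:, 0] > 0).astype(int)
print(weighted_knn_predict(np.array([1.0, 0.0]), X_src, y_src, X_tgt, y_tgt))
```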
Runze Li (Pennsylvania State University)

Public Lecture (Young-Han Kim)
Jul 20 Tue, 9:00 PM — 10:00 PM EDT

Structure and Randomness in Data
Young-Han Kim (University of California at San Diego and Gauss Labs Inc.)

In many engineering applications ranging from communications and networking to compression and storage to artificial intelligence and machine learning, the main goal is to reveal, exploit, or even design structure in apparently random data. This talk illustrates the art and science of such information processing techniques through a variety of examples, with a special focus on data storage systems from memory chips to cloud storage platforms. Keywords: Information theory, noise, manufacturing, computer vision, distributed computing, probability laws.

Joong-Ho Won (Seoul National University)

Recent Advances in Shape Constrained Inference (Organizer: Bodhisattva Sen)
Jul 20 Tue, 10:30 PM — 11:00 PM EDT

Global rates of convergence in mixture density estimation
Arlene Kyoung Hee Kim (Korea University)

In this talk, we consider estimating a monotone decreasing density f_0 represented by a scale mixture of uniform densities. We first derive a general bound on the Hellinger accuracy of the MLE over convex classes. Using this bound together with an entropy calculation, we provide a different proof of the convergence of the MLE for d = 1. Then we consider a possible multidimensional extension. We can prove, for d ≥ 2, that the rate is as conjectured by Pavlides and Wellner under the assumption that the density is bounded from above and below and supported on a compact region. We are exploring strategies for weakening the assumptions.

Convex regression in multidimensions
Adityanand Guntuboyina (University of California Berkeley)

I will present results on the rates of convergence of the least squares estimator for multidimensional convex regression with polytopal domains. Our results imply that the least squares estimator is minimax suboptimal when the dimension exceeds 5. This is joint work with Gil Kur, Frank Fuchang Gao and Bodhisattva Sen.

Multiple isotonic regression: limit distribution theory and confidence intervals
Qiyang Han (Rutgers University)

In the first part of the talk, we study limit distributions for the tuning-free max-min block estimators in multiple isotonic regression under both fixed lattice design and random design settings. We show that at a fixed interior point in the design space, the estimation error of the max-min block estimator converges in distribution to a non-Gaussian limit at a certain rate depending on the number of vanishing derivatives and certain effective dimension and sample size that drive the asymptotic theory. The limiting distribution can be viewed as a generalization of the well-known Chernoff distribution in univariate problems. The convergence rate is optimal in a local asymptotic minimax sense. In the second part of the talk, we demonstrate how to use this limiting distribution to construct tuning-free pointwise nonparametric confidence intervals in this model, despite the existence of an infinite-dimensional nuisance parameter in the limit distribution that involves multiple unknown partial derivatives of the true regression function. We show that this difficult nuisance parameter can be effectively eliminated by taking advantage of information beyond point estimates in the block max-min and min-max estimators through random weighting.
Notably, the construction of the confidence intervals, even new in the univariate setting, requires no more effort than performing an isotonic regression once using the block max-min and min-max estimators, and can be easily adapted to other common monotone models. This talk is based on joint work with Hang Deng and Cun-Hui Zhang.

Bodhisattva Sen (Columbia University)

Optimization in Statistical Learning (Organizer: Garvesh Raskutti)

Statistical inference on latent network growth processes using the PAPER model
Min Xu (Rutgers University)

We introduce the PAPER (Preferential Attachment Plus Erdos--Renyi) model for random networks, in which we let a random network G be the union of a preferential attachment (PA) tree T and additional Erdos--Renyi (ER) random edges. The PA tree component captures the fact that real-world networks often have an underlying growth/recruitment process where vertices and edges are added sequentially, and the ER component can be regarded as random noise. Given only a single snapshot of the final network G, we study the problem of constructing confidence sets for the root node of the unobserved growth process, which can be patient zero in a disease infection network or the source of fake news in a social media network. We propose an inference algorithm based on Gibbs sampling that scales to networks of millions of nodes and provide theoretical analysis showing that the expected size of the confidence set is small so long as the noise level of the ER edges is not too large. We also propose variations of the model in which multiple growth processes occur simultaneously, reflecting the growth of multiple communities, and we use these models to derive a new approach to community detection.

Adversarial classification, optimal transport, and geometric flows
Nicolas Garcia Trillos (University of Wisconsin-Madison)

The purpose of this talk is to provide an explicit link between the three topics that form the talk's title, and to introduce a new perspective (more dynamic and geometric) to understand robust classification problems. For concreteness, we will discuss a version of adversarial classification where an adversary is empowered to corrupt data inputs up to some distance epsilon. We will first describe necessary conditions associated with the optimal classifier subject to such an adversary. Then, using the necessary conditions, we derive a geometric evolution equation which can be used to track the change in classification boundaries as epsilon varies. This evolution equation may be described as an uncoupled system of differential equations in one dimension, or as a mean curvature type equation in higher dimension. In one dimension we rigorously prove that one can use the initial value problem starting from epsilon = 0, which is simply the Bayes classifier, to solve for the global minimizer of the adversarial problem. Global optimality is certified using a duality principle between the original adversarial problem and an optimal transport problem. Several open questions and directions for further research will be discussed.

Capturing network effect via fused lasso penalty with application on shared-bike data

Given a dataset with network structures, one of the common research interests is to model nodal features accounting for network effects. In this study, we investigate shared-bike data in Seoul under a spatial network framework, focusing on the rental counts of each station. Our proposed method models rental counts via a generalized linear model with regularizations.
The regularization is made via a fused lasso penalty, which is devised to capture the network effect. In this model, parameters are posed in a station-specific manner. The fused lasso penalty terms are applied to the parameters associated with locationally nearby stations. This approach encourages parameters corresponding to neighboring stations to take the same value and accounts for the underlying network effect in a data-adaptive way. The proposed method shows promising results.

Garvesh Raskutti (University of Wisconsin-Madison)

Random Matrices and Infinite Particle Systems (Organizer: Hirofumi Osada)

Dynamical universality for random matrices
Hirofumi Osada (Kyushu University)

We establish an invariance principle corresponding to the universality of random matrices. More precisely, we prove dynamical universality of random matrices in the sense that, if random point fields $ \mu ^N $ of $ N $-particle systems describing eigenvalues of random matrices or log-gases with general self-interaction potentials $ V^N $ converge to some random point field $ \mu $, then the associated natural $ \mu ^N $-reversible diffusion processes, represented by solutions of stochastic differential equations (SDEs), converge to a $ \mu $-reversible diffusion process given by a solution of an infinite-dimensional stochastic differential equation (ISDE). Our results are general theorems and can be applied to various random point fields related to random matrices such as the sine, Airy, Bessel, and Ginibre random point fields. The representations of the finite-dimensional SDEs describing the $ N $-particle systems are very complicated in general. The limit ISDEs nevertheless have simple and universal representations, according to the class of random matrices (bulk, soft-edge, and hard-edge scaling). We thus prove ISDEs such that the infinite-dimensional Dyson model and the Airy, Bessel, and Ginibre interacting Brownian motions are universal dynamical objects. The key ingredients are (1) local uniform convergence of correlation functions to those of the limit point process, and (2) the uniqueness of a weak solution of the limit ISDE, from which the uniqueness of Dirichlet forms is deduced. Concerning (2), we use the results in [1] and [2]. [1] Hirofumi Osada, Hideki Tanemura, Infinite-dimensional stochastic differential equations and tail $\sigma$-fields, Probability Theory and Related Fields 177, 1137-1242 (2020). [2] Yosuke Kawamoto, Hirofumi Osada, Hideki Tanemura, Uniqueness of Dirichlet forms related to infinite systems of interacting Brownian motions, (online) Potential Anal.

Signal processing via the stochastic geometry of spectrogram level sets
Subhroshekhar Ghosh (National University of Singapore)

Spectrograms are fundamental tools in the detection, estimation and analysis of signals in the time-frequency analysis paradigm. The spectrogram of a signal (usually corrupted with noise) is the squared magnitude of its short time Fourier transform (STFT), which in turn is a generalised version of the classical Fourier transform, augmented with a window in the time domain. Signal analysis via spectrograms has traditionally explored their peaks, i.e. their maxima, complemented by a recent interest in their zeros or minima. In particular, recent investigations have demonstrated connections between Gabor spectrograms of Gaussian white noise and Gaussian analytic functions (abbrv. GAFs) in different geometries.
However, the zero sets (or the maxima or minima) of GAFs have a complicated stochastic structure, which makes a direct theoretical analysis of usual spectrogram-based techniques via GAFs a difficult proposition. These techniques, in turn, largely rely on statistical observables from the analysis of spatial data, whose distributional properties for spectrogram extrema are mostly understood only at an empirical level. In this work, we investigate spectrogram analysis via the stochastic, geometric and analytical properties of their level sets. We obtain theorems demonstrating the efficacy of a spectrogram level sets based approach to the detection and estimation of signals, framed in a concrete inferential set-up. Exploiting these ideas as theoretical underpinnings, we propose a level sets based algorithm for signal analysis that is intrinsic to given spectrogram data. We substantiate the effectiveness of the algorithm by extensive empirical studies. Our results also have theoretical implications for spectrogram zero based approaches to signal analysis. Based on joint work with Meixia Lin and Dongfang Sun.

Logarithmic derivatives and local densities of point processes arising from random matrices
Shota Osada (Kyushu University)

We discuss a distribution (generalized function) theory for point processes. We show that a logarithmic derivative in the distributional sense can indicate the local density of the point process. This theory is especially effective for point processes appearing in random matrix theory. In particular, using this result, we solve infinite-dimensional stochastic differential equations associated with point processes given by de Branges spaces, so-called integrable kernels, and random matrices such as the Airy, sine, and Bessel point processes. [2] Conventionally, the point process that describes an infinite particle system is described by the Dobrushin-Lanford-Ruelle (DLR) equation. The point processes of infinite particle systems appearing in random matrices have a logarithmic potential as the interaction potential. Because the logarithmic potential is not integrable at infinity, the DLR equation cannot describe the point process as it is. The logarithmic derivative for point processes is a concept introduced in [1] to settle this problem. There must be a logarithmic derivative and local density of the point process to solve the infinite-dimensional stochastic differential equation. [3] With our result, the existence of a logarithmic derivative with suitable integrability is sufficient for the construction of the stochastic dynamics as a solution of infinite-dimensional stochastic differential equations. [1] Hirofumi Osada, Infinite-dimensional stochastic differential equations related to random matrices, Probability theory and related fields, 2012, 153(3-4), 471--509. [2] Alexander I Bufetov, Andrey V Dymov, Hirofumi Osada, The logarithmic derivative for point processes with equivalent Palm measures, J. Math. Soc. Japan, 71(2), 2019, 451--469.

Stochastic differential equations for infinite particle systems of jump type with long range interactions
Hideki Tanemura (Keio University)

Infinite-dimensional stochastic differential equations (ISDEs) describing systems with an infinite number of particles are considered. Each particle undergoes a Levy process, and the interaction between particles is given by a long-range interaction potential, which is not only of Ruelle's class but also logarithmic. We discuss the existence and uniqueness of strong solutions of the ISDEs.
This talk is based on a collaboration with Shota Esaki (Fukuoka University).

Advanced Learning Methods for Complex Data Analysis (Organizer: Xinlei Wang)

Peel learning for pathway-related outcome prediction
Rui Feng (University of Pennsylvania)

Traditional regression models are limited in outcome prediction due to their parametric nature. Current deep learning methods allow for various effects and interactions and have shown improved performance, but they typically need to be trained on a large amount of data to obtain reliable results. Gene expression studies often have small sample sizes but high-dimensional correlated predictors, so that traditional deep learning methods are not readily applicable. In this talk, I present peel learning (PL), a novel neural network that incorporates the prior relationship among genes. In each layer of learning, the overall structure is peeled into multiple local substructures. Within each substructure, dependency among variables is reduced through linear projections. The overall structure is gradually simplified over layers and the weight parameters are optimized through a revised backpropagation. We applied PL to a small lung transplantation study to predict recipients' post-surgery primary graft dysfunction using donors' gene expressions within several immunology pathways, where PL showed improved prediction accuracy compared to conventional penalized regression, classification trees, feed-forward neural networks, and a neural network assuming a prior network structure. Through simulation studies, we also demonstrated the advantage of adding specific structure among predictor variables in a neural network over no or uniform group structure, which is more favorable in smaller studies. The empirical evidence is consistent with our theoretical proof of an improved upper bound on PL's complexity over ordinary neural networks.

Principal boundary for data on manifolds
Zhigang Yao (National University of Singapore)

We will discuss the problem of finding principal components for multivariate datasets that lie on an embedded nonlinear Riemannian manifold within a higher-dimensional space. Our aim is to extend the geometric interpretation of PCA while being able to capture the non-geodesic form of variation in the data. We introduce the concept of a principal sub-manifold, a manifold passing through the center of the data which, at any point, moves in the direction of the highest curvature in the space spanned by the eigenvectors of the local tangent space PCA. We show that the principal sub-manifold yields the usual principal components in Euclidean space. We illustrate how to find, use and interpret the principal sub-manifold, with which a classification boundary can be defined for data sets on manifolds.

Probabilistic semi-supervised learning via sparse graph structure learning
Li Wang (University of Texas at Arlington)

We present a probabilistic semi-supervised learning (SSL) framework based on sparse graph structure learning. Different from existing SSL methods with either a predefined weighted graph heuristically constructed from the input data or a learned graph based on the locally linear embedding assumption, the proposed SSL model is capable of learning a sparse weighted graph from the unlabeled high-dimensional data and a small amount of labeled data, as well as dealing with the noise of the input data.
Our representation of the weighted graph is indirectly derived from a unified model of density estimation and pairwise distance preservation in terms of various distance measurements, where latent embeddings are assumed to be random variables following an unknown density function to be learned, and pairwise distances are then calculated as expectations over this density for robustness to data noise. Moreover, the labeled data, through the same distance representations, are leveraged to guide the estimated density toward better class separation and sparse graph structure learning. A simple inference approach for the embeddings of unlabeled data based on point estimation and kernel representation is presented. Extensive experiments on various data sets show promising results in the SSL setting compared with many existing methods, and significant improvements with small amounts of labeled data.

Bayesian modeling for paired data in genome-wide association studies with application to breast cancer
Min Chen (University of Texas at Dallas)

Genome-wide association studies (GWAS) have emerged as a useful tool to identify common genetic variants that are linked to complex diseases. Conventional GWAS are based on the case-control design, where the individuals in cases and controls are independent. In cancer research, matched-pair designs, which compare tumor tissues with normal ones from the same subjects, are becoming increasingly popular. Such designs succeed in identifying somatic mutations in tumors while controlling for both genetic and environmental factors. Somatic variation is one of the most important cancer risk factors and contributes to continuous monitoring and early detection of various cancers. However, most GWAS analysis methods, developed for unrelated samples in case-control studies, cannot be employed in matched-pair designs. A novel framework is proposed in this manuscript to accommodate the particularities of matched data in association studies of somatic mutation effects. In addition, we develop a Bayesian model to combine multiple markers to further improve the power of mapping genome regions to cancer risks.

Xinlei Wang (Southern Methodist University)

Bayesian Inference for Complex Models (Organizer: Joungyoun Kim)

Nonparametric Bayesian latent factor model for multivariate functional data with covariate dependency
Yeonseung Chung (Korea Advanced Institute of Science and Technology (KAIST))

Nowadays, multivariate functional data are frequently encountered in many fields of science. While there exists a variety of methodologies for univariate functional clustering, approaches for multivariate functional clustering are less studied. Moreover, there is little research on functional clustering methods that incorporate additional covariate information. In this paper, we propose a Bayesian nonparametric sparse latent factor model for covariate-dependent multivariate functional clustering. Multiple functional curves are represented by basis coefficients for splines, which are reduced to latent factors. Then, the factors and covariates are jointly modeled using a Dirichlet process (DP) mixture of Gaussians to facilitate model-based, covariate-dependent multivariate functional clustering. The method is further extended to dynamic multivariate functional clustering to handle sequential multivariate functional data. The proposed methods are illustrated through a simulation study and applications to Canadian weather and air pollution data.
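As a very loose illustration of the general recipe (represent each curve by basis coefficients, then cluster the coefficients with a Dirichlet-process-style mixture), the sketch below substitutes a plain polynomial basis for the spline basis and uses scikit-learn's truncated DP Gaussian mixture; it omits the latent-factor and covariate-dependence parts of the actual model.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy data: two groups of noisy curves observed on a common grid.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
curves = np.vstack(
    [np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size) for _ in range(30)]
    + [np.cos(2 * np.pi * t) + 0.1 * rng.normal(size=t.size) for _ in range(30)]
)

# Step 1: reduce each curve to basis coefficients (a cubic polynomial stands in for splines).
coefs = np.array([np.polyfit(t, y, 3) for y in curves])

# Step 2: cluster the coefficients with a truncated Dirichlet process mixture of Gaussians.
dp = BayesianGaussianMixture(
    n_components=10,                                     # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(coefs)
print(dp.predict(coefs))                                 # cluster label for each curve
```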
Bayesian model selection for ultrahigh-dimensional doubly-intractable distributions
Jaewoo Park (Yonsei University)

Doubly intractable distributions commonly arise in many complex statistical models in physics, epidemiology, ecology, social science, and other disciplines. With an increasing number of model parameters, they often result in ultrahigh-dimensional posterior distributions; this is a challenging problem, and developing computationally feasible approaches is crucial. A particularly important application of ultrahigh-dimensional doubly intractable models is network psychometrics, which has received attention in item response analysis. However, its usual parameter estimation method, the maximum pseudo-likelihood estimator (MPLE) combined with the lasso, ignores the dependence structure and is therefore inaccurate. To tackle this problem, we propose novel Markov chain Monte Carlo methods that use Bayesian variable selection to identify strong interactions automatically. With our new algorithm, we address several inferential and computational challenges: (1) likelihood functions involve doubly-intractable normalizing functions, and (2) an increasing number of items can lead to ultrahigh dimensionality in the model. We illustrate the application of our approaches to challenging simulated and real item response data examples for which studying local dependence is very difficult. The proposed algorithm shows significant inferential gains over existing methods in the presence of strong dependence among items.

Post-processed posteriors for banded covariances
Kwangmin Lee (Seoul National University)

We consider Bayesian inference for banded covariance matrices and propose a post-processed posterior. The post-processing of the posterior consists of two steps. In the first step, posterior samples are obtained from the conjugate inverse-Wishart posterior, which does not satisfy any structural restrictions. In the second step, the posterior samples are transformed to satisfy the structural restriction through a post-processing function. The conceptually straightforward procedure of the post-processed posterior makes its computation efficient and can render interval estimators of functionals of covariance matrices. We show that it has nearly optimal minimax rates for banded covariances among all possible pairs of priors and post-processing functions. Furthermore, we prove that the expected coverage probability of the $(1-\alpha)100\%$ highest posterior density region of the post-processed posterior is asymptotically $1-\alpha$ with respect to a conventional posterior distribution. This implies that the highest posterior density region of the post-processed posterior is, on average, a credible set of a conventional posterior. The advantages of the post-processed posterior are demonstrated by a simulation study and a real data analysis.

Adaptive Bayesian inference for current status data on a grid
Minwoo Chae (Pohang University of Science and Technology)

We study a Bayesian approach to inference on an event time distribution in the current status model, where observation times are supported on a grid of potentially unknown sparsity and multiple subjects share the same observation time. The model leads to a very simple likelihood, but statistical inference is non-trivial due to the unknown sparsity of the grid.
In particular, for inference based on the maximum likelihood estimator, one needs to estimate the density of the event time distribution, which is challenging because the event time is not directly observed. We consider Bayes procedures with a Dirichlet prior on the event time distribution. With this prior, the Bayes estimator and credible sets can be easily computed via a Gibbs sampler algorithm. Our main contribution is a thorough investigation of the frequentist properties of the posterior distribution. Specifically, it is shown that the posterior convergence rate is adaptive to the unknown sparsity of the grid. If the grid is sufficiently sparse, we further prove a Bernstein-von Mises theorem which guarantees the frequentist validity of Bayesian credible sets. A numerical study is also conducted for illustration.

Joungyoun Kim (Yonsei University)

Recent advances in Time Series Analysis (Organizer: Changryoung Baek)

Resampling long-range dependent time series
Shuyang Bai (University of Georgia)

For time series exhibiting long-range dependence, inference through resampling is of particular interest since the asymptotic distributions are often difficult to determine statistically. On the other hand, due to the strong dependence and the non-standard scaling, designing versatile resampling strategies and establishing their validity is challenging. We shall introduce some progress in this direction.

Robust test for structural instability in dynamic factor models
Changryong Baek (Sungkyunkwan University)

In this paper, we consider a robust test for structural breaks in dynamic factor models. Our framework considers structural changes when the underlying high-dimensional time series is contaminated by some outlying observations, as is typically the case in many real applications such as fMRI, economics, and finance. We propose a test based on robust estimation of a vector autoregressive model for principal component factors using the minimum density power divergence estimator. A simulation study shows excellent finite-sample performance: higher power while achieving good size in all cases considered. Our method is illustrated on resting-state fMRI series to detect changes in brain connectivity. It shows that brain connectivity indeed changes even in the resting state, and that this is not an artifact of outlier effects.

On scaling in high dimensions
Gustavo Didier (Tulane University)

Scaling relationships have been found in a wide range of phenomena that includes coastal landscapes, hydrodynamic turbulence, the metabolic rates of animals and Internet traffic. For scale-invariant systems, also called fractals, a continuum of time scales contributes to the observed dynamics, and the analyst's focus is on identifying mechanisms that relate the scales, often in the form of exponents. In this talk, we will look into the little-explored topic of scale invariance in high dimensions, which is especially important in the modern era of "Big Data". We will discuss the role played by wavelets in the analysis of self-similar stochastic processes and visit recent contributions to the wavelet modeling of high- and multidimensional scaling systems. This is joint work with P. Abry (CNRS and ENS-Lyon), B.C. Boniece (Washington University in St Louis) and H. Wendt (CNRS and Université de Toulouse).
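For readers less familiar with scaling exponents, a minimal toy example (mine, not taken from these talks) estimates a Hurst-type exponent by the classical aggregated-variance heuristic: for a stationary series with Hurst index H, the variance of block means of size m decays roughly like m^(2H-2), so H can be read off a log-log regression.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64, 128)):
    """Aggregated-variance estimate of the Hurst exponent H.
    Assumes Var(block mean of size m) ~ m**(2H - 2); the slope of a
    log-log fit then gives H = 1 + slope/2.  A textbook heuristic only,
    not one of the estimators discussed in the session."""
    logm, logv = [], []
    for m in block_sizes:
        nblocks = len(x) // m
        block_means = x[: nblocks * m].reshape(nblocks, m).mean(axis=1)
        logm.append(np.log(m))
        logv.append(np.log(block_means.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(0)
print(hurst_aggvar(rng.normal(size=100_000)))   # white noise: estimate should be near 0.5
```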
Thresholding and graphical local Whittle estimation
Marie Duker (Cornell University)

The long-run variance matrix and its inverse, the so-called precision matrix, give, respectively, information about correlations and partial correlations between dependent component series of a multivariate time series around zero frequency. This talk will present non-asymptotic theory for estimation of the long-run variance and precision matrices for high-dimensional Gaussian time series under general assumptions on the dependence structure, including long-range dependence. The presented results for thresholded and penalized versions of the classical local Whittle estimator ensure consistent estimation in a possibly high-dimensional regime. The key technical result is a concentration inequality for the local Whittle estimator of the long-run variance matrix around the true model parameters. In particular, it simultaneously handles the estimation of the memory parameters which enter the underlying model.

Cotrending: testing for common deterministic trends in varying means model
Vladas Pipiras (University of North Carolina at Chapel Hill)

In a varying means model, the temporal evolution of a p-vector system is determined by p deterministic nonparametric functions superimposed with error terms, possibly dependent cross-sectionally. The basic interest is in linear combinations across the p dimensions that make the deterministic functions constant over time. The number of such linearly independent linear combinations is referred to as the cotrending dimension, and the space they span as the cotrending space. This work puts forward a framework to test statistically for the cotrending dimension and space. Connections to principal component analysis and cointegration are also considered. Finally, a simulation study assessing the finite-sample performance of the proposed tests and applications to several real data sets are provided.

Spatial Data Analysis

Wild bootstrap for high-dimensional spatial data
Daisuke Kurisu (Tokyo Institute of Technology)

This study establishes a high-dimensional CLT for the sample mean of p-dimensional spatial data observed over irregularly spaced sampling sites in R^d, allowing the dimension p to be much larger than the sample size n. We adopt a stochastic sampling scheme that can flexibly generate irregularly spaced sampling sites and include both pure increasing domain and mixed increasing domain frameworks. To facilitate statistical inference, we develop the spatially dependent wild bootstrap (SDWB) and justify its asymptotic validity in high dimensions by deriving error bounds that hold almost surely conditionally on the stochastic sampling sites. Our dependence conditions on the underlying random field cover a wide class of random fields such as Gaussian random fields and continuous autoregressive moving average random fields. Through numerical simulations and a real data analysis, we demonstrate the usefulness of our bootstrap-based inference in several applications, including joint confidence interval construction for high-dimensional spatial data and change-point detection for spatio-temporal data.

Lifting scheme for streamflow data in river networks
Seoncheol Park (Chungbuk National University)

In this presentation, we suggest a new multiscale method for analyzing water pollutant data located in river networks. The main idea of the proposed method is to adapt the conventional lifting scheme, reflecting the characteristics of streamflow data in the river network domain.
Due to the complexity of the data domain structure, it is difficult to apply the lifting scheme to streamflow data directly. To solve this problem, we propose a new lifting scheme algorithm for streamflow data that incorporates flow-adaptive neighborhood selection, flow-proportional weight generation, and flow-length-adaptive removal point selection. A nondecimated version of the proposed lifting scheme is also suggested. We will provide a simulation study and a real data analysis of water pollutant data observed in the Geum River basin in South Korea.

Optimal designs for some bivariate cokriging models
Subhadra Dasgupta (Indian Institute of Technology Bombay-Monash Research Academy)

This article focuses on the estimation and design aspects of a bivariate collocated cokriging experiment. For a large class of covariance matrices, a linear dependency criterion is identified which allows the best linear unbiased estimator of the primary variable in a bivariate collocated cokriging setup to reduce to a univariate kriging estimator. Exact optimal designs for efficient prediction for such simple and ordinary reduced cokriging models with one-dimensional inputs are determined. Designs are found by minimizing the maximum and integrated prediction variance, where the primary variable is an Ornstein-Uhlenbeck process. For simple and ordinary cokriging models with known covariance parameters, the equispaced design is shown to be optimal for both criterion functions. The more realistic scenario of unknown covariance parameters is addressed by assuming prior distributions on the parameter vector, thus adopting a Bayesian approach to the design problem. The equispaced design is proved to be the Bayesian optimal design for both criteria. The work is motivated by designing an optimal water monitoring system for an Indian river.

Yaeji Lim (Chung-Ang University)

Poster II-1: Poster Session II-1

Nonconstant error variance in generalized propensity score model
Doyoung Kim (Sungkyunkwan University)

In observational studies, the most salient challenge is to adjust for confounders in order to mimic a randomized experiment. In the setting of more than two treatment levels, several generalized propensity score (GPS) models have been proposed to balance covariates among treatment groups. Those models assume parametric forms for the treatment variable distribution, in particular with a constant variance assumption. In the presence of heteroskedasticity, the constant variance assumption might affect existing propensity score methods and the causal effect of interest. In this paper, we propose a novel GPS method to handle non-constant variance in the treatment model by extending Xiao et al. (2020) with a weighted least squares method. We conduct a set of simulation studies and show that the proposed method outperforms existing methods in terms of covariate balance and low bias in causal effect estimates.

Causal mediation analysis with multiple mediators of general structures
Youngho Bae (Sungkyunkwan University)

In assessing causal mediation effects, a challenge is that there can be more than one mediator on the pathways from treatment to outcome. More precisely, we do not know exactly how many mediators are in the causal path and how they relate to each other. A few approaches have been proposed to estimate direct and indirect effects in the presence of two causally independent or dependent mediators. However, those methods cannot be generalized to settings of more than two mediators where causally independent and dependent mediators coexist.
We propose a novel approach to identify direct and indirect effects under a general situation with multiple mediators: two causally dependent mediators (V, W) and one causally independent mediator (M). With our proposed sequential ignorability assumption, the overall treatment effect can be decomposed into direct and mediator-specific indirect effects. A sensitivity analysis strategy is developed for testing the proposed identifying assumptions. We can apply this method to pollution data; in other words, we may use this approach to estimate the effect of a particular emission control technology installed on power plants on ambient pollution, where power plant emissions are potential mediators.

A fuzzy clustering ensemble based Mapper algorithm
SungJin Kang (Chung-Ang University)

Mapper is a popular topological data analysis method for analyzing the structure of complex high-dimensional datasets. Since the Mapper algorithm can be applied to clustering and feature selection with visualization, it is used in various fields such as biology and chemistry. However, there are resolution parameters to be chosen before applying the Mapper algorithm, and the results are sensitive to this selection. In this paper, we focus on the selection of two resolution parameters: the number of intervals and the overlapping percentage. We propose a new parameter selection method for Mapper based on an ensemble technique. We generate multiple Mapper results under various parameters and apply a fuzzy clustering ensemble method to combine the results. Three real data sets are considered to evaluate Mapper algorithms, including the proposed one, and the results demonstrate the superiority of the proposed ensemble Mapper method.

Analysis of the association between suicide attempts and meteorological factors
Seunghyeon Kim (Chonnam National University)

Several studies indicate that there is an association between suicide and meteorological factors; in particular, an increase in ambient temperature increases the risk of suicide. Although suicide attempts are highly likely to lead to suicide in the future, there has been relatively little research on the relationship between suicide attempts and meteorological factors. We evaluated the association between suicide attempts and meteorological factors and examined gender and age differences. Method: We studied 30,012 people who attempted suicide and were hospitalized in the emergency rooms of medical institutions located in Seoul from January 1, 2014, to December 31, 2018. This information was provided by the National Emergency Department Information System data. Seven meteorological factors were studied: daily lowest temperature, highest temperature, average temperature, daily temperature difference, average relative humidity, sunshine duration, and average cloud cover in Seoul during the same period. Meteorological factors were categorized, and the daily Age-standardized Suicide Attempt Rate (per 100,000) (ASDAR) was defined for each category. Subgroup analysis by gender and age was done to explore the association between meteorological factors and suicide attempts. From 2014 to 2018, the ASDAR was 61.3. The ASDAR for women was 69.3 and for men 52.8, with the highest suicide attempt rate among those in their 20s. In terms of the seven meteorological factors, suicide attempts increased as the lowest temperature, the highest temperature, the average temperature, and the relative humidity increased.
Both genders showed an increase in suicide attempts as the lowest, highest, and average temperatures and the relative humidity increased, and the same trend held across all age groups except for women in their 20s. We found that the risk of suicide attempts increases as temperature and relative humidity increase. These results suggest that exposure to high temperatures can be a factor that induces suicide attempts.

Spectral clustering with the Wasserstein distance and its application
SangHun Jeong (Pusan National University)

Advances in modern automatic devices make it possible to collect a massive number of samples from the population of each individual subject. Although this development allows us to access the entire distributional structure for the population of each individual subject, traditional approaches tend to focus on detecting local features to recognize the pattern of the data. In this project, we consider the pattern recognition problem of classifying subject-specific distributions into a few categories after estimating those distributions. The suggested approach consists of a three-stage procedure: probability density estimation, dissimilarity computation, and clustering. Specifically, we use the kernel density estimator for the subject-specific distribution in the first stage. Then, we focus on the Wasserstein distance to account for the dissimilarity between these distributions, using the optimal transport map for the distance. Finally, we use this dissimilarity measure to build the Laplacian graph and conduct spectral clustering to deal with distributions contained not in Euclidean space but in some nonlinear space. We will demonstrate the benefit of spectral clustering with the Wasserstein distance through simulation studies and by applying the suggested method to real data.

Robust covariance estimation for partially observed functional data
Hyunsung Kim (Chung-Ang University)

In recent years, applications have emerged that produce partially observed functional data, where each trajectory is collected over individual-specific subinterval(s) within the whole domain of interest. Robustness to atypical partially observed curves is a practical concern, especially in the dimension reduction step through functional principal component analysis (FPCA). Existing studies have implemented FPCA by applying smoothing techniques to estimate the mean and covariance functions under an irregular functional data structure; however, this estimation is easily affected by outlying curves with heavy-tailed noise or spikes. In this study, we investigate a robust method for covariance estimation based on a bounded loss function, which enables us to obtain robust functional principal components for partially observed functional data. Using the functional principal component scores, we reconstruct the missing parts of the trajectories. Numerical experiments show that our method provides stable and robust estimates when the data contain atypical curves.

Fast Bayesian functional regression for non-Gaussian spatial data
Yeo Jin Jung (Yonsei University)

Functional generalized linear models (FGLM) have been widely used to study the relationship between non-Gaussian responses and functional covariates. However, most existing works assume independence among observations and therefore have limited applicability to correlated data.
A particularly important example is functional data with spatial correlation, where we observe functions over spatial domains, such as the age-population curve or temperature curve at each areal unit. In this paper, we extend FGLM by incorporating spatial random effects. However, such models pose computational and inferential challenges. The high-dimensional spatial random effects cause slow mixing of Markov chain Monte Carlo (MCMC) algorithms. Furthermore, spatial confounding can lead to bias in parameter estimates and inflate their variances. To address these issues, we propose an efficient Bayesian method using a sparse reparameterization of the high-dimensional random effects. Furthermore, we study an often-overlooked challenge in functional spatial regression: practical issues in obtaining credible bands of functional parameters and assessing whether they provide nominal coverage. We apply our methods to simulated and real data examples, including malaria incidence data and US COVID-19 data. The proposed method is fast while providing accurate functional estimates.

Wald Lecture 2 (Martin Barlow)
Jul 21 Wed, 6:00 AM — 7:00 AM EDT

Low dimensional random fractals

The behaviour of the random walk can often be described by two indices, called by physicists the 'fractal' and 'walk' dimensions, and denoted by d_f and d_w. This lecture will look at the tools which enable us to calculate these, and obtain the associated transition probability or heat kernel bounds. Three kinds of estimate are needed: (1) control of the size of balls, (2) control of the resistance across annuli, and (3) a smoothness result (a Harnack inequality). In the 'low dimensional case' the Harnack inequality is not needed, and (2) can be replaced by easier bounds on the resistance between points. Many random fractals of interest are low dimensional: examples include critical branching processes, the incipient infinite cluster (IIC) for percolation in high dimensions, and the uniform spanning tree. Critical percolation in d = 2, however, remains a challenge.

IMS Medallion Lecture (Gerard Ben Arous)

Random determinants and the elastic manifold
Gerard Ben Arous (New York University)

The elastic manifold is a paradigmatic representative of the class of disordered elastic systems. These are surfaces with rugged shapes resulting from a competition between random spatial impurities (preferring disordered configurations), on the one hand, and elastic self-interactions (preferring ordered configurations), on the other. The elastic manifold model is interesting because it displays a depinning phase transition and has a long history as a testing ground for new approaches in the statistical physics of disordered media, for example for fixed dimension by Fisher (1986) using functional renormalization group methods, and in the high-dimensional limit by Mézard and Parisi (1992) using the replica method. We study the energy landscape of this model and compute the (annealed) topological complexity both of total critical points and of local minima, in the Mézard-Parisi high-dimensional limit. Our main result confirms the recent formulas by Fyodorov and Le Doussal (2020). It gives the phase diagram and identifies the boundary between simple and glassy phases. Our approach relies on new exponential asymptotics of random determinants for non-invariant random matrices. This is joint work with Paul Bourgade and Benjamin McKenna (Courant Institute, NYU).
Arup Bose (Indian Statistical Institute)

Conformal Invariance and Related Topics (Organizer: Hao Wu)

Asymptotics of determinants of discrete Laplacians
Konstantin Izyurov (University of Helsinki)
The zeta-regularized determinants of Laplace-Beltrami operators play an important role in analysis and mathematical physics. We show that for Euclidean surfaces with conical singularities that are glued out of finitely many equal equilateral triangles or squares, these determinants appear in the asymptotic expansions of the determinants of discrete Laplacians, as the mesh size of a lattice discretization of the surface tends to zero. This establishes a particular case of a conjecture by Cardy and Peschel on the behavior of partition functions of critical lattice models, and their relation to partition functions of the underlying conformal field theories. Joint work with Mikhail Khristoforov.

On Loewner evolutions with jumps
Eveliina Peltola (Rheinische Friedrich-Wilhelms-Universität Bonn)
I discuss the behavior of Loewner evolutions driven by a Lévy process. Schramm's celebrated version (Schramm-Loewner evolution), driven by standard Brownian motion, has been a great success for describing critical interfaces in statistical physics. Loewner evolutions with other random drivers have been proposed, for instance, as candidates for finding extremal multifractal spectra, and for some tree-like growth processes in statistical physics. Questions on how the Loewner trace behaves, e.g., whether it is generated by a (discontinuous) curve, whether it is locally connected, tree-like, or forest-like, have been partially answered in the symmetric alpha-stable case. We shall consider the case of general Lévy drivers. Joint work with Anne Schreuder (Cambridge).

Extremal distance and conformal radius of a CLE_4 loop
Titus Lupu (Centre National de la Recherche Scientifique / Sorbonne Université)
Consider CLE_4 in the unit disk and the loop of the CLE_4 surrounding the origin. Schramm, Sheffield and Wilson determined the law of the conformal radius seen from the origin of the domain surrounded by this loop. We complement their result by determining the law of the extremal distance between the loop and the boundary of the unit disk. More surprisingly, we also compute the joint law of the conformal radius and the extremal distance. This law involves first and last hitting times of a one-dimensional Brownian motion. Similar techniques also allow us to determine joint laws of some extremal distances in a critical Brownian loop-soup cluster. This is joint work with Juhan Aru (EPFL) and Avelio Sepúlveda (Université Lyon 1 Claude Bernard).

Hao Wu (Yau Mathematical Sciences Center, Tsinghua University)

Optimal Transport (Organizer: Philippe Rigollet)

Density estimation and conditional simulation using triangular transport
Youssef Marzouk (Massachusetts Institute of Technology)
Triangular transformations of measures, such as the Knothe-Rosenblatt rearrangement, underlie many new computational approaches for density estimation and conditional simulation. This talk discusses two aspects of such constructions. First is the problem of estimating a triangular transformation given a sample from a distribution of interest—and hence, transport-driven density estimation. We present a general functional framework for representing monotone triangular maps between distributions, and analyze properties of maximum likelihood estimation in this framework.
We demonstrate that the associated optimization problem is smooth and, under appropriate conditions, has no spurious local minima. This result provides a foundation for a greedy semi-parametric estimation procedure. Second, we discuss a conditional simulation method that employs a specific composition of maps, derived from the Knothe-Rosenblatt rearrangement, to push forward a joint distribution to any desired conditional. We show that this composed-map approach reduces variability in conditional density estimates and reduces the bias associated with any approximate map representation. Moreover, this approach motivates alternative estimation objectives that focus on the removal of dependence. For context, and as a pointer to an interesting application domain, we elucidate links between conditional simulation with composed maps and the ensemble Kalman filter.

Estimation of Wasserstein distances in the spiked transport model
Jonathan Niles-Weed (Courant Institute of Mathematical Sciences, New York University)
We propose a new statistical model, the spiked transport model, which formalizes the assumption that two probability distributions differ only on a low-dimensional subspace. We study the minimax rate of estimation for the Wasserstein distance under this model and show that this low-dimensional structure can be exploited to avoid the curse of dimensionality. As a byproduct of our minimax analysis, we establish a lower bound showing that, in the absence of such structure, the plug-in estimator is nearly rate-optimal for estimating the Wasserstein distance in high dimension. We also give evidence for a statistical-computational gap and conjecture that any computationally efficient estimator is bound to suffer from the curse of dimensionality.

Statistical estimation of barycenters in metric spaces and the space of probability measures
Quentin Paris (National Research University Higher School of Economics)
The talk presents rates of convergence for empirical barycenters over a large class of geodesic spaces with curvature bounds in the sense of Alexandrov. We show that parametric rates of convergence are achievable under natural conditions that characterise the bi-extendibility of geodesics emanating from a barycenter. We show that our results apply to infinite-dimensional spaces such as the 2-Wasserstein space, where bi-extendibility of geodesics translates into regularity of Kantorovich potentials.

Philippe Rigollet (Massachusetts Institute of Technology)

Probabilistic Theory of Mean Field Games (Organizer: Xin Guo)

Portfolio liquidation games with self-exciting order flow
Ulrich Horst (Humboldt University Berlin)
We analyze novel portfolio liquidation games with self-exciting order flow. Both the $N$-player game and the mean-field game are considered. We assume that players' trading activities have an impact on the dynamics of future market order arrivals, thereby generating an additional transient price impact. Given the strategies of her competitors, each player solves a mean-field control problem. We characterize open-loop Nash equilibria in both games in terms of a novel mean-field FBSDE system with unknown terminal condition. Under a weak interaction condition we prove that the FBSDE systems have unique solutions. Using a novel sufficient maximum principle that does not require convexity of the cost function, we finally prove that the solutions of the FBSDE systems do indeed provide existence and uniqueness of open-loop Nash equilibria. This is joint work with Guanxing Fu and Xiaonyu Xia.
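As a pointer back to the triangular-transport abstract above (Marzouk), the Knothe-Rosenblatt construction it builds on can be summarized as follows; the notation ($T$, reference density $\eta$) is ours and this is a standard textbook sketch, not a formula quoted from the talk. A monotone lower-triangular map on $\mathbb{R}^d$ has the form

\[ T(x) = \big( T_1(x_1),\; T_2(x_1, x_2),\; \dots,\; T_d(x_1, \dots, x_d) \big), \qquad \partial_{x_k} T_k > 0, \]

and pulling a fixed reference density $\eta$ (e.g., the standard Gaussian) back through $T$ yields the density model

\[ \hat f(x) = \eta\big(T(x)\big) \prod_{k=1}^{d} \frac{\partial T_k}{\partial x_k}(x), \]

so transport-driven density estimation amounts to maximizing $\sum_i \log \hat f(x_i)$ over a parametrized family of monotone triangular maps.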
A mean-field game approach to equilibrium pricing in renewable energy certificate markets
Sebastian Jaimungal (University of Toronto)
Solar Renewable Energy Certificate (SREC) markets are a market-based system that incentivizes solar energy generation. A regulatory body imposes a lower bound on the amount of energy each regulated firm must generate via solar means, providing them with a tradeable certificate for each MWh generated. Firms seek to navigate the market optimally by modulating their SREC generation and trading rates. As such, the SREC market can be viewed as a stochastic game, where agents interact through the SREC price. We study this stochastic game by solving the mean-field game (MFG) limit with sub-populations of heterogeneous agents. Market participants optimize costs accounting for trading frictions, cost of generation, non-linear non-compliance costs, and generation uncertainty. Moreover, we endogenize the SREC price through market clearing. We characterize firms' optimal controls as the solution of McKean-Vlasov (MV) FBSDEs and determine the equilibrium SREC price. We establish the existence and uniqueness of a solution to this MV-FBSDE, and prove that the MFG strategies form an $\epsilon$-Nash equilibrium for the finite player game. Finally, we develop a numerical scheme for solving the MV-FBSDEs and conduct a simulation study.

Entropic optimal transport
Marcel Nutz (Columbia University)
Applied optimal transport is flourishing after computational advances have enabled its use in real-world problems with large data sets. Entropic regularization is a key method to approximate optimal transport in high dimensions while retaining feasible computational complexity. In this talk we discuss the convergence of entropic optimal transport to the unregularized counterpart as the regularization parameter vanishes, as well as the stability of entropic optimal transport with respect to its marginals. Based on joint works with Espen Bernton (Columbia), Promit Ghosal (MIT), and Johannes Wiesel (Columbia).

Xin Guo (University of California, Berkeley)

Stochastic Analysis in Mathematical Finance and Insurance (Organizer: Marie Kratz)

From signature based models in finance to affine and polynomial processes and back
Christa Cuchiero (University of Vienna)
Modern universal classes of dynamic processes, based on neural networks or signature methods, have recently entered the field of stochastic modeling, in particular in Mathematical Finance. This has opened the door to more data-driven and thus more robust model selection mechanisms, while first principles like no arbitrage still apply. We focus here on signature based models, i.e., (possibly Lévy driven) stochastic processes whose characteristics are linear functions of an underlying process' signature, and present methods for learning these characteristics from data. From a more theoretical point of view, we show how these new models can be embedded in the framework of affine and polynomial processes, which have been -- due to their tractability -- the dominating process class prior to the new era of highly overparametrized dynamic models. Indeed, we prove that generic classes of models can be viewed as infinite dimensional affine processes, which in this setup coincide with polynomial processes. A key ingredient to establish this result is again the signature process. This then allows one to obtain power series expansions for expected values of analytic functions of the process' marginals.
The talk is based on joint works with Guido Gazzani, Francesca Primavera, Sara Svaluto-Ferro and Josef Teichmann.

Optimal dividends with capital injections at a level-dependent cost
Ronnie Loeffen (University of Manchester)
Assume the capital or surplus of an insurance company evolves randomly over time as in the Cramér-Lundberg model, but where in addition the company has the possibility to pay out dividends to shareholders and to inject capital, at a cost, from shareholders. We impose that when the resulting surplus becomes negative, the company has to decide whether to inject capital to get to a positive surplus level in order for the company to survive, or to let ruin occur. The objective is to find the combined dividends and capital injections strategy that maximises the expected paid-out dividends minus the cost of injected capital, discounted at a constant rate, until ruin. Such optimal dividends and capital injections problems have been studied before, but in the case where the cost of capital (injections) is constant, whereas we consider the setting where the cost of capital is level-dependent in the sense that it is higher when the surplus is below 0 than when it is above 0. We investigate optimality of a 3-parameter strategy with parameters -r < 0 < c < b, where dividends are paid out to keep the surplus below b, and capital injections are made in order to keep the surplus above c unless capital drops below the level -r, in which case the company decides to let ruin occur. This is joint work with Zbigniew Palmowski.

Exponential Lévy-type change-point models in mathematical finance
Lioudmila Vostrikova (University of Angers)

Marie Kratz (ESSEC Business School, CREAR)

KSS Invited Session: Nonparametric and Semi-parametric Approaches in Survival Analysis (Organizer: Woncheol Jang)

Smoothed quantile regression for censored residual lifetime
Sangwook Kang (Yonsei University)
We consider a regression modeling of the quantiles of residual lifetime at a specific time given a set of covariates. For estimation of the regression parameters, we propose an induced smoothed version of the existing non-smooth estimating equation approaches. The proposed estimating equations are smooth in the regression parameters, so solutions can be readily obtained via standard numerical algorithms. Moreover, smoothness in the proposed estimating equations enables one to obtain a closed-form expression of the robust sandwich-type covariance estimator of the regression estimators. To handle data under right censoring, inverse probabilities of censoring are incorporated as weights. Consistency and asymptotic normality of the proposed estimator are established. Extensive simulation studies are conducted to verify the performance of the proposed estimator under various finite-sample settings. We apply the proposed method to dental study data evaluating the longevity of dental restorations.

Superefficient estimation of future conditional hazards based on marker information
Enno Mammen (Heidelberg University)
We introduce a new concept for forecasting future events based on marker information. The model is based on a nonparametric approach with counting processes featuring so-called high-quality markers. Despite the model having nonparametric parts, we show that we attain a parametric rate of uniform consistency and uniform asymptotic normality. In usual nonparametric scenarios reaching such a fast convergence rate is not possible, so one can say that our approach is superefficient.
We then use these theoretical results to construct simultaneous confidence bands directly for the hazard rate.

On a semiparametric estimation method for AFT mixture cure models
Ingrid Van Keilegom (Katholieke Universiteit Leuven)
When studying survival data in the presence of right censoring, it often happens that a certain proportion of the individuals under study do not experience the event of interest and are considered as cured. The mixture cure model is one of the common models that take this feature into account. It depends on a model for the conditional probability of being cured (called the incidence) and a model for the conditional survival function of the uncured individuals (called the latency). This work considers a logistic model for the incidence and a semiparametric accelerated failure time model for the latency part. The estimation of this model is obtained via the maximization of the semiparametric likelihood, in which the unknown error density is replaced by a kernel estimator based on the Kaplan-Meier estimator of the error distribution. Asymptotic theory for consistency and asymptotic normality of the parameter estimators is provided. Moreover, the proposed estimation method is compared with several competitors. Finally, the new method is applied to data coming from a cancer clinical trial.

Woncheol Jang (Seoul National University)

Gaussian Processes (Organizer: Naomi Feldheim)

Gaussian determinantal processes: a new model for directionality in data
Subhro Ghosh (National University of Singapore)
Determinantal point processes (DPPs) have recently become popular tools for modeling the phenomenon of negative dependence, or repulsion, in data. However, our understanding of an analogue of a classical parametric statistical theory is rather limited for this class of models. In this work, we investigate a parametric family of Gaussian DPPs with a clearly interpretable effect of parametric modulation on the observed points. We show that parameter modulation impacts the observed points by introducing directionality in their repulsion structure, and the principal directions correspond to the directions of maximal (i.e., the most long-ranged) dependency. This model readily yields a viable alternative to principal component analysis (PCA) as a dimension reduction tool that favors directions along which the data are most spread out. This methodological contribution is complemented by a statistical analysis of a spiked model similar to that employed for covariance matrices as a framework to study PCA. These theoretical investigations unveil intriguing questions for further examination in random matrix theory, stochastic geometry, and related topics. Based on joint work with Philippe Rigollet.

Persistence exponents of Gaussian stationary functions
Ohad Noy Feldheim (Hebrew University of Jerusalem)
Let $f:R \to R$ be a Gaussian stationary process, that is, a random function, invariant to real shifts, whose marginals have a multi-normal distribution. Persistence is the event that the process remains positive over the interval [0,T]. The asymptotics of this quantity as T tends to infinity has been studied since the early 50s, with motivation stemming from probability theory, physics and electrical engineering. In recent years, it has been discovered that persistence is best characterized in spectral terms. This view was used to describe the decay rate of the persistence probability (up to a constant in the exponent).
In this work we take this study one step further, showing mild conditions for the existence of persistence exponents, that is, a constant C such that the probability of persistence on [0,T] is $e^{-CT(1+o(1))}$. This we obtain by establishing an array of continuity properties of the persistence probability and relating the problem to small ball exponents. In particular, we show that the persistence exponent is independent of the singular component of the spectral measure away from the origin. Joint work with N. Feldheim and S. Mukherjee.

Connectivity of the excursion sets of Gaussian fields with long-range correlations
Stephen Muirhead (University of Melbourne)
In recent years the global connectivity of the excursion sets of smooth Gaussian fields with rapidly decaying correlations has been fairly well understood (at least in the case of positively-correlated fields), and the general picture that emerges is that the connectivity undergoes a phase transition which is analogous to that of Bernoulli percolation. On the other hand, if the fields have long-range correlations then they are believed to lie outside the Bernoulli percolation universality class, with different scaling limits and critical exponents. The behaviour of the connectivity is not well understood in this regime, and in this talk I will present some recent results and conjectures that shed some light on the behaviour.

Overcrowding estimates for the nodal volume of stationary Gaussian processes on R^d
Lakshmi Priya (Indian Institute of Science)
We consider centered stationary Gaussian processes (SGPs) on the Euclidean spaces R^d and study an aspect of their nodal set: for T>0, we study the nodal volume in [0,T]^d. In earlier studies, under varying assumptions on the spectral measures of SGPs, the following statistics were obtained for the nodal volume in [0,T]^d: expectation, variance asymptotics, CLT, exponential concentration (only for d=1), and finiteness of moments. We study the unlikely event of overcrowding of the nodal set in [0,T]^d; this is the event that the volume of the nodal set in [0,T]^d is much larger than its expected value. Under some mild assumptions on the spectral measure, we obtain estimates for the overcrowding event's probability. We first get overcrowding estimates for the zero count of SGPs on R. In higher dimensions, we consider Crofton's formula, which gives the volume of the nodal set in terms of the number of intersections of the nodal set with all lines in R^d. We discretise this formula to get a more workable version of it; we use this and the ideas used to obtain the overcrowding estimates in one dimension to get the overcrowding estimates in higher dimensions.

Naomi Feldheim (Bar-Ilan University)

Theories and Applications for Complex Data Analysis (Organizer: Arlene K.H. Kim)

Partly interval-censored rank regression
Sangbum Choi (Korea University)
This paper studies estimation of the semiparametric accelerated failure time model for double and partly interval-censored data. A Gehan-type weighted estimating function is constructed by contrasting comparable rank cases under interval-censoring. An extension to the general class of log-rank estimating functions can also be investigated, along with an efficient variance estimation procedure. Asymptotic behaviors of the proposed estimator are established under mild conditions by using empirical process theory. Simulation studies demonstrate that our method works very well with practical sample sizes.
Two data examples are given to illustrate the practical usefulness of our method.

Two-sample testing of high-dimensional linear regression coefficients via complementary sketching
Tengyao Wang (University College London)
We introduce a new method for two-sample testing of high-dimensional linear regression coefficients without assuming that those coefficients are individually estimable. The procedure works by first projecting the matrices of covariates and response vectors along directions that are complementary in sign in a subset of the coordinates, a process which we call 'complementary sketching'. The resulting projected covariates and responses are aggregated to form two test statistics, which are shown to have essentially optimal asymptotic power under a Gaussian design when the difference between the two regression coefficients is sparse and dense, respectively. Simulations confirm that our methods perform well in a broad class of settings.

Optimal rates for independence testing via U-statistic permutation tests
Tom Berrett (University of Warwick)
Independence testing is one of the most well-studied problems in statistics, and the use of procedures such as the chi-squared test is ubiquitous in the sciences. While tests have traditionally been calibrated through asymptotic theory, permutation tests are experiencing a growth in popularity due to their simplicity and exact Type I error control. In this talk I will present new, finite-sample results on the power of a new class of permutation tests, which show that their power is optimal in many interesting settings, including those with discrete, continuous, and functional data. A simulation study shows that our test for discrete data can significantly outperform the chi-squared test for natural data-generating distributions. Defining a natural measure of dependence $D(f)$ to be the squared $L^2$-distance between a joint density $f$ and the product of its marginals, we first show that there is generally no valid test of independence that is uniformly consistent against alternatives of the form $\{f: D(f) \geq \rho^2 \}$. Motivated by this observation, we restrict attention to alternatives that satisfy additional Sobolev-type smoothness constraints, and consider as a test statistic a U-statistic estimator of $D(f)$. Using novel techniques for studying the behaviour of U-statistics calculated on permuted data sets, we prove that our tests can be minimax optimal. Finally, based on new normal approximations in the Wasserstein distance for such permuted statistics, we also provide an approximation to the power function of our permutation test in a canonical example, which offers several additional insights. This is joint work with Ioannis Kontoyiannis and Richard Samworth.

Empirical Bayes PCA in high dimensions
Zhou Fan (Yale University)
When the dimension of the data is comparable to or larger than the number of data samples, Principal Components Analysis (PCA) may exhibit problematic high-dimensional noise. In this work, we propose an Empirical Bayes PCA method that reduces this noise by estimating a joint prior distribution for the principal components. EB-PCA is based on the classical Kiefer-Wolfowitz nonparametric MLE for empirical Bayes estimation, distributional results derived from random matrix theory for the sample PCs, and iterative refinement using an Approximate Message Passing (AMP) algorithm.
In theoretical "spiked" models, EB-PCA achieves Bayes-optimal estimation accuracy in the same settings as an oracle Bayes AMP procedure that knows the true priors. Empirically, EB-PCA significantly improves over PCA when there is strong prior structure, both in simulation and on quantitative benchmarks constructed from the 1000 Genomes Project and the International HapMap Project. An illustration is presented for the analysis of gene expression data obtained by single-cell RNA-seq.

Arlene K.H. Kim (Korea University)

Random Structures

Universal phenomena for random constrained permutations
Jacopo Borga (University of Zurich)
How do local/global constraints affect the limiting shape of random permutations? This is a classical question that has received considerable attention in the last 15 years. In this talk we give an overview of some recent results on this topic, mainly focusing on random pattern-avoiding permutations. We first introduce a notion of scaling limit for permutations, called permutons. Then we present some recent results that highlight certain universal phenomena for permuton limits of various families of pattern-avoiding permutations. These results will lead us to the definition of three remarkable new limiting random permutons: the "biased Brownian separable permuton", the "Baxter permuton" and the "skew Brownian permuton". We finally discuss some recent results that show how permuton limits are useful to investigate the behaviour of certain statistics on random pattern-avoiding permutations, such as the length of the longest increasing subsequence.

The scaling limit of the strongly connected components of a uniform directed graph with an i.i.d. degree sequence
Serte Donderwinkel (University of Oxford)

Spherical principal curves
Jongmin Lee (Seoul National University)
This paper presents a new approach for dimension reduction of data observed on spherical surfaces. Several dimension reduction techniques have been developed in recent years for non-Euclidean data analysis. As a pioneering work, Hauberg (2016) attempted to implement principal curves on Riemannian manifolds. However, this approach uses approximations to process data on Riemannian manifolds, resulting in distorted results. This study proposes a new approach that projects data onto a continuous curve to construct principal curves on spherical surfaces. Our approach follows the same line as Hastie and Stuetzle (1989), who proposed principal curves for data in Euclidean space. We further investigate the stationarity of the proposed principal curves that satisfy self-consistency on spherical surfaces. Results from real data analysis and simulation examples show promising empirical characteristics of the proposed approach.

Namgyu Kang (Korea Institute for Advanced Study)

Copula Modeling

Estimation of multivariate generalized gamma convolutions through Laguerre expansions
Oskar Laverny (Université Lyon 1)
The generalized gamma convolution class of distributions appeared in Thorin's work on the infinite divisibility of the log-normal and Pareto distributions. Although these distributions have been extensively studied in the univariate case, the multivariate case and the dependence structures that can arise from it have received little interest in the literature. Furthermore, only one projection procedure for the univariate case was recently constructed, and no estimation procedures are available.
By expanding the densities of multivariate generalized gamma convolutions into a tensorized Laguerre basis, we bridge the gap and provide performant estimation procedures for both the univariate and multivariate cases. We provide some insights about the performance of these procedures, and a convergent series for the density of multivariate gamma convolutions, which is shown to be more stable than Moschopoulos's and Mathai's univariate series. We furthermore discuss some examples.

Copula-based Markov zero-inflated count time series models
Mohammed Alqawba (Qassim University)
Count time series data with excess zeros are observed in several applied disciplines. When these zero-inflated counts are sequentially recorded, they might exhibit serial dependence. Ignoring the zero-inflation and the serial dependence might produce inaccurate results. In this paper, Markov zero-inflated count time series models based on a joint distribution of consecutive observations are proposed. The joint distribution function of the consecutive observations is constructed through copula functions. First- and second-order Markov chains are considered with univariate margins of zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), or zero-inflated Conway-Maxwell-Poisson (ZICMP) distributions. Under the Markov models, bivariate copula functions such as the bivariate Gaussian, Frank, and Gumbel copulas are chosen to construct a bivariate distribution of two consecutive observations. Moreover, the trivariate Gaussian and max-infinitely divisible copula functions are considered to build the joint distribution of three consecutive observations. Likelihood-based inference is performed and asymptotic properties are studied. To evaluate the estimation method and the asymptotic results, simulated examples are studied. The proposed class of models is applied to a sandstorm counts example. The results suggest that the proposed models have some advantages over some of the models in the literature for modeling zero-inflated count time series data.

Bi-factor and second-order copula models for item response data
Sayed H. Kadhem (University of East Anglia)
Bi-factor and second-order models based on copulas are proposed for item response data, where the items can be split into non-overlapping groups such that there is a homogeneous dependence within each group. Our general models include the Gaussian bi-factor and second-order models as special cases and can lead to more probability in the joint upper or lower tail compared with the Gaussian bi-factor and second-order models. Details on maximum likelihood estimation of parameters for the bi-factor and second-order copula models are given, as well as model selection and goodness-of-fit techniques. Our general methodology is demonstrated with an extensive simulation study and illustrated for the Toronto Alexithymia Scale. Our studies suggest that there can be a substantial improvement over the Gaussian bi-factor and second-order models both conceptually, as the items can have interpretations of latent maxima/minima or mixtures of means in comparison with latent means, and in fit to data.

Daewoo Pak (Yonsei University)

Multivariate Data Analysis

A nonparametric test for paired data
Grzegorz Wyłupek (Institute of Mathematics, University of Wrocław)
The paper proposes a weighted Kolmogorov-Smirnov-type test for the two-sample problem when the data are paired.
We derive the asymptotic distribution of the test statistic under the null model and prove the consistency of the related test under general alternatives. The dependence of the asymptotic distribution of the test statistic on the dependence structure of the data forces the use of the wild bootstrap technique for inference. The bootstrap version of the test controls the Type I error under the null model and works very well under the alternative. In the proofs, the main role is played by empirical process tools.

Inference for Generalized Multivariate Analysis of Variance (GMANOVA) models under the multivariate skew t distribution for modelling skewed and heavy-tailed data
Sayantee Jana (Indian Institute of Management Nagpur)
The most extensively used statistical model, both in research and in practice, is the linear model, due to its simplicity and interpretability. Linear models are preferred, even when approximate, for both univariate and multivariate data, especially since multivariate skewed models come with their own added complexity. Hence, researchers would rather not deliberately add extra layers of complexity by considering non-linear models. The Generalized Multivariate Analysis of Variance (GMANOVA) model is one such linear model useful for the analysis of longitudinal data, that is, repeated measurements of a continuous variable from several individuals across an ordered variable such as time, temperature, pressure, etc. It consists of a bilinear structure which allows for comparison between groups while maintaining the temporal structure of the data, unlike the Multivariate Analysis of Variance (MANOVA), which does not allow for any temporal ordering or temporal correlation in the model. GMANOVA models are widely used in economics, the social and physical sciences, medical research and pharmaceutical studies. However, despite financial data being time-varying, the traditional GMANOVA model has limited to no applications in finance, due to the skewed and volatile nature of such data. This in turn makes financial data the right candidate for the multivariate skew t (MST) distribution, as it allows for outliers in the data to be modelled, due to its heavy tails. In fact, portfolio analysis, including mutual funds and capital asset pricing, is modelled using elliptical distributions, especially the multivariate t distribution. The classical GMANOVA model assumes multivariate normality, and hence inferential tools developed for the classical GMANOVA model may not be appropriate for skewed and heavy-tailed data. In our study, we first explore the sensitivity of inferential tools developed under multivariate normality to skewed and volatile data, and then we develop inferential tools for the GMANOVA model under the MST distribution.

Multiscale representation of directional scattered data: use of anisotropic radial basis functions
Junhyeon Kwon (Seoul National University)
Spatial inhomogeneity along a one-dimensional curve makes two-dimensional data non-stationary. The curvelet transform, first proposed by Candes and Donoho (1999), is one of the most well-known multiscale methods for representing directional singularities, but it has the limitation that the data need to be observed at equally spaced sites. On the other hand, radial basis function interpolation is widely used to approximate an underlying function from scattered data. However, the isotropy of the radial basis functions lowers the efficiency of the directional representation.
This research proposes a new multiscale method that uses anisotropic radial basis functions to efficiently represent directional structure in noisy scattered data in two-dimensional Euclidean space. Basis functions are orthogonalized across the scales so that each scale can represent global or local directional structure separately. Numerical experiments show that the proposed method performs remarkably well in representing directional scattered data. Convergence properties and practical implementation issues are discussed as well.

Statistical Prediction

Robust geodesic regression
Ha-Young Shin (Seoul National University)
This study explores robust regression for data on Riemannian manifolds. Geodesic regression is the generalization of linear regression to a setting with a manifold-valued dependent variable and one or more real-valued independent variables. The existing work on geodesic regression uses the sum of squared errors to find the solution, but as in the classical Euclidean case, the least-squares method is highly sensitive to outliers. In this study, we use M-type estimators, including the L1, Huber and Tukey biweight estimators, to perform robust geodesic regression, and describe how to calculate the tuning parameters for the latter two. We show that, on compact symmetric spaces, all M-type estimators are maximum likelihood estimators, and argue for the overall superiority of the L1 estimator over the L2 and Huber estimators on high-dimensional manifolds and over the Tukey biweight estimator on compact high-dimensional manifolds. A derivation of the Riemannian Gaussian distribution on k-dimensional spheres is also included. Results from numerical examples, including an analysis of real neuroimaging data, demonstrate the promising empirical properties of the proposed approach.

A multi-sigmoidal logistic model: statistical analysis and first-passage-time application
Paola Paraggio (Università degli Studi di Salerno (UNISA))
Sigmoidal growth models are widely used in various applied fields, from biology to software reliability and economics. Usually, they describe dynamics in restricted environments. However, many real phenomena exhibit different phases, each one following a sigmoidal-type pattern. Stimulated by these more complex dynamics, many researchers have investigated generalized versions of classical sigmoidal models characterized by several inflection points. Along these research lines, a generalization of the classical logistic growth model is considered in the present work, introducing a polynomial term into its expression. The model is described by a stochastic differential equation obtained from the deterministic counterpart by adding a multiplicative noise term. The resulting diffusion process, having a multi-sigmoidal mean, may be useful in the description of particular growth dynamics in which the evolution occurs by stages. The problem of finding the maximum likelihood estimates of the parameters involved in the definition of the process is also addressed. Precisely, the maximization of the likelihood function is performed by means of meta-heuristic optimization techniques. Moreover, various strategies for the selection of the optimal degree of the polynomial are provided. Further, the first-passage-time (FPT) problem is considered: an approximation of its density function is obtained numerically by means of the fptdApprox R package. Finally, some simulated examples are presented.
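To make the type of diffusion in the Paraggio abstract above concrete, here is a minimal Euler-Maruyama simulation of the classical single-inflection logistic special case with multiplicative noise (it omits the polynomial term that produces the multi-sigmoidal behaviour discussed in the talk); all function names and parameter values are illustrative assumptions, not taken from the talk.

    import numpy as np

    def euler_maruyama_logistic(x0=0.1, r=1.0, K=10.0, sigma=0.05, T=20.0, n=2000, seed=0):
        """Simulate dX = r*X*(1 - X/K) dt + sigma*X dW on [0, T] by Euler-Maruyama."""
        rng = np.random.default_rng(seed)
        dt = T / n
        x = np.empty(n + 1)
        x[0] = x0
        for i in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over one step
            x[i + 1] = x[i] + r * x[i] * (1.0 - x[i] / K) * dt + sigma * x[i] * dw
        return np.linspace(0.0, T, n + 1), x

Threshold-crossing times read off such simulated paths give a crude Monte Carlo counterpart to the first-passage-time densities that the abstract approximates numerically.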
Statistical inference for functional linear problems
Tim Kutta (Ruhr University Bochum)
In this talk we consider the linear regression model Y=SX+e with functional regressors and responses. This model has attracted much attention in terms of estimation and prediction, but less is known with regard to statistical inference for the unobservable slope operator S. In this talk we discuss new inference tools to detect relevant deviations of the parameter S from a hypothesized slope S'. As modes of comparison we consider the Hilbert-Schmidt norm || S-S'||^2 as well as the prediction error E || SX-S'X ||^2. Our theory is based on the novel technique of "smoothness shifting", which helps us to circumvent existing negative results on the weak convergence of estimators for S. In contrast to all related works, the proposed test statistic converges at a rate of N^(-1/2), permitting fast detection of local alternatives. Furthermore, while most existing procedures rely on i.i.d. observations for Gaussian approximations, our test statistic converges even in the presence of dependence, quantified by phi- or strong mixing. Due to a self-normalization procedure, our approach is user-friendly, computationally inexpensive and robust.

Changwon Lim (Chung-Ang University)

Potential Theory for Non-local Operators and Jump Processes (Organizer: Panki Kim)

Jul 21 Wed, 9:30 AM — 10:00 AM EDT

SDEs driven by multiplicative stable-like Lévy processes
In this talk, I will present weak as well as strong well-posedness results for solutions to time-inhomogeneous SDEs driven by stable-like Lévy processes with Hölder continuous coefficients. The Lévy measure of the Lévy process can be anisotropic and singular with respect to the Lebesgue measure on R^d, and its support can be a proper subset of R^d. Based on joint work with Xicheng Zhang and Guohuan Zhao.

Periodic homogenization of non-symmetric Lévy-type processes

Optimal Hardy identities and inequalities for the fractional Laplacian on $L^p$
Krzysztof Bogdan (Wrocław University of Science and Technology)
We will present a route from symmetric Markovian semigroups to Hardy inequalities, to nonexplosion and contractivity results for Feynman-Kac semigroups on $L^p$. We will focus on the fractional Laplacian on $\mathbb{R}^d$, in which case the constants, the estimates of the Feynman-Kac semigroups and the thresholds for contractivity and explosion are sharp. Namely, we will discuss selected results from joint work with Bartłomiej Dyda, Tomasz Grzywny, Tomasz Jakubowski, Panki Kim, Julia Lenczewska, Katarzyna Pietruska-Pałuba and Dominika Pilarczyk (see arXiv).

Change-point Problems for Complex Data (Organizer: Claudia Kirch)

Two-sample tests for relevant differences in the eigenfunctions of covariance operators
Alexander Aue (University of California at Davis)
This talk deals with two-sample tests for functional time series data, which have become widely available in conjunction with the advent of modern complex observation systems. Here, particular interest is in evaluating whether two sets of functional time series observations share the shape of their primary modes of variation as encoded by the eigenfunctions of the respective covariance operators. To this end, a novel testing approach is introduced that connects with, and extends, existing literature in two main ways.
First, tests are set up in the relevant testing framework, where interest is not in testing an exact null hypothesis but rather in detecting deviations deemed sufficiently relevant, with relevance determined by the practitioner and perhaps guided by domain experts. Second, the proposed test statistics rely on a self-normalization principle that helps to avoid the notoriously difficult task of estimating the long-run covariance structure of the underlying functional time series. The main theoretical result of this paper is the derivation of the large-sample behavior of the proposed test statistics. Empirical evidence, indicating that the proposed procedures work well in finite samples and compare favorably with competing methods, is provided through a simulation study and an application to annual temperature data.

Multiple change point detection under serial dependence
We propose a methodology for detecting multiple change points in the mean of an otherwise stationary, autocorrelated, linear time series. It combines solution path generation based on the wild energy maximisation principle, and an information criterion-based model selection strategy termed the gappy Schwarz criterion. The former is well-suited to separating shifts in the mean from fluctuations due to serial correlations, while the latter simultaneously estimates the dependence structure and the number of change points without performing the difficult task of estimating the level of the noise as quantified, e.g., by the long-run variance. We provide a modular investigation into their theoretical properties and show that the combined methodology, named WEM.gSC, achieves consistency in estimating both the total number and the locations of the change points. The good performance of WEM.gSC is demonstrated via extensive simulation studies, and we further illustrate its usefulness by applying the methodology to London air quality data.

An asymptotic test for constancy of the variance in a time series
Herold Dehling (Ruhr-University Bochum)
We present a novel approach to test for heteroscedasticity of a non-stationary time series that is based on Gini's mean difference of logarithmic local sample variances. In order to analyse the large-sample behaviour of our test statistic, we establish new limit theorems for U-statistics of dependent triangular arrays. We derive the asymptotic distribution of the test statistic under the null hypothesis of a constant variance and show that the test is consistent against a large class of alternatives, including multiple structural breaks in the variance. Our test is applicable even in the case of non-stationary processes, assuming a locally varying mean function. The performance of the test and its comparatively low computation time are illustrated in an extensive simulation study.

Claudia Kirch (Otto von Guericke University Magdeburg)

Statistics for Data with Geometric Structure (Organizer: Sungkyu Jung)

Wasserstein regression
Hans-Georg Müller (University of California, Davis)
The analysis of samples of random objects that do not lie in a vector space has received increasing attention in statistics in recent years. An important class of such object data is univariate probability measures defined on the real line. Adopting the Wasserstein metric, we develop a class of regression models for data that include random distributions as predictors and distributions or scalars as responses.
To study these regression models, we utilize the geometry of tangent bundles of the metric space of random measures with the Wasserstein metric and derive asymptotic rates of convergence for estimators of the regression coefficient function and for predicted distributions. We also study an extension to autoregressive models for distribution-valued time series. The proposed methods are illustrated with data that include distributional components in various regression settings.

Finite sample smeariness for Fréchet means
Stephan Huckemann (Georg-August-Universitaet Goettingen)
It is well known in the Euclidean setting that a variety of asymptotic statistical tests, e.g. T-tests or MANOVA, are robust under nonnormality. It is much less known that this cannot be taken for granted for similar tests based on manifold data, in particular for data on compact spaces. The reason lies in a recently discovered phenomenon: smeariness lowers the classical square-root-of-n rate for Fréchet means. While true smeariness is only present for a nullset of most parametric families, it surfaces in a finite sample regime for a large class of distributions: for instance, all nontrivial distributions on spheres are affected, as are all distributions on circles whose support extends beyond a half circle, e.g. all Fisher-von-Mises distributions. We give finite sample smeariness a precise definition and illustrate some effects in theory and practice. In particular, the presence of finite sample smeariness renders tests based on quantiles of asymptotic distributions ineffective up to considerably high sample sizes. Suitably designed bootstrap tests remain valid, however.

Score matching for microbiome compositional data
Janice Scealy (Australian National University)
Compositional data and multivariate count data with known totals are challenging to analyse due to the non-negativity and sum constraint on the sample space. It is often the case with microbiome compositional data that many of the components are highly right-skewed, with large numbers of zeros. A major limitation of currently available estimators for compositional models is that they either cannot handle many zeros in the data or are not computationally feasible in moderate to high dimensions. We derive a new set of novel score matching estimators applicable to distributions on a Riemannian manifold with boundary, of which the standard simplex is a special case. The score matching method is applied to estimate the parameters in a new flexible model for compositional data, and we show that the estimators are scalable and available in closed form. We apply the new model and estimators to real microbiome compositional data and show that the model provides a good fit to the data.

Sungkyu Jung (Seoul National University)

Random Graphs (Organizer: Christina Goldschmidt)

An unexpected phase transition for percolation on scale-free networks
Souvik Dhara (Massachusetts Institute of Technology)
The talk concerns the critical behavior of percolation on finite, inhomogeneous random networks, where the weights of the vertices follow a power-law distribution with exponent $\tau \in (2,3)$. Such networks, often referred to as scale-free networks, exhibit critical behavior when the percolation probability tends to zero as the network size becomes large. We identify the critical window for the percolation phase transition.
Rather surprisingly, the critical window turns out to be of finite length, which is in sharp contrast with the previously studied critical behaviors in the $\tau \in (3,4)$ and $\tau > 4$ regimes. The rescaled vector of maximum component sizes is shown to converge in distribution to an infinite vector of non-degenerate random variables that can be described in terms of components of a one-dimensional inhomogeneous percolation model studied in a seminal work by Durrett and Kesten (1990). Based on joint work with Shankar Bhamidi and Remco van der Hofstad.

Recent results for the graph alignment problem
Marc Lelarge (INRIA)
Random graph alignment refers to recovering the underlying vertex correspondence between two random graphs with correlated edges. This can be viewed as an average-case and noisy version of the well-known NP-hard graph isomorphism problem. For the correlated Erdös-Rényi model, we give an impossibility result for partial recovery in the sparse regime. We also propose a machine learning approach to solve the problem and design a new graph neural network architecture showing strong performance.

Local law and Tracy-Widom limit for sparse stochastic block models
We consider the spectral properties of sparse stochastic block models, where N vertices are partitioned into K balanced communities. Under an assumption that the intra-community probability and inter-community probability are of similar order, we prove a local semicircle law up to the spectral edges, with an explicit formula for the deterministic shift of the spectral edge. We also prove that the fluctuation of the extremal eigenvalues is given by the GOE Tracy-Widom law after rescaling and centering the entries of the sparse stochastic block models. Applying the result to sparse stochastic block models, we rigorously prove that there is a large gap between the outliers and the spectral edge without centering.

Christina Goldschmidt (University of Oxford)

Problems and Approaches in Multi-Armed Bandits (Organizer: Vianney Perchet)

Dynamic pricing and learning under the Bass model
Shipra Agrawal (Columbia University)
We consider a novel formulation of the dynamic pricing and demand learning problem, where the evolution of demand in response to posted prices is governed by a stochastic variant of the popular Bass model with parameters (α, β) that are linked to the so-called "innovation" and "imitation" effects. Unlike the more commonly used i.i.d. demand models, in this model the posted price not only affects the demand and the revenue in the current round but also the evolution of demand, and hence the fraction of market potential that can be captured, in future rounds. Finding a revenue-maximizing dynamic pricing policy in this model is non-trivial even when the model parameters are known, and requires solving for the optimal non-stationary policy of a continuous-time, continuous-state MDP. In this paper, we consider the problem where dynamic pricing is used in conjunction with learning the model parameters, with the objective of optimizing the cumulative revenue over a given selling horizon. Our main contribution is an algorithm with a regret guarantee of O(m^{2/3}), where m is mnemonic for the (known) market size. Moreover, we show that no algorithm can incur a smaller order of loss by deriving a matching lower bound.
We observe that in this problem the market size m, and not the time horizon T, is the fundamental driver of the complexity; our lower bound in fact indicates that for any fixed α, β, most non-trivial instances of the problem have constant T and large m. This insight sets the problem setting considered here uniquely apart from the MAB-type formulations typically considered in the learning-to-price literature. Keywords: Dynamic Pricing, Multi-armed bandits, Bass model

TensorPlan: A new, flexible, scalable and provably efficient local planner for huge MDPs
Csaba Szepesvari (DeepMind & University of Alberta)
In this talk I will consider provably efficient planning in huge MDPs when the planner is helped with a hint about the form of the optimal value function. In particular, a thoughtful oracle provides the planner with basis functions whose linear combinations give the optimal value function either exactly or with small errors. The problem is to design a local planner which, similarly to model-predictive control, is called to find a good action after every state transition, while it is given access to a simulator. We propose a new planner which, when used continuously, is guaranteed to induce a near-optimal policy. When the number of actions is kept constant, the planner is shown to require only polynomially many simulator queries as a function of the horizon and the number of basis functions. The planner does not use dynamic programming as we know it, but is based on optimism and the "tensorization" of the Bellman optimality equation.

On the importance of (linear) structure in contextual multi-armed bandit
Alessandro Lazaric (Facebook AI Research)
In this talk I will discuss how structural assumptions on the reward function impact the regret performance of bandit algorithms. Notably, I will focus on linear contextual bandits and first review recent results showing how the structure of the arm set and reward function can be leveraged to achieve improved regret guarantees. Then, I will describe a novel incremental algorithm able to achieve asymptotic optimality, while ensuring finite-time worst-case optimality in the context-free case. Finally, I will discuss how stronger assumptions on the context distribution and the linear representation may be leveraged to achieve constant regret. This eventually leads to a representation-selection algorithm matching the regret of the best linear representation in a given set, up to a logarithmic factor in the number of representations. Most relevant references: T. Lattimore, Cs. Szepesvari, "The End of Optimism? An Asymptotic Analysis of Finite-Armed Linear Bandits", 2016. B. Hao, T. Lattimore, Cs. Szepesvari, "Adaptive Exploration in Linear Contextual Bandit", 2019. A. Tirinzoni, M. Pirotta, M. Restelli, A. Lazaric, "An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits", 2020. M. Papini, A. Tirinzoni, M. Restelli, A. Lazaric, M. Pirotta, "Leveraging Good Representations in Linear Contextual Bandits", 2021.

Vianney Perchet (École nationale de la statistique et de l'administration économique Paris)

Sequential Analysis and Applications (Organizer: Alexander Tartakovsky)

Asymptotically optimal control of FDR and related metrics for sequential multiple testing
Jay Bartroff (University of Southern California)
I will discuss asymptotically optimal multiple testing procedures for sequential data in the context of prior information on the number of false null hypotheses, for controlling FDR/FNR, pFDR/pFNR, and other metrics.
These procedures are closely related to those proposed and shown by Song & Fellouris (2017) to be asymptotically optimal for controlling type 1 and 2 familywise error rates (FWEs). We show that, by appropriately adjusting the critical values of the Song-Fellouris procedures, they can be made asymptotically optimal for controlling any multiple testing error metric that is bounded between multiples of FWE in a certain sense. In addition to FDR/FNR and pFDR/pFNR, this includes other metrics like the per-comparison and per-family error rates, and the false positive rate. Our setup includes asymptotic regimes in which the number of null hypotheses approaches infinity.

Nearly optimal sequential detection of signals in correlated Gaussian noise
Grigory Sokolov (Xavier University)
Detecting an object in AR(p) noise when the intensity of the signal is not specified is a problem of interest to many practitioners. To this end we examine three procedures: (i) an adaptive version of the sequential probability ratio test (SPRT) built upon one-stage delayed estimators of the unknown signal intensity; (ii) the generalized SPRT; and (iii) the non-adaptive double SPRT (2-SPRT). The generalized SPRT has certain drawbacks in selecting thresholds to guarantee the upper bounds on error probabilities, but may appear to be slightly more efficient than the adaptive SPRT. However, simulations show that the loss in performance of the adaptive SPRT compared to the generalized SPRT is very minor, so—coupled with the error probability guarantee—the adaptive SPRT can be recommended for practical applications. And although the non-adaptive 2-SPRT is not asymptotically optimal for all signal strength values, it does offer benefits at the worst point in the indifference zone. Acknowledgement: The work of Alexander Tartakovsky was supported in part by the Russian Science Foundation Grant 18-19-00452 at the Moscow Institute of Physics and Technology.

A unified approach for solving sequential selection problems
Yaakov Malinovsky (University of Maryland)
In this work we develop a unified approach for solving a wide class of sequential selection problems. This class includes, but is not limited to, selection problems with no-information, rank-dependent rewards, and considers both fixed as well as random problem horizons. We demonstrate that our approach allows exact and efficient computation of optimal policies and various performance metrics thereof for a variety of sequential selection problems, several of which have not been solved to date.

Sequential change detection by optimal weighted l2 divergence
Yao Xie (Georgia Institute of Technology)
We present a new non-parametric statistic, called the weighted l2 divergence, based on empirical distributions for sequential change detection. We start by constructing the weighted l2 divergence as a fundamental building block for two-sample tests and change detection. The proposed statistic is proved to attain the optimal sample complexity in the offline setting. We then study sequential change detection using the weighted l2 divergence and characterize the fundamental performance metrics, including the average run length (ARL) and the expected detection delay (EDD). We also present practical algorithms for finding the optimal projection to handle high-dimensional data and the optimal weights, which is critical for quick detection since, in such settings, there are not many post-change samples.
Simulation results and real data examples are provided to validate the good performance of the proposed method.

Detection of temporary disorders
Michael Baron (American University)
Change-point detection methods are proposed for the case of temporary failures, or transient changes, when an unexpected disorder is ultimately followed by an adjustment and return to the initial state. A known base distribution of the in-control state changes to different unknown distributions for unknown periods of time. Sequential and retrospective methods are proposed for the detection and estimation of each pair of change-points. Examples of similar problems are shown in quality and process control, energy finance, and statistical genetics, although the meaning of disorder and adjustment change-points is quite different in these applications.

Alexander Tartakovsky (Moscow Institute of Physics and Technology)

Numerical Study of Stochastic Processes / Stochastic Interacting Systems

Splitting methods for SDEs with locally Lipschitz drift. An illustration on the FitzHugh-Nagumo model
Massimiliano Tamborrino (University of Warwick)
In this talk, we construct and analyse explicit numerical splitting methods for a class of semilinear stochastic differential equations (SDEs) with additive noise, where the drift is allowed to grow polynomially and satisfies a global one-sided Lipschitz condition. The methods are proved to be mean-square convergent of order 1 and to preserve important structural properties of the SDE. In particular, first, they are hypoelliptic in every iteration step. Second, they are geometrically ergodic and have asymptotically bounded second moments. Third, they preserve oscillatory dynamics, such as amplitudes, frequencies and phases of oscillations, even for large time steps. Our results are illustrated on the stochastic FitzHugh-Nagumo model (a well-known neuronal model describing the generation of spikes of single neurons at the intracellular level) and compared with known mean-square convergent tamed/truncated variants of the Euler-Maruyama method. The capability of the proposed splitting methods to preserve the aforementioned properties makes them applicable within different statistical inference procedures. In contrast, known Euler-Maruyama type methods commonly fail in preserving such properties, yielding ill-conditioned likelihood-based estimation tools or computationally infeasible simulation-based inference algorithms.

Simulation methods for trawl processes
Dan Leonte (Imperial College London)
Trawl processes are continuous-time, stationary and infinitely divisible processes which can describe a wide range of possible serial correlation patterns in data. This talk introduces a new algorithm for the efficient simulation of monotonic trawl processes. The algorithm accommodates any monotonic trawl shape and any infinitely divisible distribution described via the Lévy seed, requiring only access to samples from the distribution of the Lévy seed. Further, the computational complexity does not scale with the number of spatial dimensions of the trawl. We describe how the above method can be generalized to a simulation scheme for monotonic ambit fields via Monte Carlo methods.

Stochastic optimal control of SDEs and importance sampling
Han Cheng Lie (University of Potsdam)
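Several of the abstracts above describe sequential change detection built from empirical distributions. As a purely illustrative sketch (not the weighted l2 divergence procedure of the talk above, whose optimal weights and projections are its actual contribution, nor any of the other speakers' methods), the following snippet monitors a stream by comparing a sliding-window histogram against a reference histogram using a weighted squared-difference statistic and stops when a threshold is crossed. The bin edges, uniform weights and threshold are arbitrary choices made only for the example; in practice the threshold would be calibrated to a target average run length.

```python
import numpy as np

def weighted_l2_stat(ref_hist, win_hist, weights):
    """Weighted squared difference between two empirical (binned) distributions."""
    return float(np.sum(weights * (ref_hist - win_hist) ** 2))

def detect_change(stream, ref_sample, bins=20, window=100, weights=None, threshold=0.03):
    """Return the first index at which the statistic exceeds the threshold, else None."""
    edges = np.histogram_bin_edges(ref_sample, bins=bins)
    ref_hist, _ = np.histogram(ref_sample, bins=edges)
    ref_hist = ref_hist / ref_hist.sum()
    if weights is None:
        weights = np.ones(len(edges) - 1)  # uniform weights for the toy example
    buf = []
    for t, x in enumerate(stream):
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)
        if len(buf) == window:
            win_hist, _ = np.histogram(buf, bins=edges)
            win_hist = win_hist / window
            if weighted_l2_stat(ref_hist, win_hist, weights) > threshold:
                return t
    return None

# Toy usage: pre-change N(0,1), post-change N(1,1) starting at index 500
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=5000)
stream = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.0, 500)])
print(detect_change(stream, ref))
```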
Effective field theory approach to lepto-philic self-conjugate dark matter Hrishabh Bharadwaj , , Ashok Goyal Department of Physics & Astrophysics, University of Delhi, Delhi, India We study self-conjugate dark matter (DM) particles interacting primarily with Standard Model (SM) leptons in an effective field theoretical framework. We consider SM gauge-invariant effective contact interactions between Majorana fermion, real scalar and real vector DM with leptons by evaluating the Wilson coefficients appropriate for interaction terms up to dimension 8, and obtain constraints on the parameters of the theory from the observed relic density, indirect detection observations and from the DM-electron scattering cross-sections in direct detection experiments. Low energy LEP data has been used to study sensitivity in the pair production of low mass ($ \leqslant$ 80 GeV) DM particles. Pair production of DM particles of mass $\geqslant$ 50 GeV in association with mono-photons at the proposed ILC has rich potential to probe such effective operators. dark matter theory , mono-photon , indirect and direct detection , effective operator [1] N. Aghanim et al. (Planck), arXiv: 1807.06209 [astro-ph.CO] [2] R. Bernabei et al., Eur. Phys. J. C 73, 2648 (2013), arXiv:1308.5109[astro-ph.GA doi: 10.1140/epjc/s10052-013-2648-7 [3] R. Bernabei et al., Universe 4(11), 116 (2018) , [Nucl. Phys. Atom. Energy 19(4), 307 (2018)] doi: 10.3390/universe4110116, 10.15407/jnpae2018.04.307 arXiv: 1805.10486 [hep-ex] [4] C. E. Aalseth et al. (CoGeNT Collaboration), Phys. Rev. D 88, 012002 (2013), arXiv:1208.5737[astro-ph.CO doi: 10.1103/PhysRevD.88.012002 [5] G. Angloher et al. (CRESST Collaboration), Eur. Phys. J. C 77(5), 299 (2017), arXiv:1612.07662[hep-ex doi: 10.1140/epjc/s10052-017-4878-6 [6] R. Agnese et al. (CDMS Collaboration), Phys. Rev. Lett. 111(25), 251301 (2013), arXiv:1304.4279[hep-ex doi: 10.1103/PhysRevLett.111.251301 [7] E. Aprile et al. (XENON100 Collaboration), Phys. Rev. D 94(12), 122001 (2016), arXiv:1609.06154[astro-ph.CO doi: 10.1103/PhysRevD.94.122001 [8] E. Aprile et al. (XENON Collaboration), Eur. Phys. J. C 77(12), 881 (2017), arXiv:1708.07051[astro-ph.IM doi: 10.1140/epjc/s10052-017-5326-3 [9] D. S. Akerib et al. (LUX Collaboration), Phys. Rev. Lett. 118(2), 021303 (2017), arXiv:1608.07648[astro-ph.CO doi: 10.1103/PhysRevLett.118.021303 [10] X. Cui et al. (PandaX-Ⅱ Collaboration), Phys. Rev. Lett. 119(18), 181302 (2017), arXiv:1708.06917[astro-ph.CO doi: 10.1103/PhysRevLett.119.181302 [11] T. M. Hong, arXiv: 1709.02304 [hep-ex] [12] F. Kahlhoefer, Int. J. Mod. Phys. A 32(13), 1730006 (2017) doi: 10.1142/S0217751X1730006X arXiv: 1702.02430 [hep-ph] [13] V. A Mitsou, 2015 J. Phys.: Conf. Ser. 651 012023 doi: 10.1088/1742-6596/651/1/012023. [14] H. Dreiner, M. Huck, M. Krämer et al., Phys. Rev. D 87(7), 075015 (2013), arXiv:1211.2254[hep-ph doi: 10.1103/PhysRevD.87.075015 [15] M. Battaglia and M. E. Peskin, eConf C 050318, 0709 (2005), arXiv:hep-ph/0509135 [16] S. Dutta, D. Sachdeva, and B. Rawat, Eur. Phys. J. C 77(9), 639 (2017), arXiv:1704.03994[hep-ph doi: 10.1140/epjc/s10052-017-5188-8 [17] M. Ackermann et al. (Fermi-LAT Collaboration), Phys. Rev. Lett. 115(23), 231301 (2015), arXiv:1503.02641[astro-ph.HE doi: 10.1103/PhysRevLett.115.231301 [18] M. Ajello et al. (Fermi-LAT Collaboration), Astrophys. J. 819(1), 44 (2016), arXiv:1511.02938[astro-ph.HE doi: 10.3847/0004-637X/819/1/44 [19] A. Albert et al. (Fermi-LAT and DES Collaborations), Astrophys. J. 
834(2), 110 (2017), arXiv:1611.03184[astro-ph.HE doi: 10.3847/1538-4357/834/2/110 [20] A. Abramowski et al. (H.E.S.S. Collaboration), Phys. Rev. Lett. 110, 041301 (2013), arXiv:1301.1173[astro-ph.HE doi: 10.1103/PhysRevLett.110.041301 [21] M. Aguilar et al. (AMS Collaboration), Phys. Rev. Lett. 113, 121102 (2014 doi: 10.1103/PhysRevLett.113.121102 [22] M. Aguilar et al. (AMS Collaboration), Phys. Rev. Lett. 117(9), 091103 (2016 doi: 10.1103/PhysRevLett.117.091103 [23] O. Adriani et al. (PAMELA Collaboration), Phys. Rev. Lett. 111, 081102 (2013), arXiv:1308.0133[astro-ph.HE doi: 10.1103/PhysRevLett.111.081102 [24] O. Adriani et al. (PAMELA Collaboration), Nature 458, 607 (2009), arXiv:0810.4995[astro-ph doi: 10.1038/nature07942 [25] A. D. Panov et al., Bull. Russ. Acad. Sci. Phys. 71, 494 (2007 doi: 10.3103/S1062873807040168[astro-ph/0612377 [26] K. Yoshida et al., 42 (Nov., 2008) 1670–1675, doi: 10.1016/j.asr.2007.04.043. [27] G. Ambrosi et al. (DAMPE Collaboration), Nature 552, 63 (2017), arXiv:1711.10981[astro-ph.HE doi: 10.1038/nature24475 [28] T. Appelquist, H. C. Cheng, and B. A. Dobrescu, Phys. Rev. D 64, 035002 (2001 doi: 10.1103/PhysRevD.64.035002[hep-ph/0012100 [29] J. Wess and B. Zumino, Nucl. Phys. B 70, 39 (1974 doi: 10.1016/0550-3213(74)90355-1 [30] H. P. Nilles, Phys. Rept. 110, 1 (1984 doi: 10.1016/0370-1573(84)90008-5 [31] M. Drees, R. Godbole, and P. Roy, Theory and phenomenology of sparticles: an account of four-dimensional N=1 supersymmetry in high energy physics (2004) [32] N. Arkani-Hamed, A. G. Cohen, and H. Georgy, Phys. Lett. B 513, 232-C240 (2001 [33] H. C. Cheng and I. Low, JHEP 0309, 051 (2003), arXiv:hep-ph/0308199 doi: 10.1088/1126-6708/2003/09/051, [34] S. Dutta, A. Goyal, and M. P. Singh, arXiv: 1809.07877[hep-ph] [35] J. M. Zheng, Z. H. Yu, J. W. Shao et al., Nucl. Phys. B 854, 350 (2012), arXiv:1012.2022[hep-ph doi: 10.1016/j.nuclphysb.2011.09.009 [36] A. Freitas and S. Westhoff, JHEP 1410, 116 (2014), arXiv:1408.1959[hep-ph doi: 10.1007/JHEP10(2014)116 [37] K. G. Savvidy and J. D. Vergados, Phys. Rev. D 87(7), 075013 (2013), arXiv:1211.3214[hep-ph doi: 10.1103/PhysRevD.87.075013 [38] C. F. Chang, X. G. He, and J. Tandean, Phys. Rev. D 96(7), 075026 (2017), arXiv:1704.01904[hep-ph doi: 10.1103/PhysRevD.96.075026 [39] S. Dutta, A. Goyal, and L. K. Saini, JHEP 1802, 023 (2018), arXiv:1709.00720[hep-ph doi: 10.1007/JHEP02(2018)023 [40] M. O. Khojali, A. Goyal, M. Kumar and A. S. Cornell, Eur. Phys. J. C 78 (2018) no. 11, 920 doi: 10.1140/epjc/s10052-018-6407-7[arXiv: 1705.05149[hep-ph]]. ibid Eur. Phys. J. C 77(1), 25 (2017) doi: 10.1140/epjc/s10052-016-4589-4 arXiv: 1608.08958 [hep-ph] [41] A. Boveia and C. Doglioni, Ann. Rev. Nucl. Part. Sci. 68, 429 (2018) doi: 10.1146/annurev-nucl-101917-021008 [arXiv: 1810.12238[hep-ex]]. [42] S. Chatrchyan et al. (CMS Collaboration), JHEP 1212, 034 (2012), arXiv:1210.3844[hep-ex doi: 10.1007/JHEP12(2012)034 [43] G. Aad et al. (ATLAS Collaboration), Phys. Rev. Lett. 112(23), 231806 (2014) doi: 10.1103/PhysRevLett.112.231806 arXiv: 1403.5657 [hep-ex] [44] N. F. Bell, Y. Cai, R. K. Leane et al., Phys. Rev. D 90(3), 035027 (2014) doi: 10.1103/PhysRevD.90.035027 arXiv: 1407.3001 [hep-ph] [45] B. Bhattacherjee, D. Choudhury, K. Harigaya et al., JHEP 1304, 031 (2013), arXiv:1212.5013[hep-ph doi: 10.1007/JHEP04(2013)031 [46] R. C. Cotta, J. L. Hewett, M. P. Le et al., Phys. Rev. D 88, 116009 (2013), arXiv:1210.0525[hep-ph doi: 10.1103/PhysRevD.88.116009 [47] J. Y. Chen, E. W. Kolb, and L. T. Wang, Phys. Dark Univ. 
2, 200 (2013), arXiv:1305.0021[hep-ph doi: 10.1016/j.dark.2013.11.002 [48] A. Crivellin, U. Haisch, and A. Hibbs, Phys. Rev. D 91, 074028 (2015), arXiv:1501.00907[hep-ph doi: 10.1103/PhysRevD.91.074028 [49] N. Chen, J. Wang, and X. P. Wang, arXiv: 1501.04486 [hep-ph] [50] P. J. Fox, R. Harnik, J. Kopp et al., Phys. Rev. D 85, 056011 (2012), arXiv:1109.4398[hep-ph doi: 10.1103/PhysRevD.85.056011 [51] Y. J. Chae and M. Perelstein, JHEP 1305, 138 (2013), arXiv:1211.4008[hep-ph [52] N. F. Bell, J. B. Dent, A. J. Galea et al., Phys. Rev. D 86, 096011 (2012), arXiv:1209.0231[hep-ph doi: 10.1103/PhysRevD.86.096011 [53] D. J. Gross and F. Wilczek, Phys. Rev. D 9, 980 (1974 doi: 10.1103/PhysRevD.9.980 [54] M. Drees and M. Nojiri, Phys. Rev. D 48, 3483 (1993), arXiv:hep-ph/9307208 doi: 10.1103/PhysRevD.48.3483 [55] J. Hisano, K. Ishiwata, and N. Nagata, Phys. Rev. D 82, 115007 (2010), arXiv:1007.2601[hep-ph doi: 10.1103/PhysRevD.82.115007 [56] J. Hisano, arXiv: 1712.02947 [hep-ph] [57] J. Hisano, K. Ishiwata, N. Nagata et al., Prog. Theor. Phys. 126, 435 (2011), arXiv:1012.5455[hep-ph doi: 10.1143/PTP.126.435 [58] N. F. Bell, Y. Cai, J. B. Dent et al., Phys. Rev. D 92(5), 053008 (2015), arXiv:1503.07874[hep-ph doi: 10.1103/PhysRevD.92.053008 [59] S. Bruggisser, F. Riva, and A. Urbano, SciPost Phys. 3(3), 017 (2017), arXiv:1607.02474[hep-ph doi: 10.21468/SciPostPhys.3.3.017 [60] S. Bruggisser, F. Riva, and A. Urbano, JHEP 1611, 069 (2016), arXiv:1607.02475[hep-ph doi: 10.1007/JHEP11(2016)069 [61] J. Hisano, R. Nagai, and N. Nagata, JHEP 1505, 037 (2015), arXiv:1502.02244[hep-ph doi: 10.1007/JHEP05(2015)037 [62] J. Hisano, K. Ishiwata, and N. Nagata, Phys. Lett. B 706, 208 (2011), arXiv:1110.3719[hep-ph doi: 10.1016/j.physletb.2011.11.017 [63] J. Hisano, K. Ishiwata, and N. Nagata, JHEP 1506, 097 (2015), arXiv:1504.00915[hep-ph doi: 10.1007/JHEP06(2015)097 [64] E. W. Kolb and M. S. Turner, Front. Phys. 69, 1-547 (1990 [65] F. Ambrogi, C. Arina, M. Backovic et al., arXiv: 1804.00044 [hep-ph] [66] J. Alwall et al., JHEP 1407, 079 (2014), arXiv:1405.0301[hep-ph doi: 10.1007/JHEP07(2014)079 [67] L. M. Carpenter, R. Colburn, J. Goodman et al., Phys. Rev. D 94(5), 055027 (2016), arXiv:1606.04138[hep-ph doi: 10.1103/PhysRevD.94.055027 [68] J. Kopp, V. Niro, T. Schwetz et al., Phys. Rev. D 80, 083502 (2009 doi: 10.1103/PhysRevD.80.083502 [69] E. Aprile et al. (XENON100 Collaboration), Science 349(6250), 851 (2015), arXiv:1507.07747[astro-ph.CO doi: 10.1126/science.aab2069 [70] S. Schael et al. (ALEPH and DELPHI and L3 and OPAL and LEP Electroweak Collaborations), Phys. Rept. 532, 119 (2013), arXiv:1302.3415[hep-ex doi: 10.1016/j.physrep.2013.07.004 [71] T. Behnke et al., arXiv: 1306.6329 [physics.ins-det] [72] T. Behnke et al., The International Linear Collider Technical Design Report - Volume 1: Executive Summary, arXiv: 1306.6327 [physics.acc-ph] [73] E. Conte, B. Fuks, and G. Serret, Comput. Phys. Commun. 184, 222 (2013), arXiv:1206.1599[hep-ph doi: 10.1016/j.cpc.2012.09.009 [74] A. Alloul, N. D. Christensen, C. Degrande et al., Comput. Phys. Commun. 185, 2250 (2014), arXiv:1310.1921[hep-ph doi: 10.1016/j.cpc.2014.04.012 [75] C. Bartels, M. Berggren, and J. List, Eur. Phys. J. C 72, 2213 (2012), arXiv:1206.6639[hep-ex doi: 10.1140/epjc/s10052-012-2213-9 [76] Z. H. Yu, Q. S. Yan, and P. F. Yin, Phys. Rev. D 88(7), 075015 (2013), arXiv:1307.5740[hep-ph doi: 10.1103/PhysRevD.88.075015 [77] J. Liu, L. T. Wang, X. P. Wang et al., Phys. Rev. 
D 97(9), 095044 (2018), arXiv:1712.07237[hep-ph] doi: 10.1103/PhysRevD.97.095044
Several cosmological and astrophysical observations at cosmic and galactic scales point towards the existence of dark matter in the Universe. The dark matter constitutes roughly 23% of the energy density of the Universe and roughly 75% of all the matter existing in the Universe. The Planck Collaboration [1] has measured the dark matter (DM) density to great precision and has given the relic density value $\Omega_{\rm DM} h^2 = 0.1198 \pm 0.0012$. The nature of DM, however, so far remains undetermined. Features of DM interactions can be determined from direct and indirect detection experiments. Direct detection experiments like DAMA/LIBRA [2, 3], CoGeNT [4], CRESST [5], CDMS [6], XENON100 [7, 8], LUX [9] and PandaX-II [10] are designed to measure the recoil momentum of an atom or nucleon scattered by DM in the chemically inert medium of the detector. These experiments, involving spin-independent (SI) and spin-dependent scattering cross-sections in the non-relativistic (NR) regime, have reached a sensitivity level where $\sigma_{\rm SI} > 8 \times 10^{-47}$ cm$^2$ for DM mass $\sim 30$ GeV. Collider searches at present [11-13] and proposed [14-16] colliders aim at identifying the signature of DM particle production involving mono- or di-jet events accompanied by missing energy. So far no experiment has made a confirmed detection, and as a result a huge DM parameter space has been excluded. Indirect experiments such as FermiLAT [17-19], HESS [20] and AMS-02 [21, 22] are looking for evidence of excess cosmic rays produced in DM annihilation to Standard Model (SM) particles such as photons, leptons, $ b\ \bar{b} $ and gauge boson pairs, and so on. In the last several years, experiments like PAMELA [23, 24] have reported an excess in the positron flux without any significant excess in the proton to antiproton flux. These peaks in the $ e^+ \, e^- $ channel have also been observed in the ATIC [25] and PPB-BETS [26] balloon experiments at around 1 TeV and 500 GeV respectively. Recently, the Dark Matter Particle Explorer (DAMPE) experiment [27] has also observed a sharp peak around $ \sim $ 1.4 TeV, favouring a lepto-philic DM annihilation cross-section of the order of $ 10^{-26} $ cm$ ^3 $/s.
The excess in $ e^+ \, e^- $ can either be due to astrophysical events like high energy emissions from pulsars, or result from DM pair annihilation, preferentially to the $ e^+\, e^- $ channel, in our galactic neighborhood. Since the aforementioned experiments have not observed any significant excess in the anti-proton channel, the DM candidates, if any, appear to be lepton-friendly, i.e. lepto-philic, with suppressed interactions with quarks at the tree level. Most of the effort in understanding the DM phenomenon has revolved around the hypothesis that DM is a weakly interacting massive particle (WIMP) with mass lying between several GeV and a few TeV. WIMPs provide the simplest production mechanism for the DM relic density from the early Universe. Various UV-complete new physics extensions of the SM have been proposed essentially to solve the gauge hierarchy problem in the top-down approach, which include theories like extra dimensions [28], supersymmetry [29-31], little Higgs [32, 33], extended two-Higgs-doublet models (2HDM) with singlets as a portal of DM interactions [34] and so on. These models naturally provide DM candidates or WIMPs whose mass scales are close to that of electroweak physics. However, the direct detection experiments have shrunk the parameter space of the simplified and popular models where the WIMPs are made to interact with the visible world via neutral scalars or gauge bosons. Model-independent DM-SM particle interactions have also been studied in an effective field theory (EFT) approach, where the mediator of the DM-SM interaction is assumed to be much heavier than the mass scales of the DM and SM particles involved. The EFT approach provides a simple, flexible framework for investigating various aspects of DM phenomenology. It treats the interaction between DM and SM particles as a contact interaction described by non-renormalizable operators. In the context of DM phenomenology, each operator describes different processes, such as DM annihilation, scattering and DM production in collider searches, with each process at its own energy scale, which is required to be smaller than the cut-off scale, i.e. $ \Lambda_{\rm{eff}} \gg E$, the typical energy of the process. The nature of these interactions is encapsulated in a set of coefficients corresponding to a limited number of Lorentz and gauge-invariant dimension-5 and dimension-6 effective operators constructed with the light degrees of freedom. The constrained parameter space from various experimental data then essentially maps out the viable UV-complete theoretical models. The generic effective Lagrangian for scalar, pseudo-scalar, vector and axial-vector interactions of SM particles with dark matter candidates of spin $0,\ {1}/{2}, 1$ and ${3}/{2}$ has been studied in the literature [35-40]. Sensitivity analyses for DM-quark effective interactions at the LHC have been performed [12, 13, 41-44] in a model-independent way for the dominant (a) mono-jet + ${\not\!\! E}_{\rm{T}}$, (b) mono-b jet + ${\not\!\! E}_{\rm{T}}$ and (c) mono-t jet + ${\not\!\! E}_{\rm{T}}$ processes. Similarly, analyses of DM-gauge boson effective couplings at the LHC have been done in Refs. [45-47]. Sensitivity analyses of the coefficients of the lepto-philic operators have also been performed through the $e^+e^-\to \gamma\ +\ {\not\!\! E}_{\rm{T}}$ [48-50] and $e^+e^-\to Z^0 + {\not\!\! E}_{\rm{T}}$ [16, 51] channels.
In the context of deep inelastic lepton-hadron scattering, Gross and Wilczek [52] analyzed the twist-2 operators appearing in the operator-product expansion of two weak currents, along with the renormalization group equations of their coefficients for asymptotically free gauge theories. A similar analysis was done in Ref. [53] for the effective DM-nucleon scattering induced by twist-2 quark operators in the supersymmetric framework, where DM is identified with the lightest supersymmetric particle, the neutralino. In Refs. [54-56], the one-loop effect in DM-nucleon scattering induced by twist-2 quark and gluonic operators for scalar, vector and fermionic DM particles was calculated. Although there exist many studies of dimension-5 and dimension-6 lepto-philic operators, only a few of them are invariant under SM gauge symmetry. As discussed above, the cosmologically constrained effective operators are relevant not only for DM direct and indirect detection experiments but also for direct searches at high energy colliders. In fact, operators which do not meet the SM gauge symmetry requirement will not be able to maintain perturbative unitarity [57], due to their bad high energy behaviour at collider-accessible energies comparable to the electroweak scale $ \sim 246 $ GeV. Thus the remaining dimension-5 and dimension-6 operators, based on SM gauge symmetry and on the principle of perturbative unitarity, may not contribute to the $ 2 \to 2 $ scattering processes relevant for direct detection experiments and should not be considered in production channels at high energy colliders. It is in this context that the study of additional SM gauge-invariant operators of dimension greater than six is important and needs to be undertaken [58, 59]. In this paper we consider a DM current that couples primarily to the SM leptons through $ SU(2)_L \times U(1)_Y $ gauge-invariant effective operators. To ensure the invariance of SM gauge symmetry at all energy scales, we restrict our dark matter candidates to be self-conjugate: a Majorana fermion, a real spin 0 or a real spin 1 SM gauge singlet. In Section II, we formulate the effective interaction Lagrangian for fermionic, scalar and vector DM with SM leptons via twist-2 dimension-8 operators. In Section III, the coefficients of the effective Lagrangian are constrained from the observed relic density and a consistency check is performed against indirect and direct detection experiments. The constraints from LEP and the sensitivity analysis of the coefficients of the effective operators at the proposed ILC are discussed in Section IV. We summarise our results in Section V. II. EFFECTIVE LEPTO-PHILIC DM INTERACTIONS Following earlier authors [60-62], the interaction between dark matter particles ($ \chi^0,\ \phi^0\ \&\ V^0 $) and SM leptons is assumed to be mediated by a heavy mediator which can be a scalar, a vector or a fermion. The effective contact interaction between the dark matter particles and leptons is obtained by evaluating the Wilson coefficients appropriate for the contact interaction terms up to dimension 8. The mediator mass is assumed to be greater than all the other masses in the model and sets the cut-off scale $ \Lambda_{\rm{eff}} $.
We then obtain the following effective operators for self-conjugate spin-$ \frac{1}{2} $, spin-$ 0 $ and spin-$ 1 $ dark matter particles interacting with the leptons: $ {\cal L}_{\rm{eff.\, Int.}}^{\rm{spin \,1/2 \,DM}} = \frac{{\alpha^{\chi^0}_{S}}}{\Lambda_{\rm {eff}}^4} {\cal{O}}_{S}^{1/2}+\frac{{\alpha^{\chi^0}_{T_1}}}{\Lambda_{\rm {eff}}^4} {\cal{O}}_{T_1}^{1/2} + \frac{{\alpha^{\chi^0}_{AV}}}{\Lambda_{\rm {eff}}^2} {\cal{O}}_{AV}^{1/2}, $ $ {\cal L}_{\rm{eff.\, Int.}}^{\rm{spin \,0 \,DM} } = \frac{{\alpha^{\phi^0}_{S}}}{\Lambda_{\rm {eff}}^4} {\cal{O}}_{S}^{0}+ \frac{{\alpha^{\phi^0}_{T_2}}}{\Lambda_{\rm {eff}}^4} {\cal{O}}_{T_2}^{0}, $ $ {\cal L}_{\rm{eff.\, Int.}}^{\rm{spin \,1 \,DM} } = \frac{{\alpha^{V^0}_{S}}}{\Lambda_{\rm {eff}}^4} {\cal{O}}_{S}^{1} + \frac{{\alpha^{V^0}_{T_2}}}{\Lambda_{\rm {eff}}^4} {\cal{O}}_{T_2}^{1} + \frac{{\alpha^{V^0}_{AV}}}{\Lambda_{\rm {eff}}^2} {\cal{O}}_{AV}^{1} , $ $ {\cal{O}}^{1/2}_S\equiv m_{\chi^0}\ \left(\bar{\chi^0}\,\chi^0\; \right) \,\, m_l\,\, \left(\overline{l}\,l\right), $ $ {\cal{O}}^{1/2}_{T_1}\equiv \bar{\chi^0}\,i\,\partial^\mu \,\gamma^\nu\, \chi^0 \,\,\,{\cal{O}}^l_{\mu\nu} \ +\ {\rm{h.c.}} , $ $ {\cal{O}}^{1/2}_{\rm{AV}}\equiv\bar{\chi^0}\,\gamma_\mu\,\gamma_5\,\chi^0 \,\,\, \left(\overline{l}\,\gamma^\mu\,\gamma_5\,l\right), $ $ {\cal{O}}^{0}_S\equiv m_{\phi^0}^2 \ {\phi^0}^2 \,\, m_l\,\, \left(\overline{l}\,l\right) , $ $ {\cal{O}}^{0}_{T_2}\equiv \,\,\phi^0\,\, i\, \partial^\mu \,\,i\,\partial^\nu\,\, \phi^0\,\, {\cal{O}}^{l}_{\mu\nu}\ +\ {\rm{h.c.}}, $ $ {\cal{O}}^{1}_S\equiv m_{V^0}^2\ {V^0}^\mu \, {V^0}_\mu \,\, m_l\,\, \left(\overline{l}\,l\right), $ $ {\cal{O}}^{1}_{T_2}\equiv \ {V^0}^\rho\,\, i\, \partial^\mu \,\,i\,\partial^\nu\,\, {V^0}_\rho\,\, {\cal{O}}^{l}_{\mu\nu} \ +\ {\rm{h.c.}}, $ $ {\cal{O}}^{1}_{\rm{AV}}\equiv i \ \epsilon_{\mu\nu\rho\sigma}\,\, {V^0}^\mu\ i\ \partial^\nu \ {V^0}^\rho\,\,\, \left(\overline{l}\,\gamma^\sigma \, \gamma_5\,l \right). $ The effective operators given above can be seen to be $ SU(2)_L \otimes U(1)_Y $ gauge-invariant by noting that the leptonic bilinear terms written in terms of left- and right-handed gauge eigenstates $ l_L $ and $ e_R $ can be combined to give the above operators. The term proportional to the lepton mass $ m_l $ is obtained by integrating out the Higgs in the EFT formalism. However, this term is only valid up to the weak scale. The twist-2 operators $ {\cal{O}}^{l}_{\mu\nu} $ for charged leptons are defined as: $ \begin{aligned}[b] {\cal{O}}^{l}_{\mu\nu}\equiv&\frac{\rm i}{2}\ \overline{l_L}\ \left( {D}_\mu^L\gamma_\nu^{} +{{D}_\nu^{L}}\gamma_\mu^{}-\frac{1}{2}g_{\mu\nu}^{} {{\not\!\! D}^L}\right)\ l_L\ \\ &+\ \frac{\rm i}{2}\ \overline{e_R}\ \left( {D}_\mu^R\gamma_\nu^{} +{{D}_\nu^{R}}\gamma_\mu^{}-\frac{1}{2}g_{\mu\nu}^{} {{\not\!\! D}^R}\right)\ e_R , \end{aligned} $ where $ {D_\mu}^L $ and $ {D_\mu}^R $ are the covariant derivatives given by: $ \begin{aligned}[b] & D_\mu^L\equiv {\rm i}\ \partial_\mu\ -\ \dfrac{1}{2}g\ \overrightarrow{\tau}\cdot \overrightarrow{W}_\mu\ +\ \dfrac{1}{2}g^\prime\ B_\mu ,\\ &D_\mu^R\equiv {\rm i}\ \partial_\mu\ +\ g^\prime\ B_\mu. \end{aligned} $ The Lorentz structure of the operators determines the nature of dominant DM pair annihilation cross-sections. It turns out that the scalar and axial-vector operator contributions for fermionic and vector DM respectively are p-wave suppressed. III. DM PHENOMENOLOGY A. 
Constraints from relic density In the early Universe the DM particles were in thermal equilibrium with the plasma through the creation and annihilation of DM particles. The relic density contribution of the DM particles is obtained by numerically solving the Boltzmann equation [63] to give: $ \begin{aligned}[b] \Omega_{\rm{DM}} { h}^2 =& \frac{\pi\,\sqrt{{g_{\rm{eff}}}(x_F)}}{\sqrt{90}}\frac{x_F\,T_0^3\ g}{M_{Pl}\,\rho_c\,\langle\sigma^{ann} \left\vert \vec{v}\right\vert\rangle\, { g_{\rm{eff}}}(x_F)}\\ \\ \approx &0.12 \, \frac{x_F}{28} \, \frac{\sqrt{ g_{\rm{eff}}(x_F)}}{10}\, \frac{2\times10^{-26} cm^3/s}{\langle\sigma^{ann}\left\vert \vec v\right\vert\rangle} , \end{aligned} $ and $ x_F $ at freeze-out is given by: $ x_F = \log \left[a\,\left(a+2\right)\, \sqrt{\frac{45}{8}}\,\frac{ g \,M_{Pl}\, m_{\rm{DM}}\,\langle\sigma^{ann}\left\vert \vec v\right\vert\rangle}{2\,\pi^3\, \sqrt{x_F\, g_{\rm{eff}}(x_F)}} \right], $ where a is a parameter of the order of one. $ g_{\rm{eff}} $ is the effective number of degrees of freedom and is taken to be $ 92 $ near the freeze-out temperature, and $ g = 2,\ 1\ {\rm{and}}\ 3 $ for fermionic, scalar and vector DM particles respectively. The relevant annihilation cross-sections are given in Appendix A. We have computed the relic density numerically using MadDM [64] and MadGraph [65], generating the input model file using the Lagrangian given in Eqs. (1)-(11). Figure 1 shows the contour graphs in the effective cut-off $ \Lambda_{\rm{eff}} $ and DM mass plane for the fermionic, scalar and vector DM particles. For arbitrary values of the coupling $ \alpha $, the effective cut-off $ \Lambda_{\rm{eff}} $ is obtained by noting that $ \Lambda_{\rm{eff}} $ for scalar and twist-2 tensor operators scales as $ \alpha^{1/4} $ whereas for AV operators $ \Lambda_{\rm{eff}} $ scales as $ \alpha^{1/2} $. We have shown the graphs by taking one operator at a time and taking the coupling $ \alpha's = 1 $. We have made sure that perturbative unitarity of the EFT is maintained for the entire parameter space scanned in Fig. 1. The points lying on the solid lines satisfy the observed relic density $\Omega_{\rm DM} h^2 = 0.1198$. The region below the corresponding solid line is the cosmologically allowed parameter region of the respective operator. We find from Fig. 1(a) that the scalar operator for the fermionic DM is sensitive to the low DM mass. Figure 1. (color online) Relic density contours satisfying $ \Omega_{\rm{DM}}h^2 $ = $ 0.1198 \pm 0.0012 $ in the DM mass - $ \Lambda_{\rm{eff}} $ plane. All contours are drawn assuming universal lepton flavor couplings of effective DM-lepton interactions. The region below the corresponding solid line is the cosmologically allowed parameter region of the respective operator. B. Indirect detection DM annihilation in the dense regions of the Universe would generate a high flux of energetic SM particles. The Fermi Large Area Telescope (LAT) [17-19] has produced the strongest limit on DM annihilation cross-sections for singular annihilation final states to $ b\ \bar{b},\ \tau\ \bar{\tau} $, etc. In the case of DM particles annihilating into multiple channels, the bounds on cross-sections have been analysed in Ref. [66]. In our case we display the bounds from Fermi-LAT in Fig. 2, assuming the DM particles considered in this article to couple only to $ \tau $-leptons i.e. $ \tau $-philic DM. Figure 2. (color online) DM annihilation cross-section to $ \tau^+ \tau^- $. 
Solid lines in all figures show the variation of the DM annihilation cross-section with DM mass, where all other parameters are taken from the observed relic density. The median of the DM annihilation cross-section, derived from a combined analysis of the nominal target sample for the $ \tau^+ \tau^- $ channel assuming 100% branching fraction, restricts the allowed shaded region from above. v is taken to be $ \sim 10^{-3}\ c $. In Fig. 2 we show the prediction for the dark matter annihilation cross-section into $ \tau^+ \tau^- $ for the set of parameters which satisfy the relic density constraints for the $ \tau $-philic DM particles. These cross-sections are compared with the upper bounds on the allowed annihilation cross-sections in the $ \tau^+ \tau^- $ channel obtained from the Fermi-LAT data [17-19]. The Fermi-LAT data thus put a lower limit on the DM particle mass, even in regions allowed by the relic-density observations. Likewise, Fermi-LAT puts severe constraints on the twist-2 $ {\cal{O}}_{T_1}^{1/2} $ operator (Fig. 2(a)) for fermionic DM and the $ {\cal{O}}_{S}^0 $ operator (Fig. 2(b)) for scalar DM. There is a minimum dark matter particle mass allowed by Fermi-LAT observations. C. DM-electron scattering Direct detection experiments [2-10] look for the scattering of a nucleon or atom by DM particles. These experiments are designed to measure the recoil momentum of the nucleons or atoms of the detector material. This scattering can be broadly classified as (a) DM-nucleon, (b) DM-atom and (c) DM-electron scattering. Since lepto-philic DM does not have direct interaction with quarks or gluons at the tree level, the DM-nucleon interaction can only be induced at the loop level. It has been shown [67], and has been independently verified by us, that the event rate for direct detection of DM-atom scattering is suppressed by a factor of $ \sim 10^{-7} $ with respect to DM-electron elastic scattering, which in turn is suppressed by a factor of $ \sim 10^{-10} $ with respect to the loop-induced DM-nucleon scattering. In this article we restrict ourselves to the scattering of DM particles with free electrons.
$ \begin{aligned}[b]\sigma_{S}^{\chi^0\, e^-} = & \frac{{\alpha^{\chi^0}_{S}}^2}{\pi}\ \frac{m_{\chi^0}^2}{\Lambda_{\rm{eff}}^8}\ m_e^4\ \\ \simeq &\ {\alpha^{\chi^0}_{S}}^2\ \left(\frac{m_{\chi^0}}{200\ {\rm{GeV}}}\right)^2\ \left( \frac{1 \ {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^8\ 3.09 \times 10^{-61}\ {\rm{cm}}^2 ,\end{aligned} $ $ \begin{aligned}[b]\sigma_{T_1}^{\chi^0\, e^-} = &36\ \frac{{\alpha^{\chi^0}_{T_1}}^2}{\pi}\ \frac{m_{\chi^0}^2}{\Lambda_{\rm{eff}}^8}\ m_e^4\ \\ \simeq&\ {\alpha^{\chi^0}_{T_1}}^2\ \left(\frac{m_{\chi^0}}{200\ {\rm{GeV}}}\right)^2\ \left( \frac{1 {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^8\ 1.11 \times 10^{-59}\ {\rm{cm}}^2 , \end{aligned}$ $ \begin{aligned}[b] \sigma_{AV}^{\chi^0\, e^-} = & 3\ \frac{{\alpha^{\chi^0}_{AV}}^2}{\pi}\ \frac{m_e^2}{\Lambda_{\rm{eff}}^4}\ \\ \simeq&\ {\alpha^{\chi^0}_{AV}}^2\ \left( \frac{1 {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^4\ 9.27 \times 10^{-47}\ {\rm{cm}}^2 , \end{aligned}$ $ \begin{aligned}[b] \sigma_{S}^{\phi^0 \,e^-} = & \frac{{\alpha^{\phi^0}_{S}}^2}{\pi}\ \frac{m_{\phi^0}^2}{\Lambda_{\rm{eff}}^8}\ m_e^4\ \\ \simeq &\ {\alpha^{\phi^0}_{S}}^2\ \left(\frac{m_{\phi^0}}{200\ {\rm{GeV}}}\right)^2\ \left( \frac{1 {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^8\ 3.09 \times 10^{-61}\ {\rm{cm}}^2 ,\end{aligned} $ $ \begin{aligned}[b] \sigma_{T_2}^{\phi^0 \,e^-} =& \frac{9}{16}\ \frac{{\alpha^{\phi^0}_{T_2}}^2}{\pi}\ \frac{m_{\phi^0}^4}{\Lambda_{\rm{eff}}^8}\ m_e^2\ \\ \simeq&\ {\alpha^{\phi^0}_{T_2}}^2\ \left(\frac{m_{\phi^0}}{200\ {\rm{GeV}}}\right)^4\ \left( \frac{1 {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^8\ 2.78 \times 10^{-50}\ {\rm{cm}}^2 , \end{aligned} $ $ \begin{aligned}[b]\sigma_{S}^{V^0\, e^-} =& \frac{{\alpha^{V^0}_{S}}^2}{\pi}\ \frac{m_{V^0}^2}{\Lambda_{\rm{eff}}^8}\ m_e^4\ \\ \simeq&\ {\alpha^{V^0}_{S}}^2\ \left(\frac{m_{V^0}}{200\ {\rm{GeV}}}\right)^2\ \left( \frac{1 {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^8\ 3.09 \times 10^{-61}\ {\rm{cm}}^2 , \end{aligned} $ $ \begin{aligned}[b] \sigma_{T_2}^{V^0\, e^-} = &\frac{9}{16}\ \frac{{\alpha^{V^0}_{T_2}}^2}{\pi}\ \frac{m_{V^0}^4}{\Lambda_{\rm{eff}}^8}\ m_e^2\ \\ \simeq&\ {\alpha^{V^0}_{T_2}}^2\ \left(\frac{m_{V^0}}{200\ {\rm{GeV}}}\right)^4\ \left( \frac{1 {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^8\ 2.78 \times 10^{-50}\ {\rm{cm}}^2 , \end{aligned} $ $ \begin{aligned}[b] \sigma_{AV}^{V^0\, e^-} = &\frac{1}{144}\ \frac{{\alpha^{V^0}_{AV}}^2}{\pi}\ \frac{1}{\Lambda_{\rm{eff}}^4}\ \frac{m_e^4}{m_{V^0}^2}\ v^4\ \\ \simeq&\ {\alpha^{V^0}_{AV}}^2\! \left(\frac{200\ {\rm{GeV}}}{m_{V^0}}\right)^2\left( \frac{1 {\rm{TeV}}}{\Lambda_{\rm{eff}}} \right)^4 v^4\ 1.34 \times 10^{-60}\ {\rm{cm}}^2 ,\end{aligned} $ We find that the electron-DM scattering cross-sections are dominated by the effective interactions mediated by the $ AV $ operator $ {\cal{O}}_{AV}^{1/2} $ for fermionic DM and by the twist-2 operators $ {\cal{O}}_{T_2}^{0} $ and $ {\cal{O}}_{T_2}^{1} $ for scalar and vector DM respectively. In Fig. 3, we plot the DM-free electron scattering cross-section as a function of DM mass only for the dominant operators as discussed above. The other operator contributions are negligible in comparison. The cross-sections for a given DM mass are computed with the corresponding value of $ \Lambda_{\rm{eff}} $ satisfying the observed relic density for these operators. These results are then compared with the null results of DAMA/LIBRA [2, 3] at 90% confidence level for DM-electron scattering and XENON100 [7, 8] at 90% confidence level for inelastic DM-atom scattering. Figure 3. 
(color online) DM-free electron elastic scattering cross-section as a function of DM mass. The solid lines are drawn for the dominant operators $ {\cal{O}}_{AV}^{1/2},\ {\cal{O}}_{T_2}^{0} $ and $ {\cal{O}}_{T_2}^{1} $ for fermionic, scalar and vector DM particles respectively. The exclusion plots from DAMA at 90% C.L. for the case of DM-electron scattering are also shown [67]. Bounds at 90% C.L. are shown for XENON100 from inelastic DM-atom scattering [68]. The dashed curves show the 90% C.L. constraint from the Super-Kamiokande limit on neutrinos from the Sun, obtained by assuming annihilation into $ \tau^+\tau^- $ [67]. IV. COLLIDER SENSITIVITY OF EFFECTIVE OPERATORS A. LEP constraints on the effective operators Existing results and observations from LEP data can be used to put constraints on the effective operators. The cross-section for the process $ e^+e^-\to \gamma^\star + \, {\rm{DM \ pair}} $ is compared with the combined analysis from the DELPHI and L3 collaborations for $ e^+e^-\to \gamma^\star + Z \to q_{i}\bar q_{i} + \nu_{l_j}\bar\nu_{l_j} $ at $ \sqrt{s} $ = $ 196.9 $ GeV and an integrated luminosity of 679.4 pb$ ^{-1} $, where $ q_i\equiv u,\,d,\,s $ and $ \nu_{l_j}\equiv \nu_e,\,\nu_{\mu},\nu_\tau $. The Feynman diagrams contributing to the production of $ \gamma / \gamma^\star $ with missing energy induced by lepto-philic operators at a lepton $ e^-\,e^+ $ collider are shown in Fig. 4. The measured cross-section from the combined analysis for the said process is found to be $ 0.055 $ pb, with a measured statistical error $ \delta\sigma_{\rm{stat}} $, systematic error $ \delta\sigma_{\rm{syst}} $ and total error $ \delta\sigma_{\rm{tot}} $ of $ 0.031 $ pb, $ 0.008 $ pb and $ 0.032 $ pb respectively [69]. Hence, the contribution due to an additional channel containing final-state DM pairs, resulting in missing energy along with two quark jets, can be constrained from the observed $ \delta\sigma_{\rm{tot}} $. In Fig. 5, we have plotted the 95% C.L. solid-line contours satisfying $ \delta\sigma_{\rm{tot}} \approx 0.032 $ pb, corresponding to the operators, in the DM mass-$ \Lambda_{\rm{eff}} $ plane. The region under the solid line corresponding to each operator is disallowed by the combined LEP analysis. The phenomenologically interesting DM mass range $ \leqslant 50 $ GeV is completely disfavored by the LEP experiments, except for the operator $ {\cal{O}}_{AV}^{1/2} $. Figure 4. Feynman diagrams contributing to the production of $ \gamma / \gamma^\star $ with missing energy induced by lepto-philic operators (5)-(11) at a lepton $ e^-\,e^+ $ collider. Figure 5. (color online) Solid lines depict the contours in the DM mass-$ \Lambda_{\rm{eff}} $ plane for $ e^+e^-\to {\rm{DM\, pairs}} + \gamma^\star \to \,\,\not\!\!\!E_T + q_{i}\bar q_{i} $ at $ \sqrt{s} $ = 196.9 GeV and an integrated luminosity of 679.4 pb$ ^{-1} $, satisfying the constraint $ \delta\sigma_{\rm{tot}} $ = 0.032 pb obtained from the combined analysis of DELPHI and L3 [69]. The regions below the solid lines are forbidden by the LEP observation. The regions below the dashed lines corresponding to the respective operators satisfy the relic density constraint $ \Omega_{\rm{DM}}h^2 \le $ $ 0.1198 \pm 0.0012 $. B.
${\not\!\! E}_T$ + mono-photon signals at ILC and $ {\cal{X}}^2 $ analysis In this subsection we study the DM pair production processes accompanied by an on-shell photon at the proposed International Linear Collider (ILC) for the DM mass range $ \sim $ 50 - 500 GeV: (a) $ e^+\,e^-\,\rightarrow \,\chi^0\,\bar{\chi^0}\,\gamma $, (b) $ e^+\,e^-\,\rightarrow \,\phi^0\,\phi^0\,\gamma $, and (c) $e^+\,e^-\,\rightarrow V^0\,V^0\,\gamma $, as shown in Figs. 8-10. The dominant SM background for the $ e^+e^-\to \not \!\! E_T + \gamma $ signature comes from the $ Z\gamma $ production process: $ e^+\,e^-\,\rightarrow\,Z+\gamma \to \sum \nu_i\,\bar{\nu}_i + \gamma $. The analyses for the background and the signal processes, corresponding to the accelerator parameters as conceived in the Technical Design Report for the ILC [70, 71] and given in Table 1, were performed by simulating SM backgrounds and DM signatures using MadGraph [65], MadAnalysis 5 [72] and the model file generated by FeynRules [73].

Table 1. ILC accelerator parameters as per the Technical Design Report [70, 71]. $\sigma_{\rm BG}$ is the background cross-section for the $e^-\,e^+\,\rightarrow\,\sum \nu_i\,\bar{\nu}_i\,\gamma$ process computed using the selection cuts defined in Section IV.B.
                                   ILC-250   ILC-500   ILC-1000
  $\sqrt{s}$ (in GeV)                250       500       1000
  $L_{\rm int}$ (in fb$^{-1}$)       250       500       1000
  $\sigma_{\rm BG}$ (pb)             1.07      1.48      2.07

We impose the following cuts to reduce the backgrounds for the DM pair production in association with a mono-photon:
$ \bullet $ Transverse momentum of the photon $ p_{T_{\gamma}} \geqslant $ 10 GeV,
$ \bullet $ Pseudo-rapidity of the photon restricted to $ \left\vert\eta_\gamma\right\vert\leqslant $ 2.5,
$ \bullet $ Recoil photon energy against an on-shell Z disallowed: ${2\,E_\gamma}/{\sqrt{s}} \notin \left[0.8,0.9\right]$, $ \left[0.95,0.98\right] $ and $ \left[0.98,0.99\right] $ for $ \sqrt{s} $ = 250 GeV, 500 GeV and 1 TeV respectively.
The shape profiles corresponding to the mono-photon with missing energy processes can be studied in terms of the kinematic observables $ p_{T_{\gamma}} $ and $ \eta_\gamma $, as they are found to be the most sensitive. We generate the normalized one-dimensional distributions for the SM background processes and the signals induced by the relevant operators. To study the dependence on DM mass, we plot the normalized differential cross-sections in Figs. 6 and 7 for three representative values of DM mass, $ 75,\ 225 $ and $ 325 $ GeV, at a center of mass energy $ \sqrt{s} = 1 $ TeV and an integrated luminosity of 1 ${\rm ab}^{-1}$. Figure 6. (color online) Normalized one-dimensional differential cross-sections with respect to $ p_{T_\gamma} $ corresponding to the SM processes, and those induced by lepto-philic operators at three representative values of DM mass: 75, 225 and 325 GeV. Figure 7. (color online) Normalized one-dimensional differential cross-sections with respect to $ \eta_{\gamma} $ corresponding to the SM processes, and those induced by lepto-philic operators at the three representative values of DM mass: 75, 225 and 325 GeV.
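As a rough illustration of how the selection cuts listed above act on photon candidates, the sketch below applies them to arrays of photon transverse momenta, pseudo-rapidities and energies. This is not the MadAnalysis 5 selection actually used in the analysis; the function name and the toy kinematics are placeholders, and only the numerical cut values restate what is quoted in the text.

```python
import numpy as np

def passes_photon_cuts(pt_gamma, eta_gamma, e_gamma, sqrt_s):
    """Apply the mono-photon selection cuts described in the text.

    pt_gamma, eta_gamma, e_gamma : arrays of photon pT [GeV], pseudo-rapidity and energy [GeV]
    sqrt_s : collider centre-of-mass energy in GeV (250, 500 or 1000)
    """
    # Z-recoil veto windows on x = 2 E_gamma / sqrt(s), one window per collider energy
    veto_windows = {250: (0.80, 0.90), 500: (0.95, 0.98), 1000: (0.98, 0.99)}
    lo, hi = veto_windows[sqrt_s]

    x = 2.0 * e_gamma / sqrt_s
    cut_pt = pt_gamma >= 10.0               # p_T(gamma) >= 10 GeV
    cut_eta = np.abs(eta_gamma) <= 2.5      # |eta(gamma)| <= 2.5
    cut_recoil = ~((x >= lo) & (x <= hi))   # reject photons recoiling against an on-shell Z
    return cut_pt & cut_eta & cut_recoil

# Toy usage with made-up photon kinematics at sqrt(s) = 500 GeV
pt = np.array([12.0, 8.0, 45.0, 60.0])
eta = np.array([0.5, 1.0, 3.0, -1.2])
e = np.array([30.0, 20.0, 50.0, 242.0])  # the last photon falls in the Z-recoil veto window
print(passes_photon_cuts(pt, eta, e, 500))  # -> [ True False False False]
```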
The sensitivity of $ \Lambda_{\rm{eff}} $ with respect to DM mass is enhanced by computing the $ {\cal{X}}^2 $ with the double differential distributions of kinematic observables $ p_{T_\gamma} $ and $ \eta_\gamma $ corresponding to the background and signal processes for: (i) 50 GeV $ \leqslant m_{\rm{DM}} \leqslant $ 125 GeV at $ \sqrt{s} $ = 250 GeV and an integrated luminosity of 250 fb$ ^{-1} $; (ii) 100 GeV $ \leqslant m_{\rm{DM}} \leqslant $ 250 GeV at $ \sqrt{s} $ = 500 GeV and an integrated luminosity of 500 fb$ ^{-1} $; and (iii) 100 GeV $ \leqslant m_{\rm{DM}} \leqslant $ 500 GeV at $ \sqrt{s} $ = 1 TeV and an integrated luminosity of 1 ab$ ^{-1} $. The $ {\cal{X}}^2 $ is defined as $ \begin{aligned}[b] {\cal{X}}^2\equiv&{\cal{X}}^2 \left(m_{\rm{DM}},\, \frac{\alpha_i}{\Lambda_{\rm {eff}}^n} \right) \\ =& \sum\limits_{j = 1}^{n_1}\sum\limits_{i = 1}^{n_2} \left [ \dfrac{\dfrac{\Delta N_{ij}^{\rm NP}}{\left(\Delta p_{T_\gamma}\right)_i\, \left(\Delta \eta_\gamma\right)_j}}{\sqrt{ \dfrac{\Delta N_{ij}^{\rm SM+NP}} {\left(\Delta p_{T_\gamma}\right)_i\, \left(\Delta \eta_\gamma\right)_j} +\delta_{\rm{sys}}^2\left\{ \dfrac{\Delta N_{ij}^{\rm SM+NP}}{\left(\Delta p_{T_\gamma}\right)_i\, \left(\Delta \eta_\gamma\right)_j}\right\}^2} }\right ]^2 , \end{aligned}$ where $\Delta N_{ij}^{\rm NP}$ and $\Delta N_{ij}^{\rm SM+NP}$ are the number of New Physics and total differential events respectively in the two-dimensional $ \left[\left(\Delta p_{T_\gamma}\right)_i-\left(\Delta \eta_\gamma\right)_j\right]^{\rm{th}} $ grid. Here $ \delta_{\rm{sys}} $ represents the total systematic error in the measurement. Adopting a conservative value of 1% for the systematic error and using the collider parameters given in Table 1, we simulate the two-dimension differential distributions to calculate the $ {\cal{X}}^2 $. In Figs. 8-10, we have plotted the $ 3\sigma $ contours at 99.73% C.L in the $m_{\rm DM}-\Lambda_{\rm{eff}}$ plane corresponding to $ \sqrt{s} = 250 $ GeV, 500 GeV and 1 TeV respectively for the effective operators satisfying perturbative unitarity. Figure 8. (color online) Solid lines depict $ 3\sigma $ with 99.73 % C.L. contours in the $ m_{DM}-\Lambda_{\rm{eff}} $ plane from the $ {\cal{X}}^2 $ analyses of the $e^+e^-\to {\not\!\! E}_T +\gamma$ signature at the proposed ILC designed for $ \sqrt{s} $ = 250 GeV with an integrated luminosity 250 fb$ ^{-1} $. The regions below the solid lines corresponding to the respective contour are accessible for discovery with $ \geqslant $ 99.73% C.L. The regions below the dashed lines corresponding to respective operators satisfy the relic density constraint $ \Omega_{\rm{DM}}h^2 \leqslant $ $ 0.1198 \pm 0.0012 $. Figure 9. (color online) Solid lines depict $ 3\sigma $ with 99.73 % C.L. contours in the $m_{\rm DM}-\Lambda_{\rm{eff}}$ plane from the $ {\cal{X}}^2 $ analyses of the $e^+e^-\to {\not\!\! E}_T +\gamma$ signature at the proposed ILC designed for $ \sqrt{s} $ = 500 GeV with an integrated luminosity 500 fb$ ^{-1} $. The regions below the solid lines corresponding to the respective contour are accessible for discovery with $ \geqslant $ 99.73% C.L. The regions below the dashed lines corresponding to respective operators satisfy the relic density constraint $ \Omega_{\rm{DM}}h^2 \leqslant $ $ 0.1198 \pm 0.0012 $. Figure 10. (color online) Solid lines depict $ 3\sigma $ with 99.73 % C.L. contours in the $m_{\rm DM}-\Lambda_{\rm{eff}}$ plane from the $ {\cal{X}}^2 $ analyses of the $e^+e^-\to {\not\!\! 
E}_T +\gamma$ signature at the proposed ILC designed for $ \sqrt{s} $ = 1 TeV with an integrated luminosity of 1 ab$ ^{-1} $. The regions below the solid lines corresponding to the respective contours are accessible for discovery with $ \geqslant $ 99.73% C.L. The regions below the dashed lines corresponding to the respective operators satisfy the relic density constraint $ \Omega_{\rm{DM}}h^2 \leqslant $ $ 0.1198 \pm 0.0012 $. The sensitivity of mono-photon searches can be improved by considering polarised initial beams [74, 75]. For illustration, we consider +80% polarised $ e^- $ and -30% polarised $ e^+ $ initial beams. In Table 2, we show the $ 3\sigma $ reach of the cut-off $ \Lambda_{\rm{eff}} $ from the $ {\cal{X}}^2 $ analysis for two representative values of DM mass, $ 75 $ and $ 225 $ GeV, at the proposed ILC for $ \sqrt{s} = 500 $ GeV with an integrated luminosity of $500\ \rm fb^{-1}$, for unpolarised and polarised initial beams, and find an improvement in the $ \Lambda_{\rm{eff}} $ sensitivity for the polarised beams.

Table 2. Estimation of the $3 \sigma$ reach of the cut-off $\Lambda_{\rm{eff}}$ in GeV from the ${\cal{X}}^2$ analysis for two representative values of DM mass, $75$ and $225$ GeV, at the proposed ILC for $\sqrt{s}=500$ GeV with an integrated luminosity of $500\ \rm fb^{-1}$, for unpolarised $\left(P_{e^-},\, P_{e^+}\right)$ = (0, 0) and polarised $\left(P_{e^-},\, P_{e^+}\right)$ = (0.8, −0.3) initial beams.
                                      Unpolarised            Polarised
  $m_{\rm DM}$ (GeV)                  75        225           75        225
  ${\cal{O}}^{1/2}_{T_1}$             956.1     766.4         1135.7    948.0
  ${\cal{O}}^{1/2}_{\rm{AV}}$         2994.4    1629.4        2998.6    2345.5
  ${\cal{O}}^{0}_{T_2}$               461.8     319.1         767.8     373.2
  ${\cal{O}}^{1}_{T_2}$               1751.4    361.8         1651.2    444.3
  ${\cal{O}}^{1}_{\rm{AV}}$           5718.0    777.3         5976.2    1129.8

V. SUMMARY AND RESULTS In this article we have studied DM phenomenology in an effective field theory framework. We have considered SM gauge-invariant contact interactions between dark matter particles and leptons up to dimension 8. To ensure the invariance of SM gauge symmetry at all energy scales, we have restricted ourselves to self-conjugate DM particles, namely a Majorana fermion, a real scalar or a real vector. We have estimated their contribution to the relic density and obtained constraints on the parameters of the theory from the observed relic density $\Omega_{\rm DM} h^2 = 0.1198 \pm 0.0012$. Indirect detection data from FermiLAT put a lower limit on the allowed DM mass. The data also put severe constraints on the twist-2 $ {\cal{O}}^{1/2}_{T_1} $ operator for fermionic DM and the scalar $ {\cal{O}}^0_S $ operator for scalar DM. Analysis of the existing LEP data disallows the phenomenologically interesting DM mass range $ \leqslant 50 $ GeV, except for the ${\cal{O}}^{1/2}_{\rm AV}$ operator. We then performed a $ {\cal{X}}^2 $ analysis for the pair production of DM particles at the proposed ILC for the DM mass range $ \sim 50-500 $ GeV, for the relevant operators listed in Table 2. We find that in the $m_{\rm DM}-\Lambda_{\rm{eff}}$ region allowed by the relic density and indirect detection data, higher sensitivity can be obtained from the dominant mono-photon signal at the proposed ILC, particularly for the twist-2 operators. NOTE ADDED For low mass DM, our attention was drawn by the referee to the fact that, in addition to on-shell Z production at LEP, the future FCC-ee and CEPC will be sources of large numbers of Zs, producing Tera Zs.
This may result in competitive constraints [76, 77] on the twist-2 operators with covariant derivatives compared to the ISR and FSR processes considered at the ILC.

We thank Sukanta Dutta for discussions and his initial participation in this work. HB thanks Mihoko Nojiri and Mamta Dahiya for suggestions. HB acknowledges the CSIR-JRF fellowship and support from CSIR grant 03(1340)/15/EMR-II.

APPENDIX A: ANNIHILATION CROSS-SECTIONS

Annihilation cross-sections for the operators given in Eqs. (4)-(11) are given respectively as:

$\sigma_{S}^{\rm ann} \left\vert \vec v\right\vert \left(\chi^0\,\bar{\chi^0}\to l^+l^-\right) = \frac{1}{8\pi}\, \frac{{\alpha^{\chi^0}_S}^2}{\Lambda_{\rm eff}^8}\, m^4_{\chi^0}\, m_l^2 \left[ 1- \frac{m_l^2}{m_{\chi^0}^2} \right]^{3/2} \left\vert \vec v\right\vert^2 , \tag{A1}$

$\sigma_{T_1}^{\rm ann} \left\vert \vec v\right\vert \left(\chi^0\,\bar{\chi^0}\to l^+l^-\right) = \frac{1}{2\pi}\, \frac{{\alpha^{\chi^0}_{T_1}}^2}{\Lambda_{\rm eff}^8}\, m^6_{\chi^0}\, \sqrt{1-\frac{m_l^2}{m_{\chi^0}^2}} \left[2+\frac{m_l^2}{m_{\chi^0}^2}+\left( \frac{7}{6}-\frac{11}{16}\, \frac{m_l^2}{m_{\chi^0}^2}-\frac{65}{48}\, \frac{m_l^4}{m_{\chi^0}^4} \right) \left\vert \vec v\right\vert^2 \right], \tag{A2}$

$\sigma_{AV}^{\rm ann} \left\vert \vec v\right\vert \left(\chi^0\,\bar{\chi^0}\to l^+l^-\right) = \frac{1}{2\pi}\, \frac{{\alpha^{\chi^0}_{\rm AV}}^2}{\Lambda_{\rm eff}^4}\, m^2_l\, \sqrt{1-\frac{m_l^2}{m_{\chi^0}^2}} \left[1+ \left( \frac{1}{3}\frac{m_{\chi^0}^2}{m_l^2} -\frac{5}{6}-\frac{7}{6}\frac{m_l^2}{m_{\chi^0}^2}\right) \left\vert \vec v\right\vert^2 \right], \tag{A3}$

$\sigma_{S}^{\rm ann} \left\vert \vec v\right\vert \left(\phi^0\,\phi^0\to l^+l^-\right) = \frac{1}{4\pi}\, \frac{{\alpha^{\phi^0}_S}^2}{\Lambda_{\rm eff}^8}\, m^4_{\phi^0}\, m_l^2\, \sqrt{1-\frac{m_l^2}{m_{\phi^0}^2}} \left[ 1- \frac{m_l^2}{m_{\phi^0}^2} + \left( -\frac{3}{2}+ \frac{15}{4}\, \frac{m_l^2}{m_{\phi^0}^2} \right) \left\vert \vec v\right\vert^2 \right], \tag{A4}$

$\sigma_{T_2}^{\rm ann} \left\vert \vec v\right\vert \left(\phi^0\,\phi^0\to l^+l^-\right) = \frac{1}{4\pi}\, \frac{{\alpha^{\phi^0}_{T_2}}^2}{\Lambda_{\rm eff}^8}\, m^6_{\phi^0}\, \sqrt{1-\frac{m_l^2}{m_{\phi^0}^2}} \left[ \frac{m_l^2}{m_{\phi^0}^2} - \frac{m_l^4}{m_{\phi^0}^4} + \left(\frac{5}{12}\, \frac{m_l^2}{m^2_{\phi^0}}- \frac{13}{24}\,\frac{m_l^4}{m^4_{\phi^0}} \right) \left\vert \vec v\right\vert^2 \right], \tag{A5}$

$\sigma_{S}^{\rm ann} \left\vert \vec v\right\vert \left(V^0\,V^0\to l^+l^-\right) = \frac{1}{12\pi}\, \frac{{\alpha^{V^0}_S}^2}{\Lambda_{\rm eff}^8}\, m^4_{V^0}\, m_l^2\, \sqrt{1-\frac{m_l^2}{m_{V^0}^2}} \left[ 1- \frac{m_l^2}{m_{V^0}^2} + \left( \frac{1}{2} + \frac{7}{4}\, \frac{m_l^2}{m_{V^0}^2} \right) \left\vert \vec v\right\vert^2 \right], \tag{A6}$

$\sigma_{T_2}^{\rm ann} \left\vert \vec v\right\vert \left(V^0\,V^0\to l^+l^-\right) = \frac{1}{12\pi}\, \frac{{\alpha^{V^0}_{T_2}}^2}{\Lambda_{\rm eff}^8}\, m^6_{V^0}\, \sqrt{1-\frac{m_l^2}{m_{V^0}^2}} \left[ \frac{m_l^2}{m_{V^0}^2} - \frac{m_l^4}{m_{V^0}^4} + \left(\frac{3}{4}\, \frac{m_l^2}{m^2_{V^0}}- \frac{7}{8}\, \frac{m_l^4}{m^4_{V^0}} \right) \left\vert \vec v\right\vert^2 \right], \tag{A7}$

$\sigma_{\rm AV}^{\rm ann} \left\vert \vec v\right\vert \left(V^0\,V^0\to l^+l^-\right) = \frac{1}{54\pi}\, \frac{{\alpha^{V^0}_{AV}}^2}{\Lambda_{\rm eff}^4}\, m^2_{V^0}\, \sqrt{1-\frac{m_l^2}{m_{V^0}^2}} \left[ 4- 7\,\frac{m_l^2}{m_{V^0}^2} \right] \left\vert \vec v\right\vert^2 . \tag{A8}$
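As a quick numerical illustration of how these expressions are evaluated, the following minimal Python sketch computes Eq. (A1) in natural units (the result is in GeV^-2); the parameter values are illustrative assumptions, not values taken from the text.

import math

def sigma_v_scalar(m_chi, m_l, alpha_s, lambda_eff, v):
    # Eq. (A1): sigma_S^ann * |v| for chi0 chibar0 -> l+ l- via the scalar operator.
    # Masses and lambda_eff are in GeV, v is dimensionless; the result is in GeV^-2.
    x = (m_l / m_chi) ** 2
    return (alpha_s ** 2 / (8.0 * math.pi * lambda_eff ** 8)) * m_chi ** 4 * m_l ** 2 * (1.0 - x) ** 1.5 * v ** 2

# Illustrative (assumed) inputs: a 100 GeV dark matter candidate annihilating to tau pairs.
print(sigma_v_scalar(m_chi=100.0, m_l=1.777, alpha_s=1.0, lambda_eff=1000.0, v=0.3))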
What is the difference between "reaction in both directions" and "equilibrium"?

According to the Wikipedia page on Chemical Equations, symbols are used to differentiate between different types of reactions. To denote the type of reaction: the "$=$" symbol is used to denote a stoichiometric relation; the "$\rightarrow$" symbol is used to denote a net forward reaction; the "$\rightleftarrows$" symbol is used to denote a reaction in both directions; and the "$\rightleftharpoons$" symbol is used to denote an equilibrium. How is a "reaction in both directions" any different from an "equilibrium"? Aren't they supposed to be identical? – krismath

Comment: Unfortunately this is the same definition as in the Gold Book, and no other explanation is given. – Martin

Answer: As far as I know, they are used indistinguishably and interchangeably. One or the other could be chosen to emphasize whether you are talking about chemical equilibrium or just trying to show that a reaction can take place in both directions depending on the conditions (which can be understood as essentially the same thing). I have seen the $⇄$ symbol used when one set of conditions makes the reaction go in one direction and a different set of conditions makes it go in the other direction, and $⇌$ used when the conditions are the same for both directions. The first case is not an equilibrium, but it is true that the reaction can go in both directions using the appropriate reagents in each case; that could not be expressed using $⇌$. The second case is an equilibrium, with both reactions happening continuously under the same conditions; it could also be expressed using $⇄$. That is the only difference I can tell. As Martin pointed out, IUPAC does not give more hints about it, so the question cannot be answered with complete certainty. – Altered State

Comment: This seems reasonable. You can have effects in both directions without them being in equilibrium, and I think that this is what is being denoted here.

Answer: A reaction in both directions may be equilibrating, but it is not necessarily equilibrating (for example, if the system is being excited), and it certainly isn't necessarily at equilibrium. For example, if one puts pure ammonia gas in a container, it will be far from equilibrium with respect to its decomposition products (nitrogen and hydrogen gas), but it will spontaneously proceed in the following direction: $2\mathrm{NH}_3 \rightarrow \mathrm{N}_2 + 3\mathrm{H}_2$. Some time after that, the rate of the reverse reaction, $\mathrm{N}_2 + 3\mathrm{H}_2 \rightarrow 2\mathrm{NH}_3$, becomes non-negligible, so that the two reactions occurring simultaneously can be summarized as $2\mathrm{NH}_3 \rightleftarrows \mathrm{N}_2 + 3\mathrm{H}_2$. That doesn't mean the system is at equilibrium, just that the forward and reverse rates are both non-negligible. Finally, the equilibrium condition applies (the concentration of every species remains constant over time) and the forward and reverse rates become equal. That is the distinction specified with the double harpoon arrows: $2\mathrm{NH}_3 \rightleftharpoons \mathrm{N}_2 + 3\mathrm{H}_2$. I think the harpoon arrows are most often used in the context of reaction rates, so the equilibrium condition, where the forward and reverse rates balance, is marked using them. – Ryan
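For writers who need to typeset the two notations, both symbols are available in standard LaTeX math mode (\rightleftharpoons in the base kernel, \rightleftarrows via the amssymb package); a minimal sketch:

\documentclass{article}
\usepackage{amssymb} % provides \rightleftarrows
\begin{document}
% Reaction written in both directions (conditions may differ for each direction):
$2\mathrm{NH}_3 \rightleftarrows \mathrm{N}_2 + 3\mathrm{H}_2$

% Dynamic equilibrium: forward and reverse rates are equal under the same conditions:
$2\mathrm{NH}_3 \rightleftharpoons \mathrm{N}_2 + 3\mathrm{H}_2$
\end{document}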
Biopreservative efficacy of Enterococcus faecium-immobilised film and its enterocin against Salmonella enterica

Muzamil Rashid1, Sunil Sharma2, Arvinder Kaur2, Amarjeet Kaur1 & Sukhraj Kaur ORCID: orcid.org/0000-0002-2703-170X1

AMB Express volume 13, Article number: 11 (2023)

The growing awareness of the adverse health effects of artificial synthetic preservatives has led to a rapid increase in the demand for safe food preservation techniques and biopreservatives. Thus, in this study, the biopreservative efficacy of enterocin-producing Enterococcus faecium Smr18 and its enterocin, ESmr18, was evaluated against Salmonella enterica contamination in chicken samples. E. faecium Smr18 is susceptible to the antibiotics penicillin-G, ampicillin, vancomycin, and erythromycin, indicating that it is a nonpathogenic strain. Further, the enterocin ESmr18 was purified and characterised as a 3.8 kDa peptide. It possessed broad-spectrum antibacterial activity against both Gram-positive and Gram-negative pathogens, including S. enterica serotypes Typhi and Typhimurium. Purified ESmr18 disrupted the cell membrane permeability of the target cell, thereby causing rapid efflux of potassium ions from L. monocytogenes and S. enterica. Chicken samples inoculated with S. enterica and packaged in alginate films containing immobilised viable E. faecium showed a 3 log10 colony forming unit (CFU) reduction in the counts of S. enterica after 34 days of storage at 7–8 °C. The crude preparation of ESmr18 also significantly (p < 0.05) reduced the CFU counts in a Salmonella-inoculated chicken meat model. Purified ESmr18 at concentrations up to 4.98 µg/ml had no cytolytic effect against human red blood cells. A crude preparation of ESmr18, when orally administered to fish, did not cause any significant (p < 0.05) change in the biochemical parameters of sera samples. Nonsignificant changes in the parameters of the comet and micronucleus assays between the treated and untreated groups of fish further indicated the safety profile of the enterocin ESmr18.

Poultry meat is the second most consumed meat globally (Chai et al. 2017), which makes it a major source of meat-associated food-borne illness. The most common infectious disease transmitted by meat is salmonellosis, caused by Salmonella enterica. Salmonellosis includes gastroenteritis caused by non-typhoidal strains of Salmonella and chronic enteric fever, i.e. typhoid, caused by S. enterica Typhi and Paratyphi (Ryan et al. 2017). Every year, 11–20 million people become sick with typhoid, of whom 128,000 to 161,000 die (WHO 2018). Almost 6% of treated typhoid patients become chronic carriers of S. enterica Typhi, who shed the bacteria in faeces, resulting in continued transmission of the disease (Trujillo et al. 1991). Further, increasing incidence of antibiotic resistance has been observed among food-transmitted Salmonella spp. (Karkey et al. 2018). Thus, Salmonella contamination is one of the major challenges for poultry meat producers. The most common antimicrobial intervention used in chicken is refrigeration, but Salmonella spp. not only survive refrigeration temperatures (Dominguez and Schaffner 2009) but also multiply in chicken at temperatures as low as 5 °C (Smadi et al. 2012).
Treatment of chicken meat with chemical preservatives such as sodium nitrite and sodium benzoate (Sindelar and Milkowski 2011) is done to prevent lipid oxidation and inhibit the growth of microorganisms, but nitrites could be carcinogenic due to their ability to form nitrosamines (Massey and Lees 1992), and sodium benzoate at different doses has a number of adverse health effects (Piper and Piper 2017). Owing to these limitations, more attention has been drawn towards the exploration of safe alternative biopreservative technologies. Lactic acid bacteria (LAB) are known to prolong the shelf life of fermented food due to their ability to secrete a number of metabolites such as organic acids and antimicrobial peptides known as bacteriocins (Mokoena et al. 2021). Some genera of LAB, such as Lactobacillus spp., Pediococcus spp. and Enterococcus spp., have been successfully used for the preservation of processed fruits and vegetables (Agriopoulou et al. 2020), cheese (Medved'ová et al. 2020), and dairy and meat products (McMullen and Stiles 1996). As part of the starter culture, bacteriocin-producing Enterococcus spp. have been shown to inhibit food pathogens in cheese (Giraffa 2003) and meat (Callewaert et al. 2000). The application of LAB for the preservation of non-fermented food products is a challenge because the addition of the bacterial strains to the food can change its sensory properties due to fermentation. To overcome this limitation, LAB can be immobilised in different matrices. Immobilisation of L. plantarum in alginate films has been successfully used for the preservation of cheese (Silva et al. 2022). However, the preservative effect of immobilised LAB in meat has not been studied.

Bacteriocins have long been the focus of research as a potential replacement for chemical preservatives. They are being explored for many applications, such as therapeutic drugs (Soltani et al. 2021) and as biopreservatives in food (Cleveland et al. 2001) and cosmetics (Maurício et al. 2017). Two bacteriocins, i.e. pediocin produced by Pediococcus acidilactici (Papagianni and Anastasiadou 2009) and nisin produced by Lactococcus lactis (Deegan et al. 2006), have been approved by the Food and Drug Administration for use as food preservatives. However, due to their inability to inhibit Gram-negative bacteria (Zhou et al. 2016), they have limited applications as food preservatives in chicken meat products. Enterocins are the bacteriocins secreted by Enterococcus spp. Some enterocins exhibit broad-spectrum activities against both Gram-negative and Gram-positive pathogens, including S. enterica (Ankaiah et al. 2018). Several studies have shown the potential of enterocins in inhibiting L. monocytogenes in cooked and raw meat (Kasimin et al. 2022). However, the biopreservative efficacies of a bacteriocinogenic strain or its bacteriocin in inhibiting S. enterica in meat models have not been evaluated. Therefore, in this study, we explored the potential of enterocin-producing E. faecium Smr18 and its enterocin ESmr18 in inhibiting S. enterica in raw chicken. Further, we purified the enterocin ESmr18 from the culture supernatant of E. faecium and tested its stability at refrigeration temperature. The biopreservative efficacy of ESmr18 against S. enterica-inoculated chicken was also tested. The safety of ESmr18 was evaluated in in vitro and in vivo studies.

Bacterial isolates

Enterococcus faecium Smr18 was received from Dr. Sukhraj Kaur's laboratory.
It was isolated from swab samples of the healthy vaginal microflora of a woman after obtaining her written informed consent. The study was approved by the Human Ethics Committee of Guru Nanak Dev University, Amritsar, India. E. faecium was cultured in de Man Rogosa and Sharpe (MRS, Himedia Laboratories Pvt. Ltd., Mumbai, India) broth at 37 °C in anaerobic jars under stationary conditions. For conducting the experiments, E. faecium was propagated twice in MRS medium at 37 °C. All the chemicals used in the study were purchased from Himedia, except where specifically mentioned. The strain was identified by partial sequencing of the 16S rRNA gene, done at the National Centre for Cell Science, Pune, India. The sequence so obtained was compared with the known sequences of other Enterococcus spp. by alignment using the National Center for Biotechnology Information-Basic Local Alignment Search Tool (NCBI-BLAST) database. A phylogenetic tree was constructed using MEGA 6 software following the Neighbour-Joining method and the Kimura 2-parameter model with Gamma distribution and invariant sites (Gamma + I). The strain was deposited at the Microbial Type Culture Collection (MTCC), Institute of Microbial Technology, Chandigarh, India, with MTCC number 13248. The pathogenic bacterial strains used in the study and procured from the National Collection of Industrial Microorganisms (NCIM), Pune, India, were Listeria monocytogenes NCIM 5277, Staphylococcus aureus NCIM 5718, Pseudomonas aeruginosa NCIM 2862, Shigella flexneri NCIM 5265, Klebsiella pneumoniae NCIM 5215 and Escherichia coli NCIM 5662. S. enterica serotype Typhi MTCC 733, S. enterica serotype Typhimurium MTCC 1251, S. enterica serotype Typhimurium MTCC 1252 and Streptococcus pyogenes MTCC 1927 were procured from MTCC. The pathogenic indicator bacteria were propagated at 37 °C under aerobic conditions in Brain heart infusion (BHI) broth.

Preparation of enterocin and its susceptibility to various enzymes

Purification of the enterocin ESmr18 was done by ammonium sulphate precipitation of the cell-free culture supernatant (CS) of E. faecium followed by cation-exchange chromatography. The proteins were precipitated from the CS by adding ammonium sulphate at 60% saturation (w/v) and mixing on a magnetic stirrer at 4 °C overnight. The precipitated proteins were separated by centrifugation (8000g; 10 min) and dissolved in sodium acetate buffer (20 mM; pH 4.5). Desalting of the precipitates was done by using a Biogel PD-10 column (GE Health Care, USA), and the active fractions from the PD-10 column were pooled and referred to as crude ESmr18. For preparation of the purified ESmr18, the pooled fractions from the PD-10 column were loaded onto an SP-Sepharose Fast Flow cation-exchange column (50 × 10 mm; GE Health Care) and the bound proteins were eluted by using a linear gradient of 0.1 to 1 M NaCl. The active fractions were lyophilized and dissolved in distilled water. The purity of the protein was evaluated by electrophoresis on a 17% denaturing polyacrylamide gel and the protein concentration was evaluated by Bradford's method (Bradford 1976). Further, the susceptibility of the antimicrobial activities of crude and purified ESmr18 to various enzymes was determined. CS and ESmr18 were treated with the enzymes proteinase K, trypsin, pepsin, and lipase (Sigma Aldrich, India) at a concentration of 1 mg/ml for 1 h at 37 °C, followed by heat inactivation at 60 °C for 10 min. The residual antimicrobial activity was determined by agar gel diffusion assay.

Antimicrobial activity

The antimicrobial activity of the CS of E.
faecium and of the purified enterocin ESmr18 was determined against various pathogenic bacterial strains by using agar gel diffusion assay (Geis et al. 1983). CS was prepared by centrifuging the overnight culture of E. faecium Smr18 at 8000g for 10 min at 4 °C; the supernatant was then passed through syringe filters (0.22 µm) and kept at 4 °C till further use. For conducting the agar gel diffusion assay, the optical density (OD; at 550 nm) of pathogenic bacteria in log phase was adjusted to 0.1 and 100 µl of the culture was distributed onto BHI agar medium plates. A cork borer was used to cut wells of 6.0 mm diameter in the agar plates. Thereafter, 100 µl of CS (pH 6.5), crude extract, or purified ESmr18 was added to the wells, and the plates were incubated at 4 °C for 4 h to allow the samples to diffuse. The plates were then incubated at 37 °C under aerobic conditions. After 24 h, the zones of inhibition were measured in millimetres.

Immobilization of E. faecium in films and antimicrobial activity of the films

Viable E. faecium Smr18 cells were immobilised in a sodium alginate film. The film was prepared by mixing sodium alginate (4% w/v), agar (3% w/v) and glycerol (20% v/v) in distilled water for 15 min on a magnetic stirrer at ambient temperature. The mixture was then sterilised by boiling for 20 min in a water bath. The solution so formed was mixed with autoclaved MRS medium in the ratio 1:1 under sterile conditions. 10 ml of the mixture was poured into Petri plates and allowed to cool down to a semi-solid state before adding viable E. faecium cells (5 × 108 CFU). The plates were left undisturbed for 20 min. After 20 min, 20 ml of 2% calcium chloride solution was added for the polymerisation of the sodium alginate film and the plates were again left undisturbed for 15 min. The extra calcium chloride was discarded, and the films were allowed to dry. Another film, prepared by following a similar process but without E. faecium cells, was used as negative control. For determining the antimicrobial activity, films were cut with the help of a well borer and placed on BHI agar plates inoculated with 100 µl of the overnight-grown culture of S. enterica (OD set at 0.1). The plate was kept at 37 °C for 24 h and clear zones were measured. The biopreservative efficacy of the film was tested against an S. enterica-inoculated chicken model. Fresh boneless chicken (500 g) was procured from the local market and autoclaved for 10 min for complete sterilization. Overnight-cultured S. enterica cell suspension containing 6 × 107 CFUs/g was added to the pieces. The chicken pieces were covered with the E. faecium-immobilised film or the film without E. faecium in separate Petri dishes and stored at 7–8 °C. The pieces (1 g) were removed at different time intervals and plated on Salmonella-Shigella agar (SS agar) plates for CFU counting.

Efflux of potassium ions

To determine the mechanism of action of the enterocin, we evaluated the effect of ESmr18 on the stability of the cell membrane of S. enterica MTCC 733 and L. monocytogenes NCIM 5277. Disruption of the cell membrane by the action of the enterocin may result in efflux of small ions from the cell. Therefore, we evaluated the effect of ESmr18 treatment of the pathogens at minimum inhibitory concentration (MIC) values on the extracellular potassium ion concentration (McAuliffe et al. 1998). The bacterial cells in mid-log phase were harvested by centrifugation at 8000g for 5 min to obtain a cell pellet.
The pellet was washed twice and re-suspended in 2.5 mM sodium HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid) buffer (pH 7.0) at OD600 1.0. Purified ESmr18 was added to the cell pellets of S. enterica and L. monocytogenes in two separate tubes to obtain a final concentration of 3.2 µg/ml. Samples (1 ml) were taken at different time intervals and immediately chilled on ice. S. enterica and L. monocytogenes cells in HEPES buffer without ESmr18 were used as controls. The samples were filter sterilised (0.2 µm) to separate the cells and the potassium ion concentration in the supernatants was determined by flame photometry (Systronics 128, Gujarat, India). The experiment was performed thrice in triplicates.

Shelf life, stability, and biopreservative effect of crude ESmr18

Before the biopreservative effect of the crude preparation of ESmr18 was tested in chicken samples, it was important to study the shelf life of ESmr18 dissolved in water and in sodium acetate buffer. Crude ESmr18 dissolved in distilled water and in sodium acetate buffer (pH 4.5) was stored under refrigeration conditions (7–8 °C) for 6 months. At different time points, twofold dilutions of the two samples were tested for their antimicrobial activity against S. enterica by agar gel diffusion assay in terms of arbitrary units (AU)/ml. AU is defined as the reciprocal of the highest dilution that showed a zone of inhibition. The stability of CS and crude ESmr18 under different pH and temperature treatments was also evaluated. CS and crude ESmr18 were exposed to different temperatures (60, 80 and 100 °C) for up to 90 min and to autoclaving for 40 min. The residual antimicrobial activity was determined by using agar gel diffusion assay. To determine the effect of pH, the pH of CS and crude ESmr18 was adjusted to different values ranging from 2 to 10 and incubated at 37 °C for 1 h. Thereafter, the pH was reset to 6.5 and the residual antimicrobial activity was determined by using agar gel diffusion assay. The biopreservative effect of crude ESmr18 was determined on chicken meat inoculated with S. enterica. Fresh boneless chicken (500 g) was procured from a local vendor in Amritsar, India, and autoclaved for 10 min for complete sterilization. Overnight-cultured S. enterica cell suspension containing 6 × 107 CFUs/g was added to the pieces, followed by crude ESmr18 (15 µg/g). For the vehicle control, sodium acetate buffer was used. The counts of Salmonella in the different samples were quantified at different time points over a period of 35 days by plating on SS agar plates.

Hemolysis assay

Some bacteriocins are known to have toxicity against host cells. Therefore, we tested the hemolytic activity of ESmr18 against human red blood cells (RBCs) by using a haemoglobin release assay (Paiva et al. 2012). For the preparation of RBCs, blood was drawn from persons over the age of 18 after obtaining their written informed consent. The protocol was approved by the Institutional Human Ethics Committee, Guru Nanak Dev University, Amritsar, and the study was carried out as per the guidelines of the Ethical Committee. Defibrinated human blood was centrifuged at 135g for 15 min at 37 °C and the RBC-containing pellet was suspended in 10 ml of phosphate-buffered saline (PBS; pH 7.2). RBC suspensions (500 µl) were treated for 1 h at 37 °C with 100 µl of various concentrations of ESmr18.
The suspensions were then centrifuged for 5 min at 825g, and the haemoglobin release in the supernatant was measured at OD 415 nm. Triton X-100 (1%)-treated RBCs and PBS-treated RBCs were used as positive and negative controls, respectively. The percentage RBC lysis was calculated by using the equation:

$$\%\ \text{lysis} = \left( \text{OD}_{\text{T}} - \text{OD}_{\text{C}} \right)/\left( \text{OD}_{\text{X}} - \text{OD}_{\text{C}} \right) \times 100 ,$$

where ODT is the OD415 of ESmr18-treated RBCs, ODC is the OD415 of PBS-treated RBCs and ODX is the OD415 of 1% Triton-treated RBCs.

Safety evaluation of ESmr18 in fish

The intended use of ESmr18 warrants oral consumption; therefore, it is important to determine the in vivo effects of orally administered ESmr18. The in vivo effects of crude ESmr18 were evaluated in healthy Cirrhinus mrigala. The fishes, having an average length of 15–18 cm and average weight of 90–100 g, were acquired from the Government Fish Farm, Rajasansi, Amritsar. They were transported to the lab and placed directly in acclimation tanks with tap water at a temperature of 24.8 ± 0.32 °C, dissolved oxygen 6.4 ± 0.09 mg/L, total dissolved solids 133.3 ± 2.33 mg/L, electrical conductivity 457 ± 1.15 µS/cm and pH 7.01. During the acclimatisation and testing phases, the photoperiod was kept at a regular 12 h light–dark cycle. Throughout the trial, fish were given commercial fish food (fishmeal, vegetable proteins, and binding agents such as wheat) ad libitum at a rate of 2% of body weight. The test water was changed daily 1 h after feeding the fish. To study the biosafety of crude ESmr18, 700 µg of crude ESmr18 was orally administered to a group of 6 fishes. The vehicle-treated (VC) group was administered 200 µl of sodium acetate buffer (pH 4.5). The third group was left untreated (UT). The fishes were monitored for any behavioural change before and during the experiment. After 96 h, the fishes were sacrificed, and the liver, kidney, and blood were taken and used in the comet assay. Blood was taken through cardiac puncture.

Comet assay

DNA damage in the blood, liver, and kidney of the fishes in the treated, VC and UT groups was determined by using the comet test (Yun et al. 2014) with minor changes. Slides covered with 1% normal melting point agarose were layered with 0.75% low melting point agarose containing blood, liver, or kidney cells and allowed to settle at 4 °C. The slides were subsequently submerged for 2 h at 4 °C in cold lysing buffer (2.5 M NaCl, 100 mM EDTA, 0.25 M tris aminomethane, 0.25 M NaOH, 1% triton X-100, 10% DMSO, pH 10.0). The slides were then coated again with 0.5% normal melting point agarose and allowed to solidify. Electrophoresis was carried out for 20 min at 25 V and 300 mA after the slides were covered with electrophoresis buffer (1 mM EDTA and 300 mM NaOH; pH 13). The slides were neutralised with 0.4 M Tris aminomethane (pH 7.5) for 15 min, dried and stained with 20 μg/ml ethidium bromide. Analysis of the slides was done with a fluorescence microscope (Nikon ECLIPSE E200) and images were shot with a Nikon D5300 camera. For each treatment group, 100 cells per sample were scored in triplicate. Parameters including tail length (TL), tail moment (TM), and % tail DNA were calculated using Casplab software.

Micronucleus test

A homogenous smear of fish blood was prepared on a clean glass slide and air-dried for half an hour at room temperature.
The slides were fixed in methanol, stained with 5% Giemsa dye for 15–20 min, and 1000 cells/group were scanned at 100× by using a light microscope (Olympus scanner; CX31) for evaluating any nuclear or cellular abnormalities.

All the experiments were carried out in triplicate, and bars depict means ± standard deviation (SD). To determine differences between mean values of different groups, a one-way analysis of variance (ANOVA) was used. The different treatment groups were compared by using Tukey's test and the level of significance was set at 5% (p < 0.05). The software SPSS version 16.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis.

Bacterial identification

E. faecium Smr18 was selected for this study because of its broad-spectrum antimicrobial activities against both Gram-positive and Gram-negative pathogens. The isolate was identified by sequencing the 16S rRNA gene, and the sequence was submitted to the NCBI database with accession no. OK598049. A phylogenetic tree constructed on the basis of the 16S rRNA sequence showed close similarity to E. faecium type strain LMG 11423 (Additional file 1: Fig. S1).

Antimicrobial activity of crude and purified ESmr18

Purification of ESmr18 from the cell-free CS of E. faecium Smr18 was done using ammonium sulphate precipitation followed by cation-exchange chromatography. The active fractions were lyophilized and subjected to sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis to ensure purity of the protein. The concentrated purified fractions resolved as a single band with a molecular weight of around 3.8 kDa on SDS-PAGE (Additional file 1: Fig. S2). The CS, crude and purified ESmr18 of E. faecium Smr18 had a similar spectrum of antimicrobial activities, as all of them were active against both Gram-positive pathogenic bacteria (L. monocytogenes, St. pyogenes) and Gram-negative pathogens (S. enterica serotype Typhi and S. enterica serotype Typhimurium; Table 1). They exhibited no activity against the rest of the tested pathogenic bacterial strains listed in Table 1.

Table 1 Antimicrobial activity of the CS, crude and purified ESmr18 of E. faecium against various pathogenic indicator bacteria. The antibacterial activities of the cell-free culture supernatant (CS), crude extract, and purified ESmr18 were evaluated by agar well diffusion assay. The pH of the CS was adjusted to 6.5 with 1 N sodium hydroxide before evaluation of antimicrobial activity. The experiment was carried out in triplicates.

Effect of various enzymatic treatments

Treatment of CS and ESmr18 with various enzymes was done to determine the biochemical nature of the antimicrobial activity. The antimicrobial activity of both CS and purified ESmr18 was completely inactivated after treatment with the proteolytic enzymes pepsin, trypsin, and proteinase K, whereas lipase and catalase had no effect on the antimicrobial activity (Fig. 1).

Fig. 1 Effect of various enzymes on the antimicrobial activity of CS and purified ESmr18. CS and ESmr18 were treated with 1 mg/ml of various enzymes for 1 h and the residual antimicrobial activity was tested by agar well diffusion assay after heat-inactivation of the enzymes. The untreated CS and ESmr18 were used as control. Error bars are representative of mean ± SD of the three independent experiments performed in triplicates.
Letters a, b and c denote significant differences at p < 0.05.

Effect of ESmr18 on the cell membrane permeability

The mechanism of the bactericidal effect of bacteriocins is mostly explained by their ability to interact with the cell membrane and form pores. Thus, the effect of ESmr18 on the cell membrane permeability was evaluated by determining the efflux of potassium ions. As shown in Fig. 2, treatment of S. enterica and L. monocytogenes cells with ESmr18 resulted in a significant (p < 0.05) increase in the extracellular concentration of potassium ions at all time points as compared to the untreated control cells. The increased extracellular concentration of potassium ions was observed 5 min after the addition of ESmr18 to the cells of both pathogens. Peak potassium ion concentrations of 14.5 and 16.3 ppm were observed in L. monocytogenes and S. enterica at 10 and 20 min, respectively, after which the effect plateaued (Fig. 2).

Fig. 2 Efflux of potassium ions from S. enterica and L. monocytogenes cells after treatment with different concentrations of ESmr18. Untreated S. enterica and L. monocytogenes cells were used as controls. Error bars are representative of ± SD of the three independent experiments performed in triplicates.

Biopreservative efficacy of E. faecium-immobilised alginate film against S. enterica

For testing the use of E. faecium Smr18 cells as a food preservative, the viable cells were immobilised in a sodium alginate film. The film so produced was translucent, flexible, cohesive and had a smooth appearance. A section of the film was tested for antimicrobial activity by using an agar spot assay. The film containing E. faecium cells formed a zone of inhibition against S. enterica, whereas the film without E. faecium cells exhibited no antimicrobial activity against S. enterica cells (Additional file 1: Fig. S3). Further, the antimicrobial activity of the E. faecium-containing film was tested against an S. enterica-inoculated chicken sample. Our results showed that on day 4, E. faecium-immobilised films reduced the CFU counts of S. enterica by 0.6 log10 as compared to films without E. faecium. On day 8 and day 16, the difference in the CFU counts between the two samples further increased by 1.3 and 1.75 log10, respectively. Maximum reduction in the S. enterica counts was obtained on day 34, when the number of CFUs decreased by 3.0 log10 CFU in the E. faecium-immobilised films as compared to the film without E. faecium (Fig. 3).

Fig. 3 Viable counts of S. enterica in chicken samples stored at 7–8 °C covered with films with and without viable cells of E. faecium. Error bars are representative of mean ± SD of the three independent experiments performed in triplicates. aIndicates significant (p < 0.05) difference between E. faecium and without E. faecium-treated samples of the same day.

Stability of the antimicrobial potential of crude enterocin

Stability of the crude ESmr18 dissolved in distilled water and in sodium acetate buffer (pH 4.5) was determined at 7–8 °C at different time points. The antimicrobial activity of the crude ESmr18 dissolved in distilled water remained stable till the 6th day (6488 AU/ml). On the 12th day of storage, the antimicrobial activity was reduced by 50%, and after 24 days the activity became negligibly low (Fig. 4a). On the other hand, in sodium acetate buffer, the activity remained stable till 60 days of storage at 7–8 °C. After 60 days, the antimicrobial activity decreased to half and remained constant till 180 days (Fig. 4b).
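As a side note, the AU/ml values quoted here follow from the definition given in the Methods (the reciprocal of the highest twofold dilution still producing a zone of inhibition); the short Python sketch below illustrates the arithmetic, in which the scaling by the 100 µl volume added per well is an assumption of this illustration rather than a detail stated in the paper.

def au_per_ml(highest_inhibitory_step, dilution_factor=2, volume_ul=100):
    # Arbitrary units (AU) = reciprocal of the highest dilution that still showed a
    # zone of inhibition; expressed per ml of undiluted sample by scaling for the
    # volume placed in each well (assumed to be 100 µl here).
    reciprocal_dilution = dilution_factor ** highest_inhibitory_step
    return reciprocal_dilution * (1000 / volume_ul)

# Example: if the sixth twofold dilution (1/64) is the last one with a clear zone:
print(au_per_ml(6))  # 640.0 AU/ml under these assumptions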
Fig. 4 Stability profile of crude ESmr18 stored at 7–8 °C in (a) distilled water and (b) sodium acetate buffer, tested in terms of its antimicrobial activity against S. enterica. Error bars are representative of mean ± SD of the experiment performed in triplicates. aIndicates data significantly (p < 0.05) different from the activity on day 0.

The stability of CS and crude ESmr18 at different pH values and temperatures was determined. Results in Additional file 1: Table S1 showed that both CS and ESmr18 retained their antimicrobial activities at pH 4, 6 and 8; however, at acidic pH 2 and at alkaline pH 10, both lost their activities. Further, the temperature stability of CS and crude ESmr18 was tested by exposure to different temperature treatments for 90 min. Crude ESmr18 was stable at temperatures as high as autoclaving (Additional file 1: Table S1); however, at temperatures above 80 °C the activity was reduced, as shown by smaller zones of inhibition. On the other hand, CS was stable at 60, 80 and 100 °C but not to autoclaving.

Preservative effect of crude ESmr18 on S. enterica-inoculated chicken meat

The CFU counts of S. enterica were determined in fresh chicken meat samples inoculated with 7.0 log10 CFU/g of S. enterica cells, in the presence or absence of crude ESmr18, at different time points. In untreated chicken meat samples, the counts of S. enterica increased by 1.6 log10 CFU/g on day 7 as compared to the initial count of 7.0 log10 CFU/g, peaked (8.5 log10 CFU/g) on day 21, and then decreased by 1 log10 on day 35. However, in the case of ESmr18-treated samples, the Salmonella counts decreased by 3.0 log10 CFU as soon as 1 h after the addition of the enterocin. The differences in the counts of S. enterica between ESmr18-treated samples and untreated controls further increased to 3.75 log10 CFU on day 7, 4.7 log10 CFU on day 21 and 3.9 log10 CFU on day 35 (Fig. 5).

Fig. 5 Viable cell counts of Salmonella in S. enterica-inoculated chicken meat samples treated with sodium acetate buffer (vehicle) or crude ESmr18 and stored at 7 °C. Error bars are representative of mean ± SD of the three independent experiments performed in triplicates. aIndicates significant (p < 0.05) difference on different days when compared with respective controls.

Evaluation of biosafety of crude ESmr18 in vitro and in vivo

The safety evaluation of ESmr18 was done in vitro by using a hemolytic assay against human RBCs. When compared to the phosphate-buffered saline (PBS)-treated negative control, purified ESmr18 at the maximum dose of 4.98 µg/ml caused no significant hemolysis of RBCs (Additional file 1: Fig. S4). On the other hand, treatment of RBCs with 1% Triton X-100 (positive control) resulted in 98% hemolysis. The acute toxicity of crude ESmr18 in C. mrigala was determined after oral administration of the enterocin for four days. No mortality was observed in any of the groups till 96 h of exposure. In addition, no stress indicators, such as anorexia, lethargy, exophthalmia, irregular swimming, gasping at the surface, skin irritations, or changes in body colour, were observed in any of the groups. The fishes in all the groups swam actively throughout the experiment.

Biochemical analysis

Liver and kidney function tests were conducted on the sera of the orally administered ESmr18, VC and UT groups (Table 2). There were no significant (p < 0.05) differences in the liver and kidney parameters between the ESmr18-treated, VC and UT groups. In the ESmr18 group, the % change over control was maximum (− 16%) for protein and minimum (0%) for bilirubin.
In the VC group, the % change over control was highest for direct (D) bilirubin (30%) and minimum (0%) for creatinine.

Table 2 Biochemical analysis of serum samples of C. mrigala

Some strains of enterococci are known to secrete cytolytic proteins that may cause DNA damage (York 2022). Therefore, the in vivo genotoxicity of the crude ESmr18 was determined by performing the comet assay on blood, liver and kidney cells of C. mrigala. Three parameters, i.e. TL, % tail DNA and TM, were determined for assessing the DNA damage. There were non-significant differences in the average values of TL (Fig. 6a), % tail DNA (Fig. 6b) and TM (Fig. 6c) between the 3 groups, i.e. UT, VC and ESmr18-treated. The highest values of TL and TM were observed in the kidney of the VC group and the liver of the ESmr18 group, respectively. The microscopic images of the blood, liver and kidney cells did not show any cell damage (Fig. 7).

Fig. 6 Effect of crude ESmr18 on tail length (a), % tail DNA (b) and tail moment (c) in blood, liver, and kidney cells of C. mrigala. Error bars are representative of mean ± SD of the experiment performed in triplicates. aIndicates significant (p < 0.05) difference of the treated groups when compared with control.

Fig. 7 Comet assay of the blood, liver, and kidney cells of C. mrigala. UT: untreated; VC: vehicle control; ESmr18: crude ESmr18 treated. Error bars are representative of mean ± SD of the experiment performed in triplicates.

The micronucleus test is a nucleo-cellular abnormality assay. This test is used in toxicological studies for screening genotoxic compounds by observing various types of aberrant cells (AC) such as micronuclei, necrotic cells, lobed nuclei, and notched nuclei. The mean frequency of AC in the UT, VC and crude ESmr18-treated groups was 51 ± 7.549, 54 ± 7.937 and 59.6 ± 8.020 per 10,000 cells, respectively. Similarly, the micronuclei frequency in the UT, VC and crude ESmr18-treated groups was 2.33 ± 0.577, 2.33 ± 0.577 and 2.66 ± 0.577, respectively. Non-significant differences were observed in AC and micronuclei frequency between the three groups (Fig. 8).

Fig. 8 Nucleo-cellular abnormalities in the blood cells of C. mrigala: (a) normal cells, (b) micronuclei, (c) necrotic cell, (d) lobed nucleus.

Animal-derived foods, such as poultry and seafood, are susceptible to rapid bacterial spoilage due to high water activity, favourable pH, and high nutrient content. Pathogenic microorganisms such as E. coli, Salmonella spp., Campylobacter spp., and L. monocytogenes are commonly isolated from chicken and meat (Bohaychuk et al. 2006) and have caused several food-borne outbreaks (Morton et al. 2019; Mead et al. 2006; Nørrung and Buncic 2008). Thus, to prevent infections, raw animal-derived foods are treated with various chemical preservatives such as chlorine, nitrites, sodium chlorite and hypochlorite. These chemicals cause oxidative reactions in meat, leading to adverse changes in the nutrient quality and the taste of the food. Secondly, they are also known to result in the formation of carcinogenic compounds in the treated food (Honikel 2008). Thus, safe biopreservatives are highly warranted. Natural preservatives such as LAB and their bacteriocins (Yu et al. 2021) are being explored for food preservation and in antimicrobial food packaging systems. A number of studies have shown the applications of Lactobacillus spp. and their bacteriocins in food preservation for inhibiting the growth of Gram-positive pathogenic bacteria such as Listeria spp. (Woraprayote et al. 2018).
However, the use of bacteriocinogenic strains and bacteriocins in food for inhibiting Salmonella spp. has not been explored much. This is probably because most of the bacteriocins secreted by LAB do not have activities against Gram-negative bacteria, including Salmonella spp. Among LAB bacteriocins, a few enterocins exhibit broad-spectrum activities against both Gram-positive and Gram-negative bacteria (Kasimin et al. 2022; Sharma et al. 2021) and should therefore be explored as food preservatives. Enterococcus spp. are commonly isolated from fermented cheese (Centeno et al. 1996) and used as starter cultures in other milk products (Wessels et al. 1990; Giraffa et al. 1997), where they impart characteristic flavour. Some strains of E. faecium, such as SF68, are also being used as probiotics for improving the health of humans and livestock (Franz et al. 2011). In this study, E. faecium Smr18 and its enterocin ESmr18 were evaluated for their potential as antimicrobial agents in food packaging and in the biopreservation of chicken, respectively. The nonpathogenicity of the isolate E. faecium Smr18 was determined by evaluating its susceptibility to antibiotics. The antibiotic susceptibility profile of an enterococcal isolate can be used to differentiate the commensal nonpathogenic isolates of enterococci (belonging to clade B) from the clinical isolates that belong to clade A, as 80% of the pathogenic clinical E. faecium strains are vancomycin-resistant and 90% are resistant to ampicillin (Hidron et al. 2008; Lebreton et al. 2013). E. faecium Smr18 was tested for its antibiotic susceptibility profile by using the Kirby-Bauer disk diffusion method, which revealed that the strain was susceptible to the antibiotics ampicillin, penicillin-G, vancomycin, and erythromycin (data not shown). Further, the strain Smr18 secretes the enterocin into the CS, as evidenced by complete abrogation of its antimicrobial activity after treatment with proteolytic enzymes. On the other hand, catalase and lipase treatment had no effect on its antimicrobial activity. Further, we purified the 3.8 kDa enterocin ESmr18, which has antimicrobial activity against both Gram-positive and Gram-negative pathogens. Unlike other LAB bacteriocins that mostly inhibit Gram-positive bacteria, enterocins are known to inhibit Gram-negative bacteria as well. Anti-Salmonella activity of a few enterocins from E. faecium has been reported previously. Enterocin B purified from E. faecium por1 had a molecular weight of 7.2 kDa and inhibited S. typhi and S. enterica (Ankaiah et al. 2018). Similarly, enterocin E-760, having a molecular weight of 5.3 kDa, inhibited several strains of S. enterica (Line et al. 2008). Further, the mode of action of the purified ESmr18 was studied, which showed that treatment of S. enterica and L. monocytogenes cells with purified ESmr18 at the MIC value (3.2 µg/ml) altered the cell membrane permeability in both cases, resulting in efflux of potassium ions into the extracellular medium. The concentration of potassium ions peaked faster (10 min) in the case of L. monocytogenes as compared to S. enterica cells, where the peak was obtained at 20 min. This can be explained by the presence of the outer membrane in the cell wall of Gram-negative bacteria, which acts as a barrier to the enterocin. Similarly, the bacteriocin produced by L. plantarum, i.e. plantaricin MG, caused peak efflux of potassium ions from S. enterica cells at 30 min (Gong et al. 2010). In another study, lacticin was shown to disrupt the cell membrane permeability of L.
monocytogenes, resulting in rapid efflux of potassium ions that peaked at 2.5 min (McAuliffe et al. 1998). However, in a recent study, enterocin HDX-2 at 1× MIC (5 µg/ml) was shown to cause maximum efflux of potassium ions from L. monocytogenes cells at 160 min (Du et al. 2022). In this study, E. faecium was immobilised in a sodium alginate film and the film was tested for the first time against S. enterica-inoculated chicken. Sodium alginate is a polysaccharide produced by brown algae that is edible, non-toxic, biodegradable and biocompatible (Stephen et al. 2016). An added advantage of storing meat products in alginate films is that it prevents surface drying and weight loss in meat during storage (Silva et al. 2022). Our results showed that there was a constant reduction over time in the CFU counts of Salmonella under the E. faecium-immobilised film as compared to the control film. A maximum reduction of 3.0 log10 CFU was observed on the 34th day of storage at 7–8 °C. Our results are in accordance with a previously reported study, wherein a 3.0 log10 reduction in the counts of L. monocytogenes was reported for a sodium alginate film containing Carnobacterium spp. on day 28 (Concha-Meyer et al. 2011). Silva et al. (2022) also reported a 1.2 log10 CFU reduction of L. monocytogenes on the 8th day of storage in an alginate film containing Lactococcus lactis and Lc. garvieae as compared to the control. Physico-chemical stability studies revealed that the antimicrobial activities of both CS and the crude enterocin were stable at pH ranging from 4 to 8 and at temperatures up to 100 °C. The crude enterocin could resist autoclaving for 40 min. Further, we studied the stability of crude ESmr18 in water and in sodium acetate buffer at 7 °C. As the antimicrobial activity of crude ESmr18 dissolved in sodium acetate buffer was stable at 7–8 °C for 6 months, we tested its effect on S. enterica counts in chicken meat stored at 7 °C. Results showed a significant decrease, ranging between 2.9 log10 CFU and 3.9 log10 CFU, in Salmonella counts in enterocin-treated chicken samples as compared to untreated controls at all time points, starting as soon as 1 h after the treatment and lasting till day 35. Similar to our studies, Ananou et al. (2010) showed a significant reduction in the counts of S. enterica (2 log10 CFU) and L. monocytogenes (1.87 log10 CFU) on day 10 following the addition of purified enterocin AS-48 to the fermented sausage fuet. However, in another study, AS-48 alone had no effect on the CFU counts of S. enterica inoculated as a cocktail of 5 different strains in Russian salad (Cobo Molinos et al. 2009). In another study, a formulation containing enterocins A and B, used as a preservative in sausages artificially inoculated with S. enterica at 3 log10 CFU and stored at 7 °C, did not inhibit the growth of S. enterica (Jofre et al. 2009). Next, the biosafety of ESmr18 was tested in an in vitro assay and in an in vivo fish model. ESmr18 treatment of human RBCs resulted in 4.8% hemolysis at the highest tested dose of 4.98 µg/ml. The lytic effect of ESmr18 was lower than that reported previously for other enterocins, S37 (74.2% at 10 µg/ml; Belguesmia et al. 2011) and P40 (19% at 2.5 µg/ml; Vaucher et al. 2010a), but comparable to that caused by nisin (6% at 3.35 µg/ml; Shin et al. 2015) and P34 (5.84% at 2.5 µg/ml; Vaucher et al. 2010b). Further, the in vivo safety of orally administered ESmr18 was tested, as it is a low molecular weight peptide and it is known that small molecules of less than 4 kDa can easily transit the gut epithelial barrier (Dreyer et al. 2019).
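For concreteness, the haemolysis percentages compared in the preceding discussion come directly from the absorbance formula given in the hemolysis assay section; the minimal Python sketch below uses illustrative OD415 readings (assumed values, not data from the study).

def percent_hemolysis(od_treated, od_pbs_control, od_triton_control):
    # % RBC lysis = (OD_T - OD_C) / (OD_X - OD_C) x 100, where OD_C is the
    # PBS-treated negative control and OD_X the 1% Triton X-100 positive control.
    return (od_treated - od_pbs_control) / (od_triton_control - od_pbs_control) * 100

# Illustrative absorbance readings at 415 nm (assumed):
print(round(percent_hemolysis(0.10, 0.06, 0.90), 1))  # ~4.8% under these assumed ODs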
Fish is a popular model to study the in vivo toxicity of chemicals, as it shows a rapid response to chemicals and the results of the experiments can be fairly extrapolated to humans (Demicco et al. 2010). C. mrigala was used for the in vivo experiments as it is readily available locally and the biochemical profile of its sera is well studied (Ghayyur et al. 2021). Our results showed that oral administration of crude ESmr18 to fish for four days at a dose of 700 µg did not cause any significant changes in the liver and kidney biochemistry of the fish as compared to the vehicle-treated and normal controls. Further, acute dosing of partially purified ESmr18 did not induce any genotoxicity in fish, as shown by the micronucleus and comet assays. Both these assays detect DNA damage in several tissues from one specimen at the same time. Similar to our studies, the toxicity of enterocin AS-48 was studied in zebrafish and a Balb/c mouse model (Cebrián et al. 2019). That study showed that the maximum tolerated dose at which no lethality was observed was 10 µg/ml. In the mouse model, intraperitoneal injection of AS-48 at a high dose of 500 µg/g induced an alteration in biochemical parameters that reverted to normal within 7 days. Baños et al. (2019) administered 100 µg/ml AS-48 to trout for 96 h and observed no toxicity or apparent signs of stress. The nontoxicity of ESmr18, combined with its broad-spectrum activity, makes it a promising candidate for use as a safe biopreservative in foods stored under refrigeration conditions. Further experiments are required to determine the maximum tolerated doses of ESmr18 in animal models before its approval. In conclusion, our study showed that E. faecium Smr18 secretes a 3.8 kDa enterocin that inhibits pathogens by altering the cell membrane permeability. The enterocinogenic strain Smr18 was used in an antimicrobial food packaging system that was shown for the first time to inhibit Salmonella contamination in chicken meat stored at refrigeration temperature (7–8 °C). The sodium alginate film used for the immobilisation of the enterococci allowed diffusion of the enterocin into the chicken samples, which effectively inhibited the growth of Salmonella for 34 days. The direct addition of crude ESmr18 was equally efficient in inhibiting the growth of S. enterica in chicken samples stored at 7–8 °C for nearly 1 month. Further, ESmr18 did not cause hemolysis of human RBCs and was found safe when orally administered at high doses to fish.

Data will be made available on request.

CS: Culture supernatant; CFU: Colony-forming unit; RBC: Red blood cell; MRS: De Man Rogosa and Sharpe; BHI: Brain heart infusion; LAB: Lactic acid bacteria; NCBI-BLAST: National Center for Biotechnology Information-Basic Local Alignment Search Tool; NCIM: National Collection of Industrial Microorganisms; MTCC: Microbial Type Culture Collection; HEPES: 4-(2-Hydroxyethyl)-1-piperazineethanesulfonic acid; OD: Optical density; AU: Arbitrary units; VC: Vehicle control; UT: Untreated control; TM: Tail moment; TL: Tail length; %TDNA: Percent tail DNA; SDS-PAGE: Sodium dodecyl sulphate-polyacrylamide gel electrophoresis.

Agriopoulou S, Stamatelopoulou E, Sachadyn-Król M, Varzakas T (2020) Lactic acid bacteria as antibacterial agents to extend the shelf life of fresh and minimally processed fruits and vegetables: quality and safety aspects. Microorganisms.
https://doi.org/10.3390/microorganisms8060952 Ananou S, Garriga M, Jofré A, Aymerich T, Gálvez A, Maqueda M, Martínez-Bueno M, Valdivia E (2010) Combined effect of enterocin AS-48 and high hydrostatic pressure to control food-borne pathogens inoculated in low acid fermented sausages. Meat Sci 84(4):594–600. https://doi.org/10.1016/j.meatsci.2009.10.017 Ankaiah D, Palanichamy E, Antonyraj CB, Ayyanna R, Perumal V, Ahamed SI, Arul V (2018) Cloning, overexpression, purification of bacteriocin enterocin-B and structural analysis, interaction determination of enterocin-A, B against pathogenic bacteria and human cancer cells. Int J Biol Macromol 116:502–512. https://doi.org/10.1016/j.ijbiomac.2018.05.002 Baños A, Ariza JJ, Nuñez C, Gil-Martínez L, García-López JD, Martínez-Bueno M, Valdivia E (2019) Effects of Enterococcus faecalis UGRA10 and the enterocin AS-48 against the fish pathogen Lactococcus garvieae. Studies in vitro and in vivo. Food Microbiol 77:69–77. https://doi.org/10.1016/j.fm.2018.08.002 Belguesmia Y, Madi A, Sperandio D, Merieau A, Feuilloley M (2011) Growing insights into the safety of bacteriocins the case of enterocin S37. Res Microbiol 162:159–163. https://doi.org/10.1016/j.resmic.2010.09.019 Bohaychuk VM, Gensler GE, King RK, Manninen KI, Sorensen O, Wu JT, Stiles ME, McMullen LM (2006) Occurrence of pathogens in raw and ready-to-eat meat and poultry products collected from the retail marketplace in Edmonton, Alberta, Canada. J Food Prot 69(9):2176–2182. https://doi.org/10.4315/0362-028x-69.9.2176 Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72(1–2):248–254. https://doi.org/10.1006/abio.1976.9999 Cebrián R, Rodríguez-Cabezas ME, Martín-Escolano R, Rubiño S, Garrido-Barros M, Montalbán López M, Rosales MJ, Sánchez-Moreno M, Valdivia E, Martínez-Bueno M, Marín C (2019) Preclinical studies of toxicity and safety of the AS-48 bacteriocin. J Adv Res 20:129–139. https://doi.org/10.1016/j.jare.2019.06.003 Callewaert R, Hugas M, De VL (2000) Competitiveness and bacteriocin production of Enterococci in the production of Spanish-style dry fermented sausages. Int J Food Microbiol 57(1–2):33–42. https://doi.org/10.1016/S0168-1605(00)00228-2 Centeno JA, Menéndez S, Rodríguez-Otero JL (1996) Main microbial flora present as natural starters in Cebreiro raw cow's-milk cheese (Northwest Spain). Int J Food Microbiol 33(2–3):307–313. https://doi.org/10.1016/0168-1605(96)01165-8 Chai SJ, Cole D, Nisler A, Mahon BE (2017) Poultry: the most common food in outbreaks with known pathogens, United States, 1998–2012. Epidemiol Infect 145(2):316–325. https://doi.org/10.1017/S0950268816002375 Cleveland J, Montville TJ, Nes IF, Chikindas ML (2001) Bacteriocins: safe, natural antimicrobials for food preservation. Int J Food Microbiol 71:1–20. https://doi.org/10.1016/s0168-1605(01)00560-8 Cobo Molinos A, Lucas López R, Abriouel H, Ben Omar N, Valdivia E, Gálvez A (2009) Inhibition of Salmonella enterica cells in deli-type salad by enterocin AS-48 in combination with other antimicrobials. Probiotics Antimicrob Proteins 1(1):85–90. https://doi.org/10.1007/s12602-009-9005-z Concha-Meyer A, Schöbitz R, Brito C, Fuentes R (2011) Lactic acid bacteria in an alginate film inhibit Listeria monocytogenes growth on smoked salmon. Food Control 22(3–4):485–489. 
Funding. This study was supported by grants from Component 4 of the Rashtriya Uchchattar Shiksha Abhiyan (RUSA) 2.0 scheme of the Ministry of Human Resource Development, Government of India. Muzamil Rashid is thankful to the Indian Council of Medical Research, New Delhi, India, for a Senior Research Fellowship.

Author affiliations. Department of Microbiology, Guru Nanak Dev University, Amritsar, Punjab, India: Muzamil Rashid, Amarjeet Kaur & Sukhraj Kaur. Department of Zoology, Guru Nanak Dev University, Amritsar, Punjab, India: Sunil Sharma & Arvinder Kaur.

Author contributions. SK and MR conceptualized the overall study. MR performed all the experiments, except those related to studies in the fish model. SS and AK helped with setting up the fish experiments and analysed the results pertaining to them. MR compiled the results and carried out the statistical analysis of the data. AJK helped with the statistical analysis of the data. The final manuscript was reviewed by all the authors. All authors read and approved the final manuscript. Correspondence to Sukhraj Kaur.

Ethics. The study was approved by the Institutional Human Ethics Committee, Guru Nanak Dev University, Amritsar, and was carried out as per the guidelines of the Ethical Committee. The authors agree to publish this article.

Additional file 1: Figure S1. Phylogenetic tree of E. faecium Smr18 and 21 other Enterococcus strains, constructed with E. coli as outgroup using MEGA6 software. The evolutionary history was deduced using the Neighbour-Joining approach and is shown as the bootstrap consensus tree generated from 500 repetitions; evolutionary distances were calculated with the Maximum Composite Likelihood technique. Figure S2. SDS-PAGE showing resolved bands: Lane 1, protein marker; Lane 3, purified ESmr18. Figure S3. (A) Alginate film. (B) Antimicrobial activity of alginate film with and without E. faecium Smr18 cells against S. enterica, as demonstrated by the zone of inhibition in an agar spot assay. Figure S4. Hemolytic activity of purified ESmr18 at different concentrations. The error bars show the standard deviation of three separate experiments conducted in triplicate. Table S1. Physico-chemical characteristics of CS and ESmr18.

Cite this article: Rashid, M., Sharma, S., Kaur, A. et al. Biopreservative efficacy of Enterococcus faecium-immobilised film and its enterocin against Salmonella enterica. AMB Expr 13, 11 (2023). https://doi.org/10.1186/s13568-023-01516-z. Received: 27 October 2022. Keywords: Bacteriocin; Sodium alginate films; Biopreservative.
CommonCrawl
In a city, the record monthly high temperature for March is \(56^{\circ}\mathrm{F}\). The record monthly low temperature for March is \(-4^{\circ}\mathrm{F}\). What is the range of temperatures for the month of March? \(60^{\circ}\mathrm{F}\) The manufacturing cost of an air-conditioning unit is $544, and the full-replacement extended warranty costs $113. If the manufacturer sells 506,970 units with extended warranties and must replace 20% of them as a result, how much will the replacement costs be? (Note: Assume manufacturing costs and replacement costs are the same.) a. $55,158,336 b. $11,457,522 c. $2,129,274 d. $66,717,252 At a shelter, 15% of the dogs are puppies. There are 60 dogs at the shelter. How many are puppies?
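The arithmetic behind these answers can be checked directly. A brief worked computation (added here for illustration; the answers to the second and third problems are not stated explicitly on the original page):
\( 56^{\circ}\mathrm{F} - (-4^{\circ}\mathrm{F}) = 60^{\circ}\mathrm{F} \);
\( 0.20 \times 506{,}970 = 101{,}394 \) replaced units, and \( 101{,}394 \times \$544 = \$55{,}158{,}336 \), which is option a;
\( 0.15 \times 60 = 9 \) puppies.
Note that the $113 warranty price is not needed to compute the replacement cost.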
CommonCrawl
Network-based prediction of COVID-19 epidemic spreading in Italy Clara Pizzuti ORCID: orcid.org/0000-0001-7297-71261, Annalisa Socievole1, Bastian Prasse2 & Piet Van Mieghem2 Initially emerged in the Chinese city Wuhan and subsequently spread almost worldwide causing a pandemic, the SARS-CoV-2 virus follows reasonably well the Susceptible–Infectious–Recovered (SIR) epidemic model on contact networks in the Chinese case. In this paper, we investigate the prediction accuracy of the SIR model on networks also for Italy. Specifically, the Italian regions are a metapopulation represented by network nodes and the network links are the interactions between those regions. Then, we modify the network-based SIR model in order to take into account the different lockdown measures adopted by the Italian Government in the various phases of the spreading of the COVID-19. Our results indicate that the network-based model better predicts the daily cumulative infected individuals when time-varying lockdown protocols are incorporated in the classical SIR model. The outbreak of the greatest epidemic of the twenty first century caused by the SARS-CoV-2 virus has stimulated researchers to understand and control the spread of the disease inside a population with the help of mathematical models developed in recent years (Hethcote 2000; Pastor-Satorras et al. 2015). A single outbreak of a disease is typically described by a SIR compartmental model, where each individual at a certain time t can only be in one of the three different disease stages: Susceptible (S), i.e. healthy, but vulnerable for the infection, Infected (I) and Recovered (R), i.e. the individual either recovers from the disease or, unfortunately, dies. A diffusion-like SIR epidemic spread on a contact network models the infection between individuals when they come into contact, close enough in space and long enough in time (Chu et al. 2020). By adopting the SIR model, Prasse et al. (2020) predict the spreading of the COVID-19 epidemic on a contact network consisting of 16 cities in the Chinese province Hubei via their Network Inference-based Prediction Algorithm (NIPA). Since the interactions between cities are unknown, Prasse et al. exploit their network reconstruction approach, described in Prasse and Van Mieghem (2020b), to estimate the contact network from the observations of the viral states. In this paper, we use NIPA (Prasse and Van Mieghem 2020b; Prasse et al. 2020) to investigate the spreading of the COVID-19 epidemic in Italy by considering the 21 Italian regions, shown in Fig. 1, as nodes of the network. We extend NIPA to NIPA-LD (NIPA with LockDown), that takes into account the different lockdown measures adopted in the various phases of the COVID-19 spreading in Italy by adapting the ideas of Song et al. (2020). Song et al. (2020) pointed out that the epidemiological models do not consider the several containment measures, such as in-home isolation, travel and social activities restrictions, enforced by governments to dampen the transmission rate over time. Due to the containment measures, the infection rates vary over time, which should be incorporated in a prediction model to reflect the real situations of epidemic and provide more meaningful analyses. We apply NIPA and the extension NIPA-LD to the period between the first of March till June 9th. Our results indicate that NIPA-LD is capable to better predict the daily cumulative infected individuals, because the time-varying lockdown restrictions are considered. 
In the last months, the number of papers studying the COVID-19 pandemic and proposing models to predict the evolution of the disease sky-rocketed. In Estrada (2020), Estrada discusses how this pandemic is actually modeled and proposes future research directions by reviewing the three main areas of modeling research against COVID-19: epidemiology, drug repurposing, and vaccine design. After the strict policies in China to reduce close contacts between people, which revealed the best strategy to effectively block the virus diffusion, Italy and many other European countries imposed several containment measures, called lockdown. Some researches then investigated how mobility changed during the lockdown phases (Oliver et al. 2020; Klein et al. 2020; Galeazzi et al. 2020; Schlosser et al. 2020), others have shown how lockdown can effectively slow down disease transmission. Flaxman et al. (2020) study the effect on COVID-19 transmission of the major non-pharmaceutical interventions (NPIs) across 11 European countries for the period from the start of the COVID-19 epidemics in February 2020 until May 4th 2020. In a more general work, Haug et al. (2020) quantify the effectiveness of the world-wide NPIs to mitigate the spreading of COVID-19 and SARS-CoV-2 showing that this effectiveness is strongly related to the economic development as well as the dimension of governance of a country. At a country level, Hadjidemetriou et al. (2020) use driving, walking and transit real-time data to investigate the impact of UK government control measures on human mobility reduction and consequent COVID-19 deaths. Pei et al. (2020) assess the effect of NPIs on COVID-19 spread in the United States finding significant reductions of the basic reproductive numbers in major metropolitan areas when applying social distancing and other control measures. Di et al. (2020) study the case of the Île-de-France exploiting a stochastic age-structured transmission model which combines data on age profile and social contacts to evaluate the impact of lockdown and propose possible exit strategies. The Italian town of Vo' Euganeo is finally studied by Lavezzo et al. (2020), where the efficacy of the implemented control measures are evaluated, providing also insights into the transmission dynamics of asymptomatic individuals. Concerning the modeling of the COVID-19 spreading with the imposed restrictions, Maier and Brockmann (2020), for instance, proposed a model that takes into account both quarantine of symptomatic infected individuals and population isolation due to containment policies, and showed that the model agrees with the observed growth of the epidemic in China. Arenas et al. (2020) defined a model that stratifies the Spanish population by age and predicts the incidence of the epidemics through time by considering control measures. They show that the results can be refined by taking into account mobility restrictions imposed at the level of municipalities. Chinazzi et al. (2020) used a global metapopulation disease transmission model to study the impact of travel limitations on the national and international spread of the epidemic in China. The NIPA-LD approach presented in this paper is different from the described proposals since it extends the NIPA method, which assumes no knowledge on the population flows and estimates the interactions between groups of individuals, by considering time-varying lockdown policies in the prediction phase. Modeling the spread of COVID-19 in Italy has followed several approaches. 
Ferrari et al. (2020), for instance, use an adjusted time-dependent SIRD (Susceptible-Infected-Recovered-Died) model to predict the provincial cases. Caccavo (2020) proposes a modified SIRD model to describe both the Chinese and the Italian outbreaks. Giuliani et al. (2020) define a model with \(c = 8\) compartments or stages of infection: susceptible (S), infected (I), diagnosed (D), ailing (A), recognized (R), threatened (T), healed (H) and extinct (E), collectively termed SIDARTHE. However, only one compartment is measured in the COVID-19 crisis, namely the number of active cases. Thus, for an epidemic model with many compartments, it is not possible to evaluate the accuracy in predicting compartments other than the number of active cases. In this work, we confine ourselves to the \(c = 3\) compartmental SIR model for the predictions by NIPA. Kozyreff (2020) provides an SIR modeling comparison between Belgium, France, Italy, Switzerland and New York City, suggesting that finer models are unnecessary with the corresponding available macroscopic data.

In this section, we briefly review the epidemic SIR model on contact networks (Youssef and Scoglio 2011; Prasse and Van Mieghem 2020b) and the prediction of the COVID-19 infection, caused by the SARS-CoV-2 virus, based on the SIR model (Prasse et al. 2020). Then, we incorporate time-varying protocols introduced by the government to slow down the virus propagation. We consider a network with N nodes, where each node i corresponds to the set of individuals living in the same place, like a city or a region. An individual at any discrete time \(k=1, 2, \ldots\) is in exactly one of the \(c=3\) compartments Susceptible (S), Infectious (I), Recovered (R). The SIR model assumes that infectious individuals become recovered and cannot infect any longer because of hospitalization, death, or quarantine measures. The viral state of any node i at time k is denoted by the \(3 \times 1\) vector \(v_i[k] =(S_i[k],I_i[k],R_i[k])^T\), where \(S_i[k],~I_i[k],~R_i[k]\) are the fractions of susceptible, infectious, and recovered individuals, respectively, satisfying the conservation law \(S_i[k]+I_i[k]+R_i[k]=1\). The discrete-time SIR model (Youssef and Scoglio 2011; Prasse and Van Mieghem 2020b) defines the evolution of the viral state \(v_i[k]\) of each node i as: $$I_i[k+1] = (1 - \delta _i)I_i[k] + (1- I_i[k] - R_i[k] )\sum \limits _{j=1}^N \beta _{ij} I_j[k] \qquad (1)$$ $$R_i[k+1] = R_i[k] + \delta _i I_i[k] \qquad (2)$$ where \(\beta _{ij}\) denotes the infection probability when individuals move from place (also called region) j to place i. The self-infection probability \(\beta _{ii}\ne 0\), because individuals inside the same place interact. The \(N \times N\) infection probability matrix B specifies the contact transmission chance between each pair of regions. The curing probability \(\delta _i\) of place i quantifies the capability of individuals in place i to recover from the virus. We assume that, in the SIR model (1), (2), both \(\beta _{ij}\) and \(\delta _i\) do not change over time. Prasse et al. (2020) proposed the Network Inference-based Prediction Algorithm (NIPA), which estimates the spreading parameters \(\delta _i\) and \(\beta _{ij}\) for each region i from the time series \(v_i[1], v_i[2], \ldots , v_i[n]\). These estimates in (1) and (2) predict the evolution of the virus at future times \(k>n\). The SIR model has three compartments.
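To make the update rule explicit, the following short Python sketch iterates Eqs. (1) and (2) forward in time. The infection matrix, curing probabilities and initial condition below are illustrative placeholders (they are not values estimated from the Italian data), and the function name is ours.

```python
import numpy as np

def simulate_sir(B, delta, I0, steps):
    """Iterate the discrete-time networked SIR model of Eqs. (1)-(2).

    B     : N x N matrix of infection probabilities beta_ij
    delta : length-N vector of curing probabilities delta_i
    I0    : length-N vector of initial fractions of infectious individuals
    steps : number of discrete time steps k to simulate
    """
    I = np.array(I0, dtype=float)
    R = np.zeros_like(I)
    history = [(I.copy(), R.copy())]
    for _ in range(steps):
        S = 1.0 - I - R                              # conservation law S + I + R = 1
        I_next = (1.0 - delta) * I + S * (B @ I)     # Eq. (1)
        R_next = R + delta * I                       # Eq. (2)
        I, R = I_next, R_next
        history.append((I.copy(), R.copy()))
    return history

# Illustrative example with N = 3 regions (all numbers are made up):
B = np.array([[0.30, 0.02, 0.01],
              [0.02, 0.25, 0.03],
              [0.01, 0.03, 0.20]])
delta = np.array([0.1, 0.1, 0.1])
I0 = np.array([0.01, 0.0, 0.0])
trajectory = simulate_sir(B, delta, I0, steps=30)
```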
In principle, with c compartments, we must have \(c-1\) independent measurements. The input to NIPA is only one compartment, the infectious compartment I, which is less than the \(c - 1 = 2\) compartments necessary to reconstruct the network with the SIR model. NIPA creates observations of the R compartment by iterating over different candidate values of the curing rates \(\delta _i\) and assuming the initial condition \(R[0] = 0\). Thus, we observe only one compartment, the infectious compartment I, and the recovered compartment R is obtained by Eq. (2) after estimating the curing probability \(\delta _i\) in the training phase. To obtain the curing probability \(\delta _i\), 50 equidistant values between \(\delta _{min}\) and \(\delta _{max}\) have been considered, and then the value giving the best fit of model (1) has been used to estimate the matrix B based on the least absolute shrinkage and selection operator (LASSO). For a general class of dynamics on networks (including the SIR model), completely different network topologies can result in the same dynamics. Hence, it is not possible to deduce the network accurately from observations, regardless of the reconstruction method: two very different networks perfectly match the observations, and there is no reason to infer one network instead of the other. Thus, though NIPA accurately predicts the dynamics, the estimated network B can be very different from the true network (Prasse and Van Mieghem 2020c). Let n be the number of days in which the infection has been observed. To evaluate the prediction accuracy, a fixed number of days \(n_{neglect}\) is removed from the end of the time series \(v_i[1], v_i[2], \ldots , v_i[n]\). The model is then trained on the days \(v_i[1], v_i[2], \ldots , v_i[n- n_{neglect}]\). Thereafter, the omitted \(n_{neglect}\) days (\(k=n- n_{neglect}+1, \ldots , n\)) are predicted. It is also possible to predict \(n_{predict}\) days (\(k=n+1, \ldots , n+n_{predict}\)) ahead of the number n of available observations; however, in such a case, we cannot evaluate the goodness of the prediction. Prasse et al. (2020) showed that the approach accurately predicts the cumulative infections for \(n_{neglect} \le 5\). However, if the number of neglected days increases, then the prediction capability of NIPA decreases. NIPA assumes constant values for \(\beta _{ij}\), which, however, do not reflect the reality of the COVID-19 pandemic, because the containment measures imposed by the governments diminish \(\beta _{ij}\) and thus the spread of the infection. Hence, infection probabilities \(\beta _{ij}[k]\) which vary over time k should be considered in the epidemic model.

Extended SIR model with time-varying infection rate. Song et al. (2020) proposed the concept of transmission modifiers, which decrease the probability that a susceptible individual comes into contact with an infected one because of the quarantine measures. At any discrete time k, let \(q_S[k]\) be the chance that an individual is in home isolation, and \(q_I[k]\) the chance that an infected person is in hospital quarantine. The transmission modifier \(\pi [k]\) is defined as follows: $$\pi [k]=(1-q_S[k])(1-q_I[k]) \in [0,1]$$ and if no quarantine is active, then \(\pi [k]=1\). In order to have a realistic infection rate \(\beta\), Song et al. (2020) multiply \(\beta\) by \(\pi [k]\) in the classic continuous SIR model. Thus, the infection rate now reflects the quarantine measures currently enforced in a country.
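Before moving to the time-varying extension, it is worth making the calibration step sketched earlier in this section concrete. The snippet below is only a rough, per-region illustration of that procedure: the LASSO penalty alpha is an assumed placeholder (the actual NIPA implementation selects the regularization strength internally, e.g. by cross-validation), scikit-learn is used merely as a convenient LASSO solver, and further constraints of the original algorithm are omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

def nipa_fit(I_obs, delta_candidates, alpha=1e-4):
    """Sketch of the NIPA calibration. I_obs is an n x N array of observed infectious
    fractions (rows = days, columns = regions); delta_candidates are, e.g., 50
    equidistant values between delta_min and delta_max."""
    n, N = I_obs.shape
    delta_hat, B_hat = np.zeros(N), np.zeros((N, N))
    for i in range(N):                                   # estimate delta_i and row i of B
        best = None
        for delta in delta_candidates:
            # Recovered fraction from Eq. (2) with R[0] = 0
            R_i = np.concatenate(([0.0], delta * np.cumsum(I_obs[:-1, i])))
            S_i = 1.0 - I_obs[:, i] - R_i
            X = S_i[:-1, None] * I_obs[:-1, :]           # regressors S_i[k] * I_j[k]
            y = I_obs[1:, i] - (1.0 - delta) * I_obs[:-1, i]
            lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False).fit(X, y)
            err = np.sum((lasso.predict(X) - y) ** 2)
            if best is None or err < best[0]:
                best = (err, delta, lasso.coef_)
        delta_hat[i], B_hat[i] = best[1], best[2]
    return delta_hat, B_hat
```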
In the extended SIR model, the curing probability \(\delta _i\) remains the same, but the infection probability \(\beta _{ij}\) is replaced by \(\beta _{ij}\pi [k]\). The same considerations can be applied to the discrete-time SIR model by modifying Eq. (1) above: $$I_i[k+1] = (1 - \delta _i)I_i[k] + (1- I_i[k] - R_i[k] )\sum \limits _{j=1}^N \beta _{ij} \pi [k] I_j[k]$$ The transmission modifier \(\pi [k]\), however, should be specified on the basis of the effective quarantine protocols undertaken in a specific region. Regarding the Hubei province in China, Song et al. (2020) suggest a step function mirroring the isolation measures established by the government. In the next section, the extended time-varying model (4) is applied to Italy by considering as nodes of the contact network the 21 regions of which Italy is composed.

Transmission modifier for Italy. In Italy, the outbreak of the COVID-19 epidemic started in February in the North of Italy. A map of Italy with the division in regions is shown in Fig. 1. On February 21st, the first case of infection appeared in the town of Codogno, in Lombardia, and two cases also in the town of Vo' Euganeo in Veneto. These two towns were immediately declared red zones and nobody could either go out or come in. On February 24th, the three regions of Lombardia, Veneto, and Emilia-Romagna registered 172, 33, and 18 cases of infection, respectively. After that date, the virus propagated all over Italy very fast. During the first week, until the first days of March, no other particularly strict safety measures were enforced. On March 9th, however, Italy entered lockdown Phase 1, with several strong restrictions and quarantine protocols. Schools, universities, shops, and many offices were closed, travel was not allowed, and exits were only allowed for work, health or necessity situations with a mandatory self-certification. Phase 2 followed, in which the containment measures were progressively relaxed. Finally, Phase 3 reopened almost all activities and travel all over Italy. In order to define the values of the transmission modifier for the different quarantine periods, we identified the following time intervalsFootnote 1:
\(\pi _0\): \(k \le\) March 9, soft measures;
\(\pi _1\): March 10 \(\le k\le\) April 13, lockdown;
\(\pi _2\): April 14 \(\le k \le\) May 3, libraries and stationeries reopen;
\(\pi _3\): May 4 \(\le k \le\) May 17, manufacturing, construction activities and wholesales reopen, meetings with relatives allowed;
\(\pi _4\): May 18 \(\le k \le\) May 24, hairdressers, beauty centers, barber shops, bars, restaurants and retailers reopen, outdoor sport and baby parks allowed;
\(\pi _5\): May 25 \(\le k \le\) June 2, gyms, swimming pools and sport structures reopen;
\(\pi _6\): \(k \ge\) June 3, inter-regional mobility allowed.
The choice of the best values of the transmission modifier reflecting well the quarantine protocols is not an easy task and deserves a deep investigation. In the next sections, a study of the improvement of the NIPA method under different lockdown levels, related to the quarantine strategies adopted by the authorities, is performed. Our measurement data have been collected by the Italian Civil Protection DepartmentFootnote 2 and are published daily on a repository. The available data are national, regional and provincial. We selected the regional ones, which refer to the 21 regions depicted in Fig. 1: Abruzzo, Basilicata, P.A.
Bolzano, Calabria, Campania, Emilia-Romagna, Friuli Venezia Giulia, Lazio, Liguria, Lombardia, Marche, Molise, Piemonte, Puglia, Sardegna, Sicilia, Toscana, P.A. Trento, Umbria, Valle d'Aosta, Veneto. Thus, for Italy, the entry \(\beta _{ij}\) of the \(21\times 21\) matrix B estimates the infection probability between the regions j and i. In the map, regions have been divided in 4 different colors representing the level of COVID-19 infected individuals. The red regions have been the most affected by COVID-19, followed by the yellow ones, the orange ones and the green regions with a lower number of cases. For each observation day, we focused on the new positives to COVID-19. We considered observations from March 1, 2020 to June 9, 2020. Transmission modifier analysis To compare the NIPA method with the NIPA-LD implementing the lockdown measures, we considered the model generated by NIPA which, in the training phase, neglects \(n_{neglect}\) days, and then applied this model for the prediction phase by using different values of \(\pi \) and an increasing value of \(n_{neglect}\). After that, we computed the average percentage error reduction of NIPA-LD with respect to NIPA. Let \(I_{CF,i}[k]\) be the observed cumulative fraction of infections of region i at time k: $$\begin{aligned} I_{CF,i}[k]=\sum _{\tau =1}^{k} I_{i}[\tau ] \end{aligned}$$ To quantify the prediction accuracy we considered the Mean Absolute Percentage Error (MAPE) defined as: $$\begin{aligned} e[k]=\frac{1}{N}\sum _{i=1}^{N}{\frac{\mid I_{CFpred,i}[k]-I_{CF,i}[k]\mid }{I_{CF,i}[k]}} \end{aligned}$$ where \(I_{CFpred,i}[k]\) is the predicted cumulative fraction of infected individuals in region i at time k. Let e[k] and \(e_{LD}[k]\) denote the MAPE errors when \(I_{CFpred,i}[k]\) is computed by NIPA and NIPA-LD, respectively. The percentage error improvement of NIPA-LD over NIPA is then computed as $$\begin{aligned} pe[k]=\frac{e[k] - e_{LD}[k]}{e[k]}\times 100 \end{aligned}$$ In order to find a good transmission modifier which reflects the real situation best, we tested different \(\pi \) values by supposing a different response from people in respecting the quarantine measures imposed in the 3 months with varying levels of restrictions. Thus, we fixed increasing values of \(\pi \) which intuitively correspond to a lower compliance to the containment protocols by the individuals. In view of the Italian lockdown measures previously described, we considered the following transmission modifier values: $$\begin{aligned} \pi _{LD1}= & {} [1~ 0.1 ~ 0.3~ 0.5 ~ 0.7 ~ 0.8 ~ 1] \\ \pi _{LD2}= & {} [1~ 0.2 ~ 0.4 ~ 0.6 ~ 0.8 ~ 0.9 ~ 1]\\ \pi _{LD3}= & {} [1~ 0.3 ~ 0.5 ~ 0.7 ~ 0.85 ~ 0.95~ 1]\\ \pi _{LD4}= & {} [1 ~ 0.4 ~ 0.55 ~ 0.75 ~ 0.85 ~ 0.95 ~ 1]\\ \pi _{LD5}= & {} [1~ 0.5 ~ 0.7 ~ 0.8 ~ 0.9~ 0.95~ 1]\\ \pi _{LD6}= & {} [1~ 0.6~ 0.75~ 0.85 ~ 0.95 ~ 0.99 ~ 1]\\ \pi _{LD7}= & {} [1 ~ 0.7 ~ 0.8 ~ 0.90 ~ 0.96~ 0.99 ~ 1] \end{aligned}$$ Table 1 reports the improvement of the percentage error of NIPA-LD with respect to NIPA, for the seven transmission modifiers and different numbers of predicted/omitted days, averaged over all the Italian regions and considering all the time windows under study, while Fig. 2 shows the mean absolute prediction error as a function of the predicted/omitted days. From the table we can observe that for \(n_{neglect}\) equals to 10, 30 and 40 the percentage of improvement is overall very significant for most of the transmission modifier vectors. 
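For concreteness, the step function \(\pi[k]\) corresponding to the Italian phases listed earlier can be assembled as a small lookup. The Python sketch below uses the \(\pi_{LD6}\) values purely for illustration; any of the candidate vectors \(\pi_{LD1}\) through \(\pi_{LD7}\) above could be substituted, the function name is ours, and in the extended model the infection term is simply multiplied by the returned value.

```python
from datetime import date

# Phase boundaries of the Italian containment measures (from the list above) and the
# corresponding per-phase values; the values shown are those of pi_LD6 (illustrative choice).
phase_starts = [date(2020, 3, 1), date(2020, 3, 10), date(2020, 4, 14),
                date(2020, 5, 4), date(2020, 5, 18), date(2020, 5, 25), date(2020, 6, 3)]
phase_values = [1.0, 0.6, 0.75, 0.85, 0.95, 0.99, 1.0]   # pi_0 ... pi_6

def transmission_modifier(day):
    """Return pi[k] for a calendar day as a step function over the lockdown phases."""
    pi = phase_values[0]
    for start, value in zip(phase_starts, phase_values):
        if day >= start:
            pi = value
    return pi

print(transmission_modifier(date(2020, 3, 20)))   # 0.6: full lockdown phase
print(transmission_modifier(date(2020, 6, 5)))    # 1.0: inter-regional mobility allowed
```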
These improvements mean that NIPA-LD can be used to reliably perform both short- and long-term predictions. More specifically, for the short-term predictions (\(n_{neglect}=10\)) low transmission modifier values are more suitable: \(\pi _{LD1}\), for example, is able to achieve an improvement of 35.369%. For the long-term predictions, on the contrary, where we neglect 30 or even 40 days and aim to predict them, higher transmission modifier values like those of \(\pi _{LD7}\) perform better. When \(n_{neglect} =20\) the error reduces, on average, only for \(\pi _{LD6}\) and \(\pi _{LD7}\). However, as Fig. 2b highlights, for \(\pi _{LD5}\) there is a reduction of the prediction error from the 10th day onwards, and for \(\pi _{LD4}\), \(\pi _{LD3}\) and \(\pi _{LD2}\) in the following days; only \(\pi _{LD1}\) shows no reduction. Hence, for this case, we can conclude that soft lockdown protocols are able to induce a positive improvement in the error for all values of the number of neglected days. Finally, Fig. 2e depicts a cone of error evolution for \(n_{neglect}=30\) when using as transmission modifiers \(\pi _{LD5},\pi _{LD6},\pi _{LD7}\), considering \(\pi _{LD5}\) and \(\pi _{LD7}\) as lower bound and upper bound of \(\pi _{LD6}\), respectively. Then, we could assume that the future evolution of the epidemic can be predicted with an error that falls between the predictions based on \(\pi _{ub}\) and \(\pi _{lb}\). Figure 2 shows that the differences between the different lockdown measures are meaningful. In the next section, a detailed analysis for all the Italian regions is performed to evaluate the prediction accuracy of NIPA and NIPA-LD. Table 1 Percentage improvement of NIPA-LD over NIPA prediction for different transmission modifier values and increasing number of neglected days
In this section, we evaluate the prediction accuracy of NIPA and NIPA-LD by computing the cumulative infections for each observation day when \(n_{neglect}=30\) and compare them to the true data, using \(\pi _{LD6}\) as transmission modifier for the different quarantine periods. In this experiment, thus, NIPA does not consider the last 30 days of the observed daily data of the newly infected individuals for estimating the curing probability \(\delta _i\) and the infection probability \(\beta _{ij}\). Then both NIPA and NIPA-LD predict the cumulative infections from May 10 until June 9, and these predictions are compared with (a) the true data and (b) the logistic function as a baseline. The logistic function, introduced in the 19th century by Verhulst to model population growth, approximates the solutions of the SIS and SIR models (Kermack and McKendrick 1927; Prasse and Van Mieghem 2020a). The cumulative number of infected cases \(y_i[t]\) at time t for region i is assumed to follow: $$y_i[t]=\frac{y_{\infty ,i}}{1+e^{-K_i(t-t_{0,i})}}$$ where \(y_{\infty ,i}\) is the long-term fraction of infected individuals, \(K_i\) is the logistic growth rate, t is the time in days, and \(t_{0,i}\) is the inflection point of the curve. Due to lack of space, we only report the plots for a subset of the northern regions, the ones highly affected by the virus spreading in the red and yellow zones (Piemonte, Lombardia, Veneto, Emilia-Romagna), for one representative region of the orange zone (Lazio) and for one of the green zones (Puglia). For the center and the south of Italy, the COVID-19 spreading has been characterized by a lower number of cases and for this reason we report only two representative regions. In Fig. 3, the cumulative infections for Piemonte are shown.
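As a side note, the logistic baseline above can be fitted per region with a standard nonlinear least-squares routine. The sketch below uses SciPy's curve_fit on an invented series of cumulative case counts; the data and starting values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, y_inf, K, t0):
    """Logistic curve: y(t) = y_inf / (1 + exp(-K * (t - t0)))."""
    return y_inf / (1.0 + np.exp(-K * (t - t0)))

# Invented cumulative case counts for one region over 20 days (illustration only)
t = np.arange(20, dtype=float)
y = np.array([5, 8, 13, 21, 33, 50, 74, 105, 142, 183,
              224, 262, 294, 319, 338, 351, 360, 366, 370, 372], dtype=float)

(y_inf_hat, K_hat, t0_hat), _ = curve_fit(logistic, t, y, p0=[y[-1], 0.5, 10.0], maxfev=10000)
print(y_inf_hat, K_hat, t0_hat)   # fitted long-term level, growth rate and inflection time
```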
For Piemonte (Fig. 3), the lockdown-modified NIPA variant clearly outperforms the classical NIPA, which overestimates the number of infected individuals: NIPA-LD better matches the true data. Moreover, for this region, a simple logistic regression is not able to predict the epidemic well. Figure 4 depicts the trend of the predictions for the most challenging region in Italy, Lombardia, which has been the most affected by COVID-19. Again, the logistic regression severely underestimates the cumulative infections. From May 10 to May 30, both the NIPA and NIPA-LD models match the number of cumulative infections well. However, for the following days, NIPA slightly overestimates the infections while NIPA-LD underestimates them. This is probably due to a much higher mobility of the population after the loosening of the lockdown rules on May 25. The Veneto case (Fig. 5), another region of northern Italy highly affected by COVID-19, is, on the contrary, accurately predicted by NIPA-LD, while the classical NIPA without lockdown clearly overestimates the number of infections. Here, the logistic regression works better than for the previous regions but still underestimates the cumulative infections. For the last northern region, Emilia-Romagna, the cumulative infections are better predicted by the lockdown-modified NIPA, which slightly overestimates the infections but to a lesser extent than the classical NIPA (Fig. 6). The baseline, on the contrary, underestimates the infections. In Fig. 7, the results for Lazio confirm the better accuracy of NIPA-LD. Finally, Fig. 8 shows the results obtained for the Puglia region. We observe that the NIPA prediction with the lockdown transmission modifiers is again able to accurately predict the cumulative infections for this region, while the classical NIPA overestimates them from May 15 until June 9 and the logistic regression underestimates the infections even from May 10. Figures 9 and 10 report the mean relative prediction error e[k] for the first 12 and for the last 9 regions, respectively, over an observation period of 30 days from May 10 to June 9. For most of the regions (P.A. Bolzano, Emilia-Romagna, Friuli Venezia Giulia, Marche, Piemonte, Puglia, Sardegna, Sicilia, Toscana, P.A. Trento, Umbria, Valle d'Aosta, Veneto) NIPA-LD results in a substantially lower prediction error. In particular, a few days after the re-openings of May 18 (corresponding to the third day in the plots), when the population gradually started going again to bars, shops, hairdressers and other commercial activities and using other newly allowed services, the prediction error is much lower with the lockdown applied to NIPA. In other regions, like Abruzzo, Basilicata, Calabria, Campania, and Lazio, NIPA performs better than NIPA-LD for many days after May 16. This behavior could be due to the fact that mobility restrictions were loosened from May 18 onwards, so that there was a high flow of people moving towards the southern regions. Thus, in spite of the restrictions imposed by the regional governors, often much stricter than the national ones, as for instance in Campania, the lockdown measures were not effective. For Liguria and Lombardia, characterized by many more COVID-19 cases than the other regions, NIPA results in a lower error. Also for these two regions it seems that the lockdown measures did not work. Finally, the Molise case is the only one showing no substantial difference between the prediction errors with and without lockdown.
This region had the lowest number of COVID-19 cases. Moreover, there has been an erratic change in the number of infections in Molise, due to a single group of people who did not follow the quarantine measures imposed by the Italian Government.

Death prediction. The network-based SIR model described in the paper does not consider the death cases. To predict the number of deaths, a new compartment should be added. However, by substituting the cumulative cases of infected individuals with those of deceased individuals, the model can be used to predict the deaths. Thereby, we assume that the number of deaths is proportional to the number of infections. Thus, we executed NIPA and NIPA-LD on these cumulative death cases to predict the deaths instead of the infections. Even if the death numbers are subject to greater variations and there are significantly fewer deaths than infections, the methods give good results. Table 2 reports for each region the average MAPE error for NIPA and NIPA-LD in predicting COVID-19 deaths. The lower error values are highlighted in italics. For this experiment we set the number of neglected days to 30, using the same transmission modifier values as in the previous experiments. The table shows that the error values are very low and that NIPA-LD outperforms NIPA in 14 out of the 21 regions. It is worth pointing out that when NIPA performs better, the differences between the error values are very low, except for Lombardia. As is known, this region had more than 16 thousand deaths in the considered period. In such a case, NIPA-LD underestimates the number of deaths. Figure 11 shows the cumulative deaths predicted by these two methods and those predicted by using logistic regression. Note that the baseline function is not able to obtain a good prediction; in fact, it considerably overestimates the number of deaths. Table 2 Average MAPE prediction error of COVID-19 deaths for NIPA and NIPA-LD, when the number of neglected days is 30

The results reported in the previous section show that NIPA-LD is able to better predict the evolution of COVID-19 in Italy when compared to the original NIPA method, which does not consider the lockdown measures, and to the baseline prediction method. The main contribution of NIPA-LD is that it considerably improves the long-term prediction of NIPA by incorporating into the network-based prediction model the different lockdown measures adopted in the various phases of the spreading of COVID-19 in Italy. In fact, NIPA-LD obtains lower prediction errors than NIPA when the number of training days diminishes. The introduction of the concept of transmission modifiers in NIPA thus yields epidemic transmission rates that reflect well the changes in the containment measures imposed by the authorities. However, the adoption of the same transmission modifier values for all the Italian regions has some drawbacks. In Tables 3 and 4, we report the daily error fraction value between NIPA-LD and NIPA for 30 neglected days. In the last column of Table 4, the average value of this error is also shown. When NIPA-LD outperforms NIPA, the daily error fraction is lower than 1. For most of the regions, NIPA-LD shows its superiority. Veneto, for example, is characterized by very low values, with an average daily error of 0.15. Exceptions are Abruzzo, Basilicata, Calabria, Campania, Lazio, Liguria, and Lombardia, where NIPA performs better than NIPA-LD. Thus, though NIPA-LD improves the prediction on average, the improvement does not hold for all the regions.
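The error measures used throughout this evaluation (the MAPE e[k], the percentage improvement pe[k], and the daily error fraction of Tables 3 and 4) reduce to a few lines of NumPy. A minimal helper is sketched below; the numbers in the example are invented for illustration, and the daily error fraction is implemented as the ratio of the NIPA-LD error to the NIPA error, consistent with the statement that values below 1 indicate that NIPA-LD outperforms NIPA.

```python
import numpy as np

def mape(I_cf_pred, I_cf_true):
    """MAPE e[k]: mean over regions of |predicted - observed| / observed cumulative fractions."""
    return np.mean(np.abs(I_cf_pred - I_cf_true) / I_cf_true)

def improvement(e_nipa, e_nipa_ld):
    """Percentage error improvement pe[k] of NIPA-LD over NIPA."""
    return (e_nipa - e_nipa_ld) / e_nipa * 100.0

def daily_error_fraction(e_nipa_ld, e_nipa):
    """Ratio below 1 when NIPA-LD has the lower error on that day."""
    return e_nipa_ld / e_nipa

# Invented cumulative fractions for 3 regions on a single day, for illustration only
I_true = np.array([0.0100, 0.0040, 0.0020])
I_nipa = np.array([0.0125, 0.0050, 0.0021])
I_nipa_ld = np.array([0.0108, 0.0042, 0.0020])
e, e_ld = mape(I_nipa, I_true), mape(I_nipa_ld, I_true)
print(e, e_ld, improvement(e, e_ld), daily_error_fraction(e_ld, e))
```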
To address these regional differences, future work will investigate transmission modifiers specialized for the different regions. Moreover, whereas the transmission modifier \(\pi [k]\) may change over time, the infection rates \(\beta _{ij}\) are assumed constant. Hence, in NIPA-LD (and classic NIPA) another limitation is that the probabilities of infection are assumed to be constant, or potentially scaled/multiplied by \(\pi [k]\). Similarly, our model assumes constant curing rates \(\delta\). However, (hopefully soon available) vaccinations may be deployed in a time-varying manner. Another observation is that although NIPA and NIPA-LD can obtain good short-term predictions, accurate long-term predictions are generally difficult. When aiming at predicting the infections beyond some time horizon, the accuracy of the forecast starts decreasing. To provide a case study, in Figs. 12, 13, 14, 15 and 16, we show what happens when trying to predict the last 10, 20, 30, 40, 50 days of cumulative infections, respectively, in Valle d'Aosta. In the short term of 10 and 20 neglected days, both NIPA and NIPA-LD match the observed data well. When predicting the last 30 days until June 9, NIPA-LD predicts the infections better than NIPA. For 40 neglected days, NIPA-LD is still able to predict with a certain accuracy, while NIPA definitely overestimates the cumulative infections. For 50 days, neither of the two NIPA methods is able to accurately predict the number of cumulative infections, while the logistic regression, on the contrary, works better. Thus, when too many days have to be predicted, an accurate prediction is not possible with the NIPA-based methods. However, even if the transmission modifier is equal for all the regions, we point out that NIPA-LD performs generally better than NIPA, also for \(n_{neglect}=30\) and \(n_{neglect}=40\), which can be considered long-term predictions. Finally, we point out that this work is based on the discrete-time SIR model. This model is characterized by 3 compartments. NIPA can be used for any compartmental epidemic model (Prasse and Van Mieghem 2020b) with c compartments, provided that \(c-1\) compartments are measured. We point out that the approach in this work observes only one compartment, the infectious compartment I, and the recovered compartment R is obtained by Eq. (2) after estimating the curing probability \(\delta _i\) in the training phase. Here, the advantage is that the fewer compartments we use, the less data we need to provide an accurate forecast. When only macroscopic data, such as those exploited here, are available, a simple epidemiological model like the SIR has been shown to be sufficient to predict the trend of the epidemic with high accuracy (Kozyreff 2020). More complicated models than the SIR, such as SEIR or SIRD, which require additional compartments, do not necessarily achieve better accuracy. Table 3 Daily error fraction value between NIPA-LD and NIPA for 30 neglected days, from day 1 to day 15 Table 4 Daily error fraction value between NIPA-LD and NIPA for 30 neglected days, from day 16 to day 30

We exploited a network-based SIR model to predict the curves of the cumulative infections of individuals affected by the SARS-CoV-2 virus in Italy. The classic SIR epidemic model has been expanded by incorporating time-varying lockdown protocols in order to have epidemic transmission rates that change as the government quarantine rules change.
Tested on regional COVID-19 data for Italy, the network-based prediction method results in a higher prediction accuracy when compared to the classical method that does not consider the lockdown measures. Experiments, however, pointed out that equal values of the transmission modifiers for all the Italian regions may not be appropriate, because of the differences in people's mobility. On the other hand, the NIPA method extended to account for the lockdown measures highlighted the tremendous potential of an optimal transmission modifier. In fact, NIPA-LD could be used in practice to test which lockdown strategies are effective and which countermeasures are more appropriate to stop the spread of the COVID-19 epidemic. Future work will investigate how a transmission modifier might be best related to a quarantine strategy also in the training phase of NIPA, in order to improve the prediction capability of the approach.

Figure captions. Fig. 1: The 21 Italian regions. Fig. 2: Mean prediction error when the number of omitted days equals a \(n_{neglect} =10\), b \(n_{neglect}=20\), c \(n_{neglect} =30\) and d \(n_{neglect} =40\), for different transmission modifier vectors; e cone of error evolution for \(n_{neglect}=30\). Fig. 3: Cumulative infections for Piemonte. Fig. 4: Cumulative infections for Lombardia. Fig. 5: Cumulative infections for Veneto. Fig. 6: Cumulative infections for Emilia-Romagna. Fig. 7: Cumulative infections for Lazio. Fig. 8: Cumulative infections for Puglia. Fig. 9: Mean relative prediction error for the period from May 10th to June 9th: 12 regions. Fig. 10: Mean relative prediction error for the period from May 10th to June 9th: 9 regions. Fig. 11: Cumulative deaths for Lombardia with \(n_{neglect}=30\). Fig. 12: Cumulative infections for Valle d'Aosta with \(n_{neglect}=10\).

All data generated or analysed during this study can be downloaded from the Italian Civil Protection Department at the address https://github.com/pcm-dpc/COVID-19

Footnote 1: Here, we recall the main reopening steps of commercial activities and services. Footnote 2: https://github.com/pcm-dpc/COVID-19.

Abbreviations. NIPA: Network Inference based Prediction Algorithm. NIPA-LD: Network Inference based Prediction Approach with LockDown.

Arenas A, Cota W, Gomez-Gardenes J, Gómez S, Granell C, Matamalas JT, Soriano-Panos D, Steinegger B (2020) A mathematical model for the spatiotemporal epidemic spreading of COVID-19. medRxiv. https://doi.org/10.1101/2020.03.21.20040022v1 Caccavo D (2020) Chinese and Italian COVID-19 outbreaks can be correctly described by a modified SIRD model. medRxiv. https://doi.org/10.1101/2020.03.19.20039388 Chinazzi M, Davis JT, Ajelli M, Gioannini C, Litvinova M, Merler S, Piontti AP, Mu K, Rossi L, Sun K, Viboud C, Xiong X, Yu H, Halloran ME Jr, Vespignani A (2020) The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science 368(6489):395–400 Chu DK, Akl EA, Duda S, Solo K, Yaacoub S, Schünemann HJ (2020) Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. Lancet. https://doi.org/10.1016/S0140-6736(20)31142-9 Di LD, Pullano G, Sabbatini C, Boëlle P, Colizza V (2020) Impact of lockdown on COVID-19 epidemic in Île-de-France and possible exit strategies. BMC Med 18(1):240 Estrada E (2020) COVID-19 and SARS-CoV-2. Modeling the present, looking at the future. Phys Rep 869:1–51 Ferrari L, Gerardi G, Manzi G, Micheletti A, Nicolussi F, Salini S (2020) Modelling provincial covid-19 epidemic data in Italy using an adjusted time-dependent SIRD model.
arXiv:2005.12170 Flaxman S, Mishra S, Gandy A, Unwin HJT, Mellan TA, Coupland H, Whittaker C, Zhu H, Berah T, Eaton JW et al (2020) Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature 584(7820):257–261 Galeazzi A, Cinelli M, Bonaccorsi G, Pierri F, Schmidt AL, Scala A, Pammolli F, Quattrociocchi W (2020) Human mobility in response to COVID-19 in France, Italy and UK. arXiv:2005.06341 Giuliani D, Dickson MM, Espa G, Santi F (2020) Modelling and predicting the spatio-temporal spread of coronavirus disease 2019 (COVID-19) in Italy. Available at SSRN 3559569 Hadjidemetriou GM, Sasidharan M, Kouyialis G, Parlikad AK (2020) The impact of government measures and human mobility trend on COVID-19 related deaths in the UK. Transp Res Interdiscip Perspect 6:100167 Haug N, Geyrhofer L, Londei A, Dervic E, Desvars-Larrive A, Loreto V, Pinior B, Thurner S, Klimek P (2020) Ranking the effectiveness of worldwide COVID-19 government interventions. medRxiv. https://doi.org/10.1101/2020.07.06.20147199 Hethcote HW (2000) The mathematics of infectious diseases. SIAM Rev 42(4):599–653 Kermack WO, McKendrick AG (1927) A contribution to the mathematical theory of epidemics. Proc R Soc Lond Ser A 115:700–721 Klein B, LaRock T, McCabe S, Torres L, Privitera,F, Lake B, Kraemer MUG, Brownstein JS, Lazer D, Eliassi-Rad T, Scarpino SV, Chinazzi M, Vespignani A (2020) Assessing changes in commuting and individual mobility in major metropolitan areas in the united states during the COVID-19 outbreak. https://www.networkscienceinstitute.org/publications Kozyreff G (2020) Hospitalization dynamics during the first COVID-19 pandemic wave: sir modelling compared to Belgium, France, Italy, Switzerland and New York City data. arXiv:2007.01411 Lavezzo E, Franchin E, Ciavarella C, Cuomo-Dannenburg G, Barzon L, Del Vecchio C, Rossi L, Manganelli R, Loregian A, Navarin N et al (2020) Suppression of COVID-19 outbreak in the municipality of VO, Italy. Nature 584:425–429. https://doi.org/10.1038/s41586-020-2488-1 Maier BF, Brockmann D (2020) Effective containment explains subexponential growth in recent confirmed COVID-19 cases in China. Science 4557(eabb4557):742–746 Oliver N, Lepri B, Sterly H, Lambiotte R, Deletaille S, Nadai MD, Letouze E, Salah AA, Benjamins R, Cattuto C, Colizza V, de Cordes N, Fraiberger SP, Koebe T, Lehmann S, Murillo J, Pentland A, Pham PN, Pivetta F, Saramaki J, Scarpino SV, Tizzoni M, Verhulst S, Vinck P (2020) Mobile phone data for informing public health actions across the COVID-19 pandemic life cycle. Science Advances 6(23):eabc0764 Pastor-Satorras R, Castellano C, Van Mieghem P, Vespignani A (2015) Epidemic processes in complex networks. Rev Mod Phys 87(3):925–979 Pei S, Kandula S, Shaman J (2020) Differential effects of intervention timing on COVID-19 spread in the United States. medRxiv. https://doi.org/10.1101/2020.05.15.20103655 Prasse B, Van Mieghem P (2020a) Fundamental limits of predicting epidemic outbreaks. Delft University of Technology, Delft Prasse B, Van Mieghem P (2020b) Network reconstruction and prediction of epidemic outbreaks for general group-based compartmental epidemic models. IEEE Trans Netw Sci Eng. https://doi.org/10.1109/TNSE.2020.2987771 Prasse B, Van Mieghem P (2020c) Predicting dynamics on networks hardly depends on the topology. arXiv:2005.14575 Prasse B, Achterberg MA, Ma L, Van Mieghem P (2020) Network-inference-based prediction of the COVID-19 epidemic outbreak in the Chinese province Hubei. 
Appl Netw Sci 5(1):1–11 Schlosser F, Maier BF, Hinrichs D, Zachariae A, Brockmann D (2020) COVID-19 lockdown induces structural changes in mobility networks? Implication for mitigating disease dynamics. arXiv:2007.01583v2 Song PX, Wang L, Zhou Y, He J, Zhu B, Wang F, Tang L, Eisenberg M (2020) An epidemiological forecast model and software assessing interventions on COVID-19 epidemic in China. medRxiv. https://doi.org/10.1101/2020.02.29.20029421 Youssef M, Scoglio C (2011) An individual-based approach to SIR epidemics in contact networks. J Theor Biol 283(1):136–144

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Author affiliations. National Research Council of Italy (CNR), Institute for High Performance Computing and Networking (ICAR), Via P. Bucci, 8-9C, 87036, Rende, Italy: Clara Pizzuti & Annalisa Socievole. Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, P.O. Box 5031, 2600 GA, Delft, The Netherlands: Bastian Prasse & Piet Van Mieghem. All authors contributed to the paper, read and approved the manuscript. Correspondence to Clara Pizzuti. This work has been supported by the Universiteitsfonds Delft under the program TU Delft Covid-19 Response Fund.

Cite this article: Pizzuti, C., Socievole, A., Prasse, B. et al. Network-based prediction of COVID-19 epidemic spreading in Italy. Appl Netw Sci 5, 91 (2020). https://doi.org/10.1007/s41109-020-00333-8. Keywords: Network inference; Transmission modifier. Part of the special issue on Epidemics Dynamics & Control on Networks.
CommonCrawl
Vector Triple products. Thread starter: Prove It (MHB Math Helper). This semester I was asked to lecture Calculus 2 at the university in which I work. I gladly accepted. Anyway, we are up to the second module, which is Vectors. Today's lecture was about Scalar Triple Products and Vector Triple Products. My class is very attentive and was asking a lot of questions today. Unfortunately, the lecture notes I inherited had ZERO information on why we would need to evaluate a vector triple product, though it said there are numerous applications including mathematical modelling of particle and fluid dynamics. Scalar triple products are easy - we use them to evaluate the volume of a parallelepiped and to determine if vectors or points are coplanar. Among the participation from the students, I was asked why we would need to evaluate the vector triple product, and for the life of me I could not think of a single application. I even checked Google and could not find one there either, although geometrically we can use the vector triple product to find a vector that lies in the same plane as the final two vectors. Just to be clear, I'm talking about [tex]\displaystyle \mathbf{a} \times \left( \mathbf{b} \times \mathbf{c} \right) [/tex], which can be evaluated more easily using [tex]\displaystyle \left( \mathbf{a} \cdot \mathbf{c} \right) \mathbf{b} - \left( \mathbf{a} \cdot \mathbf{b} \right) \mathbf{c} [/tex]. So my question is, could somebody please give me some real-world examples of applications of the vector triple product? Thanks. Ackbach (Indicium Physicus): It shows up in rotating reference frames, where you have to write down Newton's Second Law quite a bit differently. Ackbach said: Thank you Ackbach, at least I now have something to tell my students next lesson. Prove It said: You're very welcome! Klaas van Aarsen (MHB Seeker): The triple vector product is the volume of the body it spans. Working it out geometrically also shows you why the identity holds. I like Serena said: Are you sure you're not thinking about the scalar triple product? Oops. You're right. Since I know next to nothing about applied mathematics, I know there are other geometric uses for the triple vector product. It is essential in the proof of the formula for the distance between two skew lines. Also, in defining the unit tangent vector Big T as \(\displaystyle T=\frac{R'}{\|R'\|}\), we can use the triple vector product to simplify its derivative \(\displaystyle T'=\frac{R'\times(R''\times R')}{\|R'\|^3}\). Plato said: Thank you Plato, since I'm from a more pure mathematics background, that helps a lot too. Prove It said: If you have 2 vectors $\mathbf a$ and $\mathbf b$, then $\{ \mathbf a, (\mathbf a \times \mathbf b), \mathbf a \times (\mathbf a \times \mathbf b) \}$ forms an orthogonal basis. This is basically the Gram-Schmidt orthogonalization process. That's also what happens in Plato's example. $\mathbf T$ is the unit velocity vector, while $\mathbf T' = \kappa \mathbf N$ is the vector perpendicular to it (curvature times unit normal vector). This is how curvature is defined as a component of a local orthonormal basis. And that's also what happens in Ackbach's example, where basically each direction in a local orthonormal basis is labeled with a different name.
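For anyone who wants to sanity-check the BAC-CAB identity and the orthogonal-basis remark above numerically, here is a quick NumPy snippet (an illustrative addition, not part of the original thread; the vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
c = np.array([4.0, -2.0, 1.0])

# BAC-CAB identity: a x (b x c) = (a . c) b - (a . b) c
lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
print(np.allclose(lhs, rhs))                       # True

# {a, a x b, a x (a x b)} is an orthogonal set (for a, b not parallel)
u1, u2 = a, np.cross(a, b)
u3 = np.cross(a, u2)
print(np.isclose(u1 @ u2, 0.0), np.isclose(u1 @ u3, 0.0), np.isclose(u2 @ u3, 0.0))
```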
CommonCrawl
Introductory Workshop: Derived Algebraic Geometry and Birational Geometry and Moduli Spaces. January 31, 2019 - February 08, 2019. To apply for Funding you must register by: October 15, 2018. Programs: Derived Algebraic Geometry; Birational Geometry and Moduli Spaces. Organizers: Julie Bergner (University of Virginia), Bhargav Bhatt (University of Michigan), Christopher Hacon (University of Utah), LEAD Mircea Mustaţă (University of Michigan), Gabriele Vezzosi (Università di Firenze). Speakers: Benjamin Antieau (University of Illinois, Chicago), Kathryn Hess (École Polytechnique Fédérale de Lausanne (EPFL)), János Kollár (Princeton University), Emanuele Macri (Northeastern University), Tony Pantev (University of Pennsylvania), Sam Raskin (University of Texas, Austin), Christian Schnell (State University of New York, Stony Brook), Karl Schwede (University of Utah), Carlos Simpson (Universite de Nice Sophia Antipolis), Yuri Tschinkel (New York University, Courant Institute), Akshay Venkatesh (Institute for Advanced Study), Claire Voisin (Collège de France; Institut de Mathématiques de Jussieu), Chenyang Xu (Massachusetts Institute of Technology). A picture of a singularity, courtesy of Herwig Hauser. The workshop will survey several areas of algebraic geometry, providing an introduction to the two main programs hosted by MSRI in Spring 2019. It will consist of 7 expository mini-courses and 7 separate lectures, each given by top experts in the field. The focus of the workshop will be the recent progress in derived algebraic geometry, birational geometry and moduli spaces. The lectures will be aimed at a wide audience including advanced graduate students and postdocs with a background in algebraic geometry.
Keywords: singularities, moduli spaces, birational geometry, stacks and higher stacks, derived stacks, $\infty$-categories, dg-categories, derived dg-categories, moduli problems, simplicial algebras, dg-algebras, de Rham cohomology, Hochschild and cyclic cohomology, minimal model program, derived categories, fibrations, degenerations, vanishing theorems, positive characteristic methods

Mathematics Subject Classification:
14A20 - Generalizations (algebraic spaces, stacks)
14A22 - Noncommutative algebraic geometry [See also 16S38]
14B05 - Singularities [See also 14E15, 14H20, 14J17, 32Sxx, 58Kxx]
14C20 - Divisors, linear systems, invertible sheaves
14D06 - Fibrations, degenerations
14D20 - Algebraic moduli problems, moduli of vector bundles {For analytic moduli problems, see 32G13}
14D22 - Fine and coarse moduli spaces
14E05 - Rational and birational maps
14F05 - Sheaves, derived categories of sheaves and related constructions [See also 14H60, 14J60, 18F20, 32Lxx, 46M20]
14F17 - Vanishing theorems [See also 32L20]
14F42 - Motivic cohomology; motivic homotopy theory [See also 19E15]
14G17 - Positive characteristic ground fields
14J10 - Families, moduli, classification: algebraic theory
14J40 - $n$-folds ($n>4$)
18D10 - Monoidal categories (= multiplicative categories), symmetric monoidal categories, braided categories [See also 19D23]
18E30 - Derived categories, triangulated categories
18G25 - Relative homological algebra, projective classes
19D55 - $K$-theory and homology; cyclic homology and cohomology [See also 18G60]
55P92 - Relations between equivariant and nonequivariant homotopy theory
55U35 - Abstract and axiomatic homotopy theory
55U40 - Topological categories, foundations of homotopy theory

Funding: To apply for funding, you must register by the funding application deadline displayed above. Students, recent Ph.D.'s, women, and members of underrepresented minorities are particularly encouraged to apply. Funding awards are typically made 6 weeks before the workshop begins. Requests received after the funding deadline are considered only if additional funds become available.

Lodging: MSRI does not hire an outside company to make hotel reservations for our workshop participants, or share the names and email addresses of our participants with an outside party. If you are contacted by a business that claims to represent MSRI and offers to book a hotel room for you, it is likely a scam. Please do not accept their services.

MSRI has preferred rates at the Hotel Shattuck Plaza, depending on room availability. Guests can call the hotel's main line at 510-845-7300 and ask for the MSRI - Mathematical Science Research Institute discount. To book online visit this page (the MSRI rate will automatically be applied).

MSRI has preferred rates at the Graduate Berkeley, depending on room availability. Reservations may be made by calling 510-845-8981. When making reservations, guests must request the MSRI preferred rate. Enter in the Promo Code MSRI123 (this code is not case sensitive).

MSRI has preferred rates at the Berkeley Lab Guest House, depending on room availability. Reservations may be made by calling 510-495-8000 or directly on their website. Select "Affiliated with the Space Sciences Lab, Lawrence Hall of Science or MSRI." When prompted for your UC Contact/Host, please list Chris Marshall ([email protected]).

MSRI has preferred rates at Easton Hall and Gibbs Hall, depending on room availability. Guests can call the Reservations line at 510-204-0732 and ask for the MSRI - Mathematical Science Research Inst. rate.
To book online visit this page, select "Request a Reservation", choose the dates you would like to stay, and enter the code MSRI (this code is not case sensitive).

Additional lodging options may be found on our short term housing page.

Workshop lectures and abstracts:

Moduli of canonical models
We give an overview of the current state of the moduli problem for canonical models.

The uniqueness of K-polystable Fano degeneration
(Joint with Harold Blum) We want to show that a family of Fano varieties has a unique K-polystable degeneration. This is one step of the program of constructing a moduli space of K-stable Fano varieties, i.e., proving there is an Artin stack parametrizing K-semistable Fano varieties, which admits a projective good moduli space parametrizing K-polystable Fano varieties.

Birational algebraic geometry in positive characteristic
This series of introductory talks will discuss some of the issues and pathologies one runs into when exploring birational geometry in positive characteristic. Some time will be spent talking about ideas coming from commutative algebra and their applications, including connections between singularities from the MMP with singularities arising in commutative algebra.

DAG II: moduli of objects in derived categories
In this series of lectures, I will give an introduction to derived algebraic geometry aimed at algebraic geometers. The first lecture will introduce simplicial commutative rings and use them to define the cotangent complex and derived de Rham cohomology with several examples. The second lecture will introduce derived stacks and the moduli stack of objects in a derived category. Then, I will give the geometricity theorem of Toën–Vaquié and describe the cotangent complex to the moduli stack. In the third lecture, we will use the machinery developed in the first two lectures to study three examples: cohomology as maps to a geometric derived stack, the (derived) Picard stack, and the stack of Fourier–Mukai equivalences.

Infinity categories and why they are useful, I
In this series, we'll introduce infinity categories and explain their relationships with triangulated categories, dg categories, and Quillen model categories. We'll explain how the infinity-categorical language makes it possible to create a moduli framework for objects that have some kind of up-to-homotopy aspect: stacks, complexes, as well as higher categories themselves. The main concepts from usual category theory generalize very naturally. Emphasis will be given to how these techniques apply in algebraic geometry. In the last talk we'll discuss current work related to mirror symmetry and symplectic geometry via the notion of stability condition.

The notion of singular support in DAG and its applications I

Stable birational invariants
In these lectures, I will describe the formalism of the "decomposition of the diagonal", Chow-theoretic or cohomological, and the way it controls some more classical stable birational invariants. I will then explain the specialization method and some applications.

The notion of singular support in DAG and its applications II

Topological Hochschild homology and topological cyclic homology: from classical to modern - I
Topological Hochschild homology and topological cyclic homology are important and much studied approximations to algebraic K-theory.
In this series of lectures I will survey various approaches to constructing and computing these remarkable invariants, from the original methods of Bökstedt and of Bökstedt-Hsiang-Madsen to the infinity-category theoretic methods of Nikolaus-Scholze.

Rationality problems
I will discuss recent advances in the study of rationality of higher-dimensional varieties (joint work with B. Hassett, A. Kresch, and A. Pirutka).

Shifted symplectic structures and applications
I will give a brief overview of shifted symplectic and Poisson structures in derived geometry and their quantization. I will survey constructions of these structures on moduli stacks and will discuss several explicit examples. In the rest of the talk I will discuss interesting connections and applications to enumerative geometry, low dimensional topology, and Hodge theory. Time permitting, I will conclude with a sampling of recent results and developments including additivity theorems, connections with Bloch's conductor conjecture, and the Azumaya nature of shifted differential operators in positive characteristic.

The notion of singular support in DAG and its applications III

Extending holomorphic forms from the regular locus of a complex space to a resolution
Suppose we have a holomorphic differential form, defined on the smooth locus of a complex space. Under what conditions does it extend to a holomorphic differential form on a resolution of singularities? In 2011, Greb, Kebekus, Kovacs, and Peternell proved that such an extension always exists on algebraic varieties with klt singularities. I will explain how to solve this problem in general, with the help of Hodge modules and the Decomposition Theorem. This is joint work with Kebekus.

Derived categories of cubic fourfolds and non-commutative K3 surfaces
The derived category of coherent sheaves on a cubic fourfold has a subcategory which can be thought of as the derived category of a non-commutative K3 surface. This subcategory was studied recently in the work of Kuznetsov and Addington-Thomas, among others. In this talk, I will present joint work in progress with Bayer, Lahoz, Nuer, Perry, Stellari, on how to construct Bridgeland stability conditions on this subcategory. This proves a conjecture by Huybrechts, and it allows one to start developing the moduli theory of semistable objects in these categories, in a way analogous to the classical Mukai theory for (commutative) K3 surfaces. I will also discuss a few applications of these results.

(Derived) moduli of local systems in number theory
If X is a complex variety, we can form a moduli space M of local systems parameterizing representations of $\pi_1(X)$. One would like to do the same in arithmetic situations -- if X is a curve over a finite field, or the ring of integers of a number field. Here we can only construct a shadow of M, remembering some of its formal geometry. However, there are many indications that a more satisfactory theory should exist, and I will review three of them in my talk.
Discrete & Continuous Dynamical Systems - S, April 2011, 4(2): 371-389. doi: 10.3934/dcdss.2011.4.371

Thermodynamically consistent higher order phase field Navier-Stokes models with applications to biomembranes

M. Hassan Farshbaf-Shaker and Harald Garcke, Fakultät für Mathematik, Universität Regensburg, 93040 Regensburg, Germany

Received June 2009; Revised October 2009; Published November 2010

Abstract: In this paper we derive thermodynamically consistent higher order phase field models for the dynamics of biomembranes in incompressible viscous fluids. We start with basic conservation laws and an appropriate version of the second law of thermodynamics and obtain generalizations of models introduced by Du, Li and Liu [3] and Jamet and Misbah [11]. In particular we derive a stress tensor involving higher order derivatives of the phase field and generalize the classical Korteweg capillarity tensor.

Keywords: fluid interfaces, convection, second law of thermodynamics, dissipation inequality, phase field model, weak solution, biomembrane, Navier-Stokes equation, momentum equation, bending elastic energy.

Mathematics Subject Classification: Primary: 35K55, 74L15; Secondary: 74K15, 92C0.

Citation: M. Hassan Farshbaf-Shaker, Harald Garcke. Thermodynamically consistent higher order phase field Navier-Stokes models with applications to biomembranes. Discrete & Continuous Dynamical Systems - S, 2011, 4 (2) : 371-389. doi: 10.3934/dcdss.2011.4.371
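(Editorial background note, not taken from the paper.) For orientation on the keyword "bending elastic energy": the sharp-interface model underlying these phase field approaches is the Helfrich bending energy [9], and a standard diffuse-interface (Willmore-type) approximation of it, studied for instance in [16], replaces the surface integral by a bulk integral of the phase field. The two functionals below are quoted only as standard background, up to constants and sign/curvature conventions, which vary between authors:
$$ E_{\mathrm{Helfrich}}(\Gamma) = \int_{\Gamma} \frac{\kappa}{2}\,(H-H_0)^2 \, \mathrm{d}A, \qquad E_{\varepsilon}(\varphi) = \frac{1}{2\varepsilon}\int_{\Omega}\Big(\varepsilon\,\Delta\varphi - \frac{1}{\varepsilon}\,W'(\varphi)\Big)^{2}\mathrm{d}x, $$
where $H$ is the mean curvature of the membrane $\Gamma$, $H_0$ a spontaneous curvature, $\kappa$ the bending rigidity, $W$ a double-well potential, and $\varepsilon$ the interface width. Because the diffuse functional involves second derivatives of $\varphi$, models of this type naturally lead to stress tensors containing higher order derivatives of the phase field, as mentioned in the abstract.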
References:

[1] M. Arroyo and A. DeSimone, Relaxation dynamics of fluid membranes, Phys. Rev. E, 79 (2009), 031915. doi: 10.1103/PhysRevE.79.031915.
[2] T. Biben, K. Kassner and C. Misbah, Phase field approach to three dimensional vesicle dynamics, Phys. Rev. E, 72 (2005), 041921. doi: 10.1103/PhysRevE.72.041921.
[3] Q. Du, M. Li and C. Liu, Analysis of a phase field Navier-Stokes vesicle-fluid interaction model, Disc. and Continuous Dyn. Systems. Series B, 8 (2007), 539-556. doi: 10.3934/dcdsb.2007.8.539.
[4] Q. Du, C. Liu, R. Ryham and X. Wang, Energetic variational approaches in modeling vesicle and fluid interactions, Physica D, 238 (2009), 923-930. doi: 10.1016/j.physd.2009.02.015.
[5] H. Garcke, B. Niethammer, M. A. Peletier and M. Röger, Mini-workshop: Mathematics of biological membranes. Abstracts from the mini-workshop held September 2008. Organized by H. Garcke, B. Niethammer, M. A. Peletier and M. Röger, Oberwolfach Reports, 5 (2008), 447-486.
[6] H. Garcke and R. Haas, Modelling of non-isothermal multicomponent, multi-phase systems with convection, in "Phase Transformations in Multicomponent Melts," Wiley-VCH Verlag, Weinheim, (2008), 325-338. doi: 10.1002/9783527624041.ch20.
[7] M. E. Gurtin, "An Introduction to Continuum Mechanics," Mathematics in Science and Engineering, Volume 158, 2003.
[8] M. E. Gurtin, D. Polignone and J. Vinals, Two-phase binary fluids and immiscible fluids described by an order parameter, Math. Models Methods Appl. Sci., 6 (1996), 815-831. doi: 10.1142/S0218202596000341.
[9] W. Helfrich, Elastic properties of lipid bilayers: Theory and possible experiments, Z. Naturforsch. C, 28 (1973), 693-703.
[10] D. Jamet and C. Misbah, Towards a thermodynamically consistent picture of the phase field model of vesicles: Local membrane incompressibility, Phys. Rev. E, 76 (2007), 051907. doi: 10.1103/PhysRevE.76.051907.
[11] D. Jamet and C. Misbah, Towards a thermodynamically consistent picture of the phase field model of vesicles: Curvature energy, Phys. Rev. E, 78 (2008), 031902. doi: 10.1103/PhysRevE.78.031902.
[12] I. S. Liu, Method of Lagrange multipliers for exploitation of the entropy principle, Arch. Rat. Mech. Anal., 46 (1972), 131-148. doi: 10.1007/BF00250688.
[13] I. S. Liu and I. Müller, On the thermodynamics and thermostatics of fluids in electromagnetic fields, Arch. Rat. Mech. Anal., 46 (1972), 149-176. doi: 10.1007/BF00250689.
[14] J. Lowengrub, A. Rätz and A. Voigt, Phase-field modeling of the dynamics of multicomponent vesicles: Spinodal decomposition, coarsening, budding, and fission, Phys. Rev. E, 79 (2009), 031926. doi: 10.1103/PhysRevE.79.031926.
[15] L. Modica, The gradient theory of phase transitions and minimal interface criterion, Arch. Rat. Mech. Anal., 98 (1987), 123-142. doi: 10.1007/BF00251230.
[16] M. Röger and R. Schätzle, On a modified conjecture of De Giorgi, Math. Z., 254 (2006), 675-714. doi: 10.1007/s00209-006-0002-6.
[17] U. Seifert, Configurations of fluid membranes and vesicles, Advances in Physics, 46 (1997), 13-137. doi: 10.1080/00018739700101488.
[18] C. Truesdell and W. Noll, "The Non-Linear Field Theories of Mechanics," Springer Verlag, 1992.
From Weil's foundations to schemes

Published August 15, 2022 by lievenlb

Last time, we've seen that the first time 'schemes' were introduced was in 'La Tribu' (the internal Bourbaki-account of their congresses) of the May-June 1955 congress in Chicago. Here, we will focus on the events leading up to that event. If you always thought Grothendieck invented the word 'schemes', here's what Colin McLarty wrote:

"A story says that in a Paris café around 1955 Grothendieck asked his friends "what is a scheme?". At the time only an undefined idea of "schéma" was current in Paris, meaning more or less whatever would improve on Weil's foundations." (McLarty in The Rising Sea)

What were Weil's foundations of algebraic geometry? Well, let's see how Weil defined an affine variety over a field $k$. First you consider a 'universal field' $K$ containing $k$, that is, $K$ is an algebraically closed field of infinite transcendence degree over $k$. A point of $n$-dimensional affine space is an $n$-tuple $x=(x_1,\dots,x_n) \in K^n$. For such a point $x$ you consider the field $k(x)$ which is the subfield of $K$ generated by $k$ and the coordinates $x_i$ of $x$. Alternatively, the field $k(x)$ is the field of fractions of the affine domain $R=k[z_1,\dots,z_n]/I$ where $I$ is the prime ideal of all polynomials $f \in k[z_1,\dots,z_n]$ such that $f(x) = f(x_1,\dots,x_n)=0$.

An affine $k$-variety $V$ is associated to a 'generic point' $x=(x_1,\dots,x_n)$, meaning that the field $k(x)$ is a 'regular extension' of $k$ (that is, for all field-extensions $k'$ of $k$, the tensor product $k(x) \otimes_k k'$ does not contain zero-divisors). The points of $V$ are the 'specialisations' of $x$, that is, all points $y=(y_1,\dots,y_n)$ such that $f(y_1,\dots,y_n)=0$ for all $f \in I$.

Perhaps an example? Let $k = \mathbb{Q}$ and $K=\mathbb{C}$ and take $x=(i,\pi)$ in the affine plane $\mathbb{C}^2$. What is the corresponding prime ideal $I$ of $\mathbb{Q}[z_1,z_2]$? Well, $i$ is a solution to $z_1^2+1=0$ whereas $\pi$ is transcendental over $\mathbb{Q}$, so $I=(z_1^2+1)$ and $R=\mathbb{Q}[z_1,z_2]/I= \mathbb{Q}(i)[z_2]$. Is $x=(i,\pi)$ a generic point? Well, suppose it were, then the points of the corresponding affine variety $V$ would be all couples $(\pm i, \lambda)$ with $\lambda \in \mathbb{C}$, which is the union of two lines in $\mathbb{C}^2$. But then $i \otimes 1 + 1 \otimes i$ is a zero-divisor in $\mathbb{Q}(x) \otimes_{\mathbb{Q}} \mathbb{Q}(i)$. So no, it is not a generic point over $\mathbb{Q}$ and does not define an affine $\mathbb{Q}$-variety. If we would have started with $k=\mathbb{Q}(i)$, then $x=(i,\pi)$ is generic and the corresponding affine variety $V$ consists of all points $(i,\lambda) \in \mathbb{C}^2$.

If this is new to you, consider yourself lucky to be young enough to have learned AG from Fulton's Algebraic curves, or Hartshorne's chapter 1 if you were that ambitious. By 1955, Serre had written his FAC, and Bourbaki had developed enough commutative algebra to turn His attention to algebraic geometry.

La Ciotat congress (February 27th – March 6th, 1955)

With a splendid view on the Mediterranean, a small group of Bourbaki members (Henri Cartan (then 51), with two of his former Ph.D. students: Jean-Louis Koszul (then 34), and Jean-Pierre Serre (then 29, and fresh Fields medallist), Jacques Dixmier (then 31), and Pierre Samuel (then 34), a former student of Zariski's) discussed a previous 'Rapport de Geometrie Algebrique' (no. 206) and arrived at some unanimous decisions:
1. Algebraic varieties must be sets of points, which will not change at every moment.
2. One should include 'abstract' varieties, obtained by gluing (fibres, etc.).
3. All necessary algebra must have been previously proved.
4. The main application of purely algebraic methods being characteristic p, we will hide nothing of the unpleasant phenomena that occur there.

(Henri Cartan and Jean-Pierre Serre, photo by Paul Halmos)

The approach they propose is clearly based on Serre's FAC. The points of an affine variety are the maximal ideals of an affine $k$-algebra; this set is equipped with the Zariski topology such that the local rings form a structure sheaf. Abstract varieties are then constructed by gluing these topological spaces and sheaves.

At the insistence of the 'specialistes' (Serre, and Samuel who had just written his book 'Méthodes d'algèbre abstraite en géométrie algébrique') two additional points are adopted, but with some hesitation. The first being a jibe at Weil:

1. …The congress, being a little disgusted by the artificiality of the generic point, does not want $K$ to be always of infinite transcendent degree over $k$. It admits that generic points are convenient in certain circumstances, but refuses to see them put to all the sauces: one could speak of a coordinate ring or of a function field without stuffing it by force into $K$.
2. Trying to include the arithmetic case.

The last point was problematic as all their algebras were supposed to be affine over a field $k$, and they wouldn't go further than to allow the overfield $K$ to be its algebraic closure. Further (and this caused a lot of heavy discussions at coming congresses) they allowed their varieties to be reducible.

The Chicago congress (May 30th – June 2nd 1955)

Apart from Samuel, a different group of Bourbakis gathered for the 'second Caucus des Illinois' at Eckhart Hall, including three founding members, Weil (then 49), Dieudonné (then 49) and Chevalley (then 46), and two youngsters, Armand Borel (then 32) and Serge Lang (then 28).

Their reaction to the La Ciotat meeting (the 'congress of the public bench') was swift:

(page 1): "The caucus discovered a public bench near Eckhart Hall, but didn't do much with it."

(page 2): "The caucus did not judge La Ciotat's plan beyond reproach, and proposed a completely different plan."

They wanted to include the arithmetic case by defining as affine scheme the set of all prime ideals (or rather, the localisations at these prime ideals) of a finitely generated domain over a Dedekind domain. They continue:

(page 4): "The notion of a scheme covers the arithmetic case, and is extracted from the illustrious works of Nagata, themselves inspired by the scholarly cogitations of Chevalley. This means that the latter managed to sell all his ideas to the caucus. The Pope of Chicago, very happy to be able to reject very far projective varieties and Chow coordinates, willingly rallied to the suggestions of his illustrious colleague. However, we have not attempted to define varieties in the arithmetic case. Weil's principle is that it is unclear what will come out of Nagata's tricks, and that the only stable thing in arithmetic theory is reduction modulo $p$ a la Shimura."

"Contrary to the decisions of La Ciotat, we do not want to glue reducible stuff, nor call them varieties. … We even decide to limit ourselves to absolutely irreducible varieties, which alone will have the right to the name of varieties."
The insistence on absolute irreducibility is understandable from Weil's perspective as only they will have a generic point. But why does he go along with Chevalley's proposal of an affine scheme?

In Weil's approach, a point of the affine variety $V$ determined by a generic point $x=(x_1,\dots,x_n)$ determines a prime ideal $Q$ of the domain $R=k[x_1,\dots,x_n]$, so Chevalley's proposal to consider all prime ideals (rather than only the maximal ideals of an affine algebra) seems right to Weil. However, in Weil's approach there are usually several points corresponding to the same prime ideal $Q$ of $R$, namely all possible embeddings of the ring $R/Q$ in that huge field $K$, so whenever $R/Q$ is not algebraic over $k$, there are infinitely many Weil-points of $V$ corresponding to $Q$ (whence the La Ciotat criticism that points of a variety were not supposed to change at every moment).

According to Ralf Krömer in his book Tool and Object – a history and philosophy of category theory, this shift from Weil-points to prime ideals of $R$ may explain Chevalley's use of the word 'scheme':

(page 164): "The 'scheme of the variety' denotes 'what is invariant in a variety'."

Another time we will see how internal discussion influenced the further Bourbaki congresses until Grothendieck came up with his 'hyperplan'.

The birthplace of schemes

Wikipedia claims: "The word scheme was first used in the 1956 Chevalley Seminar, in which Chevalley was pursuing Zariski's ideas." and refers to the lecture by Chevalley 'Les schemas', given on December 12th, 1955 at the ENS-based 'Seminaire Henri Cartan' (in fact, that year it was called the Cartan-Chevalley seminar, and the next year Chevalley set up his own seminar at the ENS).

Items recently added to the online Bourbaki Archive give us new information on time and place of the birth of the concept of schemes.

From May 30th till June 2nd 1955 the 'second caucus des Illinois' Bourbaki-congress was held in 'le grand salon d'Eckhart Hall' at the University of Chicago (Weil's place at that time). Only six of the Bourbaki members were present:

Jean Dieudonne (then 49), the scribe of the Bourbaki-gang.

Andre Weil (then 49), called 'Le Pape de Chicago' in La Tribu, and responsible for his 'Foundations of Algebraic Geometry'.

Claude Chevalley (then 46), who wanted a better, more workable version of algebraic geometry. He was just nominated professor at the Sorbonne, and was prepping for his seminar on algebraic geometry (with Cartan) in the fall.

Pierre Samuel (then 34), who studied in France but got his Ph.D. in 1949 from Princeton under the supervision of Oscar Zariski. He was a Bourbaki-guinea pig in 1945, and from 1947 attended most Bourbaki congresses. He just got his book Methodes d'algebre abstraite en geometrie algebrique published.

Armand Borel (then 32), a Swiss mathematician who was in Paris from 1949 and obtained his Ph.D. under Jean Leray before moving on to the IAS in 1957. He was present at 9 of the Bourbaki congresses between 1955 and 1960.

Serge Lang (then 28), a French-American mathematician who got his Ph.D. in 1951 from Princeton under Emil Artin. In 1955, he just got a position at the University of Chicago, which he held until 1971. He attended 7 Bourbaki congresses between 1955 and 1960.

The issue of La Tribu of the Eckhart-Hall congress is entirely devoted to algebraic geometry, and starts off with a bang:

"The Caucus did not judge the plan of La Ciotat above all reproaches, and proposed a completely different plan.
I – Schemes
II – Theory of multiplicities for schemes
III – Varieties
IV – Calculation of cycles
V – Divisors
VI – Projective geometry"

In the spring of that year (February 27th – March 6th, 1955) a Bourbaki congress was held 'Chez Patrice' at La Ciotat, hosting a different group of Bourbaki members (Samuel was the singleton intersection): Henri Cartan (then 51), Jacques Dixmier (then 31), Jean-Louis Koszul (then 34), and Jean-Pierre Serre (then 29, and fresh Fields medallist).

In the La Ciotat-Tribu, nr. 35, there are also a great number of pages (page 14 – 25) used to explain a general plan to deal with algebraic geometry. Their summary (page 3-4):

"Algebraic Geometry: She has a very nice face.
Chap I: Algebraic varieties
Chap II: The rest of Chap. I
Chap III: Divisors
Chap IV: Intersections"

There's much more to say comparing these two plans, but that'll be for another day. We've just read the word 'schemes' for the first (?) time. That unnumbered La Tribu continues on page 3 with "where one explains what a scheme is":

So, what was their first idea of a scheme? Well, you had your favourite Dedekind domain $D$, and you considered all rings of finite type over $D$. Sorry, not all rings, just all domains, because such a ring $R$ had to have a field of fractions $K$ which was of finite type over $k$, the field of fractions of your Dedekind domain $D$. They say that Dedekind domains are the algebraic geometrical equivalent of fields. Yeah well, as they only consider $D$-rings, the geometric object associated to $D$ is the terminal object, much like a point if $D$ is an algebraically closed field.

But then, what is this geometric object associated to a domain $R$? In this stage, still under the influence of Weil's focus on valuations and their specialisations, they (Chevalley?) take as the geometric object $\mathbf{Spec}(R)$, the set of all 'spots' (taches), that is, local rings in $K$ which are the localisations of $R$ at prime ideals. So, instead of taking the set of all prime ideals, they prefer to take the set of all stalks of the (coming) structure sheaf.

But then, speaking about sheaves is rather futile, as there is at this stage no trace of any topology on this set. Also, they make a big fuss about not wanting to define a general schema by gluing together these 'affine' schemes, but then they introduce a notion of 'apparentement' of spots which basically means the same thing.

It is still very early days, and there's a lot more to say on this, but if no further documents come to light, I'd say that the birthplace of 'schemes', that is, the place where for the first time there was a documented consensus on the notion, is Eckhart Hall in Chicago.
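(Editorial aside, not part of the posts.) Looping back to the worked example in the first post above, the point $x=(i,\pi)$ over $k=\mathbb{Q}$: the zero-divisor claim can be checked symbolically. Under the standard identification $\mathbb{Q}(i)\otimes_{\mathbb{Q}}\mathbb{Q}(i)\cong\mathbb{Q}(i)[x]/(x^2+1)$ (supplied here for illustration; it sits inside $\mathbb{Q}(x)\otimes_{\mathbb{Q}}\mathbb{Q}(i)$ as a subring), the element $i\otimes 1+1\otimes i$ becomes $x+i$, and a short SymPy session confirms that it multiplies the nonzero element $x-i$ to zero:

```python
import sympy as sp

x = sp.symbols('x')
# Q(i) tensor_Q Q(i) is identified with Q(i)[x]/(x^2+1): i⊗1 -> x, 1⊗i -> i.
elem, partner = x + sp.I, x - sp.I
product = sp.expand(elem * partner)            # x**2 + 1
print(sp.rem(product, x**2 + 1, x))            # 0  -> elem is a zero-divisor
print(sp.rem(elem, x**2 + 1, x))               # x + I, i.e. elem itself is nonzero
```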
Why do single particle states furnish a rep. of the inhomogeneous Lorentz group?

Following up on this question: Weinberg says

In general, it may be possible by using suitable linear combinations of the $\psi_{p,\sigma}$ to choose the $\sigma$ labels in such a way that $C_{\sigma'\sigma}(\Lambda, p)$ is block-diagonal; in other words, so that the $\psi_{p,\sigma}$ with $\sigma$ within any one block by themselves furnish a representation of the inhomogeneous Lorentz group.

But why the inhomogeneous Lorentz group if, in the first place, we performed a homogeneous Lorentz transformation on the states, via $U(\Lambda)$? I also want to be clear what is meant by the states "furnishing" a representation.

Regarding the above confusion, the same scenario again shows up during the discussion on the little group. Here's a little background: $k$ is a "standard" 4-momentum, so that we can express any arbitrary 4-momentum $p$ as $p^{\mu} = L^{\mu}_{\nu}(p) k^{\nu}$, where $L$ is a Lorentz transformation dependent on $p$. We consider the subgroup of Lorentz transformations $W$ that leave $k$ invariant (the little group), and find that:

$U(W)\psi_{k \sigma} = \sum_{\sigma'} D_{\sigma' \sigma}(W)\psi_{k \sigma'}$.

Then he says:

The coefficients $D(W)$ furnish a representation of the little group; i.e., for any elements $W$ and $W'$, we get $D_{\sigma' \sigma}(W'W) = \sum_{\sigma''}D_{\sigma' \sigma''}(W)D_{\sigma''\sigma}(W')$.

So is it that even in the first part about the Lorentz group, the $C$ matrices furnish the representation and not $\psi$? Also, for the very simplified case if $C_{\sigma'\sigma}(\Lambda, p)$ is completely diagonal, would I be correct in saying the following in such a case, for any $\sigma$?

$$U(\Lambda)\psi_{p,\sigma} = k_{\sigma}(\Lambda, p)\psi_{\Lambda p, \sigma}$$

Only in this case it is clear to me that $U(\Lambda)$ forms a representation of the Lorentz group, since the $\psi_{p,\sigma}$ are mapped to $\psi_{\Lambda p, \sigma}$.

Tags: quantum-field-theory, special-relativity, group-representations, poincare-symmetry

Comments:

Just a tip: no need to mark the questions as follow-up questions so boldly. I've made some cosmetic edits to this and your last question, but if I changed the meaning anywhere from what you wanted to ask, please do fix it. :-) – David Z♦, May 25 '13 at 6:33

A representation is a vector space with a group action attached to it. In linear algebra, it's a bunch of vectors $v^i$ which move around under the group action, i.e. when $g$ is a group element then there is a matrix $D(g)^i{}_j$ which acts on them as $D(g)^i{}_j v^j$. In QM/QFT, the vector space is spanned by states $|\psi(p,\sigma)\rangle$ (or whatever notation Weinberg uses). That's what he means by "furnishing" a rep. – Vibert, May 25 '13 at 6:51

@Vibert: That's what I thought. But the $U$ matrices map the states $\psi$ to Lorentz-transformed states, so then it should be the $C_{\sigma \sigma'}$ matrices that furnish a representation of the Lorentz group. I'm confused why Weinberg says that the states $\psi$ are the ones furnishing such a representation. (See the edited question) – 1989189198, May 26 '13 at 5:14

OK, I see, it's a matter of language. Formally a representation is a linear map from the group to your vector space, so in this case the map $W \mapsto D(W)$. But when you use this construction, of course it depends on both the states $\psi$ and the Lorentz matrices acting on them.
That's why in physics we use the term representation in a sloppier way than our mathematician friends. – Vibert, May 26 '13 at 7:02

Answer (Trimok):

In the inhomogeneous Lorentz group $ISO(1,3)$, you have the space-time translation group $\mathbb{R}^{1,3}$, and the Lorentz group $SO(1,3)$. You begin to find a representation of the space-time translation group by choosing a momentum $p$. So your representation must have a $p$ index,
$$\psi_p \, .$$
After this, you will have to get the full representation by finding a representation of the Lorentz group compatible with the momentum $p$; this will add another index $\sigma$ which corresponds to the polarization, so you will have a representation,
$$\psi_{p, \sigma} \, ,$$
which is the representation of the inhomogeneous Lorentz group.

Answer (Art Brown):

Re the meaning of representation, here is a definition from Peter Woit's "Quantum Mechanics for Mathematicians" lecture notes (available on-line), section 1.3:

Definition (Representation). A (complex) representation ($\pi, V$) of a group $G$ is a homomorphism
$$ \pi: g \in G \rightarrow \pi(g) \in GL(V) $$
where $GL(V)$ is the group of invertible linear maps $V \rightarrow V$, with $V$ a complex vector space. Saying a map is a homomorphism means
$$ \pi(g_1) \pi(g_2) = \pi(g_1g_2) $$
When $V$ is finite dimensional and we have chosen a basis of $V$, then we have an identification of linear maps and matrices
$$ GL(V) \simeq GL(n,\boldsymbol{C}) $$
where $GL(n,\boldsymbol{C})$ is the group of invertible $n$ by $n$ complex matrices.

So the representation is the homomorphism (the operation-preserving map) from the group $U(\Lambda)$ to the transformation matrices (Weinberg's C's and D's), but these matrices require a vector space (the $\psi$s) on which to act.

For the rest, here's my answer (caveat emptor, I'm just a slow student): This section 2.5 is titled "One Particle States". If $C$ turns out to be reducible (block-diagonalizable), the different blocks are independent of one another (no mixing between blocks) and are interpreted as different particle species. So, for a single particle state a single irreducible block is assumed. In this argument it's OK to generalize from homogeneous to inhomogeneous transformations, because translations don't mix $\sigma$'s and hence don't affect the block structure of $C$:
$$U(1,a) \Psi_{p,\sigma} = e^{-ip\cdot a} \Psi_{p,\sigma} $$
$(A,B)$-Representation of Lorentz Group: Coefficient functions of fields Representations of Lorentz group in interacting QFT Weinberg QFT (2.5.5) Does there exist finite dimensional irreducible rep. of Poincare group where translations act nontrivially? Expansion coefficients of an arbitrary state in the Hilbert space of one-particle states Weinberg's classification of one-particle states and representations of the Poincare group
Rigorous security proof for Wiesner's quantum money

In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either
$$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$
The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes.

On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$).

However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one.

So, has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme?

I should maybe clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say on the basis
$$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$
Or is there an entangled counterfeiting strategy that does better?

Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).

Tags: algorithm, mathematics, cryptography, cryptocurrency, quantum-money — asked by DIDIx13

Comment: No, $(5/8)^n$ is not the right answer.
– Peter Shor, Apr 4 '18 at 2:53

Answer (John Watrous):

Abel Molina, Thomas Vidick, and I proved that the correct answer is $c=3/4$ in this paper:

A. Molina, T. Vidick, and J. Watrous. Optimal counterfeiting attacks and generalizations for Wiesner's quantum money. Proceedings of the 7th Conference on Theory of Quantum Computation, Communication, and Cryptography, volume 7582 of Lecture Notes in Computer Science, pages 45–64, 2013. (See also arXiv: 1202.4010.)

This assumes the counterfeiter uses what we call a "simple counterfeiting attack," which means a one-shot attempt to transform one copy of a money state into two. (I interpret your question to be about such attacks.) The attack of Brodutch, Nagaj, Sattath, and Unruh that @Rob referred to (and which is a fantastic result in my opinion) requires the counterfeiter to interact repeatedly with the bank and assumes the bank will provide the counterfeiter with the same money state after each verification.

The paper describes the optimal channel, which is not an entanglement breaking (i.e., measure and prepare) channel. It's an example of a cloner, and explicitly it looks like this:
$$ \Phi(\rho) = A_0 \rho A_0^{\dagger} + A_1 \rho A_1^{\dagger} $$
where
$$ A_0 = \frac{1}{\sqrt{12}} \begin{pmatrix} 3 & 0\\ 0 & 1\\ 0 & 1\\ 1 & 0 \end{pmatrix} \quad\text{and}\quad A_1 = \frac{1}{\sqrt{12}} \begin{pmatrix} 0 & 1\\ 1 & 0\\ 1 & 0\\ 0 & 3 \end{pmatrix}. $$
For different sets of money states and figures of merit, you may end up with different optimal values and cloners. For example, if the money states also include $| 0\rangle \pm i |1\rangle$, then the Bužek-Hillery cloner is optimal and the correct value of $c$ drops to 2/3.

Another answer:

"I'm looking for an explicit upper bound on the probability of successful counterfeiting ...".

In "An adaptive attack on Wiesner's quantum money", by Aharon Brodutch, Daniel Nagaj, Or Sattath, and Dominique Unruh, last revised on 10 May 2016, the authors claim a success rate of: "~100%".

The paper makes these claims:

Main results. We show that in a strict testing variant of Wiesner's scheme (that is, if only valid money is returned to the owner), given a single valid quantum money state $(s,\left|\$_s\right>)$, a counterfeiter can efficiently create as many copies of $\left|\$_s\right>$ as he wishes (hence, the scheme is insecure). He can rely on the quantum Zeno effect for protection – if he disturbs the quantum money state only slightly, the bill is likely to be projected back to the original state after a test. Interestingly, this allows a counterfeiter to distinguish the four different qubit states with an arbitrarily small probability of being caught.

In this paper, we have focused on Wiesner's money in a noiseless environment. That is, the bank rejects the money if even a single qubit is measured incorrectly. In a more realistic setting, we have to deal with noise, and the bank would want to tolerate a limited amount of errors in the quantum state [PYJ+12], say 10%.

Also see:
Can a merchant who accepts a knot-based quantum coin mint her own knot-based coin? Do we have to trust the bank in "Quantum Money from Hidden Subspaces?" How many bits do Alice and Bob needs to compare to make sure the channel is secure in BB84? Intuitive Proof: BQP ⊆ PP Changing qubits coefficients to trigonometric functions in Grover Algorithm In the BB84 protocol, for what error thresholds can Alice and Bob not establish secure bits?
What famous theorems or results were proven by female mathematicians?

We know that there were/are many famous female mathematicians who influenced the mathematics as we know it today, but their numbers are few compared to male mathematicians. While we have numerous famous results by many male mathematicians like Gauss, Euler and many others, what are famous results bearing the name of a female mathematician which also have a very deep impact on our understanding of mathematics?

Asked by Kushal Bhuyan

Comments:

en.wikipedia.org/wiki/Hypatia – rickz

I started thinking about all the important theorems named "Somebody's theorem/lemma/etc." that I know, and realized that, for a substantial fraction of them, I actually don't know the gender of the person they're named after, at least not beyond the default cultural assumption that a majority of them are probably male. Even knowing their given name doesn't always help, if it's non-gender-specific or from a culture whose names I don't easily recognize as male or female specific. – Ilmari Karonen

The MRDP theorem solved Hilbert's tenth problem (to the negative). The R in MRDP references Julia Robinson. – David Hammen

The question is trivial. Easy search on the internet gives plenty of examples. Therefore I vote to close.

@R.. that may be so, but it seems people want to close this one because they are uncomfortable with the question. I've researched it, and while there are lists of women mathematicians, and famous ones, there is little which places their achievements in the context of all mathematics, or their results in the context of all "famous results". I would post a proper answer but I don't have the rep. – Ben
$\endgroup$ $\begingroup$ Actually I am not accepting Noether as the only female mathematician of highest class, I know every other female mathematicians mentioned in the answers are also as important as Noether, but I can only accept one answer and when I accepted this answer there were only 2 answers, now it grows to 10, and I am unable to accept anymore so I give +1 to every other. @IlmariKaronen $\endgroup$ – Kushal Bhuyan Perhaps because of its youth, the mathematical end of Computer Science has several notable women in its history. Sheila Greibach was a pioneer in the field of formal language theory, particularly in the area of context-free languages. At the time, that would have been considered more a branch of mathematics, as Computer Science wasn't really a thing of its own. In particular, she developed Greibach Normal Form, which is fairly instrumental in the theory of parsing, which is extremely critical to modern programming languages. Continuing down programming language theory, Barbara Liskov developed the Liskov Substitution Principle, which was critical in developing a formalized model for object-oriented languages. She won the Turing Award (CS equivalent of the Fields medal) for her contributions. Did these have a "deep impact" on our understanding of mathematics? Not in the classical sense, but they've led to some amazing developments, arguably as many as the 400 years of calculus/analysis theory. jmitejmite $\begingroup$ I'd like to point out that Liskov's LSP is still used as a design rule in modern object-oriented programming. It is the thing you have to follow if you want polymorphism to make any kind of sense. $\endgroup$ $\begingroup$ You forgot Grace Hopper, who in 1969 won the inaugural (and incongruously named) Man of the Year award from the Data Processing Management Association. $\endgroup$ $\begingroup$ @DavidHammen they also forgot Ada Lovelace. Not any theorem holds her name (at far as I know) but a famous programming language instead. $\endgroup$ – ypercubeᵀᴹ $\begingroup$ Wow, while knowing the principle I never knew that the Liskov substitution principle is named after a woman. (Well, I didn't really care, in the same sense that I don't know the gender or nationality of most other people theorems are named after.) $\endgroup$ – Josef says Reinstate Monica $\begingroup$ @ypercube: While Ada Lovelace is mostly noted as the world's first programmer, her most important contribution to computing is the invention of subroutines. Babbage wasn't convinced about the usefulness of subroutines (functions) when Lovelace described it to him until she demonstrated an example program that made good use of subroutines. So Babbage added hardware that made it possible to return from a jump. While the theoretical foundations of functions came from mathematics, the practical use of it in computer hardware was introduced by Ada Lovelace $\endgroup$ There is Sophie Germain's theorem, a theorem in number theory, related to Fermat's last theorem and proved by the French mathematician Sophie Germain (1776-1831). ypercubeᵀᴹypercubeᵀᴹ $\begingroup$ Worth noting that for a while "She used the name of a former student Monsieur Antoine-August Le Blanc (...) fearing the ridicule attached to a female scientist". Maybe other great mathematicians were ladies as well : )) $\endgroup$ – moonwave99 There is the work by Ada Lovelace. 
In the annotations, which were called "Notes", Ada Lovelace described how the analytical engine could be programmed and gave what many consider to be the first ever computer program. In particular, she found and corrected a bug in Babbage's algorithm for computing Bernoulli numbers:

We discussed together the various illustrations that might be introduced: I suggested several, but the selection was entirely her own. So also was the algebraic working out of the different problems, except, indeed, that relating to the numbers of Bernoulli, which I had offered to do to save Lady Lovelace the trouble. This she sent back to me for an amendment, having detected a grave mistake which I had made in the process. (from C Babbage, Passages from the Life of a Philosopher (London, 1864).)

Of course, the programming language Ada is named after her.

Answer:

I found an existence theorem for the Cauchy Problem in partial differential equations which was proven by Sofia Vasilyevna Kovalevskaya.

Comment:

I'm glad to see her here. I was about to add her, but I first checked all the answers. (Jan 8 at 18:11)

Answer (Manjil P. Saikia):

The following would be my top picks:

The Sophie Germain identity says that $a^4+4b^4=(a^2+2b^2+2ab)(a^2+2b^2-2ab)$ for $a, b \in \mathbb{Z}$. This is a very simple identity but is quite useful in many problems of elementary number theory.

The Noether normalization lemma is a result in commutative algebra that is taught probably in the very first week of a graduate level course in algebraic geometry. One version of the result says that, for any field $\mathbb{K}$ and any f.g. commutative $\mathbb{K}$-algebra $A$, there exists a non-negative integer $k$ and algebraically independent elements $y_1, y_2, \ldots, y_k \in A$ such that $A$ is a f.g. module over the ring $\mathbb{K}[y_1, y_2, \ldots, y_k]$.

Answer (wythagoras):

Olga Ladyzhenskaya proved a result related to the Navier-Stokes equations. The result by itself is not very famous, but the Navier-Stokes equations are.

Answer (Anniepoo):

Danica McKellar is the McKellar in the Chayes-McKellar-Winn theorem.

Comments:

I'm not sure that this satisfies the fame requirement in the question.
Her mathematical results not only made their way into industrial applications, but strongly modified the way people analyse data, in a multiscale (zoom-in/zoom-out) fashion. Laurent Duval ... and, just to quibble, she is still alive and working! :) No past tense necessary! :) – paul garrett The question was... past. – Laurent Duval There are no female mathematicians who have had quite as much impact as Gauss or Euclid, for example, but this is to be expected for historical reasons with which everyone is familiar. A quick google will have told the questioner that there are many important female mathematicians in history, but I think the question is asking for a really important mathematician, or at least a well known mathematician, like Galois, who has Galois theory named after him, or Hilbert, who has Hilbert spaces named after him. The first person who I thought of when I read this question was Emmy Noether, who, in terms of fame, isn't quite Newton (obviously) but is at least Galois. You probably aren't going to see the names of any women in the titles of any undergrad maths courses, but if you are, it will probably be Noether. It is possible that many ancient mathematicians were actually women. This is certainly the case for Egypt but probably less so for Greece (although who knows?) Even in more modern times, a few women have worked under male pseudonyms and there may be many more that we do not know about (although it is unlikely that any of the 'big name' mathematicians like Newton and Euler were actually women because their lives have been well documented). It is also possible (in fact very likely) that work by women has been plagiarised by men, so some theorems named after males may have actually been developed by females; we will probably never know how many. JohnMill Hypatia of Alexandria was a mathematician. – ch7kor Hypatia of Alexandria (AD 350 or 370-AD 415) She produced several works, the most significant of which included her commentaries on the Greek textbook Arithmetica and On the Conics of Apollonius. She is remembered especially for her detailed description of the early hydrometer. Émilie du Châtelet (1706-1749) A French physicist, mathematician and writer during the Enlightenment era in Europe. In 1740, Châtelet published a book on philosophy and science called Institutions de Physique and later translated and commented on Newton's Principia Mathematica; her translation remains its best-known French version. Maria Agnesi (1718-1799) She wrote a mathematics book that still survives, known in English as Analytical Institutions for the Use of Italian Youth. Another pioneering contribution was the Witch of Agnesi, a curve for which she wrote the equation. Sophie Germain (1776-1831) Sophie Germain's paper on elasticity theory made her, in 1816, the first woman to win a prize from the Paris Academy of Sciences. She also made major early contributions to work on Fermat's Last Theorem. Ada Lovelace (1815-1852) When asked to translate a memoir on Charles Babbage's Analytical Engine, Lovelace went ahead and added her own comments and notes, including a method for calculating a sequence of Bernoulli numbers: what is today known as the world's first ever computer program, subsequently making Lovelace renowned as the world's first computer programmer.
Sofia Kovalevskaya (1850-1891) She proved the general form of the Cauchy-Kovalevskaya theorem in 1875, discovered the Kovalevskaya top, and published ten papers in mathematics and mathematical physics. Emmy Noether (1882-1935) Emmy Noether is famous for Noether's theorem, which clarifies the relationship between conservation laws and symmetry, as well as for her work on Noetherian rings, which changed the foundations of abstract algebra. Noether is also famous for other work on non-commutative algebras, hyper-complex numbers and commutative rings. Mary Cartwright (1900-1998) She authored over 100 papers, which include her work on level curves, functions in the unit disk, topology and ordinary differential equations, among others. Julia Robinson (1919-1985) She is well regarded for her work on Hilbert's tenth problem and decision problems. Shafi Goldwasser (1958-present) Her research focuses on zero-knowledge proofs, complexity theory, computational number theory and cryptography. Big Brother See about Elizabeth Williams as well. – Big Brother To continue on jmite's excellent answer, Nancy Lynch is a pioneer in the theory of "Distributed Systems" in Computer Science. For example, her work with Michael J. Fischer and Mike Paterson showed that "In an asynchronous distributed system, consensus is impossible if there is one processor that crashes", which is a fundamental result in the field. Islam Hassan Karen Uhlenbeck made huge contributions to gauge theory and more.
CommonCrawl
MSC Classifications MSC 2010: Mathematical Logic and Foundations 13 results in 03-XX PROBABILISTIC ENTAILMENT ON FIRST ORDER LANGUAGES AND REASONING WITH INCONSISTENCIES Mathematical Logic and Foundations General logic SOROUSH RAFIEE RAD Journal: The Review of Symbolic Logic, First View Published online by Cambridge University Press: 07 July 2022, pp. 1-18 We investigate an approach for drawing logical inference from inconsistent premisses. The main idea in this approach is that the inconsistencies in the premisses should be interpreted as uncertainty of the information. We propose a mechanism, based on Knight's [14] study of inconsistency, for revising an inconsistent set of premisses to a minimally uncertain, probabilistically consistent one. We will then generalise the probabilistic entailment relation introduced in [15] for propositional languages to the first order case to draw logical inference from a probabilistic set of premisses. We will show how this combination can allow us to limit the effect of uncertainty introduced by inconsistent premisses to only the reasoning on the part of the premise set that is relevant to the inconsistency. MUTUAL INTERPRETABILITY OF WEAK ESSENTIALLY UNDECIDABLE THEORIES Proof theory and constructive mathematics ZLATAN DAMNJANOVIC Journal: The Journal of Symbolic Logic / Volume 87 / Issue 4 / December 2022 Published online by Cambridge University Press: 18 February 2022, pp. 1374-1395 Print publication: December 2022 Kristiansen and Murwanashyaka recently proved that Robinson arithmetic, Q, is interpretable in an elementary theory of full binary trees, T. We prove that, conversely, T is interpretable in Q by producing a formal interpretation of T in an elementary concatenation theory QT+, thereby also establishing mutual interpretability of T with several well-known weak essentially undecidable theories of numbers, strings, and sets. We also introduce a "hybrid" elementary theory of strings and trees, WQT*, and establish its mutual interpretability with Robinson's weak arithmetic R, the weak theory of trees WT of Kristiansen and Murwanashyaka, and the weak concatenation theory WTCε of Higuchi and Horihata. THE GENEALOGY OF ' $\mathbin {\boldsymbol {\vee }}$' History of mathematics and mathematicians LANDON D. C. ELKIND, RICHARD ZACH Published online by Cambridge University Press: 03 January 2022, pp. 1-38 The use of the symbol $\mathbin {\boldsymbol {\vee }}$ for disjunction in formal logic is ubiquitous. Where did it come from? The paper details the evolution of the symbol $\mathbin {\boldsymbol {\vee }}$ in its historical and logical context. Some sources say that disjunction in its use as connecting propositions or formulas was introduced by Peano; others suggest that it originated as an abbreviation of the Latin word for "or," vel. We show that the origin of the symbol $\mathbin {\boldsymbol {\vee }}$ for disjunction can be traced to Whitehead and Russell's pre-Principia work in formal logic. Because of Principia's influence, its notation was widely adopted by philosophers working in logic (the logical empiricists in the 1920s and 1930s, especially Carnap and early Quine). Hilbert's adoption of $\mathbin {\boldsymbol {\vee }}$ in his Grundzüge der theoretischen Logik guaranteed its widespread use by mathematical logicians. The origins of other logical symbols are also discussed.
GÖDEL'S THEOREM AND DIRECT SELF-REFERENCE SAUL A. KRIPKE Published online by Cambridge University Press: 02 December 2021, pp. 1-5 In his paper on the incompleteness theorems, Gödel seemed to say that a direct way of constructing a formula that says of itself that it is unprovable might involve a faulty circularity. In this note, it is proved that 'direct' self-reference can actually be used to prove his result. KURT GÖDEL ON LOGICAL, THEOLOGICAL, AND PHYSICAL ANTINOMIES Philosophical aspects of logic and foundations TIM LETHEN Journal: Bulletin of Symbolic Logic / Volume 27 / Issue 3 / September 2021 This paper presents hitherto unpublished writings of Kurt Gödel concerning logical, epistemological, theological, and physical antinomies, which he generally considered as "the most interesting facts in modern logic," and which he used as a basis for his famous metamathematical results. After investigating different perspectives on the notion of the logical structure of the antinomies and presenting two "antinomies of the intensional," a new kind of paradox closely related to Gödel's ontological proof for the existence of God is introduced and completed by a compilation of further theological antinomies. Finally, after a presentation of unpublished general philosophical remarks concerning the antinomies, Gödel's type-theoretic variant of Leibniz' Monadology, discovered in his notes on the foundations of quantum mechanics, is examined. Most of the material presented here has been transcribed from the Gabelsberger shorthand system for the first time. FREGE'S THEORY OF REAL NUMBERS: A CONSISTENT RENDERING General and miscellaneous specific topics FRANCESCA BOCCUNI, MARCO PANZA Journal: The Review of Symbolic Logic / Volume 15 / Issue 3 / September 2022 Frege's definition of the real numbers, as envisaged in the second volume of Grundgesetze der Arithmetik, is fatally flawed by the inconsistency of Frege's ill-fated Basic Law V. We restate Frege's definition in a consistent logical framework and investigate whether it can provide a logical foundation of real analysis. Our conclusion will deem it doubtful that such a foundation along the lines of Frege's own indications is possible at all. GÖDEL ON MANY-VALUED LOGIC Published online by Cambridge University Press: 22 February 2021, pp. 1-17 This paper collects and presents unpublished notes of Kurt Gödel concerning the field of many-valued logic. In order to get a picture as complete as possible, both formal and philosophical notes, transcribed from the Gabelsberger shorthand system, are included. CURRENT RESEARCH ON GÖDEL'S INCOMPLETENESS THEOREMS YONG CHENG Journal: Bulletin of Symbolic Logic / Volume 27 / Issue 2 / June 2021 We give a survey of current research on Gödel's incompleteness theorems from the following three aspects: classifications of different proofs of Gödel's incompleteness theorems, the limit of the applicability of Gödel's first incompleteness theorem, and the limit of the applicability of Gödel's second incompleteness theorem. THE POTENTIAL IN FREGE'S THEOREM WILL STAFFORD Is a logicist bound to the claim that as a matter of analytic truth there is an actual infinity of objects? If Hume's Principle is analytic then in the standard setting the answer appears to be yes. Hodes's work pointed to a way out by offering a modal picture in which only a potential infinity was posited. However, this project was abandoned due to apparent failures of cross-world predication. 
We re-explore this idea and discover that in the setting of the potential infinite one can interpret first-order Peano arithmetic, but not second-order Peano arithmetic. We conclude that in order for the logicist to weaken the metaphysically loaded claim of necessary actual infinities, they must also weaken the mathematics they recover. LINGUA CHARACTERICA AND CALCULUS RATIOCINATOR: THE LEIBNIZIAN BACKGROUND OF THE FREGE-SCHRÖDER POLEMIC JOAN BERTRAN-SAN MILLÁN Journal: The Review of Symbolic Logic / Volume 14 / Issue 2 / June 2021 After the publication of Begriffsschrift, a conflict erupted between Frege and Schröder regarding their respective logical systems which emerged around the Leibnizian notions of lingua characterica and calculus ratiocinator. Both of them claimed their own logic to be a better realisation of Leibniz's ideal language and considered the rival system a mere calculus ratiocinator. Inspired by this polemic, van Heijenoort (1967b) distinguished two conceptions of logic—logic as language and logic as calculus—and presented them as opposing views, but did not explain Frege's and Schröder's conceptions of the fulfilment of Leibniz's scientific ideal. In this paper I explain the reasons for Frege's and Schröder's mutual accusations of having created a mere calculus ratiocinator. On the one hand, Schröder's construction of the algebra of relatives fits with a project for the reduction of any mathematical concept to the notion of relative. From this stance I argue that he deemed the formal system of Begriffsschrift incapable of such a reduction. On the other hand, first I argue that Frege took Boolean logic to be an abstract logical theory inadequate for the rendering of specific content; then I claim that the language of Begriffsschrift did not constitute a complete lingua characterica by itself, more being seen by Frege as a tool that could be applied to scientific disciplines. Accordingly, I argue that Frege's project of constructing a lingua characterica was not tied to his later logicist programme. WITTGENSTEIN'S ELIMINATION OF IDENTITY FOR QUANTIFIER-FREE LOGIC TIMM LAMPERT, MARKUS SÄBEL Journal: The Review of Symbolic Logic / Volume 14 / Issue 1 / March 2021 Published online by Cambridge University Press: 25 June 2020, pp. 1-21 Print publication: March 2021 One of the central logical ideas in Wittgenstein's Tractatus logico-philosophicus is the elimination of the identity sign in favor of the so-called "exclusive interpretation" of names and quantifiers requiring different names to refer to different objects and (roughly) different variables to take different values. In this paper, we examine a recent development of these ideas in papers by Kai Wehmeier. We diagnose two main problems of Wehmeier's account, the first concerning the treatment of individual constants, the second concerning so-called "pseudo-propositions" (Scheinsätze) of classical logic such as $a=a$ or $a=b \wedge b=c \rightarrow a=c$ . We argue that overcoming these problems requires two fairly drastic departures from Wehmeier's account: (1) Not every formula of classical first-order logic will be translatable into a single formula of Wittgenstein's exclusive notation. Instead, there will often be a multiplicity of possible translations, revealing the original "inclusive" formulas to be ambiguous. 
(2) Certain formulas of first-order logic such as $a=a$ will not be translatable into Wittgenstein's notation at all, being thereby revealed as nonsensical pseudo-propositions which should be excluded from a "correct" conceptual notation. We provide translation procedures from inclusive quantifier-free logic into the exclusive notation that take these modifications into account and define a notion of logical equivalence suitable for assessing these translations. THE DEVELOPMENT OF GÖDEL'S ONTOLOGICAL PROOF ANNIKA KANCKOS, TIM LETHEN Journal: The Review of Symbolic Logic / Volume 14 / Issue 4 / December 2021 Published online by Cambridge University Press: 20 September 2019, pp. 1011-1029 Gödel's ontological proof is by now well known based on the 1970 version, written in Gödel's own hand, and Scott's version of the proof. In this article new manuscript sources found in Gödel's Nachlass are presented. Three versions of Gödel's ontological proof have been transcribed, and completed from context as true to Gödel's notes as possible. The discussion in this article is based on these new sources and reveals Gödel's early intentions of a liberal comprehension principle for the higher order modal logic, an explicit use of second-order Barcan schemas, as well as seemingly defining a rigidity condition for the system. None of these aspects occurs explicitly in the later 1970 version, and therefore they have long been in focus of the debate on Gödel's ontological proof. DE ZOLT'S POSTULATE: AN ABSTRACT APPROACH EDUARDO N. GIOVANNINI, EDWARD H. HAEUSLER, ABEL LASSALLE-CASANAVE, PAULO A. S. VELOSO A theory of magnitudes involves criteria for their equivalence, comparison and addition. In this article we examine these aspects from an abstract viewpoint, by focusing on the so-called De Zolt's postulate in the theory of equivalence of plane polygons ("If a polygon is divided into polygonal parts in any given way, then the union of all but one of these parts is not equivalent to the given polygon"). We formulate an abstract version of this postulate and derive it from some selected principles for magnitudes. We also formulate and derive an abstract version of Euclid's Common Notion 5 ("The whole is greater than the part"), and analyze its logical relation to the former proposition. These results prove to be relevant for the clarification of some key conceptual aspects of Hilbert's proof of De Zolt's postulate, in his classical Foundations of Geometry (1899). Furthermore, our abstract treatment of this central proposition provides interesting insights for the development of a well-behaved theory of compatible magnitudes.
CommonCrawl
Genome-wide association studies dissect the genetic networks underlying agronomical traits in soybean Chao Fang1,11, Yanming Ma1, Shiwen Wu2, Zhi Liu1,11, Zheng Wang1, Rui Yang1, Guanghui Hu3, Zhengkui Zhou4, Hong Yu2, Min Zhang1, Yi Pan1, Guoan Zhou1, Haixiang Ren5, Weiguang Du6, Hongrui Yan7, Yanping Wang5, Dezhi Han6, Yanting Shen1,11, Shulin Liu1,11, Tengfei Liu1,11, Jixiang Zhang1,11, Hao Qin2, Jia Yuan2, Xiaohui Yuan8, Fanjiang Kong9, Baohui Liu9, Jiayang Li2, Zhiwu Zhang10, Guodong Wang2,11, Baoge Zhu1 & Zhixi Tian ORCID: orcid.org/0000-0001-6051-9670 1,11 Genome Biology volume 18, Article number: 161 (2017) Soybean (Glycine max [L.] Merr.) is one of the most important oil and protein crops. Ever-increasing soybean consumption necessitates the improvement of varieties for more efficient production. However, both correlations among different traits and genetic interactions among genes that affect a single trait pose a challenge to soybean breeding. To understand the genetic networks underlying phenotypic correlations, we collected 809 soybean accessions worldwide and phenotyped them for two years at three locations for 84 agronomic traits. Genome-wide association studies identified 245 significant genetic loci, among which 95 genetically interacted with other loci. We determined that 14 oil synthesis-related genes are responsible for fatty acid accumulation in soybean and function in line with an additive model. Network analyses demonstrated that 51 traits could be linked through the linkage disequilibrium of 115 associated loci and that these links reflect phenotypic correlations. We revealed that 23 loci, including the known Dt1, E2, E1, Ln, Dt2, Fan, and Fap loci, as well as 16 undefined associated loci, have pleiotropic effects on different traits. This study provides insights into the genetic correlation among complex traits and will facilitate future soybean functional studies and breeding through molecular design. Soybean (Glycine max [L.] Merr.) is a major crop of agronomic importance as a predominant source of protein and oil [1]. To meet the needs of the rapidly increasing human population, soybean breeders are challenged with finding a high-efficiency breeding strategy for developing soybean varieties with higher yield and improved quality [2]. Molecular breeding has been proposed to be a powerful and effective approach for crop breeding, but requires a better understanding of the genetic architecture and networks underlying agronomical traits [3, 4]. Therefore, a priority task for accelerating the development of soybean varieties is a global dissection of the genetic basis of agronomical traits. Quantitative trait locus (QTL) mapping and positional cloning have identified a set of loci that are responsible for flowering and maturity, biotic and abiotic stresses, and growth habits (see review from Xia et al. [5]). However, our understanding of the genetic regulation of agronomic traits remains limited because most of them are naturally adapted into complex traits [6]. Genome-wide association study (GWAS) is a powerful approach for dissecting complex traits [7] and has been successfully applied for the study of many plants, such as Arabidopsis [8], rice [9,10,11], maize [12, 13], and foxtail millet [14].
In soybean, with genotyping by either the Illumina Bead Chip or specific locus amplified fragment sequencing, GWAS have been conducted for several specific agronomic traits, including seed protein and oil concentration [15, 16], sudden death syndrome resistance [17], cyst nematode resistance [18, 19], and flowering time [20]. These studies provided valuable resources for future molecular breeding of soybean. Nevertheless, the dissection of a specific trait is insufficient for molecular breeding because many complex traits exhibit correlation and tend to be tightly integrated, resulting in heritable covariation [21, 22], which adds complexity to breeding [23]. For instance, it is difficult to simultaneously increase grain yield and protein content for most crops because these two traits exhibit negative correlation and tend to change together [24,25,26]. The objectives of soybean breeding have expanded beyond yield; in fact, multiple selection criteria including oil content and protein content have been applied. Therefore, an understanding of how traits covary is essential for the genetic improvement of multiple complex traits [27]. In this study, we collected 809 diverse soybean accessions, cultivated them at three locations for two years, and phenotyped them for 84 agronomic traits. Whole-genome sequencing (WGS) at an 8.3 × depth produced more than 11 million genetic markers. Comprehensive GWAS analyses enabled the identification of the underlying genetic loci, locus interactions, and genetic networks across traits. Genotyping and phenotyping of 809 diverse soybean accessions On the basis of the 130 landraces and 110 cultivars investigated in our previous study [28], we collected an additional 291 landraces and 278 cultivars in this study, which composed a population with a total of 809 soybean accessions (Additional file 1: Table S1). The population consisted of 70 previously reported representative accessions [29], 160 Chinese core collection accessions [30], and 579 other accessions from different countries and regions. The 421 landraces and the 388 cultivars covered the main soybean producing areas, including China, Korea, Japan, Russia, the United States, and Canada, but not South America (Fig. 1a; Additional file 1: Table S1). Of the 809 accessions, 240 were sequenced in a previous study and the other 569 lines were sequenced in the present study. In total, 66.8 billion paired-end reads (7.0 Tb of sequences) were generated with a mean depth of approximately 8.3 × for each accession (Additional file 1: Table S1). After mapping against the reference genome, single-nucleotide polymorphism (SNP) calling, and imputation (see "Methods"), a total of 10,415,168 SNPs and 1,033,071 small indels (≤6 bp) were identified (Additional file 2: Table S2). To assess the quality of the genotype data, we validated 37 randomly selected SNPs in 96 accessions using the Sanger method (see "Methods") and the results demonstrated that the accuracy of the identified SNPs was 99.8% (Additional file 3: Table S3; Additional file 4: Table S4). Geographic distribution and genetic structure of 809 soybean accessions. a Geographic distribution of the 809 soybean accessions. Each accession is displayed as a dot. b Genetic structure of the 809 soybean accessions. The accessions are clustered by the neighbor-joining tree using whole-genome SNPs. The length of the lines on the tree indicates the simple matching distance.
c, d The areas with dense collections (Asia and North America) are magnified separately. The colors of the dots in (a, c, and d) correspond to their groups in (b). The neighbor-joining tree suggested that the 809 accessions could be classified into four main clades (Fig. 1b), which were associated with their geographical distribution (Fig. 1c and d). An investigation of population structures with varying values of K using fastStructure [31] also predicted that the optimal number of subpopulations was approximately K = 4 (Additional file 5: Figure S1). The analyses suggested that the accessions exhibited a subpopulation structure, which was used as a covariate within the GWAS model. We grew all of the 809 accessions in Beijing for two years (in 2013 and 2014). We assayed 45 morphology traits each year, including those related to yield, color, architecture, organ shape, and growth period (Additional file 6: Table S5). In 2013, we also measured 39 nutrient composition traits related to oil content, protein content, fatty acid components, and amino acid components (Additional file 6: Table S5) through gas chromatography–mass spectrometry (GC-MS). Soybean grows across a range of latitudes from 50°N to 35°S [32]. We found significant differences in some of the traits, such as those related to the growth period, architecture, yield, and nutrient composition, between the accessions from higher latitudes (above 40.5°N) and those from lower latitudes (below 40.5°N) (Additional file 5: Figure S2). These differences may have been caused by the tendency of soybean to adapt to a limited latitudinal region due to its photoperiod sensitivity [33]. As a result, we replanted the accessions from high latitudes (n = 275) at a location northeast of Beijing (Mudanjiang, Heilongjiang Province) and the rest from low latitudes (534) at a southern location (Zhoukou, Henan Province) to fully assess their potential. For both locations, most of the morphology trait measurements were repeated in 2014 and 2015, and the nutrient composition trait measurements were repeated in 2014. The overall performances of the 809 accessions were predicted as the best linear unbiased prediction (BLUP) using a mixed linear model (MLM), which was implemented using the lme4 package for R. Whole-genome screening for significantly associated loci (SAL) We conducted a GWAS on the 84 traits based on more than four million markers (SNPs with a minor allele frequency [MAF] ≥ 0.05) genotyped from the 809 accessions through an MLM implemented in Efficient Mixed-Model Association eXpedited (EMMAX) software. The population structure was represented by the first three principal components, which were fitted as fixed effects. Kinship was used to define the variance structure of the random variables for the total genetic effects of the 809 accessions. No inflated P values were found and most markers (99%) exhibited P values equal to those expected under the null hypothesis, suggesting that the MLM controlled population structure and cryptic relationships well. To control both false positives and false negatives, we also conducted permutation tests by randomly shuffling the phenotypes to break their relationship with genotypes to derive a genome-wide threshold (see "Methods" and Additional file 7: Table S6). By using the empirical threshold, we identified 150 SAL that were significantly associated with 57 of the 84 traits, using all 809 accessions (Additional file 8: Table S7; Additional file 5: Figures S3–86).
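The permutation procedure used to derive the genome-wide threshold can be illustrated with a minimal Python sketch. This is not the pipeline used in the study: the per-SNP test below is a plain linear regression rather than the EMMAX mixed model, and the genotype matrix, phenotype vector, and parameter choices are hypothetical toy data.

import numpy as np
from scipy import stats

def min_pvalue_scan(phenotype, genotypes):
    # Smallest P value over all SNPs for one phenotype vector.
    # genotypes: (n_accessions, n_snps) allele counts coded 0/1/2.
    best = 1.0
    for j in range(genotypes.shape[1]):
        g = genotypes[:, j]
        if np.ptp(g) == 0:  # skip monomorphic SNPs
            continue
        best = min(best, stats.linregress(g, phenotype).pvalue)
    return best

def permutation_threshold(phenotype, genotypes, n_perm=1000, alpha=0.05, seed=0):
    # Shuffle phenotypes to break the genotype-phenotype link, record the
    # minimum P value of each permuted scan, and take the alpha quantile of
    # that null distribution as the empirical genome-wide threshold.
    rng = np.random.default_rng(seed)
    null_min_p = np.empty(n_perm)
    for i in range(n_perm):
        null_min_p[i] = min_pvalue_scan(rng.permutation(phenotype), genotypes)
    return np.quantile(null_min_p, alpha)

# Toy data: 200 accessions, 500 SNPs, one SNP with a real effect.
rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(200, 500)).astype(float)
pheno = 0.5 * geno[:, 10] + rng.normal(size=200)
print(permutation_threshold(pheno, geno, n_perm=100))

In the study itself, 1000 permutations were run per trait type and the resulting cutoffs (negative log10 P values of roughly 6.7 and 8.3) were applied genome-wide.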
Epistasis, or the interaction between genes, plays an important role in controlling complex inheritance [34]. For instance, Dt1 exerts an epistatic effect on Dt2 in the regulation of plant height in soybean [35, 36]. In this study, we detected three SAL for plant height using all tested accessions (Fig. 2a–c). Among these SAL, one overlapped with the Dt1 locus [37, 38] and another overlapped with E2, a locus that is responsible for bloom date [39]. However, the Dt2 locus was not detected. If an epistatic gene exhibits a significantly strong effect, it can hinder the identification of other interactive genes that exert minor effects [34, 40]. We then classified the entire population into two sub-populations (termed Dt1 and dt1 subgroups), based on the genotypes of the highest association site of the Dt1 locus. A GWAS of the plant height in each of these two subgroups revealed two additional SAL in the Dt1 subgroup, which included the Dt2 locus (Fig. 2e and f). However, the Dt2 locus could not be detected in the dt1 subgroup (Fig. 2h and i). This finding confirmed the results of previous epistasis analyses [35, 36]. In contrast, the E2 locus was detected in both the Dt1 and dt1 subgroups (Fig. 2e and h), suggesting that E2 and Dt1 do not exert an epistatic effect. The Dt2 locus precisely explains the phenotypic variation in plant height within the subgroup of the Dt1 allele (Fig. 2g) compared with the Dt1 locus alone (Fig. 2d). GWAS of the soybean plant height. a Distribution of the plant height values across all of the 809 soybean accessions. b GWAS result from all accessions. In the GWAS result, both known genes Dt1 and E2 are identified. c Quantile–quantile plot for plant height. d The plant height variation between different Dt1 alleles in all 809 accessions. The known gene Dt1 separates the 809 accessions into two subgroups with different plant height means. e The GWAS result of plant height using the accessions from the Dt1 subgroup. f Quantile–quantile plot for plant height of the Dt1 subgroup. g Plant height variation between different Dt2 genotypes in the Dt1 subgroup. h The GWAS result of plant height using the accessions from the dt1 subgroup. i Quantile–quantile plot for plant height of the dt1 subgroup. GWAS results are presented by negative log10 P values against position on each of 20 chromosomes. Horizontal dashed lines indicate the genome-wide significant threshold (2 × 10⁻⁷). To validate our method, we performed a new investigation of association loci using the previously reported methods of SNP-fixing [41] and multiple loci analysis [42]. These approaches provided the same results as our method (Additional file 5: Figure S87). We further investigated another trait, namely leaf area. The results obtained from our method, the SNP-fixing, and multiple loci analysis all demonstrated that the locus of Chr19_45150769 can interact with Ln to control leaf area (Additional file 5: Figure S88), confirming the reliability of our method. Following this method, for each of the primary 150 SAL, we subdivided the 809 accessions into two subgroups according to the genotypes of the locus with the lowest P value within the primary SAL. We also conducted permutation tests to derive the empirical thresholds and thereby determine the secondary associated loci. We found very similar trends for the primary and secondary SAL within each trait type (Additional file 7: Table S6). Under these empirical thresholds, we identified 95 additional secondary SAL (Additional file 9: Table S8).
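The subgroup-splitting strategy for recovering secondary SAL can be sketched as follows. This is a simplified stand-in for the actual analysis: the association test is an ordinary linear regression instead of the EMMAX mixed model, genotypes are assumed to be coded as numeric allele counts, and the minimum subgroup size mirrors the 100-accession cutoff mentioned in the Methods.

import numpy as np
from scipy import stats

def gwas_pvalues(phenotype, genotypes):
    # Per-SNP P values from simple linear regression (stand-in for EMMAX).
    pvals = np.ones(genotypes.shape[1])
    for j in range(genotypes.shape[1]):
        g = genotypes[:, j]
        if np.ptp(g) == 0:  # monomorphic within this (sub)population
            continue
        pvals[j] = stats.linregress(g, phenotype).pvalue
    return pvals

def second_round_scan(phenotype, genotypes, min_subgroup=100):
    # Split the population by the genotype at the most significant SNP of the
    # primary scan (e.g. Dt1 vs. dt1) and rerun the scan within each subgroup.
    primary = gwas_pvalues(phenotype, genotypes)
    top = int(np.argmin(primary))
    result = {"primary_top_snp": top, "primary_pvalues": primary, "subgroups": {}}
    for allele in np.unique(genotypes[:, top]):
        mask = genotypes[:, top] == allele
        if mask.sum() < min_subgroup:  # only analyse sufficiently large subgroups
            continue
        result["subgroups"][float(allele)] = gwas_pvalues(phenotype[mask], genotypes[mask])
    return result

Loci that pass the empirical threshold within a subgroup but not in the primary scan correspond to secondary SAL; comparing the subgroups in which a locus appears (as for Dt2 in the Dt1 but not the dt1 background) indicates the direction of the epistatic relationship.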
In total, we identified 245 SAL, which included 46 SAL that overlapped with previously reported genes, 64 SAL that overlapped with reported QTLs, and 135 SAL that have not been characterized (Additional file 8: Table S7; Additional file 9: Table S8). Genetic architecture of fatty acid content Soybean is an important oilseed crop. Our analyses dissected the genetic architecture of the fatty acid content in the soybean natural population. Fatty acid biosynthesis-related genes, such as the genes encoding fatty acyl-ACP thioesterases B (FatB), plant stearoyl-acyl-carrier protein desaturase (SAD), and fatty acid desaturase 3 (FAD3), have been reported to be responsible for fatty acid accumulation in soybean [43,44,45]. In this study, we found five additional fatty acid biosynthesis-related genes located within the SAL regions (Fig. 3a; Additional file 10: Table S9). The differential alleles of these eight genes exhibited significant differences in the total fatty acid (TFA) content (Additional file 5: Figure S89). In addition to the genes involved in fatty acid biosynthesis, the genes that participate in lipid biosynthesis could also affect the fatty acid accumulation [46]. We identified six lipid biosynthesis-related genes in the fatty acid-related SAL regions (Additional file 11: Table S10). The different alleles of these lipid biosynthesis-related genes also showed significant differences in the TFA content (Additional file 5: Figure S90). Dissection of genetic regulation of the fatty acid content in soybean. a Candidate genes in the lipid metabolic pathway that are responsible for the variation of fatty acid (FA) synthesis in soybean germplasm. The pathway is modified from Arabidopsis. The dotted lines represent multiple reaction steps. b Plot of the total FA content against the accumulation of high-oil-content alleles. The x-axis indicates the number of accumulated high-oil alleles from all candidate genes in the soybean germplasm; the y-axis shows the total FA content in the corresponding population. c Total FA content of the germplasm from low-latitude and high-latitude areas. ***P < 0.001 (one-sided Student's t-test, n = 461, 219). d Proportion of accumulated high-oil alleles in low-latitude and high-latitude populations. ACP acyl carrier protein, DAG diacylglycerol, G3P glycerol-3-phosphate, FA fatty acid, LPA lysophosphatidic acid, PC phosphatidylcholine, PYR pyruvate, TAG triacylglycerol, ACNA acyl-CoA n-acyltransferase, FAD fatty acid desaturase, FatB fatty acyl-ACP thioesterase B, PDHK pyruvate dehydrogenase kinase, PLC phospholipase C, PLD phospholipase D, ROD1 reduced oleate desaturation 1, SAD stearoyl-acyl-carrier-protein desaturase, ER endoplasmic reticulum We observed that the TFA content increased with the accumulation of high-fatty-acid alleles of these genes in the soybean germplasm (Fig. 3b). Further analysis demonstrated that the TFA content in high-latitude accessions was significantly higher than that of low-latitude accessions (Fig. 3c). Correspondingly, we found that high-latitude accessions accumulated more high-fatty-acid alleles than low-latitude accessions (Fig. 3d; Additional file 5: Figure S91a). The results indicated that, similar to those in maize [13], the oil synthesis-related genes in soybean functioned additively to accumulate fatty acid. 
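The two analyses summarized above, the per-gene allele comparisons (Additional file 5: Figures S89 and S90) and the additive accumulation of high-oil alleles (Fig. 3b), can be sketched with simulated data as follows. The genotype coding (0 for the low-oil allele, 2 for the high-oil allele), the sample size, and the effect sizes are all hypothetical.

import numpy as np
from scipy import stats

# Hypothetical inputs: total fatty acid (TFA) content per accession and the
# genotype at the peak SNP of each of the 14 candidate genes.
rng = np.random.default_rng(2)
n_acc, n_genes = 300, 14
geno = rng.choice([0, 2], size=(n_acc, n_genes))
tfa = 15 + 0.2 * geno.sum(axis=1) + rng.normal(scale=1.0, size=n_acc)

# Per-gene allele effect: compare TFA between the two allele classes
# (one-sided t-test; the 'alternative' argument needs SciPy >= 1.6).
for g in range(n_genes):
    high, low = tfa[geno[:, g] == 2], tfa[geno[:, g] == 0]
    t, p = stats.ttest_ind(high, low, alternative="greater")
    print(f"gene {g}: mean difference = {high.mean() - low.mean():.2f}, one-sided P = {p:.3g}")

# Additive model: TFA against the number of accumulated high-oil alleles.
n_high = (geno == 2).sum(axis=1)
r, p = stats.pearsonr(n_high, tfa)
print(f"allele count vs. TFA: r = {r:.2f}, P = {p:.3g}")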
A genotype investigation of the ten most widely cultivated high-oil cultivars in China revealed that they did not possess all of the high-fatty-acid alleles in the 14 genes (Additional file 5: Figure S91b), which suggested that the pyramiding of more high-fatty-acid alleles in these lines would allow the development of a soybean variety with a higher oil content. Genetic network of loci associated with phenotypes We found that the 84 traits related to growth period, architecture, color, seed development, oil content, or protein content tended to be correlated within these trait classifications (Additional file 5: Figure S92), suggesting that they might be genetically co-regulated. The plotting of the SAL across the soybean genome revealed that they were clustered according to the relatedness of the traits rather than distributed randomly on the chromosomes (Additional file 5: Figure S93). Pleiotropy and linkage disequilibrium (LD) play important roles in determining correlations among phenotypes [23]. To dissect the genetic architecture of the correlations across different traits, we analyzed the association networks using a previously reported method [47] with slight modification (see "Methods"). The network analysis revealed that the SAL were connected for most of the traits (Fig. 4), with the exception of two traits related to color (Additional file 5: Figure S94). Consistent with the correlation pattern of the traits (Additional file 5: Figure S92), the SAL controlling associated phenotypes, such as growth period, architecture, yield, oil biosynthesis, or protein biosynthesis, tended to cluster into more closely linked networks (Fig. 4; Additional file 12: Table S11). Additionally, we determined that a number of SAL, such as the E2, E1, Dt1, Dt2, Ln, Fan, Fap, and several newly identified loci, played roles as key nodes in the regulation of different traits (Fig. 4; Additional file 5: Figure S93). One noteworthy example is the Dt1 locus. We revealed that, besides controlling plant height, the Dt1 locus also affected other yield-related traits, such as the branch density, stem pod density, stem node number, number of three-seed pods, and total seed number (Additional file 12: Table S11), which was validated by the comparison of these traits in Dt1 and dt1 isogenic lines (Additional file 5: Figure S95). Association networks across different traits in soybean. The nodes represent traits and their responsible SAL. The edges between the SAL from different traits are linked by LD. Only the edges with an average LD ≥ 0.4 are displayed. The trait abbreviations match those in Additional file 6: Table S5. The overlapped SAL covering Dt1, Dt2, E1, E2, Ln, Fan, and Fap are indicated by the solid circles. Other linked SAL covering unknown QTL are indicated by the dotted circles. Yield and quality are two major considerations in variety development for almost all crops. However, the loci simultaneously controlling yield-related and quality-related traits have seldom been reported [48]. In this study, we found that E2 may exhibit pleiotropy across the traits related to yield and seed quality. Plant height (PH) and beginning bloom date (BBD) exhibited a significantly positive correlation (Fig. 5a). We found that these two traits shared a common SAL, which overlapped with the E2 locus (Fig. 5b). This finding was consistent with previous reports that the major genes and QTLs are shared for flowering, maturity, and plant height in soybean [33, 49].
Interestingly, we found that the ratio of linolenic acid to linoleic acid (FA 18:3 to FA18:2, R3:2) also exhibited significantly positive correlations to PH and BBD (Fig. 5a), and shared E2 with these two traits in the association network (Fig. 5b), suggesting that E2 exhibits pleiotropy across PH, BBD, and R3:2. To verify the effects of the E2 locus in the association network, PH, BBD, and R3:2 were compared between two pairs of E2 and e2 isogenic lines (PI 547553, E1E2s-tt vs. PI 547549, E1e2s-tt; ZK164, E1E2E3E4 vs. ZK166, E1e2E3E4). The results showed that the values of PH, BBD, and R3:2 in the E2 lines were significantly higher than those in the e2 lines (Fig. 5c–e), confirming that the E2 locus plays an important role in regulating these three important agronomic traits in a simultaneous manner. Phenotype correlations and genetic networks of associated loci. a The correlation among three traits: BBD, PH, and R3:2 of linolenic acid (FA18:3) to linoleic acid (FA18:2). b The association networks across PH, BBD, and R3:2. The genetic network presents the SAL with average LD ≥ 0.4. An overlapped SAL covering E2 is indicated by the dotted circle. Phenotype data (mean ± s.d., n = 4) of different alleles of E2 in different E2 near isogenic lines are illustrated for BBD (c), PH (d), and R3:2 of linolenic to linoleic acid (e). NIL1 (PI 547553, E1E2s-tt vs. PI 547549, E1e2s-tt). NIL2 (ZK164, E1E2E3E4 vs. ZK166, E1e2E3E4). E1, E2, E3, E4: loci controlling flowering ability, s-t: locus controlling plant height, T: locus controlling pubescence color. DAS day after sowing. *P < 0.05; **P < 0.01 (one-sided Student's t-test) Plant breeding aims to pyramid multiple desirable traits into a single variety. However, due to trait correlations, breeders must choose to either simultaneously improve correlated traits or accept potentially undesirable effects associated with the correlation [23]. A better understanding of the genetic networks underlying these different traits helps breeders to develop effective strategies for variety development. For example, in past decades, rice functional genomics has progressed rapidly, resulting in the identification of some key genes that control both yield and grain quality [50]. The well-established genetic information has allowed scientists to propose a clear path to design the breeding of high-yield, superior-quality, hybrid super rice [4]. However, compared with rice, fundamental studies on the genetic dissection of complex traits in soybean need to make more progress to reach the same level. Epistasis, or the interaction between genes associated with a trait, adds complexity to the genetic dissection of complex traits. The SNP-fixing [41] and multiple loci analysis [42] have been proven to be two robust methods for the identification of epistasis loci. In this study, we developed another method to identify the epistasis loci by splitting the entire population into sub-populations based on the genotypes of the highest association site and subsequently performing a second-round GWAS for each sub-population. The reliability of our results was comparable to that of the results obtained through the SNP-fixing approach and multiple loci analysis (Fig. 2; Additional file 5: Figures S87 and 88), but our method has an advantage in determining the epistasis relationship between different haplotypes.
For instance, our analysis clearly showed that an epistatic effect was only detected between Dt1 and Dt2 but not between dt1 and Dt2, suggesting that dt1 is a loss-of-function or weak-function allele compared with Dt1 (Fig. 2). Further validation of the detailed epistatic relationships between the different alleles we identified (Additional file 9: Table S8), using F2 or recombinant inbred line populations, will be helpful for future functional studies. In total, we identified 245 SAL for 57 agronomical traits. Most of the reported genes that have been identified through forward genetics to control related agronomical traits, such as Dt1, Dt2, E1, E2, Ln, PDH1, Fan, and Fap, were identified. In addition, a total of 135 SAL were previously uncharted (Additional file 8: Table S7; Additional file 9: Table S8), such as the three SAL for flowering time on Chr5, Chr11, and Chr19 (Additional file 5: Figure S96). However, we failed to detect SAL for 27 traits. We evaluated the statistical power of our analysis (Additional file 5: Figure S97, see "Methods") and the results demonstrated that the statistical power was mainly determined by the number of quantitative trait nucleotides (QTNs), although it increased with increasing heritability. For instance, when a trait is controlled by a small number of QTNs, such as QTN = 2, even with a heritability as low as 0.25, the statistical power reached 86%. However, for a trait that is controlled by more QTNs, such as QTN = 10, the statistical power only reached 70% even with a heritability as high as 0.75. We therefore speculated that genetic complexity and the lack of a major QTL are the main reasons for the failure to detect SAL for these traits. For instance, we found that the seed weight exhibited a heritability of approximately 0.62 in the studied population, but no SAL for this trait was detected, which might be due to the fact that dozens of genes are responsible for the seed weight of plants [51]. Another reason might be the stringent threshold applied in this study. For many traits for which we did not find SAL, such as the 100-seed weight (Additional file 5: Figure S40), number of two-seed pods (Additional file 5: Figure S28), seed length (Additional file 5: Figure S42), and FA18:1 content (Additional file 5: Figure S50), clear association signals were detected, even though these signals did not pass the threshold. Taking flowering time as an example, although a number of GWAS signals did not reach the threshold (Additional file 5: Figure S96), homologues of the reported Arabidopsis flowering time-related genes were identified surrounding the highest-association loci of the GWAS signals. The stringent criterion might have caused false negatives, but guaranteed a lower false discovery rate (FDR) for every trait. We anticipate that the scientists working in similar areas will be quite interested in the information from this study, which will likely facilitate the identification of the responsible genes. Nevertheless, we also found that the positions of a small number of SAL might be inaccurate due to genome assembly errors (an example is shown in Additional file 5: Figure S98). Consequently, future studies should also use additional genomic approaches to confirm these SAL. In addition to the identification of many SAL, we revealed the association networks across different traits. For example, we identified some SAL that functioned as key nodes for connecting different traits, whereas most SAL specifically controlled individual traits (Fig. 4).
This information will provide helpful guidance for breeders attempting to establish a clear strategy for variety development. If the heritable covariation between different traits needs to be broken, using the specific SAL for individual traits might be more effective than using the node SAL. In contrast, if the heritable covariation needs to be increased, the selection of the node SAL might be a better choice. Furthermore, the amount of genomic data provided a better understanding of the allelic variation in the genetic resource collections and will also help breeders propose an efficient path for variety improvement by design. For instance, we found that the five widely cultivated high-yield varieties in the middle of China (Huang Huai Hai region) possessed fewer high-fatty-acid alleles for the 14 fatty acid-related SAL (Additional file 5: Figure S91b). Because the yield-related and fatty-acid-related networks were relatively independent (Fig. 4), pyramiding all the high-fatty-acid SAL alleles into these high-yield varieties could potentially produce new varieties that are both high-yield and high-oil. Of course, a strict background selection should be performed because the favorable alleles for other traits from these high-yield varieties should be maximally maintained. In summary, the work presented here provides a large dataset of loci and genes responsible for important agronomic traits in soybean, which will facilitate future functional studies and variety development. Planting and phenotyping A total of 809 soybean accessions were selected for this study. For phenotyping, all 809 accessions were planted at the Experimental Station of the Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing (40°22′N and 116°23′E) during the summer seasons in 2013 and 2014. The 275 accessions collected from northern areas were planted in Mudanjiang (44°58′N and 129°60′E), Heilongjiang Province during the summer seasons in 2014 and 2015. The remaining 534 accessions collected from Huang Huai Hai and southern areas were planted in Zhoukou (33°62′N and 114°65′E), Henan Province during the summer seasons in 2014 and 2015. Normal seeds were selected and sown in deeply ploughed fields with proper moisture content (15–20%). The seeds were planted in three-row plots in a randomized complete block design with three replications for each environment. Only one accession was planted in each plot and the plots were 5 m in length with a row spacing of 0.4 m. The space between two plots was 0.4 m. After three weeks, the seedlings were manually thinned to achieve an equal density of 120,000 individuals per hectare. We used the same phenotyping procedure and scoring standards in all six environments. In total, we characterized 84 sets of phenotypes related to yield, coloration, architecture, growth period, and seed composition with a miss rate < 10%. The identification of growth periods, including BBD, full bloom date, pod maturity date, and reproduction stage length, was based on a previous description of reproductive stages [52]. Traits related to flower and leaf were observed and measured at the full-bloom stage. Yield-related traits, such as pod number, seed number, and seed weight, were counted or measured in the laboratory after harvest. Detailed information regarding the phenotyping procedure and scoring standards is provided in Additional file 6: Table S5.
For the assessment of the traits that need to be evaluated during the growing season, at least five healthy individuals from each plot were randomly selected and used for phenotyping. For the traits that need to be evaluated after harvesting, the healthy plants from the three replications of each accession were first collected and at least five individuals were randomly selected and used for phenotyping. The narrow-sense heritability was estimated by using GAPIT [53]. For the correlation analysis, we treated the binary traits as continuous traits by converting the values into 0 or 1 and then performed the correlation analysis with the other quantitative traits. Oil and protein sample preparation and GC-MS analysis After drying at 80 °C for 2 h, approximately 5 g of mature and well-rounded seeds were milled to a fine powder with an electric grinder. Solid fractions were filtered out using a 0.25-mm sieve. The powders were divided into two sub-samples and measured at the same time. Six micrograms of soybean powder were used to determine the lipid content, according to a previously reported protocol with minor modifications [54]. Fatty acids were released from the total lipids and methylated by adding 0.8 mL of 1.25 M HCl-methanol and 20 μL of 5 mg/mL heptadecanoic acid (used as an internal standard) for 4 h at 50 °C. Then, 1 mL of hexane and 1.5 mL of 0.9% NaCl (v/v) were added to the cooled vial. After shaking for 5 min, 750 μL of the hexane layer was transferred to a new injection vial after centrifugation for 10 min at 3000 g and dried under flowing nitrogen. The dried samples were re-dissolved in 500 μL of hexane for further GC-MS analysis. For total amino acid analysis, 6 mg of soybean powder was completely hydrolyzed by adding 300 μL of 6 M HCl spiked with 0.5 mg/mL L-norleucine (used as an internal standard) for 24 h at 100 °C [55]. After centrifugation for 30 min at 16,500 g, 50 μL of supernatant was transferred to a new 1.5 mL Eppendorf tube and dried at 100 °C. The dried samples were derivatized according to Fiehn's protocol [56]. One microliter of the prepared sample (for both fatty acid and amino acid analysis) was injected into the Trace DSQII GC-MS system (Thermo Fisher Scientific), which was equipped with a DB-23 column (Agilent Technologies, 60 m × 0.25 mm × 0.25 μm) at a split ratio of 1:20 for fatty acid analysis and a DB-5MS column (Agilent Technologies, 30 m × 0.25 mm × 0.25 μm) at a split ratio of 1:50 for amino acid analysis. For fatty acid measurement, the oven was programmed as follows: 150 °C for 1 min, ramp to 200 °C at 4 °C/min, ramp to 220 °C at 2 °C/min, and finally ramp to 250 °C at 25 °C/min, holding 5 min with 1.1 mL/min helium as carrier gas [57, 58]. The temperatures of the injector, transfer line, and ion source were set to 250 °C, 250 °C, and 230 °C, respectively. For amino acid measurement, the oven was programmed as follows: 100 °C for 1 min, ramp to 240 °C at 10 °C/min, and finally ramp to 300 °C at 30 °C/min, holding 5 min with 1.1 mL/min helium as carrier gas. The temperatures of the injector, transfer line, and ion source were set to 250 °C, 250 °C, and 280 °C, respectively. Overall performances of the 809 soybean accessions across environments The overall performances of the 809 soybean accessions were calculated as the best linear unbiased prediction (BLUP), the same method used to calculate the overall performances of 5000 maize inbred lines to eliminate environment effects [12]. The calculation was performed by using the "lmer" function in the lme4 package.
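A rough Python analogue of the BLUP calculation is sketched below; the study used the "lmer" function in R, and the full fixed- and random-effect structure is spelled out in the next paragraph. The sketch keeps only a random intercept per line (omitting the line-by-environment interaction), the column names are made up, and the data are simulated, so it illustrates the idea rather than reproducing the actual analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per plot, with accession ('line'),
# environment ('env', a year-location combination), and trait value ('y').
rng = np.random.default_rng(3)
lines = [f"acc{i:03d}" for i in range(50)]
envs = ["BJ2013", "BJ2014", "MDJ2014", "ZK2014"]
line_eff = dict(zip(lines, rng.normal(scale=2.0, size=len(lines))))
env_eff = dict(zip(envs, rng.normal(scale=1.0, size=len(envs))))
rows = [{"line": ln, "env": ev, "y": 50 + line_eff[ln] + env_eff[ev] + rng.normal()}
        for ln in lines for ev in envs]
df = pd.DataFrame(rows)

# Mixed model with fixed environment effects and a random intercept per line,
# a simplified analogue of the lme4 call used in the study.
fit = smf.mixedlm("y ~ C(env)", data=df, groups=df["line"]).fit()

# The predicted random line effects (BLUPs) serve as each line's overall performance.
blup = {line: float(re.iloc[0]) for line, re in fit.random_effects.items()}
print(sorted(blup.items(), key=lambda kv: kv[1], reverse=True)[:5])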
The fixed effects in the MLM included the overall mean and the effects of the planting environment. The planting environments were defined as each combination of year and location. The random effects in the MLM included the line effects, the interaction between environments and lines, and the residuals. The solutions of the line effects (i.e. BLUP) were used as the overall performances of the 809 soybean accessions across environments. DNA preparation and sequencing Among the 809 soybean accessions, 240 were obtained from our previous study [28] (Additional file 1: Table S1). The genomic DNA of the other 569 accessions was extracted from the young leaves of a single soybean plant for each accession, after three weeks of growth. DNA extraction was performed using the cetyltrimethylammonium bromide (CTAB) method [59]. The library of each accession was constructed with an insert size of approximately 500 bp, following the manufacturer's instructions (Illumina Inc., 9885 Towne Centre Drive, San Diego, CA 92121, USA). All soybean varieties were sequenced on Illumina HiSeq 2000 and HiSeq 2500 sequencers at BerryGenomics Company (http://www.berrygenomics.com/; Beijing, China). Detailed information of the 809 accessions, including geographical distribution and sequencing depth, is provided in Additional file 1: Table S1. Read alignment and variation calling The re-sequencing reads of the 809 accessions were mapped to the soybean reference genome [60] (Williams 82 assembly V2.0) at the Phytozome v11.0 website (http://www.phytozome.net/soybean) with BWA [61] (version 0.7.5a-r405) using the default parameters. We generated the BAM format of the mapping results and filtered the non-unique and unmapped reads with SAMtools [62] (version: 0.1.19). The Picard package (http://broadinstitute.github.io/picard/, version: 1.87) was applied to filter the duplicated reads. The Genome Analysis Toolkit [63] (GATK, version: 3.1-1-g07a4bf8) was applied for SNP and INDEL calling. Annotations of SNPs and INDELs were performed based on gene model set v2.0 from Phytozome v11.0 using ANNOVAR [64] (version: 2015-03-22). The k-nearest neighbor-based method (http://202.127.18.228/fimg/intr.php) was then used for missing data imputation, after which the miss rate decreased from 2.1% to 0.057% and the heterozygous rate decreased from 3.4% to 0.17%. To evaluate the SNP calling and imputation accuracy, we randomly selected ten fragments (primer information is listed in Additional file 3: Table S3) across the genome that contained 37 SNPs for additional validation. These fragments were amplified in 96 randomly selected soybean accessions and sequenced using the Sanger method. The comparisons between SNP calling and Sanger sequencing are shown in Additional file 4: Table S4. The results showed that the accuracy rate of the imputed SNPs reached 99.8%. According to the genome annotation, the variants were classified into those in exonic regions, splicing sites (within 2 bp of a splicing junction), 5'UTRs, 3'UTRs, intronic regions, upstream and downstream regions (within a 1-kb region upstream/downstream from the transcription start/end site), and intergenic regions. The SNPs in coding regions were further categorized into non-synonymous SNPs (cause amino acid changes), synonymous SNPs (do not cause amino acid changes), stopgain SNPs (create a stop codon), and stoploss SNPs (eliminate a stop codon).
The INDELs in coding regions were further categorized into non-frameshift (do not cause frameshift changes), frameshift (cause frameshift changes), stopgain, and stoploss INDELs. Population genetics analysis and GWAS A neighbor-joining tree was constructed using the PHYLIP software [65] (version 3.68) on the basis of a distance matrix, using the whole-genome SNPs shared by all the accessions. A principal component analysis (PCA) of the population was performed via EIGENSOFT software [66] (version 4.2). The population structure was calculated using the Bayesian clustering program fastStructure [31]. LD was calculated using PLINK [67] (version: 1.90) with the parameters --ld-window-r2 0 --ld-window 99999 --ld-window-kb 1000. Only SNPs with MAF ≥ 0.05 and missing rate < 0.1 in the population were used in the GWAS. An association analysis was performed using the EMMAX (beta version) [68] software package. The matrix of pairwise genetic distances, which was derived from the simple matching coefficients and used as the variance-covariance matrix of the random effects, was also calculated by EMMAX. Determination of genome-wide threshold We randomly shuffled the observed real phenotypes to break the connections between these phenotypes and their corresponding genotypes. Then, we applied the GWAS to the permuted phenotypes by using the same model that was used for the real observed phenotypes. The most significant P value across the whole genome was recorded. This random process was repeated 1000 times. The distribution of the most significant P values across the 1000 replicates was used to determine the threshold, which was the P value corresponding to a 5% chance of a type I error. Ideally, each trait should have its own threshold. To derive robust thresholds, we grouped the 84 traits into four types based on their phenotypic distribution. We found the thresholds were very similar within each of the types, which we defined as follows: (1) binary traits, examples of which include color (purple vs. white); (2) quantitative traits with normal distribution; (3) quantitative traits with skewed distribution; and (4) binary-like quantitative traits, examples of which include four-seed pod number and ratio, with extremely skewed frequency distributions. We tested multiple traits in each category and randomly selected one trait out of each category to illustrate the empirical thresholds (Additional file 7: Table S6). The first three types of traits had very similar thresholds (negative log10 P values = 6.5–6.7). We used the most stringent threshold (6.7) as the criterion for these three types of traits. Although this criterion may have caused false negatives, it guaranteed that the type I error was below 5% for every trait. The last type of traits had a much more stringent criterion. For example, the four-seed per pod ratio had a threshold of 8.3 (negative log10 P value). We used this threshold for all binary-like quantitative traits. Identification of additional minor-effect loci To identify minor-effect loci by eliminating the effect of epistasis or interactions between genes, additional GWAS were performed. We first divided the 809 accessions into two subgroups according to the genotype of the SNP with the lowest P value out of all SAL across the whole genome. Next, association analysis was performed only if the subgroup consisted of more than 100 accessions. With the same method, the significant thresholds of the minor loci were determined (Additional file 7: Table S6). A negative log10 P value of 8.4 was used as the threshold for all binary-like quantitative traits.
We used the more stringent threshold (6.6) as the criterion for the traits in the other two categories. Significantly associated loci that had not been identified before grouping were considered newly identified association signals.
Assessment of statistical power
Using the genotype data of the 809 soybean accessions, sets of SNPs (2, 5, and 10) were randomly selected as causal loci for the simulated traits using the method described previously [69]. Three levels of heritability (h2 = 0.25, 0.5, and 0.75) were evaluated for the examination of statistical power in all settings of causal loci. For each combination of heritability and number of causal loci, a total of 1000 replicates were conducted for the simulation of phenotypes and association tests. In each GWAS, the threshold was set to 2 × 10−7, the cutoff from permutation tests on real traits with a normal distribution. Statistical power and the false discovery rate (FDR) were evaluated on the intervals around the loci above the threshold. An interval was defined as the consecutive region with SNPs in LD (above 0.6) around the associated locus. Statistical power was calculated as the proportion of intervals containing causal loci over the total number of causal loci, weighted by the variance they explained. FDR was calculated as the proportion of intervals without a causal locus over the total number of intervals with a SNP above the threshold. The averages and standard errors of statistical power and FDR over the 1000 replicates were reported.
Construction of association networks
The association networks were constructed using the software Cytoscape [70] (version: 3.2.1), with traits and their corresponding SAL as nodes, and the links between traits and SAL and between pairs of SAL (average r2 ≥ 0.4) as edges. The effective score for each SAL was represented by the lowest P value. The link between any two SAL was represented by their average LD (Inter-LD). Inter-LD was calculated as follows:
$$ Inter-LD=\frac{1}{2}\times \left(\frac{LD\left(SAL1,SAL2\right)}{PmaxLD\left(SAL1\right)}+\frac{LD\left(SAL1,SAL2\right)}{PmaxLD\left(SAL2\right)}\right), $$
where LD(SAL1, SAL2) equals the mean of the pairwise LD values (r2) between all the SNPs from SAL1 and all the SNPs from SAL2; PmaxLD(SAL1) equals the largest possible LD value within the SAL1 region, obtained by calculating the mean r2 of each SNP to all SNPs from the SAL1 region and then choosing the maximum mean LD value to represent this region's PmaxLD; and PmaxLD(SAL2) equals the largest possible LD value within the SAL2 region, obtained in the same way from the SNPs of the SAL2 region. Pairwise r2 values were calculated between all significant SNPs using PLINK [67].
Wilson RF. Soybean: market driven research needs, vol. 2. New York: Springer Science Press; 2008. Ray DK, Mueller ND, West PC, Foley JA. Yield trends are insufficient to double global crop production by 2050. PLoS One. 2013;8:e66428. Peleman JD, van der Voort JR. Breeding by design. Trends Plant Sci. 2003;8:330–4. Qian Q, Guo L, Smith S, Li J. Breeding high-yield superior quality hybrid super rice by rational design. National Sci Rev. 2016;3:283–94. Xia Z, Zhai H, Lü S, Wu H, Zhang Y. Recent achievement in gene cloning and functional genomics in soybean. World Scientific J. 2013;2013:1–7. Mackay TF, Stone EA, Ayroles JF. The genetics of quantitative traits: challenges and prospects. Nat Rev Genet. 2009;10:565–77. Korte A, Farlow A. 
The advantages and limitations of trait analysis with GWAS: a review. Plant Methods. 2013;9:29–37. Atwell S, Huang YS, Vilhjalmsson BJ, Willems G, Horton M, Li Y, et al. Genome-wide association study of 107 phenotypes in Arabidopsis thaliana inbred lines. Nature. 2010;465:627–31. Chen W, Gao Y, Xie W, Gong L, Lu K, Wang W, et al. Genome-wide association analyses provide genetic and biochemical insights into natural variation in rice metabolism. Nat Genet. 2014;46:714–21. Huang X, Wei X, Sang T, Zhao Q, Feng Q, Zhao Y, et al. Genome-wide association studies of 14 agronomic traits in rice landraces. Nat Genet. 2010;42:961–7. Huang X, Zhao Y, Wei X, Li C, Wang A, Zhao Q, et al. Genome-wide association study of flowering time and grain yield traits in a worldwide collection of rice germplasm. Nat Genet. 2012;44:32–9. Buckler ES, Holland JB, Bradbury PJ, Acharya CB, Brown PJ, Browne C, et al. The genetic architecture of maize flowering time. Science. 2009;325:714–8. Li H, Peng Z, Yang X, Wang W, Fu J, Wang J, et al. Genome-wide association study dissects the genetic architecture of oil biosynthesis in maize kernels. Nat Genet. 2013;45:43–50. Jia G, Huang X, Zhi H, Zhao Y, Zhao Q, Li W, et al. A haplotype map of genomic variations and genome-wide association studies of agronomic traits in foxtail millet (Setaria italica). Nat Genet. 2013;45:957–61. Hwang EY, Song Q, Jia G, Specht JE, Hyten DL, Costa J, et al. A genome-wide association study of seed protein and oil content in soybean. BMC Genomics. 2014;15:1–12. Bandillo N, Jarquin D, Song QJ, Nelson R, Cregan P, Specht J, et al. A population structure and genome-wide association analysis on the USDA soybean germplasm collection. Plant Genome. 2015;8:1–13. Wen Z, Tan R, Yuan J, Bales C, Du W, Zhang S, et al. Genome-wide association mapping of quantitative resistance to sudden death syndrome in soybean. BMC Genomics. 2014;15:809–19. Han Y, Zhao X, Cao G, Wang Y, Li Y, Liu D, et al. Genetic characteristics of soybean resistance to HG type 0 and HG type 1.2.3.5.7 of the cyst nematode analyzed by genome-wide association mapping. BMC Genomics. 2015;16:598–608. Vuong TD, Sonah H, Meinhardt CG, Deshmukh R, Kadam S, Nelson RL, et al. Genetic architecture of cyst nematode resistance revealed by genome-wide association study in soybean. BMC Genomics. 2015;16:593–605. Zhang J, Song Q, Cregan PB, Nelson RL, Wang X, Wu J, et al. Genome-wide association study for flowering time, maturity dates and plant height in early maturing soybean (Glycine max) germplasm. BMC Genomics. 2015;16:217–27. Klingenberg PC. Morphological integration and developmental modularity. Ann Rev Eco Evo Sys. 2008;39:115–32. Wagner GP. Homologues, natural kinds and the evolution of modularity. Am Zool. 1996;36:36–43. Chen Y, Lubberstedt T. Molecular basis of trait correlations. Trends Plant Sci. 2010;15:454–61. Duvick DN, Cassman KG. Post-green revolution trends in yield potential of temperate maize in the north-central United States. Crop Sci. 1999;39:1622–30. Rotundoa JL, Borrása L, Westgatea ME, Orfc JH. Relationship between assimilate supply per seed during seed filling and soybean seed composition. Field Crop Res. 2009;112:90–6. Rharrabti Y, Elhani S, Martos-Nunez V, Garcia Del Moral LF. Protein and lysine content, grain yield, and other technological traits in durum wheat under Mediterranean conditions. J Agric Food Chem. 2001;49:3802–7. Melo D, Marroig G. Directional selection can drive the evolution of modularity in complex traits. Proc Natl Acad Sci U S A. 2015;112:470–5. 
Zhou Z, Jiang Y, Wang Z, Gou Z, Lyu J, Li W, et al. Resequencing 302 wild and cultivated accessions identifies genes related to domestication and improvement in soybean. Nat Biotechnol. 2015;33:408–14. Hyten DL, Song QJ, Zhu YL, Choi IY, Nelson RL, Costa JM, et al. Impacts of genetic bottlenecks on soybean genome diversity. Proc Natl Acad Sci U S A. 2006;103:16666–71. Li YH, Guan RX, Liu ZX, Ma YS, Wang LX, Li LH, et al. Genetic structure and diversity of cultivated soybean (Glycine max (L.) Merr.) landraces in China. Theor Appl Gene. 2008;117:857–71. Raj A, Stephens M, Pritchard JK. fastSTRUCTURE: variational inference of population structure in large SNP data sets. Genetics. 2014;197:573–89. Watanabe S, Harada K, Abe J. Genetic and molecular bases of photoperiod responses of flowering in soybean. Breed Sci. 2012;61:531–43. Cober ER, Morrison MJ. Regulation of seed yield and agronomic characters by photoperiod sensitivity and growth habit genes in soybean. Theor Appl Genet. 2010;120:1005–12. Phillips PC. Epistasis--the essential role of gene interactions in the structure and evolution of genetic systems. Nat Rev Genet. 2008;9:855–67. Bernard RL. Two genes affecting stem termination in soybeans. Crop Sci. 1972;12:235–9. Liu Y, Zhang D, Ping J, Li S, Chen Z, Ma J. Innovation of a regulatory mechanism modulating semi-determinate stem growth through artificial selection in soybean. PLoS Genet. 2016;12:e1005818. Liu BH, Watanabe S, Uchiyama T, Kong FJ, Kanazawa A, Xia ZJ, et al. The soybean stem growth habit gene Dt1 is an ortholog of Arabidopsis TERMINAL FLOWER1. Plant Physiol. 2010;153:198–210. Tian ZX, Wang XB, Lee R, Li YH, Specht JE, Nelson RL, et al. Artificial selection for determinate growth habit in soybean. Proc Natl Acad Sci U S A. 2010;107:8563–8. Watanabe S, Xia Z, Hideshima R, Tsubokura Y, Sato S, Yamanaka N, et al. A map-based cloning strategy employing a residual heterozygous line reveals that the GIGANTEA gene is involved in soybean maturity and flowering. Genetics. 2011;188:395–407. Tian Z, Qian Q, Liu Q, Yan M, Liu X, Yan C, et al. Allelic diversities in rice starch biosynthesis lead to a diverse array of rice eating and cooking qualities. Proc Natl Acad Sci U S A. 2009;106:21760–5. Chang HX, Lipka AE, Domier LL, Hartman GL. Characterization of disease resistance loci in the USDA soybean germplasm collection using genome-wide association studies. Phytopathology. 2016;106:1139–51. Segura V, Vilhjalmsson BJ, Platt A, Korte A, Seren U, Long Q, et al. An efficient multi-locus mixed-model approach for genome-wide association studies in structured populations. Nat Genet. 2012;44:825–30. Li ZL, Wilson RF, Rayford WE, Boerma HR. Molecular mapping genes conditioning reduced palmitic acid content in N87-2122-4 soybean. Crop Sci. 2002;42:373–8. Li YH, Reif JC, Ma YS, Hong HL, Liu ZX, Chang RZ, et al. Targeted association mapping demonstrating the complex molecular genetics of fatty acid formation in soybean. BMC Genomics. 2015;16:841. Hoshino T, Watanabe S, Takagi Y, Anai T. A novel GmFAD3-2a mutant allele developed through TILLING reduces alpha-linolenic acid content in soybean seed oil. Breeding Sci. 2014;64:371–7. Li-Beisson Y, Shorrosh B, Beisson F, Andersson MX, Arondel V, Bates PD, et al. Acyl-lipid metabolism. Arabidopsis Book. Am Soc Plant Biol. 2013;11:e0161. Crowell S, Korniliev P, Falcao A, Ismail A, Gregorio G, Mezey J, et al. Genome-wide association and high-resolution phenotyping link Oryza sativa panicle traits to numerous trait-specific QTL clusters. Nat Commun. 
2016;7:10527. Wang S, Li S, Liu Q, Wu K, Zhang J, Wang S, et al. The OsSPL16-GW7 regulatory module determines grain shape and simultaneously improves rice yield and grain quality. Nat Genet. 2015;47:949–54. Lee SH, Bailey MA, Mian MA, Shipe ER, Ashley DA, Parrott WA, et al. Identification of quantitative trait loci for plant height, lodging, and maturity in a soybean population segregating for growth habit. Theor Appl Genet. 1996;92:516–23. Jiang Y, Cai Z, Xie W, Long T, Yu H, Zhang Q. Rice functional genomics research: progress and implications for crop genetic improvement. Biotechnol Adv. 2012;30:1059–70. Li N, Li Y. Signaling pathways of seed size control in plants. Curr Opin Plant Biol. 2016;33:23–32. Fehr WR, Caviness CE, Burmood DT, Pennington JS. Stage of development descriptions for soybeans, Glycine Max (L.) Merrill. Crop Sci. 1971;11:929–31. Tang Y, Liu XL, Wang JB, Li M, Wang QS, Tian F, et al. GAPIT Version 2: An enhanced integrated tool for genomic association and prediction. Plant Genome. 2016;9:1–9. James DW, Dooner HK. Isolation of EMS-induced mutants in Arabidopsis altered in seed fatty acid composition. Theor Appl Genet. 1990;80:241–5. Wittmann C. Fluxome analysis using GC-MS. Microb Cell Fact. 2007;6:6. Fiehn O, Kopka J, Trethewey RN, Willmitzer L. Identification of uncommon plant metabolites based on calculation of elemental compositions using gas chromatography and quadrupole mass spectrometry. Anal Chem. 2000;72:3573–80. Dodds ED, McCoy MR, Rea LD, Kennish JM. Gas chromatographic quantification of fatty acid methyl esters: flame ionization detection vs. electron impact mass spectrometry. Lipids. 2005;40:419–28. Kunst L, Taylor DC, Underhill EW. Fatty-acid elongation in developing seeds of Arabidopsis thaliana. Plant Physiol Bioch. 1992;30:425–34. Murray MG, Thompson WF. Rapid isolation of high molecular weight plant DNA. Nucleic Acids Res. 1980;8:4321–5. Schmutz J, Cannon SB, Schlueter J, Ma J, Mitros T, Nelson W, et al. Genome sequence of the palaeopolyploid soybean. Nature. 2010;463:178–83. Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25:1754–60. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009;25:2078–9. McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, et al. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20:1297–303. Wang K, Li M, Hakonarson H. ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Res. 2010;38:e164. Felsenstein J. PHYLIP-phylogeny inference package (version 3.2). Cladistics. 1989;5:164–6. Price AL, Patterson NJ, Plenge RM, Weinblatt ME, Shadick NA, Reich D, et al. Principal components analysis corrects for stratification in genome-wide association studies. Nat Genet. 2006;38:904–9. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MA, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81:559–75. Kang HM, Sul JH, Service SK, Zaitlen NA, Kong SY, Freimer NB, et al. Variance component model to account for sample structure in genome-wide association studies. Nat Genet. 2010;42:348–54. Liu XL, Huang M, Fan B, Buckler ES, Zhang ZW. Iterative Usage of fixed and random effect models for powerful and efficient genome-wide association Studies. PLoS Genet. 2016;12:e1005767. 
Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13:2498–504. We thank the Platform of National Crop Germplasm Resources of China, the USDA GRIN database, SoyBase, and Dr. Lijuan Qiu for providing publicly available resources. We thank Dr. Songnian Hu (Beijing Institute of Genomics, Chinese Academy of Sciences) for helping us to upload the sequencing data and Dr. Linda R. Klein for providing valuable writing advice and editing the manuscript. This work was supported by the "Strategic Priority Research Program" of the Chinese Academy of Sciences (Grant No. XDA08000000); the National Natural Science Foundation of China (Grant Nos. 31525018 and 91531304); the National Key Research and Development Program (2016YFD0100401, 2017YFD0101401); an Emerging Research Issues Internal Competitive Grant from the Agricultural Research Center in the College of Agricultural, Human, and the Natural Resource Sciences at Washington State University; the Washington Grain Commission (Endowment and Award No. 126593); and the National Institute of Food and Agriculture, U.S. Department of Agriculture (Award Nos. o2015-05798 and 2016-68004-24770). The sequencing data used in this study have been deposited into the Genome Sequence Archive (GSA) database in BIG Data Center (http://gsa.big.ac.cn/index.jsp) under Accession Number PRJCA000205. The previously reported sequence data were deposited into the NCBI database under accession number SRA: SRP045129 and the sequence data newly generated from this study are deposited into Sequence Read Archive (SRA) database in NCBI under Accession Number PRJNA394629. State Key Laboratory of Plant Cell and Chromosome Engineering, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, 100101, China Chao Fang, Yanming Ma, Zhi Liu, Zheng Wang, Rui Yang, Min Zhang, Yi Pan, Guoan Zhou, Yanting Shen, Shulin Liu, Tengfei Liu, Jixiang Zhang, Baoge Zhu & Zhixi Tian State Key Laboratory of Plant Genomics, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, 100101, China Shiwen Wu, Hong Yu, Hao Qin, Jia Yuan, Jiayang Li & Guodong Wang Institute of maize research, Heilongjiang Academy of Agricultural Sciences, Harbin, 150086, China Guanghui Hu Institute of Animal Science, Chinese Academy of Agricultural Sciences, Beijing, 100193, China Zhengkui Zhou Mudanjiang Branch of Heilongjiang Academy of Agricultural Sciences, Mudanjiang, 157041, China Haixiang Ren & Yanping Wang Institute of Soybean Research, Heilongjiang Academy of Agricultural Sciences, Harbin, 150086, China Weiguang Du & Dezhi Han Heihe Branch of Heilongjiang Academy of Agricultural Sciences, Heihe, 164300, China Hongrui Yan School of Computer Science and Technology, Wuhan University of Technology, Wuhan, 430070, China Xiaohui Yuan Key Laboratory of Soybean Molecular Design Breeding, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Harbin, 130102, China Fanjiang Kong & Baohui Liu Department of Crop and Soil Sciences, Washington State University, Pullman, WA, 99164, USA Zhiwu Zhang University of Chinese Academy of Sciences, Beijing, 100039, China Chao Fang, Zhi Liu, Yanting Shen, Shulin Liu, Tengfei Liu, Jixiang Zhang, Guodong Wang & Zhixi Tian Chao Fang Yanming Ma Shiwen Wu Zhi Liu Zheng Wang Rui Yang Min Zhang Yi Pan Guoan Zhou Haixiang Ren Weiguang Du Yanping Wang Dezhi Han Yanting Shen Shulin Liu Tengfei Liu 
Jixiang Zhang Hao Qin Jia Yuan Fanjiang Kong Baohui Liu Guodong Wang Baoge Zhu Zhixi Tian ZT designed the experiments and managed the project. CF, ZL, GH, HY, YS, TL, JZ, ZZhou, XY, FK, BL, JL, Z Zhang, and ZT performed the data analyses. YM, BZ, ZW, RY, MZ, YP, GZ, SL, H R, WD, HY, YW, and DH performed the phenotyping and prepared the DNA samples. SW, HQ, JY, and GW performed oil and protein sample preparation and GC-MS analysis. ZT, ZZhang, and CF wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Zhiwu Zhang or Guodong Wang or Baoge Zhu or Zhixi Tian. Additional file 1: Table S1. Summary of the 809 soybean accessions. (XLSX 55 kb) Whole-genome SNP and INDEL distribution. (PDF 133 kb) Primers used for SNP validation. (XLSX 11 kb) The SNPs getting from before imputation, after imputation, and Sanger sequencing validation. (XLSX 55 kb) Additional file 5: Figures S1-S98. with legends. GWAS results of individual traits and the correlation of different traits. (PDF 21,979 kb) (PDF 21766 kb) Information of 84 phenotype traits. (XLSX 15 kb) Permuted significant thresholds of representative traits. (XLSX 10 kb) Genome-wide association signals of 57 agronomic traits. (XLSX 30 kb) Genome-wide association signals of additional minor-effect loci. (XLSX 23 kb) Additional file 10: Table S9. Candidate causal genes in the fatty acid biosynthesis pathway. (XLSX 10 kb) Additional file 11: Table S10. Candidate causal genes in the lipid biosynthesis pathway. (XLSX 11 kb) Link loci across 51 traits. (XLSX 21 kb) Fang, C., Ma, Y., Wu, S. et al. Genome-wide association studies dissect the genetic networks underlying agronomical traits in soybean. Genome Biol 18, 161 (2017). https://doi.org/10.1186/s13059-017-1289-9
CommonCrawl
Ivanov, Alexandr Alexandrovich Statistics Math-Net.Ru Total publications: 28 Scientific articles: 22 Presentations: 1 This page: 2793 Abstract pages: 5082 Full texts: 1857 References: 67 Doctor of physico-mathematical sciences (1985) Speciality: 01.01.04 (Geometry and topology) Website: http://www.pdmi.ras.ru/~aaivanov Keywords: general topology; topological spaces and generalizations; topologies on one set; convergence; categorical methods; extensions of maps; function spaces; set-valued maps; extensions of spaces; sequential spaces; proximity structures; contiguity structures; uniform spaces; bitopological spaces; fixed-points. Scientific interests and principal results mainly concern problems of general topology. They contain creation (with V. M. Ivanova) the general theory of contiguity spaces, which quickly became classical, creation the general theory of extensions of topological spaces under minimal suppositions, the theory of topological type structures, the general theory of bitopological spaces, the theory of duality between topological structures on product $X\times Y$ and on function space $C(Y,Z)$. Outside these problems there are only some results on fixed-point theory of mappings of metric spaces and also the construction of nontrivial embeddings of cantor-discontinuum into euclidean spaces of dimension more than three. I was a student of Faculty of mathematics and mechanics of LGU from 1938 to 1947 with interruption due war from 1941 to 1945. Ph.D. thesis was defended in 1950. D.Sci. thesis was defended in 1985. From 1950 up to now-scientist of LOMI (POMI). Main publications: Ivanova V. M., Ivanov A. A. Prostranstva smezhnosti i bikompaktnye rasshireniya topologicheskikh prostranstv // Izv. akademii nauk SSSR, seriya matematicheskaya, 1959, 23, 613–634. Ivanov A. . Struktury rasshirenii // Zap.nauchn.semin. LOMI, 1973, t. 36, 50–125. Ivanov A. A. Nepodvizhnye tochki otobrazhenii metricheskikh prostranstv // Zap.nauchn.semin. LOMI, 1976, t. 66, 5–102. Ivanov A. A. Problematika teorii bitopologicheskikh prostranstv I, II, III // Zap.nauchn.semin. LOMI (POMI), 1988, 1993, 1995, t. 167, 208, 231, s. 5–62, 5–66, 9–54. Ivanov A. A. Bitopologicheskie prostranstva // Zap.nauchn.semin. POMI, 1997, t. 242, 7–216. http://www.mathnet.ru/eng/person17968 List of publications on Google Scholar https://zbmath.org/authors/?q=ai:ivanov.alexandr-a https://mathscinet.ams.org/mathscinet/MRAuthorID/214788 Publications in Math-Net.Ru 1. A. A. Ivanov, "Weakly metric spaces", Zap. Nauchn. Sem. POMI, 352 (2008), 94–105 ; J. Math. Sci. (N. Y.), 153:1 (2008), 38–42 2. A. A. Ivanov, "Space structures, their theory, and applications. 3", Zap. Nauchn. Sem. POMI, 352 (2008), 7–93 ; J. Math. Sci. (N. Y.), 153:1 (2008), 1–37 3. A. A. Ivanov, "Space structures, their theory and application. 2", Zap. Nauchn. Sem. POMI, 313 (2004), 5–130 ; J. Math. Sci. (N. Y.), 133:5 (2006), 1543–1598 4. A. A. Ivanov, "Space structures, their theory and application", Zap. Nauchn. Sem. POMI, 287 (2002), 5–226 ; J. Math. Sci. (N. Y.), 125:1 (2005), 1–97 5. A. A. Ivanov, "Metric axioms of Euclidean space", Zap. Nauchn. Sem. POMI, 279 (2001), 89–110 ; J. Math. Sci. (N. Y.), 119:1 (2004), 45–54 6. A. A. Ivanov, "Bitopologies of products and ratios", Fundam. Prikl. Mat., 4:1 (1998), 119–125 7. A. A. Ivanov, "Bitopologies of products and ratios", Zap. Nauchn. Sem. POMI, 242 (1997), 217–229 ; J. Math. Sci. (New York), 98:5 (2000), 617–623 8. A. A. Ivanov, "Bitopological spaces", Zap. Nauchn. Sem. POMI, 242 (1997), 7–216 ; J. Math. 
Sci. (New York), 98:5 (2000), 509–616 9. A. A. Ivanov, "Problems of the theory of bitopological spaces. 3", Zap. Nauchn. Sem. POMI, 231 (1995), 9–54 ; J. Math. Sci. (New York), 91:6 (1998), 3339–3364 10. A. A. Ivanov, "Problems of the theory of bitopological spaces. 2", Zap. Nauchn. Sem. POMI, 208 (1993), 5–67 ; J. Math. Sci., 81:2 (1996), 2465–2496 11. A. A. Ivanov, "On the problem of spacelike properties of continuous mappings", Zap. Nauchn. Sem. LOMI, 193 (1991), 64–71 12. A. A. Ivanov, "Problems of the theory of bitopological spaces", Zap. Nauchn. Sem. LOMI, 167 (1988), 5–62 13. A. A. Ivanov, "Bicompact manifolds", Zap. Nauchn. Sem. LOMI, 143 (1985), 26–68 ; J. Soviet Math., 37:3 (1987), 1063–1089 14. A. A. Ivanov, "Toward a general theory of uniform spaces", Zap. Nauchn. Sem. LOMI, 143 (1985), 7–18 ; J. Soviet Math., 37:3 (1987), 1053–1059 15. A. A. Ivanov, "Bitopological spaces", Zap. Nauchn. Sem. LOMI, 122 (1982), 30–55 16. A. A. Ivanov, "Structures of topological type", Uspekhi Mat. Nauk, 34:6(210) (1979), 44–50 ; Russian Math. Surveys, 34:6 (1979), 49–56 17. A. A. Ivanov, "Structures of topological type", Zap. Nauchn. Sem. LOMI, 83 (1979), 5–62 ; J. Soviet Math., 19:3 (1982), 1207–1249 18. A. A. Ivanov, "Fixed points of mappings of metric spaces", Zap. Nauchn. Sem. LOMI, 66 (1976), 5–102 ; J. Soviet Math., 12:1 (1979), 1–64 19. A. A. Ivanov, "Enrichments", Zap. Nauchn. Sem. LOMI, 36 (1973), 126–133 20. A. A. Ivanov, "Extension structures", Zap. Nauchn. Sem. LOMI, 36 (1973), 50–125 21. A. A. Ivanov, "Reducibility of uniform structures", Dokl. Akad. Nauk SSSR, 205:6 (1972), 1284–1285 22. V. M. Ivanova, A. A. Ivanov, "Contiguity spaces and bicompact extensions of topological spaces", Izv. Akad. Nauk SSSR Ser. Mat., 23:4 (1959), 613–634 23. M. A. Vsemirnov, E. A. Hirsch, D. Yu. Grigor'ev, G. V. Davydov, E. Ya. Dantsin, A. A. Ivanov, B. Yu. Konev, V. A. Lifshits, Yu. V. Matiyasevich, G. E. Mints, V. P. Orevkov, A. O. Slisenko, "Nikolai Aleksandrovich Shanin (on his 80th birthday)", Uspekhi Mat. Nauk, 56:3(339) (2001), 181–184 ; Russian Math. Surveys, 56:3 (2001), 601–605 24. A. A. Ivanov, "Bibliography on bitopological spaces. 3", Zap. Nauchn. Sem. POMI, 231 (1995), 55–61 ; J. Math. Sci. (New York), 91:6 (1998), 3365–3369 25. A. A. Ivanov, "Editorial preface", Zap. Nauchn. Sem. POMI, 231 (1995), 7–8 26. A. A. Ivanov, "Bibliography on bitopological spaces. 2", Zap. Nauchn. Sem. POMI, 208 (1993), 68–81 ; J. Math. Sci., 81:2 (1996), 2497–2505 27. A. A. Ivanov, "Bibliography on bitopological spaces", Zap. Nauchn. Sem. LOMI, 167 (1988), 63–78 28. A. A. Ivanov, "Preface", Zap. Nauchn. Sem. LOMI, 36 (1973), 4–5 Presentations in Math-Net.Ru 1. Topologies on products and function spaces A. A. Ivanov General Mathematics Seminar of the St. Petersburg Division of Steklov Institute of Mathematics, Russian Academy of Sciences St. Petersburg Department of Steklov Mathematical Institute of Russian Academy of Sciences
CommonCrawl
Changes in inequality in utilization of preventive care services: evidence on China's 2009 and 2015 health system reform Yongjian Xu1, Tao Zhang ORCID: orcid.org/0000-0003-0146-53712 & Duolao Wang3
Ensuring equal access to preventive care has always been a priority in health systems throughout the world. This study aimed to decompose inequality in the utilization of preventive care services into its contributing factors and then explore its changes over the period of China's 2009–2015 health system reform. The concentration index (CI) and its decomposition were used to capture income-related inequalities in preventive services utilization and to identify the contributions of various determinants to such inequality, using data from the China Health and Nutrition Survey. Then, changes in inequality from 2009 to 2015 were estimated using an Oaxaca-type decomposition technique. The CI for preventive services utilization dropped from 0.2240 in 2009 to 0.1825 in 2015. Residential location and household income made the biggest contributions to income-related inequalities in these two years. The Oaxaca decomposition revealed that changes in residential location, region and medical insurance made positive contributions to the decline in inequality. However, changes in household income, age and medical services utilization pushed equality toward deterioration. Pro-rich inequality in preventive healthcare services usage is evident in China despite a certain decline in such inequality during the observation period. Policy actions aimed at eliminating urban-rural and income disparities should be given priority to equalize preventive healthcare.
Preventive healthcare is widely recognized as one of the most cost-effective health services, as it helps find and address health issues before people have any symptoms [1, 2]. For example, obtaining timely screening tests for certain cancers may mean diagnosis and treatment at an early stage of the disease, thereby reducing the patient's economic risk of disease, especially for the poor. Empirical evidence from previous studies revealed that investment in preventive health services can significantly reduce treatment costs and rescue costs [2, 3]. Moreover, receiving regular preventive care was found to reduce premature mortality and improve quality of life [4, 5]. Therefore, an uneven distribution of preventive healthcare services may result in growing inequalities in the economic burden of disease and in health between the poor and the rich. The WHO has identified equal access to prevention as a public health priority in the "Health for All" Agenda [6]. Simultaneously, ensuring an even distribution of preventive care is also an important task in realizing the Sustainable Development Goal "promote well-being for all at all ages" announced by the United Nations. Many countries have realized the importance of prevention and adopted targeted measures to ensure equitable access to preventive healthcare [7, 8]. Although China has witnessed rapid economic growth over the past decades, population ageing poses a great challenge for the entire society. In 2016, there were 230 million elderly people aged over 60, accounting for 16.7% of the total population, and the proportion of elderly people with chronic diseases was over 65% [9]. Therefore, preventive healthcare is especially important under such circumstances in China. 
Indeed, a consensus on the need to shift the focus from disease-oriented care to wellness and prevention was reached, and "equalization of public health services" has become one of the major health-care policies in China [10, 11]. In order to ensure equal access to preventive care, the Chinese government has made great efforts since the launch of the new round of healthcare system reform in 2009. Universal health coverage in China is a nonnegligible accomplishment. Changes in health insurance coverage from approximately 56% in urban areas and around 21% in rural regions in 2003 to almost 95% in 2011 improved access to medical services as well as preventive care [12, 13]. Additionally, the National Essential Public Health Services Package (NEPHSP) was implemented to provide free public health services for urban and rural residents; for instance, vaccination for children aged 0–6 and health management for patients with chronic diseases were provided [14]. Moreover, the Chinese government invested heavily to reduce financial barriers to preventive health services delivery. For example, the funding subsidy for basic public health services increased from 15 Chinese yuan per capita in 2009 to 55 Chinese yuan per capita in 2018 [15]. Although these health reforms showed optimistic signs for preventive health care, challenges still persist. Unequal access to preventive health care is one of the important issues to be identified and addressed. Liu and colleagues observed that there was a disparity between urban and rural areas in the utilization of preventive care services after China's health reform, and that income and education made a major contribution to this disparity [16]. Also, a study by Huang et al. indicated that the level of preventive care usage was low among those who had a low income, lacked tertiary education, or lived in a less affluent region [10]. In addition, a social gap in access to basic preventive care was found to exist before and after the 2009 health reform [13]. These studies provided some evidence on inequalities in preventive health services, but gaps in the literature remain. Firstly, there is scant literature examining socioeconomic-related inequalities in access to preventive healthcare using summary measures such as the concentration index (CI) and the horizontal inequity index (HI). Secondly, little is known about changes in inequalities in preventive health services utilization and their associated contributors over the period of China's health system reform. In this context, the present study aims to answer the following two questions: 1) Has inequality in the utilization of preventive care services changed during the reform of China's health care system? 2) What were the associated factors contributing to such change? In order to examine the change in inequality in preventive care usage over the period of China's health system reform, the data sets used in this study were drawn from the China Health and Nutrition Survey (CHNS) 2009 and 2015 waves. The CHNS is an ongoing national longitudinal study of nutrition, health insurance coverage, the healthcare system, health behavior, and social and economic transition in Chinese society; surveys began in 1989, with subsequent waves every 2 to 4 years, for a total of 10 rounds between 1989 and 2015 [10]. The survey areas originally covered nine provinces: Liaoning, Heilongjiang, Jiangsu, Shandong, Henan, Hubei, Hunan, Guangxi and Guizhou. In the 2011 wave, Beijing, Shanghai and Chongqing were added. 
In each participating province or autonomous mega-city, a multistage random cluster process was used to select representative households and individuals [17]. All information was collected through face-to-face interviews. Details of the CHNS study protocol were published elsewhere [18, 19]. A total of 11,296 and 12,567 individuals participated in the survey in 2009 and 2015, respectively. After excluding records with missing key variables or logically inconsistent answers, 8574 respondents in 2009 and 9514 in 2015 were included in this study.
Outcome variables
Preventive healthcare utilization was measured by asking respondents "During the past 4 weeks, did you receive any preventive health service?". The services in the questionnaire comprise health examinations, eye examinations, blood tests, blood-pressure screening, tumor screening, prenatal and postnatal examinations, and any other type of preventive examination [10, 13, 16]. If the respondent used one of these preventive services, the value was set to 1; otherwise, the value was 0. Following Andersen's behavioral model [20], the independent variables selected in the present study were divided into three categories: predisposing, enabling and need determinants. We classified gender, age and marital status as predisposing variables to reflect the individuals' propensity to use health services. Enabling factors included education, employment status, medical insurance, annual household income per capita, residential location (urban/rural), region (east/central/west), and family size [13]. These variables represent financing and organizational conditions facilitating access to services. Need factors represent potential needs for health services. We used the questions "Have you ever been sick in the past 4 weeks" and "Have you ever received formal medical care in the past 4 weeks" to assess respondents' needs (Table 1).
Table 1 Characteristics of study participants
Measuring inequality
The CI, a widely accepted index, was used to depict inequalities in the distribution of preventive healthcare. It quantifies the degree of income-related inequality and ranges from −1 to 1. A negative value indicates a pro-poor effect, with services being more concentrated among the poor, and vice versa. A zero value represents an absence of inequality [21]. The CI formula is as follows: $$ \mathrm{C}=\frac{2}{\mu } COV\left(y,\gamma \right) $$ where C is defined in terms of the covariance between the outcome variable (y) and the fractional ranks of household income (γ), and μ is the mean of y.
Decomposing inequality
In order to analyze the contributions of the independent variables to the inequalities, we also followed the method proposed by Wagstaff et al. to decompose the CI [22, 23]. First, a regression model for the outcome variable (y) was established: $$ {y}_i={a}^m+{\sum}_k{\beta}_k^m{x}_{ki}+{\mu}_i $$ where \( {\beta}_k^m \) is the marginal effect (dy/dx) of each x and μi indicates the error term. Then, the concentration index for y can be written as: $$ \mathrm{C}={\sum}_k\left({\beta}_k\overline{x_k}/\mu \right){c}_k+{GC}_{\varepsilon }/\mu $$ where βk is the marginal effect of xk; \( \overline{x_k} \) and ck are the mean and the concentration index of xk; μ is the mean of y; and GCε is the generalized concentration index for ε. This equation shows that the total concentration index consists of two components: an explained component and a residual component. 
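As an illustration of the CI defined above, the following short Python sketch computes the index from its covariance form; it is a minimal illustration with hypothetical variable names, not the authors' statistical software implementation.

```python
# A minimal sketch (not the authors' implementation) of the concentration index
# C = (2/mu) * cov(y, r), where r is the fractional rank in the income distribution.
# `outcome` and `income` are hypothetical arrays of equal length.
import numpy as np

def fractional_rank(income: np.ndarray) -> np.ndarray:
    """Fractional rank (between 0 and 1) of each observation by income."""
    ranks = income.argsort().argsort()          # 0-based ranks from poorest to richest
    return (ranks + 0.5) / income.size

def concentration_index(outcome: np.ndarray, income: np.ndarray) -> float:
    r = fractional_rank(income)
    mu = outcome.mean()
    cov = np.mean((outcome - mu) * (r - r.mean()))   # population covariance
    return 2.0 * cov / mu

# Hypothetical usage: a binary indicator of preventive care use that rises with income
# should yield a positive (pro-rich) CI.
# rng = np.random.default_rng(0)
# income = rng.lognormal(mean=9.0, sigma=0.7, size=5000)
# use = rng.binomial(1, 0.05 + 0.10 * (income > np.median(income)))
# print(concentration_index(use.astype(float), income))
```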
The first component contains two elements: 1) Elasticity \( {\beta}_k\overline{x_k}/\mu \) as a unit-free measure of association that indicates the amount of change in dependent variable associated with one-unit change in explanatory variable. 2) ck is the normalized CI of K variable. GCε/μ represents the unexplained component which cannot be described by systematic variation in the determinants across economic groups. Decomposing changes in inequality At the final stage, we used Oaxaca-type decomposition to determine the extent to which change in inequality in preventive healthcare usage between 2009 and 2015 was owing to changes in inequality in the determinants [23,24,25]. The decomposition formula is as follows: $$ \Delta \mathrm{C}={\sum}_k{\eta}_{kt}\left({c}_{kt}-{c}_{kt-1}\right)+{\sum}_k{c}_{kt-1}\left({\eta}_{kt}-{\eta}_{kt-1}\right)+\Delta \left(\raisebox{1ex}{${GC}_{\varepsilon t}$}\!\left/ \!\raisebox{-1ex}{${\mu}_t$}\right.\right) $$ Where ηkt and ηkt − 1 represent the elasticities of explanatory variables in terms of preventive health services usage in 2009 and 2015, respectively. Accordingly, ckt and ckt − 1 are the normalized CIs of explanatory variables in these two years, respectively. All data management and statistical analysis were performed on STATA 14.0. Characteristics of study participants Table 1 provided descriptive statistics for key variables in 2009 and 2015. A slight rise in proportion of respondents who used preventive health services over past 4 weeks was observed during this period. Roughly equal proportion of men and women were presented in the sample in these two wave surveys. More than 80% participants got married. Mean age was between 48 and 50. Most of participants completed high school. More than half of respondents reported they were employed. The medical insurance coverage increased from 90.8% in 2009 to 97.5% in 2015. Due to a rapid growth in Chinese economy, annual household income per capita was doubled during this period. Most of people resided in rural and eastern provinces. Those people who reported suffering from illnesses or receiving formal medical care accounted for a small portion. Decomposition of inequality in utilization of preventive healthcare services A positive CI value for preventive healthcare utilization was found in both 2009 (CI = 0.2240) and 2015 (CI = 0.1825), indicating a pro-rich effect (p < 0.05). In other word, the rich people were more likely to use preventive health services frequently than their poor counterparts. The results from decomposition of inequalities in access to preventive healthcare in 2009 and 2015 were reported in Table 2. Overall, those residing in rural (25.99% in 2009; 13.55% in 2015) made a major contribution to the pro-rich distribution of preventive care in two rounds of investigation, despite a decline appeared in the second wave investigation (Fig. 1). It means that rural residents are less likely to use preventive health services. Additionally, the educational level was also an important contributor for such inequality, especially in respondents completed technical school or college. Table 2 Decomposition of concentration index of preventive health services utilization percentage contribution of determinants to CI of preventive health services utilization Compared with respondents from eastern region, those who were from central and western provinces had a smaller probability to access to preventive care in 2009. However, this disparity was narrowed in 2015. 
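Before turning to the determinant-specific results below, the Oaxaca-type decomposition defined in the Methods can be made concrete with a small numerical sketch; the function mirrors the per-determinant terms of the formula above, and the numbers in the example are hypothetical rather than estimates from this study.

```python
# Hedged sketch of the per-determinant terms of the Oaxaca-type decomposition:
# contribution_k = eta_t * (c_t - c_prev) + c_prev * (eta_t - eta_prev),
# i.e., a part due to the changing CI of the determinant and a part due to its
# changing elasticity. All input values below are hypothetical.
def oaxaca_contribution(eta_t: float, eta_prev: float, c_t: float, c_prev: float):
    due_to_ci = eta_t * (c_t - c_prev)                 # change in the determinant's CI
    due_to_elasticity = c_prev * (eta_t - eta_prev)    # change in its elasticity
    return due_to_ci, due_to_elasticity, due_to_ci + due_to_elasticity

# Hypothetical example for a single determinant (e.g., rural residence):
ci_part, elas_part, total = oaxaca_contribution(eta_t=-0.15, eta_prev=-0.30,
                                                c_t=-0.20, c_prev=-0.25)
print(ci_part, elas_part, total)
```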
Notably, percentage contribution from annual household income per capita to the uneven distribution of preventive healthcare raised from − 1.43 to 14.86% during the observation period. It indicates that the change in household income worsened such pro-rich inequality considerably. Except the above, other factors, such as age (5.11%), medical services (6.01%) and employment status (− 2.69%), also showed a substantial contribution to the observed pro-rich inequality in 2015, even though contribution importance of these variables were smaller in 2009. Furthermore, participants covered by medical insurance contributed 3.26% to the increase in such inequality in 2009, but decreased to 0.03% in 2015. This change implied that the universal health coverage made a difference in reducing inequality in preventive healthcare. For remaining variables such as gender, marital status, the percentage contribution was very small during these two periods. Decomposing changes in inequality in utilization of preventive healthcare services between 2009 and 2015 As shown above, CI of preventive services utilization reduced by 0.0415 (18.5%) from 2009 to 2015. Then, this reduction was decomposed to seek contributing factors following by Oaxaca-type decomposition. The results were presented in Table 3 and Fig. 2. Table 3 Oaxaca-type decomposition for changes in inequality in preventive health services utilization, 2009–2015 contribution from changes in independent variables to changes in CI of preventive survives utilization Overall, changed CI and elasticities of all independent variables contributed differently to the reduction in inequalities in preventive services utilization. Region (72.39%) and residential location (80.69%) accounted for the largest contributions to the observed decrease in inequality, which mainly due to changes of these two variables in elasticity rather than the unequal distribution. It can be inferred that effects of urban-rural and regional disparities on inequality of preventive care decreased significantly. Additionally, changes in medical insurance coverage (17.48%), employment status (9.28%) and marital status (7.79%) can explain the reduction of CI to some extent. However, annual household income per capita (− 73.07%) was found to become the biggest contributor for the increase in CI of preventive care. In other word, a widening income gap further worsened uneven distribution of preventive services. Also, changes in age (− 24.01%) and medical services utilization (− 25.34%) pushed such inequality into the deterioration. Interestingly, we observed that contribution from the change in technical school and college was opposite, which resulted in total effect for education as an offset. This study sheds some insights into the changes in income-related inequalities in preventive care services usage from 2009 to 2015 in China. The main findings were as following: 1) the pro-rich inequality in preventive health services utilization existed in both periods, but such inequality decreased by 18.5% over time. 2) The change in inequality attributed to the alteration in the interaction among the related determinants. Overall, the finding showed an encouraging sign due to the decline in inequality of preventive health services utilization despite a pro-rich inequality still persisted. In recent years, Chinese governments made great efforts to equalize basic public health services after the new round of health care system reform [14, 26, 27]. 
The establishment of a three-level preventive healthcare service network in rural areas and the provision of physical examinations for the elderly free of charge are such examples [28]. Naturally, these initiatives are supposed to help reduce uneven distribution of preventive care. Similar to other studies in the field of medical services [29, 30], the significant unequal utilization of preventive care services between rural and urban was observed though decomposition of CI, whereas the substantial decline in such inequality in 2015 was found. Also, region was seen as a vital contributor of the observed inequality. For a long time, the China's rural-urban and regional disparity in economic level caused many problems in the field of healthcare, such as distribution of health services [31, 32]. According to the National Health Statistics Yearbook in 2016, the number of health technician per 1000 persons reached 11.1 in the eastern urban areas, while 3.7 in western rural areas [33]. Predictably, such serious shortage of health workforce in rural and western regions limits preventive health services delivery and usage largely. Additionally, household income was identified as the biggest contributor to unequal access to preventive healthcare in 2015. Actually, evidences from previous studies proved income was associated with health services utilization including preventive health services because a high income means a high payment capacity for healthcare [10, 11, 34]. In line with another study [13], educational level also can help to explain the uneven distribution of preventive healthcare over this period. Generally, those with high educational attainment have more knowledge about disease prevention and a higher level of awareness for the needs of preventive care [35], thereby driving preventive services utilization. Oaxaca decomposition revealed that the reduction in inequality arose from the alteration in the interaction among the related determinants. Changes in residential location, regions and medical insurance were observed to have made a major contribution to reduction of inequality. This finding mainly linked to fact that Chinese governments strived to establish the public health system covering rural and urban residents and universal health insurance system since 2009 health system reform. For example, expansion of basic medical insurance coverage reduced a financial burden in seeking health services, especially for the poor [36]. Additionally, other studies also elaborated that plenty of funds as well as health resources were inputted into rural and undeveloped areas, which helped bridge urban-rural and regional disparities in distribution of preventive care [30, 31]. However, we observed that alterations in household income, age and medical services use pushed the inequality in preventive health services usage towards deterioration. A possible explanation is related to the population ageing and widening income gap in China. Previous studies showed low-income families tended to spend a large proportion of disposable income on basic living needs rarely involved in preventive health services [37]. Accordingly, inequality of income in China was continuing to increase over the past few decades [38], ultimately expanding the gap in purchasing healthcare services between the rich and poor. Moreover, the rapid growth in older population in China resulted in an increased chronic illness and disability, and accordingly higher needs for healthcare [39]. 
Therefore, more preventive care services were biased toward those people. Also, the results indicated that changes in those used medical services over past 4 weeks made a negative contribution to the reduction in such inequality. It was presumed that this group of people seemed to have high needs for health services. Simultaneously, the evidence that improved supply capacity of public health services over such period reduced barrier to seek preventive care [26, 28] can help to explain why those people used more preventive services in comparison to the past. Several limitations in this study should be mentioned. Firstly, using single one variable is limited to assessed preventive health services utilization due to non-availability of other variables in CHNS. Secondly, independent variables used in decomposition of inequality mainly contained characteristics of respondents, rarely involved in supply-side factors affecting preventive services use, such as the distribution density of health workers, price of preventive healthcare. Thirdly, only 7 years were observed since 2009 because CHNS data is currently updated to 2015. Therefore, further study should be focused on changes in inequality over a longer period after China's health system reform if data is available. Finally, causal interpretations should be made with caution since data were drawn from a cross-sectional study. In spite of these shortcomings, we have extended current research using a national representative sample and a frequently used methodology to measure inequality in preventive services utilization. Additionally, this study also provided a deep understanding on the change in uneven distribution of preventive care during the new round of health care reform in China. Overall, preventive healthcare is in favor of the rich in China in spite of a certain degree of decline in such inequality from 2009 to 2015. The Oaxaca decomposition analysis suggested that the reduction in pro-rich inequality mainly attributed to the narrowed urban-rural and regional disparities in terms of healthcare delivery. However, a widening income gap further worsened inequality in preventive healthcare during such period. Policies should still promote balanced development among different regions, and emphasize on eliminating the gaps between rich and poor. In addition, universal health insurance system also should be designed to cover basic preventive health services for all people. The data used in this study can be found at https://www.cpc.unc.edu/projects/china. CHNS: China Health and Nutrition Survey Concentration index NEPHSP: National Essential Public Health Services Package Liu Q, Cai H, Yang LH, Xiang YB, Yang G, Li H, Gao YT, Zheng W, Susser E, Shu XO. Depressive symptoms and their association with social determinants and chronic diseases in middle-aged and elderly Chinese people. Sci Rep. 2018;8:3841. Yan G, Akira B, Takumi N, Toshiki M, Dulamsuren L. Could investment in preventive health care services reduce health care costs among those insured with health insurance societies in Japan? Popul Health Manage. 2014;17:42–7. Maciosek MV, Coffield AB, Flottemesch TJ, Edwards NM, Solberg LI. Greater use of preventive services in U.S. health care could save lives at little or no cost. Health Aff. 2010;29:1656–60. Shirley M, Andrea K, Kubica MA, Sara W, Kevin H. Personalized preventive care reduces healthcare expenditures among Medicare advantage beneficiaries. Am J Manag Care. 2014;20:613–20. 
Quantifying the shift in social contact patterns in response to non-pharmaceutical interventions

Zachary McCarthy1,2, Yanyu Xiao3, Francesca Scarabel1,2,4, Biao Tang1,2, Nicola Luigi Bragazzi1,2, Kyeongah Nah1,2, Jane M. Heffernan5, Ali Asgary6, V. Kumar Murty7,8, Nicholas H. Ogden9 & Jianhong Wu ORCID: orcid.org/0000-0003-0376-325X1,2

Journal of Mathematics in Industry volume 10, Article number: 28 (2020)

Social contact mixing plays a critical role in influencing the transmission routes of infectious diseases. Moreover, quantifying social contact mixing patterns and their variations in a rapidly evolving pandemic intervened by changing public health measures is key for retroactive evaluation and proactive assessment of the effectiveness of different age- and setting-specific interventions. Contact mixing patterns have been used to inform COVID-19 pandemic public health decision-making; but a rigorously justified methodology to identify setting-specific contact mixing patterns and their variations in a rapidly developing pandemic, which can be informed by readily available data, is in great demand and has not yet been established.

Here we fill in this critical gap by developing and utilizing a novel methodology, integrating social contact patterns derived from empirical data with a disease transmission model, that enables the usage of age-stratified incidence data to infer age-specific susceptibility, daily contact mixing patterns in workplace, household, school and community settings, and transmission acquired in these settings under different physical distancing measures. We demonstrated the utility of this methodology by performing an analysis of the COVID-19 epidemic in Ontario, Canada.

We quantified the age- and setting (household, workplace, community, and school)-specific mixing patterns and their evolution during the escalation of public health interventions in Ontario, Canada. We estimated a reduction in the average individual contact rate from 12.27 to 6.58 contacts per day, with an increase in household contacts, following the implementation of control measures. We also estimated increasing trends by age in both the susceptibility to infection by SARS-CoV-2 and the proportion of symptomatic individuals diagnosed. Inferring the age- and setting-specific social contact mixing and key age-stratified epidemiological parameters, in the presence of evolving control measures, is critical to inform decision- and policy-making for the current COVID-19 pandemic.

In response to the current COVID-19 pandemic, interventions aimed at controlling local transmission such as school and non-essential business closures, physical distancing, contact tracing, enhanced surveillance and diagnostic testing have been adopted throughout many nations of the world [1]. The efficacy of these measures and their influence on the trajectory of local epidemics has been quantified in a series of mechanistic modelling studies [2–5], as well as in systematic reviews and meta-analyses [6–8]. Additionally, key factors associated with demographic heterogeneities such as age-dependent social contact mixing and susceptibility to infection and their implications on transmission patterns of COVID-19 have been explored in prior works [9–11].
While there has been increasing utilization of age- and setting-specific contact mixing patterns to inform COVID-19 pandemic public health decision-making, rigorously quantifying their variations during a pandemic intervened by changing public health measures presents a significant challenge in the absence of time and resources to conduct a high-quality contact survey (e.g., as in prior work [11–13]). Moreover, a methodology for identifying such age- and setting (workplace, household, school and community)-specific contact mixing patterns using readily available data is in great demand and has yet to be established. Such contact mixing patterns are key for the retroactive evaluation and proactive assessment of the effectiveness of different age- and setting-specific interventions. Further, a comprehensive modelling approach that integrates key heterogeneities by age/setting and a generalized intervention package accounting for evolving non-pharmaceutical interventions, diagnostic testing, contact tracing, and case isolation may be utilized for a broad spectrum of risk assessment, preparedness planning, reopening measures, scenario analysis and intervention evaluation. Understanding age- and setting (workplace, household, school and community)-specific transmission is fundamental for retrospectively assessing the effects of non-pharmaceutical interventions on transmission, and essential for planning (smart) relaxation of measures while protecting the most vulnerable populations. The interruption of contact routes (through interventions such as physical distancing, the closing of schools, workplaces and community gathering places) naturally shifts contact patterns among different settings. We may thus expect to observe an increase in transmission in some settings. Understanding these changes is critical to avoid unwanted increases in transmission amongst vulnerable portions of the population. Since many of the non-pharmaceutical intervention measures taken to counteract the spread of COVID-19 are unprecedented and highly disruptive to personal life, their effects are not completely understood and largely depend on the adherence of individuals and their behavior. Hence, retrospectively assessing the consequences of interventions provides important tools to evaluate the effectiveness of such measures, and to prospectively inform the expected outcomes of relaxations [11, 14] and possible reintroduction of intervention measures in the event of resurgence. In addition to contact mixing patterns, recent attention has been given to age-specific epidemiological and clinical parameters for COVID-19 [9, 10, 15], as they allow one to assess which portions of the population may be key drivers of the epidemic or which portion may be most vulnerable to infection. For instance, children and adolescents are likely key contributors to the spread of respiratory infections, as they tend to mix assortatively and have relatively high contact rates [16]. Consequently, school closure is one of the first non-pharmaceutical interventions considered to mitigate the spread of an emerging respiratory infection. However, if children and adolescents were demonstrated to have low transmissibility and/or susceptibility, their contribution to infection could be minor despite higher contact rates and highly assortative mixing patterns. 
In this light, understanding the age-specific contribution to infection in terms of transmissibility and susceptibility is key for planning interventions [9] and designing effective vaccination strategies and other pharmaceutical interventions. Here we propose a general methodology to investigate the age- and setting-specific contribution to the transmission of an infectious disease and illustrate the approach by taking the COVID-19 epidemic in Ontario, Canada as a case. We develop and utilize a suitable transformation accounting for the specific demographic profile in Ontario so that for a given choice of age group divisions, age-specific contact patterns can be constructed from seminal works on contact mixing [16, 17]. The transmission model we propose accounts for two key control measures for communicable diseases, diagnosis of cases as a result of symptoms, and isolation through contact tracing [2–5]. By fitting the model output to age-stratified incidence data, we inform critical parameters including the age-stratified susceptibility to infection. Finally, by incorporating information about the features and timing of non-pharmaceutical interventions and the consequent shifts in contact patterns, the fitting procedure allows us to quantify such changes and retrospectively inform the effect of intervention measures on the social contact patterns and the setting of transmission events. Specifically, in Ontario we consider four key periods of control measure escalation which we denote by distinct phases: phase 0, monitoring and international travel advisories (until March 13); phase 1, public school closure (March 14–17); phase 2, state of emergency declaration and physical distancing advisories (March 18–23); phase 3, closure of non-essential workplaces (March 24–May 16). We explore the robustness of our estimates using several layers of uncertainty analysis. Transmission model We extended the COVID-19 transmission dynamics model introduced in prior studies [2–5] to include age structure. The model captures essential epidemic features and key public health interventions. The population is divided into susceptible (S), exposed (E), asymptomatic infectious (A), infectious with symptoms (I), and recovered (R) compartments according to the epidemiological status of individuals, and further into diagnosed and isolated (D), quarantined susceptible (\(S_{q}\)), and isolated exposed (\(E_{q}\)) compartments based on control interventions involving testing, contact tracing, quarantine and isolation. In particular, the model accounts for contact tracing, where a proportion, q, of individuals exposed to the virus are quarantined. The quarantined individuals can either move to the compartment \(E_{q}\) or \(S_{q}\), depending on whether they are effectively infected or not, while the other proportion, \(1 - q\), consists of individuals exposed to the virus who are missed from contact tracing and, therefore, move to the exposed compartment E once effectively infected, or stay in the compartment S otherwise. Furthermore, the population is stratified into n age groups, where the social interactions between age groups are described via a contact matrix, C, which incorporates information about age-specific contacts in different settings, including households, schools, workplaces and the general community. We assumed that the susceptibility and diagnosis rates depend on the specific age group, whereas all remaining parameters are constant across age groups. 
The model reads
$$ \begin{gathered} S'_{i} = - \sum _{j=1}^{n} \bigl( \beta _{i} C_{ij} +q (1- \beta _{i} ) C_{ij} \bigr) S_{i} ( I_{j} + \theta A_{j} )/ N_{j} + \lambda S_{qi}, \\ E'_{i} = \sum_{j=1}^{n} \beta _{i} C_{ij} (1-q )S_{i} ( I_{j} + \theta A_{j} )/ N_{j} -\sigma E_{i}, \\ I'_{i} = \sigma \varrho E_{i} - ( \delta _{Ii} + \gamma _{I} ) I_{i}, \\ A'_{i} = \sigma ( 1 - \varrho ) E_{i} - \gamma _{A} A_{i}, \\ S'_{qi} = \sum_{j=1}^{n} (1- \beta _{i} ) C_{ij} q S_{i} ( I_{j} + \theta A_{j} )/ N_{j} -\lambda S_{qi}, \\ E'_{qi} = \sum_{j=1}^{n} \beta _{i} C_{ij} q S_{i} ( I_{j} + \theta A_{j} )/ N_{j} - \delta _{q} E_{qi}, \\ D'_{i} = \delta _{Ii} I_{i} + \delta _{q} E_{qi} - ( \alpha + \gamma _{D} ) D_{i}, \\ R'_{i} = \gamma _{I} I_{i} + \gamma _{A} A_{i} + \gamma _{D} D_{i} \end{gathered} $$
for each age group \(i=1,\ldots,n\), where \(N_{i}\) (\(N_{j}\)) denotes the total population in age group i (j). Additionally, we allowed several parameters to be time-dependent during phase 3, to account for a gradual adaptation of the society to the stricter physical distancing measures. The transmission dynamics in an age-stratified population is illustrated in Fig. 1 with all model compartments and parameters defined in Tables 1 and 2, respectively. The model parameter definitions are also provided in the subsequent section.

Flowchart of the transmission model. Schematic diagram of the transmission model accounting for a generalized package of control measures. For the construction of the mathematical model, see Methods.

Table 1 List of compartments in the transmission model for COVID-19 in Ontario, Canada

Table 2 List of parameters used in the transmission model for COVID-19 in Ontario, Canada

Parameter definitions

The susceptibility to infection \(\beta _{i}\) (i.e., the probability of a susceptible individual being infected upon contact with an infectious individual) and the diagnosis rate from the symptomatic compartment \(\delta _{Ii}\) were assumed to be age-specific. We assumed the age-dependent susceptibility to infection and the diagnosis rates to be constant for each age class during phases 0–3. The incubation period \(1/\sigma _{i} =1/\sigma \), the quarantine period \(1/\lambda _{i} =1/\lambda \), the modification factor for asymptomatic transmission \(\theta _{ij} =\theta \), the recovery rates \(\gamma _{Ai} = \gamma _{A}\), \(\gamma _{Di} = \gamma _{D}\), \(\gamma _{Ii} = \gamma _{I}\), the disease-induced death rate \(\alpha _{i} =\alpha \), the ratio of symptomatic infections \(\varrho _{i} =\varrho \), and the quarantine proportion and diagnosis rates, \(q_{i} =q\) and \(\delta _{qi} = \delta _{q}\), were assumed to be equal for all age groups. All age-independent parameters are listed in Table 2. Most parameters were assumed to be constant through all the escalation phases, except for the quarantine rate, which was assumed to be exponentially increasing in phase 3, in order to capture the intensification of contact tracing effort from the public health system. We set
$$ q ( t ) = \textstyle\begin{cases} q_{0}, &t< T_{2}, ( \text{phase 0--2} ), \\ ( q_{0} - q_{b} ) e^{- r_{q} ( t- T_{2} )} + q_{b}, &t\geq T_{2}, ( \text{phase 3} ), \end{cases} $$
where \(q_{0}\) is the constant quarantine proportion prior to March 24, \(q_{b}\) is the maximum quarantine proportion after March 24 and \(r_{q}\) represents the exponential rate of increase.
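To make the structure of model (1) concrete, its right-hand side can be written in a few lines of Python. The sketch below (NumPy/SciPy; all function and variable names are illustrative and are not taken from the authors' code) assumes that beta and delta_I are length-n arrays, that the remaining rates are scalars, and that the contact matrix and traced proportion are supplied as functions of time as described above and in the following subsections:

```python
import numpy as np
from scipy.integrate import solve_ivp

def model_rhs(t, y, n, C_of_t, q_of_t, beta, delta_I, theta, sigma, varrho,
              lam, gamma_I, gamma_A, gamma_D, delta_q, alpha, N):
    """Right-hand side of the age-stratified model (1).

    y stacks the compartments (S, E, I, A, Sq, Eq, D, R), each of length n.
    C_of_t(t) returns the current n x n contact matrix; q_of_t(t) returns q(t).
    """
    S, E, I, A, Sq, Eq, D, R = y.reshape(8, n)
    C, q = C_of_t(t), q_of_t(t)

    # infectious contact pressure on age group i: sum_j C_ij (I_j + theta*A_j)/N_j
    foi = C @ ((I + theta * A) / N)

    dS  = -(beta + q * (1.0 - beta)) * foi * S + lam * Sq
    dE  = (1.0 - q) * beta * foi * S - sigma * E
    dI  = sigma * varrho * E - (delta_I + gamma_I) * I
    dA  = sigma * (1.0 - varrho) * E - gamma_A * A
    dSq = q * (1.0 - beta) * foi * S - lam * Sq
    dEq = q * beta * foi * S - delta_q * Eq
    dD  = delta_I * I + delta_q * Eq - (alpha + gamma_D) * D
    dR  = gamma_I * I + gamma_A * A + gamma_D * D
    return np.concatenate([dS, dE, dI, dA, dSq, dEq, dD, dR])

# Example integration (all arguments supplied by the user):
# sol = solve_ivp(model_rhs, (0.0, 80.0), y0,
#                 args=(n, C_of_t, q_of_t, beta, delta_I, theta, sigma, varrho,
#                       lam, gamma_I, gamma_A, gamma_D, delta_q, alpha, N))
```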
In addition, we assumed time-dependent contact rates in phase 3, to capture the gradual adaptation of physical distancing in various locations during this period. To capture the change in contact patterns during different phases of escalation of physical distancing, we defined the social contact matrix C piecewise as follows:
$$ C ( t ) = \textstyle\begin{cases} C^{0},& T_{S} < t< T_{0}, ( \text{phase 0} ),\\ C^{1}, &T_{0} < t< T_{1}, ( \text{phase 1} ),\\ C^{2}, &T_{1} < t< T_{2}, ( \text{phase 2} ),\\ C^{3} ( t ), &T_{2} < t< T, ( \text{phase 3} ), \end{cases} $$
where \(T_{S}\), \(T_{0}\), \(T_{1}\) and \(T_{2}\) mark the start dates of phases 0, 1, 2 and 3, respectively, and T marks the end date of phase 3 (before the de-escalation phases). The contact matrix in phase 3 was assumed to be time-dependent, to describe a gradual adaptation of the society to adhere to the stricter measures enforced. Each matrix \(C^{0}\), \(C^{1}\), \(C^{2}\), \(C^{3} (t)\) was constructed as a linear combination of the setting-specific contact matrices. Specifically, let \(C^{H}\), \(C^{W}\), \(C^{C}\), \(C^{S}\) denote the baseline contact matrices quantifying the daily contact rate of physical and nonphysical contacts in household, workplace, community and school settings. The superscripts H, W, C, S associated with contact matrices and model parameters will be used to refer to household, workplace, community and school settings, respectively. The entries of the contact matrices are the number of daily social contacts of a single individual in age class i with individuals in age class j (units contacts/day, as defined by those contacts believed to be relevant for the spread of respiratory illnesses) [16]. In what follows, \(p_{l}^{k} >0\) denotes the relative increase (or decrease) in the weight of the daily individual contact rate matrix in setting k from intervention phase l. That is, the superscript k can be H, W, C or S to associate parameters with household, workplace, community or school settings, respectively. Note that, because of the different nature of contacts in different settings, a decrease in contact in one setting does not necessarily mean an equal increase in a different setting in terms of either weight or numbers (and vice versa). In fact, each escalation phase could change the contact patterns both qualitatively and quantitatively, and contacts lost in one setting do not necessarily shift completely to another. For this reason, we did not assume a specific relation between the coefficients \(p_{l}^{k}\) in each phase, allowing contact patterns in each setting to change independently. In the following, the contact matrix in each escalation phase is defined and discussed individually.

Phase 0: monitoring and international travel advisories

We assumed the contact mixing in the absence of physical distancing and mandatory closures is the linear combination of the four setting-specific matrices, each with an equal weight of 1:
$$ C^{0} = C^{H} + C^{W} + C^{C} + C^{S}. $$

Phase 1: public school closure

The phase 1 mixing matrix is given by:
$$ C^{1} = \bigl( 1+ p_{1}^{H} \bigr) C^{H} + C^{W} + \bigl( 1+ p_{1}^{C} \bigr) C^{C} + 0 C^{S}. $$
Here \(p_{1}^{H}\) is the percent increase in the weight of the household contact matrix from phase 0 to phase 1. Similarly, \(p_{1}^{C}\) is the percent increase in the weight of the community contact matrix from phase 0 to phase 1. In the remaining equations, the school contact matrix is no longer written explicitly to simplify their appearance.
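As a minimal illustration of how the setting-specific matrices are combined, the phase 0 and phase 1 matrices above could be assembled as follows (NumPy; variable names such as p1H and p1C are illustrative, not the authors' code). Phases 2 and 3 follow the same weighting pattern and are defined in the next subsections:

```python
import numpy as np

def mixing_matrix_phase0(CH, CW, CC, CS):
    """Phase 0: all four setting-specific matrices enter with weight 1."""
    return CH + CW + CC + CS

def mixing_matrix_phase1(CH, CW, CC, CS, p1H, p1C):
    """Phase 1: schools closed; household and community weights rescaled.

    p1H and p1C are the fitted relative changes in the household and
    community weights after school closure.
    """
    return (1.0 + p1H) * CH + CW + (1.0 + p1C) * CC + 0.0 * CS
```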
Phase 2: physical distancing advisories

$$ C^{2} = \bigl( 1+ p_{2}^{H} \bigr) \bigl( 1+ p_{1}^{H} \bigr) C^{H} + C^{W} + \bigl( 1- p_{2}^{C} \bigr) \bigl( 1+ {p}_{1}^{C} \bigr) C^{C}. $$
Here \(p_{2}^{H}\) is the percent increase in the weight of the household contact matrix from phase 1 to phase 2. Also, \(p_{2}^{C}\) is the percent reduction in the weight of the community contact rate matrix from phase 1 to phase 2 due to physical distancing advisories and closures.

Phase 3: closure of non-essential workplaces

During phase 3, we allowed the coefficients of each setting-specific matrix to be time-dependent (either exponentially increasing or decreasing), to capture the gradual adaptation of society to the new, more restrictive measures. Specifically, we assumed that workplace and community contacts were gradually decreasing, whereas household contact was gradually increasing, possibly with different rates. The phase 3 mixing matrix is given by:
$$\begin{aligned} \begin{aligned} C^{3} (t) = {}&\bigl[\bigl(1+ p_{3}^{H} \bigr)- e^{- r_{H} ( t- T_{2} )} p_{3}^{H} \bigr] \bigl(1+ p_{2}^{H} \bigr) \bigl(1+ p_{1}^{H} \bigr) C^{H} \\ & {} + \bigl[ p_{3}^{W} e^{- r_{W} (t- T_{2} )} +\bigl(1- p_{3}^{W} \bigr)\bigr] C^{W} \\ &{}+ \bigl[ p_{3}^{C} e^{- r_{C} ( t- T_{2} )} +\bigl(1- p_{3}^{C} \bigr)\bigr]\bigl(1- p_{2}^{C} \bigr) \bigl(1+ p_{1}^{C} \bigr) C^{C}, \end{aligned} \end{aligned}$$
where \(p_{3}^{H}\) is the maximal percent increase in the weight of the household contact matrix from phase 2 to phase 3. \(p_{3}^{W}\) and \(p_{3}^{C}\) are the maximal percent reductions in the weight of the workplace and community contact matrix, respectively, from phase 2 to phase 3 resulting from the closure of non-essential workplaces.

Age group subdivision

We stratified the population of Ontario into \(n=6\) age groups, comprising ages 0–5, 6–13, 14–17, 18–24, 25–64 and 65+, which broadly represent children (ages 0–5), elementary and middle school (ages 6–13), high school (ages 14–17), university (ages 18–24), workforce (ages 25–64) and seniors (ages 65+), and we ascribe the indices \(i=1, 2,\ldots, 6\) to these classes (Table 3). We considered six age groups since most physical distancing measures taken in Ontario have been formulated and implemented corresponding to these six age groups.

Table 3 Details of the age groups used in transmission model (1) for COVID-19 in Ontario, Canada

The initial susceptible populations were fixed as the total population of each age class in Ontario, Canada, which were obtained from Statistics Canada [20]. We considered the initial fitting date (\(t=0\)) to be February 26, which marks the date at which sustained case accumulation began in Ontario. Therefore, we supposed there were initially no recovered individuals, that is \(R_{i} ( 0 ) =0\) for each age class i. The initial diagnosed populations were fixed as the numbers of cumulative cases for each class up to February 26. Similarly, we set \(S_{qi} ( 0 ) = E_{qi} ( 0 ) =0\) for each age class i. Since the confirmed cases for the three youngest age groups (ages 0–17) are zero for at least one week after February 26, we set \(E_{i} ( 0 ) = A_{i} ( 0 ) = I_{i} ( 0 ) =0\) for \(i=1,2,3\), while we estimated the initial conditions for \(i=4, 5, 6\). All the initial conditions for model (1) are listed in Table 2 and Table 4.

Table 4 Age-specific model parameter estimates in Ontario, Canada (mean and standard deviation)

We used prior results and data to construct the baseline social contact matrices for Ontario, Canada.
We retrieved the age-stratified population estimates for Ontario in year 2019 and Canada in year 2006 from Statistics Canada [20]. As part of the POLYMOD project, social contact surveys in eight European countries were conducted between May 2005 and September 2006 to quantify age-specific contact heterogeneity [16]. Country-specific data on household structures, labor-force participation rates, and school enrolment were utilized to project the European social contact data to contact rates representative of 152 countries, including Canada [17]. We utilized the projected setting (household, workplace, community and school)-specific contact matrices for Canada [17] in this study and adapted them to represent mixing in Ontario. Several additional data sources were utilized in this study to proceed with model fitting. While in this case study data specific to Ontario were utilized, we note this approach is based on a general methodology that can be applied broadly. First, we utilized the timeline of interventions taken by the government of Ontario, Canada. The escalation of physical distancing measures in Ontario consisted of three major steps: public school closure (from March 14), declaration of state of emergency, with closure of public venues and events and physical distancing advisories (from March 18), and closure of non-essential establishments (from March 24). On May 16, selected non-essential activities had resumed, marking the end of the non-essential establishment closure in Ontario. Second, we utilized the age-stratified cumulative confirmed positive tests in Ontario, Canada. We obtained these data on the age-specific cumulative cases of COVID-19 in Ontario from the Ontario Ministry of Health, which were made available to us through the Ontario COVID-19 Modeling Consensus Table. Third, the age-structured demographic data specific to Ontario are publicly available from Statistics Canada [20]. These main sources of data enabled the fitting of the mathematical model and all subsequent analyses.

Contact mixing matrices

We established the setting-specific contact matrices in the household, workplace, community, and school in Ontario, Canada, denoted \(C^{H}\), \(C^{W}\), \(C^{C}\), \(C^{S}\), respectively (Fig. 2). We derived these mixing matrices from the Canada-specific contact matrices [17] through a series of transformations to account for the Ontario demographic profile and the desired age group division, then utilized these as baseline contact patterns in the absence of physical distancing measures. The details, including the specific definitions and mathematical formulation of the baseline contact matrices, are included in Appendix A: Baseline contact matrices.

Heatmaps of estimated social contact matrices in Ontario, Canada. Age-specific contact mixing in the absence of physical distancing interventions in Ontario: (A) Households, (B) Workplaces, (C) Schools, (D) Communities and other locations and (E) contact mixing in all four settings combined.

Model fitting procedure

To estimate model parameters, we fit the mathematical model to age-stratified cumulative incidence data. We first informed model (1) with several parameter values from existing studies (Table 2) and also the established social contact matrices.
We then ran model (1) forward from time \(t = T_{s}\) (chosen as February 26, corresponding to the date at which sustained case accumulation began in Ontario) to time T (chosen as May 16, the date of first easing of restrictions), and determined the parameters that minimize the least-squares error against the age-stratified cumulative incidence. In other words, we estimated parameters associated with model (1) by fitting to six time series of cumulative incidence, one per age class. To obtain confidence intervals for the estimated parameters, we used a bootstrap method to generate 1000 cumulative incidence time series from a Poisson distribution with mean given by the reported data and fitted the model to each dataset. We assumed a Poisson error structure in the newly reported cases to address noise in this time series data (for context on this topic, see [21]).

The control reproduction number

The control reproduction number \(R_{t}\) describes the average number of secondary cases that one randomly chosen infected individual produces during its infectious period, under the control measures (diagnosis and quarantine). We obtained the control reproduction number of model (1) using the next generation method [22]. \(R_{t}\) is the spectral radius of the next generation matrix \(K(t)\). The (time-dependent) next generation matrix for the parameters considered in this paper (i.e. with \(\beta _{i}\) and \(\delta _{Ii}\) both age specific) is
$$ \bigl[K(t)\bigr]_{ij} = \bigl(1-q(t)\bigr) A_{j} \beta _{i} C_{ij} (t) N_{i} / N_{j} $$
for \(i, j\in \{ 1, 2,\ldots, n \} \), where \(A_{j} = \frac{\rho }{\delta _{I,j} + \gamma _{I}} + \frac{\theta ( 1-\rho )}{\gamma _{A}}\).

Infectious contacts

We compared the estimated values of the mean infectious contact, i.e., the mean number of contacts per day that result in an infection, with those from a homogeneous mixing model of similar scope [2]. In the age-structured model (1) each contact contributes differently to transmission; hence, the mean contact rates estimated from the age-structured model in each phase are not directly comparable with the contact rate obtained by fitting a homogeneous model, as done in previous work [2]. For the homogeneous model, this is computed as βc, where c and β denote the contact rate and probability of infection upon contact previously estimated, respectively [2]. For the age-structured model, we considered the combination \(\frac{1}{N} \sum_{i} \beta _{i} N_{i} \sum_{j} C_{ij} (t)\), which accounts for the age-stratified susceptibility to infection.

Contact rates and infections acquired in each setting

The setting-specific contact rate was computed based on each estimated setting-specific contact matrix and the 2019 population profile for Ontario. The mean connectivity, or the number of daily contacts averaged over all individuals in the population with mixing matrix C, is defined as
$$ \langle k\rangle = \frac{1}{N} \sum_{i,j} C_{ij} N_{i}. $$
The all-setting social contact rates were calculated from the sum of the setting-specific contact rates. For the details of the terminology, definitions, and methods associated with the contact mixing matrices, see Appendix A: Baseline contact matrices. We computed the infections acquired in each setting by using the estimated model parameters (Tables 2 and 4) and model (1). Specifically, the infections acquired in each setting were tracked in time as the sum of the inflow to the exposed (E) and exposed quarantined (\(E_{q}\)) compartments.
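The next generation matrix \(K(t)\) and the mean connectivity \(\langle k\rangle \) defined above reduce to a few lines of linear algebra. A minimal sketch (NumPy; function and variable names are illustrative and not the authors' code):

```python
import numpy as np

def control_reproduction_number(C, q, beta, delta_I, gamma_I, gamma_A, theta, rho, N):
    """R_t as the spectral radius of the next generation matrix K(t).

    C: current n x n contact matrix; q: traced proportion q(t);
    beta, delta_I: age-specific susceptibility and diagnosis rates (length n);
    N: age group population sizes (length n).
    """
    A = rho / (delta_I + gamma_I) + theta * (1.0 - rho) / gamma_A   # A_j
    K = (1.0 - q) * np.outer(beta * N, A / N) * C                   # K_ij = (1-q) A_j beta_i C_ij N_i / N_j
    return np.max(np.abs(np.linalg.eigvals(K)))

def mean_connectivity(C, N):
    """<k> = (1/N_total) sum_ij C_ij N_i, the average daily contacts per person."""
    return float((C * N[:, None]).sum() / N.sum())
```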
To track these setting-specific infections, we added four additional compartments to the mathematical model, one each for the workplace, school, household and community settings, and, using the estimated model parameters and setting-specific contact matrices, solved the system of ordinary differential equations to assess the infections acquired in each setting. The robustness of our estimates is explored with several layers of uncertainty analysis. First, we quantified parameters in terms of the uncertainty in reported cases by assuming a Poisson error structure and fit model (1) to 1000 corresponding realizations. The resulting uncertainty in the model fit is expressed in terms of uncertainty in the estimated model parameters. Second, we assessed the empirical distributions of several of the key estimated parameters. The empirical distributions of the age-specific susceptibility to infection and the estimated weights associated with the contact matrices were constructed to investigate the robustness of these estimates.

Through model fitting, we estimated the age-independent parameters (Table 2) and age-dependent parameters (Table 4). The model fit with quantified uncertainty against the age-stratified cumulative incidence data is shown in Fig. 3 and with all age classes combined in Fig. 4. By estimating the weights associated with the setting-specific contact matrices, we identified the mixing patterns in all four phases of escalation (Fig. 5) as well as the mean individual contact rate (Fig. 6). The fitted parameters allowed us to estimate the effective reproduction number (Fig. 7) and the mean infectious contact (Table 5), for which we provide a comparison with a homogeneous mixing model of similar scope [2].

Cumulative incidence according to age class. Cumulative incidence according to age class (circles) and best fitting transmission model (line), with 95% confidence interval (gray region). The red circles represent data from February 26 to May 16 (fitted). The blue circles represent data May 17 to June 1 (not fitted). Cumulative incidence shown for (A) ages 0–5, (B) ages 6–13, (C) ages 14–17, (D) ages 18–24, (E) ages 25–64 and (F) ages 65+. For an explanation of the increase in reported cases after May 16, see Appendix B: Caution in interpretation.

Cumulative incidence of all age classes combined. Cumulative incidence of all age classes (circles) and best fitting model (line), with 95% confidence interval (gray region). The red circles represent data from February 26 to May 16 (fitted). The blue circles represent data May 17 to June 1 (not fitted). For an explanation of the increase in reported cases after May 16, see Appendix B: Caution in interpretation.

Age-specific contact mixing pattern estimated for each escalation phase. Shown are the heatmaps of contact matrices for all settings (workplace, school, community, and household) combined. The intensity of the color of an entry corresponds to the magnitude of the contact rate between the intersecting age classes. The row of the matrix represents the contactor age class and the column represents the age class of the contactee. Heatmaps depicted for contact mixing in (A) phase 0, (B) phase 1, (C) phase 2 and (D) the end of phase 3 on May 16.

Mean contact rate during four escalation phases of physical distancing measures.
We considered four phases of escalation in Ontario: phase 0, monitoring and international travel advisories (until Mar 13); phase 1, public school closure (Mar 14–17); phase 2, physical distancing advisories (Mar 18–23); phase 3, closure of non-essential workplaces (Mar 24–May 16). The contact mixing matrices are constant for phases 0, 1 and 2, and the contact mixing is modelled as time-dependent for phase 3. (A) The mean contact rate from phase 0 (12.27), 1 (11.42) to 2 (10.92) including the setting breakdown; (B) The time-dependent mean contact rate by setting for phase 3.

Estimated effective reproduction number \(R_{t}\). The solid line represents the estimated mean \(R_{t}\) value and the shaded region depicts the 95% confidence interval. \(R_{t}\) declines below 1 between April 5 and April 12 following the implementation of a package of non-pharmaceutical interventions.

Table 5 Mean infectious contact during different escalation phases

Setting-specific social contact mixing

We estimated the age-specific contact mixing profile for each phase of escalation (Fig. 5). We estimated the mean contact rate (i.e., the average number of contacts per day of a randomly chosen individual with the total population) in the absence of physical distancing measures to be 12.27 per day per person in Ontario. This contact rate was assumed during phase 0, which corresponds to the beginning of the epidemic in Ontario, when no major physical distancing advisories were in effect (Fig. 6(A)). Figure 6 shows the breakdown of contact rates by their respective setting (school, workplace, household, and general community). Relative to the pre-intervention value, the total increase of household contacts was 13% after school closure, 45% after the additional physical distancing measures, and 51% on May 16 after the closure of non-essential businesses (Table 6). Measures following the closure of non-essential businesses were estimated to have reduced total workplace contacts by 59% and community-related contacts by 85% as of May 16 (Table 6 and Fig. 6). Table 6 shows a complete summary of the estimated shifts in terms of the mean daily contact rate in Ontario. We also depicted the empirical distributions of the weights of the phase- and setting-specific contact matrices (Fig. 8).

Empirical distributions of model parameters associated with the contact matrices. Empirical distributions of the weights of the contact matrices obtained from the fitting results of the 1000 bootstrap realizations. Panels (A)–(H) correspond to the distributions for the model parameters (A) \(p_{1}^{H}\), (B) \(p_{1}^{C}\), (C) \(p_{2}^{H}\), (D) \(p_{2}^{C}\), (E) \(p_{3}^{H}\), (F) \(p_{3}^{W}\), (G) \(p_{3}^{C}\) and (H) \(r_{c}\).

Table 6 Estimated mean daily contact rate by setting and escalation phase

Infections acquired in workplace, household, community and school settings

We quantified the number of cumulative infections acquired in each setting and age group (Fig. 9). During phase 0, the cumulative infections were estimated to primarily result from community contacts, followed by contacts at workplaces and households (Fig. 9). Community contacts remained the primary contributors to transmission until the early stage of phase 3, when infections from household contacts took over the primary role (Fig. 9). Households were the setting in which the highest number of infections was estimated to occur among all age groups by May 16 (Fig. 10).
Communities were the second most common setting in which infections were acquired for individuals younger than 17 and older than 65 (Fig. 10). The age classes composed of young children and individuals aged 65 and above were estimated to acquire relatively few infections from workplace contacts, while the working group (ages 25–64) acquired a similar number of infections from workplaces compared to households (Fig. 10). The estimated infections from school setting contacts were relatively few due to the early closure of schools at the beginning of phase 1 (Fig. 10). Overall, the workforce age class (aged 25–64) was consistently estimated to acquire a higher number of infections at workplaces, communities and schools compared to other age groups (Fig. 11). This was followed by individuals aged 18–24 in workplaces and schools (Fig. 11(A)(D)), while households and communities were settings of considerable transmission for the senior age group (aged 65+), shown in Fig. 11(B)(C).

Cumulative infections acquired in workplace, household, community and school settings for all age groups. Community contacts initially contributed to more infections than contacts in the remaining three settings, while household contacts played a dominant role in contributing new infections after the closure of non-essential workplaces on March 24. Due to the closure of schools at the beginning of phase 1 on March 14, no more new infections occurred in the school setting (shown in the sub-panel). Estimated mean values are represented by solid lines and the 95% confidence interval (CI) by surrounding shaded regions. CIs based on fitting results to 1000 realizations of the cumulative reported case data in Ontario, Canada.

Cumulative infections by age group and setting. Households were the primary location for the estimated transmission for all age groups, while communities or workplaces were the secondary location for different age groups.

Cumulative infections by setting and age class. Model-estimated cumulative infections acquired in (A) Workplaces, (B) Household, (C) Community and other locations and (D) Schools.

Age-specific susceptibility to infection and diagnosis probability

We estimated the age-stratified probability of diagnosis for symptomatic individuals and susceptibility to infection (Table 7). More precisely, the susceptibility to infection in our model refers to the probability of infection upon contact with an infectious individual. We also depicted the empirical distributions of the age-specific susceptibilities corresponding to the uncertainty analysis conducted (Fig. 12). The estimated age-specific susceptibilities are robust to error in the reported case counts, which can be observed from their respective empirical distributions (Fig. 12).

Empirical distributions of the age-specific susceptibility. Estimated parameters obtained from the fitting results of each of the 1000 bootstrap realizations. Empirical distribution of the estimated age-specific susceptibility for (A) ages 0–5; (B) ages 6–13; (C) ages 14–17; (D) ages 18–24; (E) ages 25–64 and (F) ages 65+. See Methods for the details.

Table 7 Age-specific model parameter estimates in Ontario, Canada: Percentage of symptomatic individuals diagnosed, and susceptibility to infection

We estimated that the susceptibility to SARS-CoV-2 infection increases with age, which is consistent with findings from prior works [9, 11].
From our estimates, younger age groups (17 and below) have relatively low susceptibility to infection (less than 3%) compared with the senior age class (65+), which was estimated to have a probability of 50.2% of being infected upon contact (Table 7). Further, we estimated the senior class to be the most susceptible to infection (Table 7). This comes in addition to the relatively high vulnerability of the senior age group [10, 15, 23, 24], and provides further support for the necessity of protecting these individuals. The relatively lower susceptibility of younger children suggests that they may not have been a major driver of the COVID-19 epidemic in Ontario until May 16, if their transmissibility was comparable to the remaining age groups, confirming the existing literature [25–27]. We urge caution in interpreting these results: emerging data showing rapidly increasing case counts among non-seniors in Ontario indicate greater mixing among these age groups and reduced adherence to physical distancing measures (for details see Appendix B: Caution in interpretation). Also, our findings of susceptibility are in line with a retrospective cohort observational study conducted in mainland China, which computed a secondary attack rate among household contacts of 12.4–17.1%, with a lower risk of developing the infection among the younger subjects with respect to the elderly [28]. We also estimated an overall increasing trend with age in the probability of diagnosis among symptomatic individuals (Table 7). This finding is both logical and consistent with prior findings [15], as the severity of illness due to COVID-19 has been found to increase with age and cases requiring medical attention may be more likely to be captured by the virologic surveillance system. Finally, our results are in line with the findings of the existing modeling studies [29–32] that have found that a timely and stringent implementation of non-pharmacological interventions is effective in curbing the spread of the ongoing outbreak, if they are enforced until the virus transmission has been significantly reduced. In this study, we estimated a 46% (12.27 to 6.58) decrease in the mean individual contact rate following the implementation of a series of government interventions in Ontario (Table 6). Studies that also estimated social contact rates during the COVID-19 era consisted of contact surveys in which participants were asked to provide details about the number and locations of their social contacts. Findings from these studies indicate that physical distancing measures have led to the reduction of daily social contact rates in China [11, 33], Luxembourg [13], and the UK [12], with varying degrees of reduction. The age-stratified contact matrices, in the presence of government interventions, were identified in three studies [11, 12, 33] and their implications explored in terms of impact on the basic reproduction number [12] and model-based analyses [11, 33]. In agreement with our study, the estimated contact mixing patterns following the implementation of interventions closely resembled the household mixing pattern (Fig. 5) [11, 33]. The estimated average number of daily contacts per participant before and during lockdown fell from 7.9 to 2.2 (72.2% reduction) in Shenzhen, from 10.8 to 2.8 (74% reduction) in the UK, from 9.5 to 2.2 (76.8% reduction) in Changsha, from 18.8 to 2.3 (87.8% reduction) in Shanghai, from 17.5 to 3.2 (81.7% reduction) in Luxembourg, and from 14.6 to 2 (86.3% reduction) in Wuhan [11–13, 33].
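(As a check on the headline figure quoted above, the estimated drop from 12.27 to 6.58 contacts per day corresponds to
$$ \frac{12.27-6.58}{12.27} \approx 0.46, $$
i.e., the reported 46% reduction in the mean daily contact rate; the survey-based percentage reductions listed above are computed in the same way.)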
It is possible that the survey-based approaches underestimate the contact rate, as there is a risk of bias from sources such as selection and recall bias. Also, participants may not wish to disclose their true number of contacts during government interventions. These factors may explain differences between the estimates from the two methodologies. Even so, with the data-driven approach presented in this study or the survey-based approach, it has been estimated that a substantial decrease in the individual contact rate occurred following the implementation of government interventions, with a shift towards household contacts. Overall, through the reduction in social contact rate and alteration of the mixing patterns, model-based analyses indicate interventions have been effective in mitigating transmission [11, 33]. As each physical distancing intervention may cause a shift in contact patterns, for instance by increasing the time that individuals spend at home, estimating the relative increase or decrease of the setting-specific daily contacts in each escalation phase enables the assessment of the expected shift of the infection towards different subpopulations and the contribution of contacts in each setting to the spread of infection. We estimated an effective decrease in contacts in the workplace and community settings; meanwhile, household contact increased by 51% from the pre-intervention phase to the end of the lockdown phase on May 16 (Table 6). Therefore, additional household transmission should be taken into account by decision-makers when planning and implementing interventions, especially in light of the relatively high contact rates of seniors in household settings. Estimates of the age- and setting-specific social contact patterns during escalation of physical distancing, together with a deeper understanding of the age-specific susceptibility, allow investigation of different scenarios for reopening the economy, including businesses and schools [30–32]. These estimates provide needed tools to simulate scenarios of staged reopening or reopening targeted to specific subgroups, such as partial resumption of school classes or of selected business sectors. Additionally, this framework may be used for scenario analysis such as rotating workforce strategies, where the workforce is divided into groups with different working schedules. The specific choice of the age group subdivision in this study is motivated by age-targeted interventions, in the spirit of assessing gradual resumption of schools and workplaces. Further, this framework can be used to incorporate vaccination of different age classes, in the event that an efficacious vaccine becomes available, and to identify optimal distribution strategies. Although we have focused primarily on age-stratified contact mixing, susceptibility and symptomatic diagnosis probability in this study, we also used the modelling framework to quantify key control parameters related to the efficacy of contact tracing efforts (for the details, see Appendix C: Control parameter assessment). This study and its data sources have several limitations. For our analyses, we primarily used cumulative incidence data, which is subject to several forms of error that may result in inaccuracies and biased estimates. Additional sources of error in our study may result from the specific circumstances in Ontario, in which a disproportionate number of health care workers were affected by COVID-19 and outbreaks had occurred in long-term care homes.
For a discussion of these details, see Appendix D: Limitations in incidence data. The methodology introduced and illustrated in this study aims to provide the much-needed tools for intervention evaluation in terms of inferring the age- and setting-specific contact mixing in rapidly evolving circumstances, without the time and resources required for survey-based approaches. The data-driven, model-based approach can provide insights in almost real time based on incoming data, which is key to inform decision- and policy-making in an emergency situation, such as the current pandemic. We also note that the necessary surveillance data for COVID-19 and demographic data for analyses are readily and publicly available in many regions worldwide. Similarly, the age- and setting-specific mixing matrices utilized within are available in 152 countries [17]. Hence, the methodology can be readily adopted in many regions worldwide and could yield insights into the transmission risk and the effectiveness of different age- and setting (workplace, school, community, and household)-specific interventions. The datasets generated during and/or analyzed during the current study are available publicly except for the age-stratified COVID-19 incidence data.

World Health Organization. Responding to community spread of COVID-19: interim guidance. 2020. Accessed Aug 7 2020. Tang B, Scarabel F, Bragazzi NL, McCarthy Z, Glazer M, Xiao Y et al.. De-escalation by reversing the escalation with a stronger synergistic package of contact tracing, quarantine, isolation and personal protection: feasibility of preventing a Covid-19 rebound in Ontario, Canada, as a case study. Biology. 2020;9(5):100. Wu J, Tang B, Bragazzi NL, Nah K, McCarthy Z. Quantifying the role of social distancing, personal protection and case detection in mitigating COVID-19 outbreak in Ontario, Canada. J Math Ind. 2020;10(1):1–2. Tang B, Bragazzi NL, Li Q, Tang S, Xiao Y, Wu J. An updated estimation of the risk of transmission of the novel coronavirus (2019-nCov). Infect Dis Model. 2020;5:248–55. Tang B, Wang X, Li Q, Bragazzi NL, Tang S, Xiao Y et al.. Estimation of the transmission risk of the 2019-nCoV and its implication for public health interventions. J Clin Med. 2020;9(2):462. Chu DK, Akl EA, Duda S, Solo K, Yaacoub S, Schünemann HJ et al.. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. Lancet. 2020;395(10242):1973–87. Nussbaumer-Streit B, Mayr V, Dobrescu AI, Chapman A, Persad E, Klerings I et al.. Quarantine alone or in combination with other public health measures to control COVID-19: a rapid review. Cochrane Database Syst Rev. 2020;4(4):CD013574. Viner RM, Russell SJ, Croker H, Packer J, Ward J, Stansfield C et al.. School closure and management practices during coronavirus outbreaks including COVID-19: a rapid systematic review. Lancet Child Adolesc Health. 2020;4(5):397–404. Davies NG, Klepac P, Liu Y, Prem K, Jit M, CMMID COVID-19 working group et al.. Age-dependent effects in the transmission and control of COVID-19 epidemics. Nat Med. 2020;26:1205–11. Wu JT, Leung K, Bushman M, Kishore N, Niehus R, de Salazar PM et al.. Estimating clinical severity of COVID-19 from the transmission dynamics in Wuhan, China. Nat Med. 2020;26(4):506–10. Zhang J, Litvinova M, Liang Y, Wang Y, Wang W, Zhao S et al.. Changes in contact patterns shape the dynamics of the COVID-19 outbreak in China. Science. 2020;368(6498):1481–6.
Jarvis CI, van Zandvoort K, Gimma A, Prem K, Klepac P, Rubin GJ et al.. Quantifying the impact of physical distance measures on the transmission of COVID-19 in the UK. BMC Med. 2020;18:1-0. Latsuzbaia A, Herold M, Bertemes J-P, Mossong J. Evolving social contact patterns during the COVID-19 crisis in Luxembourg. PLoS ONE. 2020;15(8):e0237128. Liu Y, Gu Z, Xia S, Shi B, Zhou XN, Shi Y et al.. What are the underlying transmission patterns of COVID-19 outbreak? An age-specific social contact characterization. EClinicalMedicine. 2020. https://doi.org/10.1016/j.eclinm.2020.100354. Verity R, Okell LC, Dorigatti I, Winskill P, Whittaker C, Imai N et al.. Estimates of the severity of coronavirus disease 2019: a model-based analysis. Lancet Infect Dis. 2020;20(6):669–77. Mossong JL, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R et al.. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008;5(3):e74. Prem K, Cook AR, Jit M. Projecting social contact matrices in 152 countries using contact surveys and demographic data. PLoS Comput Biol. 2017;13(9):e1005697. Special Expert Group for Control of the Epidemic of Novel Coronavirus Pneumonia of the Chinese Preventative Medicine Association TCPMAssociation. An update on the epidemiological characteristics of novel coronavirus pneumonia (COVID-19). Chin J Epidemiol. 2020;41:139–44. Tang B, Xia F, Tang S, Bragazzi NL, Li Q, Sun X et al.. The effectiveness of quarantine and isolation determine the trend of the COVID-19 epidemics in the final phase of the current outbreak in China. Int J Infect Dis. 2020;95:288–93. Statistics Canada. Table 17-10-0009-01 population estimates, quarterly. 2020. Chowell G. Fitting dynamic models to epidemic outbreaks with quantified uncertainty: a primer for parameter uncertainty, identifiability, and forecasts. Infect Dis Model. 2017;2(3):379–98. van den Driessche P. Reproduction numbers of infectious disease models. Infect Dis Model. 2017;2:288–303. Report of the WHO–China joint mission on coronavirus disease 2019 (COVID-19). 2020. https://www.who.int/publications/i/item/report-of-the-who-china-joint-mission-on-coronavirus-disease-2019-(covid-19). Accessed 7 Aug 2020. The Novel Coronavirus Pneumonia Emergency Response Epidemiology Team. The epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (COVID-19). China CDC Wkly. 2020;2(8):113–22. Saleem H, Rahman J, Aslam N, Murtazaliev S, Khan S. Coronavirus disease 2019 (COVID-19) in children: vulnerable or spared? A systematic review. Cureus. 2020;12(5):e8207. Castagnoli R, Votto M, Licari A, Brambilla I, Bruno R, Perlini S et al.. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in children and adolescents: a systematic review. JAMA Pediatr. 2020;174(9):882–9. Ludvigsson JF. Children are unlikely to be the main drivers of the COVID-19 pandemic—a systematic review. Acta Paediatr. 2020;109(8):1525–30. Jing Q-L, Liu M-J, Zhang Z-B, Fang L-Q, Yuan J, Zhang A-R et al.. Household secondary attack rate of COVID-19 and associated determinants in Guangzhou, China: a retrospective cohort study. Lancet Infect Dis. 2020;20(10):1141–50. Friston KJ, Parr T, Zeidman P, Razi A, Flandin G, Daunizeau J et al. Dynamic causal modelling of COVID-19. 2020. arXiv preprint. arXiv:2004.04463. Milne GJ, Xie S. The effectiveness of social distancing in mitigating COVID-19 spread: a modelling analysis. 2020. medRxiv. Liu M, Thomadsen R, Yao S. Forecasting the spread of COVID-19 under different reopening strategies. 
2020. medRxiv. Balabdaoui F, Mohr D. Age-stratified model of the COVID-19 epidemic to analyze the impact of relaxing lockdown measures: nowcasting and forecasting for Switzerland. 2020. medRxiv. Zhang J, Litvinova M, Liang Y, Zheng W, Shi H, Vespignani A et al. The impact of relaxing interventions on human contact patterns and SARS-CoV-2 transmission in China. 2020. medRxiv. Arregui S, Aleta A, Sanz J, Moreno Y. Projecting social contact matrices to different demographic structures. PLoS Comput Biol. 2018;14(12):e1006638. JW is a member of the Ontario COVID-19 Modelling Consensus Table, and a member of the Expert Panel of the Public Health Agency of Canada (PHAC) Modeling group. This research was presented to both Ontario Table and PHAC group, and we appreciate very much comments and suggestions from colleagues of these provincial table and federal group. Reported COVID-19 cases were obtained from the Public Health Ontario (PHO) integrated Public Health Information System (iPHIS), via the Ontario COVID-19 Modelling Consensus Table. FS is also member of the INdAM Research Group GNCS. The authors would like to thank Michael Glazer for his assistance. This project has been partially supported by the Canadian Institute of Health Research (CIHR) 2019 Novel Coronavirus (COVID-19) rapid research program. YX is partially supported by Simon (429551) and NIH (1R01AI148551). The funding sources have had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. Fields-CQAM Laboratory of Mathematics for Public Health (MfPH), York University, Toronto, Ontario, Canada Zachary McCarthy, Francesca Scarabel, Biao Tang, Nicola Luigi Bragazzi, Kyeongah Nah & Jianhong Wu Laboratory for Industrial and Applied Mathematics, York University, Toronto, Ontario, Canada Department of Mathematical Sciences, University of Cincinnati, Cincinnati, OH, USA Yanyu Xiao CDLab—Computational Dynamics Laboratory, Department of Mathematics, Computer Science and Physics, University of Udine, 33100, Udine, Italy Francesca Scarabel Modelling Infection and Immunity Lab, Centre for Disease Modelling, Department of Mathematics and Statistics, York University, Toronto, Ontario, Canada Jane M. Heffernan Disaster & Emergency Management, School of Administrative Studies & Advanced Disaster & Emergency Rapid-Response Simulation (ADERSIM), York University, Toronto, Ontario, Canada Ali Asgary Department of Mathematics, University of Toronto, Toronto, Ontario, Canada V. Kumar Murty The Fields Institute for Research in Mathematical Sciences, Toronto, Ontario, Canada Public Health Risk Sciences Division, National Microbiology Laboratory, Public Health Agency of Canada, St-Hyacinthe, Quebec, Canada Nicholas H. Ogden Zachary McCarthy Biao Tang Nicola Luigi Bragazzi Kyeongah Nah Jianhong Wu ZM, YX, FS, BT, KN and JW conceived and designed the study; ZM, YX, FS, BT, NLB, KN, JH and JW acquired, analyzed, or interpreted the data; ZM, YX, FS and BT drafted the work; ZM, YX, FS, BT, NLB, JH, AA, VKM, NO and JW revised the work. All authors read and approved the final manuscript. Correspondence to Jianhong Wu. Appendix A: Baseline contact matrices To establish a contact matrix representative of contact mixing in Ontario, we utilized the projected Canadian matrices [17] and further adapted the matrices for modelling purposes. 
Several of the requirements for the target contact matrix to be suitable for integration in the transmission model (1) are: (i) modified age group subdivisions (6 age groups); (ii) the reciprocity condition is satisfied; (iii) it accounts for the specific age structure of Ontario, Canada, in 2019; (iv) its mean connectivity is representative of the individual mean contact rate in Ontario, Canada; and (v) it represents contact mixing in distinct social settings (household, workplace, community and other locations, school). The topics of reciprocity and mean connectivity are discussed further in prior work [34]. Briefly, social contact survey data are subject to over-representation and under-representation of age groups, reporting error, etc.; hence, estimated population-level contacts are generally not symmetric. However, as contacts must be reciprocal, the number of total contacts from age group i to j must be identical to those contacts from group j to i. Also, the mean connectivity of each contact matrix, or the average number of contacts per individual in the population, should be preserved during the transformations within the same year. We introduced a series of transformations which address items (i)–(v). We then utilized the resultant contact matrices in the simulations of the transmission model (1) to quantify the age-specific contact mixing in Ontario. Denote the reference contact matrices, which are representative of social contact mixing in Canada in year 2006, by \(C^{H}\), \(C^{W}\), \(C^{C}\), \(C^{S}\) for household, workplace, community and school settings, respectively; in what follows, \(C^{\mathrm{Ref}}\) denotes a generic one of these setting-specific reference matrices. The entries of the contact matrix are the number of daily social contacts of a single individual in age class i with individuals in age class j (units contacts/day, as defined by those contacts believed to be relevant for the spread of respiratory illnesses) [16]. We utilized \(C^{\mathrm{Ref}}\) to obtain a matrix representative of contact mixing in Ontario, Canada, satisfying the properties (i)–(v) above. The subsequent process is applied separately to each of the setting-specific reference contact matrices.

Reciprocity correction

We corrected each setting-specific reference matrix \(C^{\mathrm{Ref}}\) for reciprocity to ensure that contacts between age classes at the population level (extensive scale) were reciprocal. Specifically, we ensured symmetry of the population-level contacts and returned to the individual-level contact scale. Let \(E_{ij} = C_{ij}^{\mathrm{Ref}} N_{i}\) represent the extensive scale contact matrix between age class i and class j, where \(N_{i}\) represents the population of age class i in Canada in year 2006. To ensure the symmetry of the matrix E, and thus of population-level contacts, we applied the transformation \(E_{ij} \rightarrow \frac{1}{2} ( E_{ij} + E_{ji} )\). We then converted from population-level total contacts to the individual-level contact scale to redefine \(C_{ij}^{\mathrm{Ref}}:= \frac{E_{ij}}{N_{i}}\). The resultant \(C^{\mathrm{Ref}}\) has now been corrected for reciprocity. We then adjusted each reference matrix \(C^{\mathrm{Ref}}\), for Canada, to the matrix C, for Ontario, according to the demography of Ontario using an established method [34].
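Before turning to the demographic adjustment, the reciprocity step just described can be sketched in a few lines (NumPy; function and variable names are illustrative and not the authors' code):

```python
import numpy as np

def reciprocity_correction(C_ref, N):
    """Enforce reciprocity of population-level contacts, as described above.

    C_ref: n x n reference contact matrix (daily contacts per individual);
    N: population of each age class in the reference year.
    """
    E = C_ref * N[:, None]      # extensive scale: E_ij = C_ij * N_i
    E = 0.5 * (E + E.T)         # symmetrize so total contacts i->j equal j->i
    return E / N[:, None]       # back to per-individual contact rates
```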
We then adjusted each reference matrix \(C^{\mathrm{Ref}}\), for Canada, to the matrix C, for Ontario, according to the demography of Ontario, using an established method [34]. To accomplish this, we projected the reference contact matrix \(C^{\mathrm{Ref}}\) to the target matrix C using the transformation $$ C_{ij} = C_{ij}^{\mathrm{Ref}} \frac{N' N_{j} '}{N_{j} \sum_{i,j} C_{ij}^{\mathrm{Ref}} \frac{N_{i} ' N_{j} '}{N_{j}}}, $$ where \(N_{j}\) (\(N_{i}\)) and \(N_{j} '\) (\(N_{i} '\)) are the numbers of individuals in age class j (i) in Canada and Ontario in 2006, respectively, and N and \(N'\) are the total numbers of individuals in Canada and Ontario in 2006, respectively. Equation (2) may be interpreted as an adjustment of the contact rate based on the ratio of the target density of available contactees in Ontario to the density of contactees in the reference setting in Canada. Since this transformation adjusts the matrix from the country setting to a provincial setting within the same year, we also normalize the matrix to have a mean degree (or mean connectivity) of 1 during the transformation in order to preserve the mean connectivity. After the transformation, we rescale the matrix C to the original degree of \(C^{\mathrm{Ref}}\) by multiplying by the mean connectivity $$ \langle k\rangle = \frac{1}{N} \sum_{i,j} C_{ij}^{\mathrm{Ref}} N_{i}. $$ We preserve the mean connectivity in this transformation, i.e., the average number of daily contacts per individual in the population is assumed to be equal in Ontario and Canada in the same year. We note that an alternative density transformation could also be used, which relaxes this assumption and allows the mean connectivity to depart from its original value (for instance, method M2 in [34]). To quantify the setting-specific contact mixing, we applied the above process, using Equation (2), to the established household, workplace, community, and school contact matrices for Canada [17].
Method to age-transform contact matrix
We outline the process used to generate contact matrices in the desired age-subdivision format by utilizing known contact mixing data and Canadian demographic data. The key concept was to utilize established contact data informing a \(16 \times 16\) contact matrix for age groups 0–4, 5–9, 10–14, 15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49, 50–54, 55–59, 60–64, 65–69, 70–74, 75+, and use a series of property-preserving transformations to generate a desired, or target, \(6 \times 6\) contact matrix for age groups 0–5, 6–13, 14–17, 18–24, 25–64, 65+. Here, we conducted the transformation between the different age groups for the year 2006. Based on the homogeneous mixing assumption and a conservation law on contacts received and offered in each age group, we calculated the entries of the target \(6 \times 6\) matrix using the following formula $$ C_{kl}^{6} = \frac{1}{N_{k}^{6}} \sum _{i=1}^{16} \sum_{j=1}^{16} C_{ij}^{16} N_{i} \frac{\overline{N}_{jl}}{N_{j}} \frac{\overline{N}_{ik}}{N_{i}}, $$ where \(\overline{N}_{jl}\) (\(\overline{N}_{ik}\)) represents the overlapping population between the old age group j (i) and the new age group l (k). Here, \(C_{ij}^{16}\) and \(C_{kl}^{6}\) are the entries of the contact matrix for the old and new age structure, respectively, and \(N_{k}^{6}\) is the population in age group k for the new age structure. This transformation preserves the mean connectivity and reciprocity from the previous transformation.
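Before turning to the projection across years, here is an illustrative sketch of the age-group aggregation formula above. It assumes a hypothetical overlap table counting how many people fall in both an old and a new age group, and it is not the authors' implementation.

```python
import numpy as np

def aggregate_contact_matrix(C16, N16, overlap):
    """Map a 16x16 contact matrix onto 6 coarser age groups.

    C16[i, j]    : daily contacts of one person in old group i with old group j.
    N16[i]       : population of old age group i.
    overlap[i, k]: people counted in both old group i and new group k, so that
                   overlap.sum(axis=1) == N16 and overlap.sum(axis=0) == N6.
    Implements C6_kl = (1/N6_k) * sum_ij C16_ij * overlap[i, k] * overlap[j, l] / N16[j].
    """
    N6 = overlap.sum(axis=0)
    total_contacts = np.einsum('ij,ik,jl->kl', C16 / N16[None, :], overlap, overlap)
    return total_contacts / N6[:, None]
```

The overlap table can be assembled from single-year-of-age population counts; for example, the old 15–19 group contributes its 15–17 year olds to the new 14–17 group and its 18–19 year olds to the new 18–24 group.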
Method to project contact matrix to different years
We have now obtained the contact matrix among the 6 age groups in 2006, and the following transformation projects the contact matrix to 2019 based on Canadian demographic data. In what follows, \(N_{j}\) and \(N_{j} '\) are the numbers of individuals in the jth age group in the original (i.e., 2006) and projected (i.e., 2019) year, and N and \(N '\) are the total populations in the original and projected year. Then, we have $$ C_{ij}^{\mathrm{projection}} = C_{ij}^{\mathrm{original}} \frac{N N_{j} '}{N_{j} N '}. $$ The contactee correction term of the form \(\frac{N N_{j} '}{N_{j} N '}\) represents the ratio of the density of contactees in the projected year to that in the original year. With these contactee density correction terms, we expressed all entries of the projected contact matrix in terms of the known, original contact matrix entries. The equations were kept general so that they hold when N differs from \(N'\). The above equation may also be interpreted as an adjustment of the contact rate based on the ratio of the projected population density of contactees to the density of contactees in the original setting. Because the population profile varies between years, the average connectivity is not preserved exactly; however, it remains representative of the mean contact rate in Ontario. We note that an alternative series of transformations could be utilized to preserve the mean connectivity of the Canadian reference contact matrix. Finally, this transformation preserves reciprocity. The contact matrices resulting from the series of transformations outlined above are shown in Fig. 2 and are representative of mixing in Ontario. We then utilized the resultant contact matrices \(C^{H}\), \(C^{W}\), \(C^{C}\), \(C^{S}\) in the simulation of transmission model (1) to quantify heterogeneity in age-specific and setting-specific contact mixing in Ontario.
Appendix B: Caution in interpretation
The predicted trajectory as of May 16 largely underestimated the cases reported in Ontario as of mid-July (Fig. 3). However, comparing the age-stratified model fit with the cumulative incidence data (Fig. 4), we see that the disparity is among those aged less than 65. The underestimation may be due to the relaxation of measures starting from May 16 and/or decreased adherence to physical distancing measures; possible culprits include warmer weather in Ontario and a national holiday weekend. While our study suggests that younger individuals are less susceptible, the cases observed following relaxation suggest that increased transmission among these groups may be due to increased mixing among non-senior individuals.
Appendix C: Control parameter assessment
The transmission model accounts for two types of detection routes: contact tracing and the diagnosis of individuals presenting symptoms. Estimates of the parameters related to these processes can be utilized to assess the efficacy of the interventions implemented and to conduct modelling scenario analyses. Our estimates show that the fraction of infectious contacts that were effectively traced and isolated increased from 12% before phase 3 to a limit value of 73% during phase 3 (Table 2). Parameterizing the transmission model with region- or country-specific demographic and incidence data could also allow a comparative evaluation of interventions between different regions. Furthermore, the evaluation of the current levels of contact tracing and diagnosis efforts is fundamental for planning public health policies [2, 3].
Appendix D: Limitations in incidence data For our analysis we used cumulative incidence data, which is subject to several forms of error, including underreporting (COVID-19 positive individuals not having their illness reported) and under-ascertainment (cases not seeking health care), as not all COVID-19 cases are captured by the surveillance system. We note that there may be heterogeneity by age in the proportion of individuals seeking health care and testing due to illness, hence the estimated age-specific parameters are impacted by age-specific reporting rates and ascertainment rates. Specifically, the severity of illness due to COVID-19 has been found to increase with age [15] and cases requiring medical attention may be more likely to be captured by the surveillance system, which is consistent with our findings of diagnosis rate increasing with age (Table 7). However, we stress that testing protocols in Ontario have been variable during the course of the epidemic, altering the scope of individuals and symptoms that have been tested for COVID-19, hence making it difficult to obtain robust estimates when those rates are assumed constant over time. Analyzing data while interventions are in place introduces natural biases: the low relative susceptibility in younger age groups may be a result of early school closure, which prevented transmission in younger age groups very early on in the Ontario epidemic. Such an effective containment measure may not have been possible in settings such as Ontario's long-term care homes where the prominent age class is seniors and transmission continued. Additional sources of error in our study result from the specific situation in Ontario. As of June 3, 2020, Greater Toronto Area public health units accounted for 66.4% of cases. It may then be more appropriate to consider modelling in smaller regions or public health units as cases are not evenly dispersed throughout the province. In addition, as of June 3, 2020, a proportion of 17.7% and 6.3% of all cases were among long-term care home residents and among health care workers associated with long-term care outbreaks, respectively. Heterogeneity in geographical location or finer granularity to the level of long-term care homes and hospitals are not captured by the current model. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. McCarthy, Z., Xiao, Y., Scarabel, F. et al. Quantifying the shift in social contact patterns in response to non-pharmaceutical interventions. J.Math.Industry 10, 28 (2020). 
https://doi.org/10.1186/s13362-020-00096-y
Keywords: Intervention evaluation; Heterogeneous mixing; Non-pharmaceutical interventions
Part of the collection: Mathematical models of the spread and consequences of the SARS-CoV-2 pandemics. Effects on health, society, industry, economics and technology.
A dichotomy between model responses of tropical ascent and descent to surface warming
Hui Su, Chengxing Zhai, Jonathan H. Jiang, Longtao Wu, J. David Neelin & Yuk L. Yung
npj Climate and Atmospheric Science volume 2, Article number: 8 (2019)
Subjects: Atmospheric dynamics; Climate and Earth system modelling
Simulations of tropical atmospheric circulation response to surface warming vary substantially across models, causing large uncertainties in projections of regional precipitation change. Understanding the physical processes that drive the model spread in tropical circulation changes is critically needed. Here we employ the basic mass balance and energetic constraints on tropical circulation to identify the dominant factors that determine multidecadal circulation strength and area changes in climate models. We show that the models produce a robust weakening of descent rate under warming regardless of surface warming patterns; however, ascent rate change exhibits inter-model spread twice as large as descent rate because of diverse model responses in the radiative effects of clouds, water vapor, and aerosols. As ascent area change is dictated by the disparate descent and ascent rate changes due to the mass budget and the inter-model spread in descent rate change is small, the model spread in ascent area change is dominated by that of ascent rate change, resulting in a strong anti-correlation of –0.85 between the fractional changes of ascent strength and area across 77 climate model simulations. This anti-correlation leads to a corresponding inverse relationship between the rates of precipitation intensifying and narrowing of the inter-tropical convergence zone (ITCZ), suggesting tropical ascent area change can be potentially used to constrain the ITCZ precipitation change. Longwave cloud radiative effect at the top-of-atmosphere (TOA) in the convective region is identified to be a major source of uncertainties for tropical ascent rate change and thus for regional precipitation change.
Accurate prediction of regional precipitation response to long-term surface warming is critical to decision making regarding adaptation and mitigation strategies. However, climate model predictions of regional precipitation change contain large uncertainties. Previous studies have shown that the circulation-related dynamic component of precipitation change is the primary contributor to the inter-model variance in regional precipitation predictions.1,2 Thus, it is a top priority in climate research to better understand and reduce discrepancies in model simulations of atmospheric circulation response to surface warming.2 Tropical circulation systems such as the Hadley Circulation and Walker Circulation consist of prevailing ascending and descending regions.
Thermodynamic theories postulate that the strength of tropical overturning circulation would weaken in response to surface warming in terms of both mean ascent and descent rates3,4,5 and the ascent area would tighten6,7,8,9 while the descent area expands poleward.10,11 During the satellite era since 1979, many studies showed tropical circulation especially the Walker Circulation have strengthened along with a steady increase of global-mean surface temperature (Ts),12,13,14,15,16,17,18 although a weakening of the Walker Cell was found in other studies that covered longer temporal periods,19,20 motivating a number of studies to reconcile the apparent contradiction between the thermodynamic theories and the observations. Climate models driven by observed sea surface temperature (SST) approximately capture the Walker Circulation strengthening, but most coupled model simulations fail to reproduce the observed signal and a large inter-model spread exists in the simulated decadal circulation change.21,22,23 It was shown that the change of east-west SST gradient associated with the decadal variability in the tropical Pacific or the frequency of central-Pacific El Niño is responsible for the observed strengthening of the Walker Circulation since 1979.18,21,22,23 However, large differences exist in the simulated trends of the circulation strength even when observed SST is prescribed in the model experiments, demonstrating that model representations of atmospheric processes have significant discrepancies. In this study, we present a unique perspective to decipher the diverse circulation responses to surface warming in climate models. Using a large number of simulations available from the Couple Model Inter-comparison Project Phase 5 (CMIP5), we seek robust and consistent signals across the models and identify the dominant physical processes that govern the inter-model spread in tropical circulation changes. In particular, we treat the tropics (30°S–30°N) as two boxes corresponding to the ascending and descending branches of the overturning circulation with uniform ascent rate (Wu) and descent rate (Wd) in each box, respectively.24,25 The area fractions of the ascending and descending boxes are denoted as Au and Ad, respectively, with Au + Ad = 1. The caveats for this simplified framework are discussed later. We show that a weakening of Wd with warming is simulated in all models with relatively small model spread, but Wu can increase or decrease because of model differences in simulating the radiative effects of clouds, water vapor, and aerosols. A strong anti-correlation exists between the fractional changes of Wu and Au across the models because of the dichotomy between the model representations of the descent and ascent responses to warming. This anti-correlation was reported in Byrne et al.26 but its mechanism was not fully explored. We further show this anti-correlation leads to "the wetter the narrower" ITCZ response across the models. It is found that different longwave cloud radiative effects in the convective region are a primary contributor to the diverse Wu and Au sensitivities to surface warming. Ascent area change constrained by mass balance The mass balance for the tropical circulation requires WuAu = WdAd. In climatological means, the imbalance between the ascent mass flux WuAu and the descent mass flux WdAd within 30°S-30°N is only 2-3% (Supplementary Figure 1). 
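As a schematic illustration of this two-box bookkeeping (assuming a regular latitude-longitude grid already restricted to 30°S-30°N, with negative ω500 denoting ascent; the inputs are placeholders, not the authors' analysis code), the ascent area fraction, mean ascent and descent rates, and the mass-flux imbalance can be computed as follows.

```python
import numpy as np

def two_box_diagnostics(omega500, lat):
    """Two-box diagnostics from monthly-mean omega at 500 hPa (Pa/s), 30S-30N only.

    omega500: array (nlat, nlon); negative values denote ascent.
    lat     : latitudes (degrees) of the nlat rows, used for area weighting.
    Returns (Au, Wu, Wd, imbalance), with imbalance = (Wu*Au - Wd*Ad) / (Wd*Ad).
    """
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(omega500)  # area weights
    up = omega500 < 0.0
    Au = w[up].sum() / w.sum()                            # ascent area fraction
    Wu = np.average(-omega500[up], weights=w[up])         # mean ascent rate (Pa/s)
    Wd = np.average(omega500[~up], weights=w[~up])        # mean descent rate (Pa/s)
    Ad = 1.0 - Au
    return Au, Wu, Wd, (Wu * Au - Wd * Ad) / (Wd * Ad)
```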
As the Hadley Circulation expands poleward under global warming,10,11 the imbalance between the ascent and descent mass fluxes within 30°S/N would increase. Let us first ignore the mass imbalance for simplicity but quantify its impact later. Making use of the relation \(\frac{{dA_u}}{{dT_s}} = - \frac{{dA_d}}{{dT_s}}\) in the fixed latitudinal band, we can write the fractional change of the ascent area as $$\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}} = \left. {\left( {\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}} - \frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}} \right.} \right)\left( {1 - A_u} \right).$$ Defining the ascending region by monthly-mean vertical pressure velocity at 500 hPa (ω500) < 0, we compute the fractional changes of Wu, Wd, and Au per degree of Ts increase using the linear trends of Wu, Wd, and Au scaled by their multi-year means and the global-mean Ts trends from 1979 to 2005 in 28 atmosphere-only (AMIP) and 24 historical experiments and from 2006 to 2100 in 25 RCP4.5 experiments. The long-term Ts trends are similar in the three types of experiments, all around 0.2 K decade–1. The typical value of Au is about 0.4 in the models (Supplementary Figure 2), so (1−Au) = 0.6 is used when evaluating Eq (1) by scattering \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) against \(\left. {\left( {\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}} - \frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}} \right.} \right)\)(1−Au) for all experiments (Fig. 1). Tropical ascent area change dictated by differential changes of descent and ascent rates. The fractional change of tropical ascent area scattered against the differential responses of mean descent rate and ascent rate to surface warming across the model simulations (see Eq (1)). All fractional changes are normalized by global-mean surface temperature trends over the same period. Each symbol represents a model experiment. The black line marks the least squares linear fit across all model experiments with correlation coefficient (R) shown. The dotted line corresponds to y = x Figure 1 shows that the mass balance places a dominant constraint on the ascent area change in the models. The diagnostic relation (Eq. 1) captures about 86% of the across-model variance in \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\), which confirms the validity of the simple two-box model in representing the mass balance of the three-dimensional tropical circulation. It reveals that the ascent area change compensates for the differential responses of the descent and ascent rate to surface warming: The correlation between \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) and the r.h.s. of Eq (1) is 0.93 and all the experiments follow closely the 1:1 line. High correlations are found within each type of experiments (0.90 for AMIP, 0.89 for historical and 0.87 for RCP4.5 experiments). The deviation from the 1:1 line could be caused by the model spread in the exact values of Au, the full width of the Hadley Circulation, and the poleward expansion of the Hadley Circulation with warming (see Supplementary Discussion, Supplementary Figures 2 and 3). As there is only 14% of the inter-model variance in \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) not explained by Eq (1), we use this simple framework to interpret the relationships between the changes in the circulation strength and area. It is also evident in Fig. 1 that most (66 out of 77) model simulations produce a tightening of the ascent area under warming and the values of \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) range from –6% K–1 to 2% K–1. 
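Before comparing the experiment types, here is a minimal sketch of how these normalized fractional changes and the right-hand side of Eq. (1) can be evaluated from annual time series. The series below are synthetic placeholders standing in for one experiment's Wu, Wd, Au, and global-mean Ts; they are not model output.

```python
import numpy as np

def frac_change_per_K(x, ts, years):
    """Least-squares trend of x, scaled by its multi-year mean and by the Ts trend."""
    x_trend = np.polyfit(years, x, 1)[0]
    ts_trend = np.polyfit(years, ts, 1)[0]
    return (x_trend / x.mean()) / ts_trend

# Synthetic annual series for 1979-2005 (placeholders for a single experiment).
rng = np.random.default_rng(0)
years = np.arange(1979, 2006)
Ts = 288.0 + 0.02 * (years - 1979) + 0.05 * rng.standard_normal(years.size)   # ~0.2 K/decade
Wu = 0.045 * (1.0 + 0.0005 * (years - 1979)) + 5e-4 * rng.standard_normal(years.size)
Wd = 0.030 * (1.0 - 0.0010 * (years - 1979)) + 5e-4 * rng.standard_normal(years.size)
Au = 0.40 * (1.0 - 0.0005 * (years - 1979)) + 2e-3 * rng.standard_normal(years.size)

dWu = frac_change_per_K(Wu, Ts, years)   # fractional change per K of warming
dWd = frac_change_per_K(Wd, Ts, years)
dAu = frac_change_per_K(Au, Ts, years)

# Right-hand side of Eq. (1), using the typical ascent fraction Au ~ 0.4.
dAu_predicted = (dWd - dWu) * (1.0 - 0.4)
print(dAu, dAu_predicted)
```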
The RCP4.5 runs generally have a small inter-model spread because a longer period is used in deriving the trends. Using a subset of 25 years of data from the RCP4.5 experiments yields larger spread, comparable to those in the AMIP and historical runs. The multi-model-mean \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) in the AMIP experiments is –2.5% K–1, while the multi-mean-mean\(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) in the historical and RCP4.5 experiments are –1.0% K–1 and –0.3 % K–1, respectively. To understand the model behaviors in the Au change, we examine the physical processes that govern the changes of Wu and Wd, and the dominant factors that are responsible for the model differences. Energetic control of the ascent and descent rates The moist static energy (MSE) budget framework combines thermodynamic and moisture equations and effectively cancels convective heating and moisture sink. It has been used extensively in studies of tropical dynamics.27,28,29 Considering the weak temperature and moisture gradients in the tropics and the dominance of the first baroclinic mode in vertical motion,30,31,32 the magnitude of Wu is approximately determined by the net energy flux into the atmospheric column \(\left( {F_{net}^u} \right)\) divided by gross moist stability (GMS), Wu = \(F_{net}^u\)/GMS. The GMS represents the efficiency of energy export out of the convective region given unit vertical ascent and is sensitive to the vertical structure of MSE and vertical motion.31,32 In this study, we analyze the relationship between \(\frac{1}{{F_{net}^u}}\frac{{dF_{net}^u}}{{dT_s}}\) and \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) across the models, and the contributions of the GMS changes and other processes can be inferred from the residue, i.e., $$\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}} = \frac{1}{{F_{net}^u}}\frac{{dF_{net}^u}}{{dT_s}} - \frac{1}{{{\mathrm{GMS}}}}\frac{{d{\mathrm{GMS}}}}{{dT_s}}.$$ Similarly, over the tropical descent zone, the net energy loss in the free troposphere (\(F_{net}^d\)) must be balanced by adiabatic warming associated with subsidence so that Wd = \(F_{net}^d\)/DSS, where DSS represents the dry static stability.4 We use DSS = −\(\frac{T}{\theta }\frac{{\partial \theta }}{{\partial P}}\) at 500 hPa, where T is temperature and θ is potential temperature (see methods). Therefore, the fractional change of Wd depends on the fractional changes of \(F_{net}^d\) and DSS, i.e., $$\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}} = \frac{1}{{F_{net}^d}}\frac{{dF_{net}^d}}{{dT_s}} - \frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dT_s}}.$$ The diverse responses of Wd and Wu in the models are shown in Fig. 2. It is striking that all 77 simulations produce a weakening of Wd despite of a variety of Ts warming patterns, while the fractional change of Wu per unit surface warming \(\left( {\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}} \right)\) varies from −5% K–1 to 7% K–1. The inter-model standard deviation for \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) is only 1.3 % K–1, compared to 2.7% K–1 for \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}.\) Most AMIP runs (25 out of 28) produce a strengthening of Wu, but most historical runs (17 out 24) simulate a weakening of Wu. 
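As a small aside on the stability diagnostic defined above, the snippet below evaluates DSS = −(T/θ)∂θ/∂p at 500 hPa by centre differencing θ between 400 and 600 hPa, the discretization described later in the Methods. The numerical values are rough tropical placeholders, not model output.

```python
def dry_static_stability(T500, theta400, theta500, theta600):
    """DSS = -(T/theta) * d(theta)/dp at 500 hPa, centre-differenced in pressure (K/Pa)."""
    dtheta_dp = (theta400 - theta600) / (40000.0 - 60000.0)  # Pa; theta decreases with p
    return -(T500 / theta500) * dtheta_dp

# Rough tropical values (placeholders): theta increases with height, so DSS > 0.
dss = dry_static_stability(T500=266.0, theta400=329.0, theta500=324.0, theta600=320.0)
print(f"DSS ~ {dss:.1e} K/Pa")
```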
As shown in previous studies, the discrepancy between the AMIP and historical runs are mainly caused by the misrepresentation of SST gradient in the coupled simulations.18,21,22,23 All RCP4.5 experiments predict a weakening of Wu and Wd with relatively small inter-model spread comparable to their Wd changes, consistent with the small values of \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) in the RCP4.5 runs (Fig. 1). Albern et al. (2018)33 showed that uniform SST warming leads to a robust weakening of descent rate in 8 aqua-planet model simulations regardless of whether or not cloud-radiative interactions are included, but the ascent rate and area changes are more variable with even different signs, suggesting that the dichotomy between the descent and ascent responses to warming also occurs under idealized model configurations. Contrasting behaviors of the simulated trends in the descent and ascent rates in CMIP5 models. Inter-model spread and standard deviation in the fractional changes of mean descent rate (Wd), mean ascent rate (Wu), global-mean convective mass flux (Mc), and dry static stability (DSS) per unit surface warming. Each symbol represents a model experiment. The open circle denotes the ensemble mean of all model experiments and the error bar denotes one inter-model standard deviation It is worth noting that the changes of Wu per degree of surface warming do not follow the effective convective mass flux (Mc) derived from the differential responses of global-mean precipitation and boundary layer moisture, \(\frac{1}{{M_c}}\frac{{dM_c}}{{dT_s}} = \frac{1}{P}\frac{{dP}}{{dT_s}} - \frac{1}{{q_v}}\frac{{dq_v}}{{dT_s}}\), as suggested by early studies.3,5 The fractional changes of Mc are universally negative between –6.3% K–1 to –3.8% K–1 with an ensemble mean of –5.1% K–1 and one standard deviation of 0.5% K–1 (Fig. 2), and they do not have a simple relation with the grid-scale Wu changes (their correlation is 0.026, Supplementary Figure 4a). The positive correlation between the changes of Mc and Wu shown in Fig. 4 of Vecchi and Soden5 originates from the co-variability of both quantities with the Ts increase (Supplementary Figure 4b). Given unit surface warming, Wu can increase or decrease, but Mc always decreases. It is important to distinguish these two quantities even though in some cases their changes are of the same sign, such as for centennial changes in the RCP4.5 runs (Fig. 2). In contrast, the weakening of Wd in all models is consistent with the expectation that the increase of DSS with warmer Ts dominates over the increase of radiative cooling.4,34 When the surface warms, tropical tropospheric temperature profiles tend to follow moist adiabats so that upper troposphere warms more than lower troposphere, resulting in a more stable atmosphere. The increase of DSS with warming is consistently simulated in all models, despite of quantitative differences between the experiments (Fig. 2). The ensemble-mean \(\frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dT_s}}\) for all experiments is about 4.8 % K−1 and the standard deviation is 1.3 % K–1, close to that of \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}.\) A scatterplot between the simulated \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) against the corresponding \(\frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dT_s}}\) (Fig. 
3a) reveals a negative correlation of –0.34 (statistically significant at 99% level), while the fractional change of Wd without normalization of the Ts trend yields a higher negative correlation of –0.72 with that of DSS (Fig. 3b), confirming the dominance of the DSS change in driving the reduction of large-scale subsidence. The correlation between \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) and \(\frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dT_s}}\) is 0.13 (statistically insignificant) for AMIP, –0.60 for historical and –0.57 for RCP4.5. The insignificant correlation in the AMIP experiments may be caused by the relatively small inter-model variance of \(\frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dT_s}}\) associated with the same prescribed SSTs, which does not overcome the considerable noises in the short-term trends of DSS. On the other hand, the signal-to-noise ratios are much higher in the historical and RCP4.5 experiments (the uncertainties of the long-term trends are much smaller than the uncertainties of the short-term trends) so that the negative correlations between \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) and \(\frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dT_s}}\) are outstanding. Without normalization of the Ts trend, the correlation between \(\frac{1}{{W_d}}\frac{{dW_d}}{{dt}}\) and \(\frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dt}}\) is –0.77 for the historical runs and –0.81 for the RCP4.5 runs, but insignificant for the AMIP runs. Weakening of descent rate governed by the increase of dry static stability with warming. The fractional change of mean descent rate scattered against the fractional change of dry static stability averaged over the tropical descending region (a) with and (b) without normalization by global-mean surface temperature trend for the same period. Each symbol represents a model experiment. The black line marks the least squares linear fit across all model experiments with correlation coefficient (R) shown The relatively small inter-model spread in the Ts-normalized \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) and \(\frac{1}{{{\mathrm{DSS}}}}\frac{{d{\mathrm{DSS}}}}{{dT_s}}\) suggests that the sensitivities of Wd and DSS per degree of surface warming are fairly consistent among the models. Model differences in convective adjustment and radiative effects of water vapor and clouds could cause different tropospheric temperature changes and result in different DSS sensitivities and thus Wd changes, but their inter-model spreads are about half of that in \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) (Fig. 2). On the other hand, the model spreads in \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) and \(\frac{1}{{F_{net}^d}}\frac{{dF_{net}^d}}{{dT_s}}\) are not significantly correlated (figure not shown). Different from Wd, the model spread in \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) is significantly influenced by the change of net energy flux into the atmospheric column (Fig. 4a). The AMIP experiments generally have greater net energy flux increase, contributing to a strengthening of ascent, while the historical runs have lower net energy flux input and reduced ascent. The RCP4.5 runs are clustered together with nearly neutral \(F_{net}^u\) change. 
We recognize that the significant correlation of 0.74 between \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) and \(\frac{1}{{F_{net}^u}}\frac{{dF_{net}^u}}{{dT_s}}\) is dominated by the drastic difference between the uncoupled (AMIP) and coupled (historical and RCP4.5) experiments, between which the Ts warming patterns are very different (Supplementary Figure 5). The correlations within each type of the experiments are not statistically significant at the 95% level partly due to the low signal-to-noise ratios for the inter-model spread relative to the noises in \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) and \(\frac{1}{{F_{net}^u}}\frac{{dF_{net}^u}}{{dT_s}}\) within each type of model ensembles, and because many other processes besides the change of Funet such as the change of GMS, transient and stationary eddies and ocean heat uptake could all play a role in altering the ascent strength. Our simplified framework focuses on identifying the first-order factors that act across all the experiments that span a wide range of surface warming patterns and model physics. Ascent rate change driven by net energy flux into the atmospheric column. The fractional change of mean ascent rate scattered against a the fractional change of net energy flux into the atmospheric column, b the longwave component of cloud radiative effect at the top-of-atmosphere, and c the clear-sky shortwave absorption in the atmosphere averaged over the tropical ascending region across the model simulations. All fractional changes are normalized by global-mean surface temperature trends over the same period. Each symbol represents a model experiment. The black line marks the least squares linear fit across all model experiments with correlation coefficient (R) shown The ensemble-mean Ts trends in the AMIP and historical runs display clear discrepancies in Eastern Pacific and Southern Oceans, and the multi-model-mean Ts trend in the RCP4.5 runs is similar to the historical multi-model-mean (Supplementary Figure 5), suggesting that the AMIP and historical Ts differences are mainly due to natural variability. The SST gradients can drive low-level winds and convergence,35 which directly affect evaporation and sensible heat flux. Greater lower-level convergence can promote stronger convection and higher cloud top, which can lead to stronger cloud longwave warming effect. The increase of water vapor may also contribute to net warming in the atmosphere. The enhanced energy flux into the ascending region can drive a strengthening of the ascent and induces a positive feedback between circulation and the radiation effects of clouds and water vapor,34,36 which can in turn influence the distributions of SST in the coupled simulations. We have examined each component of \(F_{net}^u\) in the models. It is found that the longwave cloud radiative effect at the TOA (positive for outgoing fluxes) has the highest correlation (R = –0.52) with \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) (Fig. 4b), followed by clear-sky shortwave absorption in the atmosphere (R = 0.43) (Fig. 4c). The models with a reduced TOA outgoing longwave radiation (OLR) associated with cloud changes tend to have a strengthening of ascent, i.e., the more longwave trapping by clouds, the greater strengthening of ascent. The increase of cloud top height, cloud amount or cloud thickness all could contribute to the reduction of OLR in the cloudy region. 
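A minimal sketch of how the TOA longwave cloud radiative effect over the ascending region can be diagnosed from standard model output is given below. The variable names rlut (all-sky) and rlutcs (clear-sky outgoing longwave radiation) follow common CMIP output conventions, the sign convention (positive for outgoing fluxes) follows the text, and the masking and weighting details are illustrative rather than the authors' exact procedure.

```python
import numpy as np

def lw_cloud_effect_ascent(rlut, rlutcs, omega500, lat):
    """Area-weighted TOA longwave cloud radiative effect over the ascent region.

    rlut, rlutcs: all-sky and clear-sky outgoing longwave radiation (W/m^2),
                  arrays (nlat, nlon) restricted to 30S-30N.
    Positive values add to the outgoing flux; negative values indicate that
    clouds trap longwave radiation (i.e., they reduce the OLR).
    """
    cre_lw = rlut - rlutcs
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(rlut)
    up = omega500 < 0.0
    return np.average(cre_lw[up], weights=w[up])
```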
The discrepancies in the modeled cloud radiative effects are caused partly by different SST patterns (such as the differences between the AMIP and historical experiments) and partly by different model parameterizations of deep convection and cloud microphysics (such as the differences between the AMIP experiments). The correlation between \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) and the cloud longwave radiative effects is –0.56 in the historical experiments alone, but becomes insignificant within the AMIP and RCP4.5 experiments, indicating the effects of other processes on the ascent strength change. Inaccurate model parameterizations of water vapor absorption of solar radiation37 and inconsistent treatment of absorbing aerosol trends38 could contribute to the clear-sky shortwave absorption disagreements (Fig. 4c). In the limited number of historical experiments that include the absorbing aerosol concentrations in the outputs, we find the changes of absorbing aerosol concentrations are positively correlated with the changes of clear-sky shortwave absorption in the ascent region (Supplementary Figure 6). Increasing absorbing aerosols would favor stronger ascent. However, it is difficult to quantify the relative contributions of water vapor and absorbing aerosols in clear-sky shortwave absorption using the available model experiments. Nevertheless, it is clear that cloud radiative changes over the convective region add substantial diversity to the tropical ascent sensitivity to surface warming. On the contrary, the physics over the dry and non-convective region is relatively simple and consistently captured in the models with smaller inter-model spread. Albern et al.33 also found that cloud-radiative interactions amplify the model differences in the ascent response to warming in the aqua-planet simulations. Anti-correlation between the ascent rate and area change The mass balance between the ascending and descending boxes stipulates that the tropical ascent area change acts to offset the differential responses of Wd and Wu to warming (Eq. 1). The weakening of Wd is predominantly driven by the rise of DSS, while the Wu sensitivity to warming is largely influenced by net energy flux in the atmospheric column, for which clouds, water vapor, and aerosols can all come into play. Because the increase of DSS with warming overcomes the increase of radiative cooling in the descent region, the weakening of Wd occurs in all model experiments irrespective of the SST warming patterns, although the magnitudes of the changes depend on the SST distributions and model physics. Constrained by the mass budget, ascent area would decrease when descent rate weakens if the ascent rate stays unchanged. When ascent rate also weakens such as in the RCP4.5 experiments or in the aqua-planet simulations under uniform SST warming,33 the ascent area can increase or decrease in order to compensate for the differential changes of descent and ascent rates. Under CMIP5 projected climate change scenarios, a tightening of ascent area is commonly simulated,7,8,9 suggesting there are robust physical processes at work to achieve the mass balance of tropical circulation. Previous studies showed that the tightening of ascent area is caused by the "upped-ante" mechanism39 and the negative MSE advection8 in a warmer climate. 
The "upped-ante mechanism" states that the margin of the convective zone tends to experience suppressed convection when tropospheric temperature increases because the low-level MSE required to initiate convection has increased relative to the MSE of inflow.39 The negative MSE advection argument posits that the narrowing of the ITCZ is primarily caused by the increased negative MSE advection associated with the mean Hadley Circulation acting on the strengthened MSE gradient between the tropics and subtropics.8 In addition, the MSE divergence caused by transient eddies also contributes to the narrowing of the ITCZ.8 These mechanisms are the dynamic pathways that fulfill the fundamental mass constraint. For inter-model spread that measures the uncertainties of model simulations, the correlation between \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) and \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) across the models is only 0.14 (statistically insignificant) for all the experiments or within each type of experiments (Fig. 5a). This is because the inter-model spread in ascent area change is dominated by the large model spread in ascent rate change since the sensitivity of descent rate change per unit surface warming is similar between the models. The lack of correlation between the inter-model spreads in \(\frac{1}{{W_d}}\frac{{dW_d}}{{dT_s}}\) and \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) does not violate the relations in Eq (1): we find the interannual variations of tropical ascent area are positively correlated with the variations of descent rate, with the correlations of about 0.3 to 0.5. On the other hand, the interannual variations of Au and Wu are negatively correlated with R = −0.7 to −0.5 for different models (figure not show). Relations between the changes of descent/ascent rates and ascent area. The fractional change of a mean descent rate and b mean ascent rate scattered against the fractional change of the ascent area across the model simulations. All fractional changes are normalized by global-mean surface temperature trends over the same period. Each symbol represents a model experiment. The black line marks the least squares linear fit across all model experiments with correlation coefficient (R) shown. The dotted line in b corresponds to y = –x Figure 5b shows there is a remarkably high negative correlation of –0.85 between the inter-model spreads of ascent area and ascent rate sensitivity to warming across the 77 model experiments. Similar high correlations exist with R = –0.85, –0.68, and –0.74 for AMIP, historical and RCP4.5 experiments, respectively. This anti-correlation results from the dichotomy between the inter-model spread in the descent and ascent rate changes. The large inter-model spread in ascent rate change dominates over the small spread in the descent rate change and determines the inter-model spread of ascent area change. Although the ascent rate can weaken or strengthen depending on cloud radiative feedbacks and other net energy flux perturbations, the ascent area would respond in a way to offset the differences between the descent and ascent rate changes. In RCP4.5 and most of the historical runs, the changes of Wu and Au are of the same sign. The tightening of Au and the weakening of Wu work together to balance the weakened descent rate. A relative strengthening of mean ascent rate has to be counteracted by a relative tightening of the ascent area as the weakening of descent is well constrained across the models. 
Byrne et al.26 pointed out that such a negative correlation is expected for a given fractional change of mass transport, but they did not investigate the relationships between descent and ascent rate and area changes. Our study generalizes the tropical ascent area and strength definitions to include both the Hadley and Walker Circulation changes and employ a large number of model experiments under drastically different surface warming conditions. The mass balance and MSE budget framework allow us to quantitatively interpret the inverse relationship between the ascent area and strength changes for the tropical circulation. More importantly, we find that cloud-radiative interactions are the leading sources of model uncertainties for ascent rate change and therefore ascent area change. It is reasonable to conjecture that the anti-correlation between \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) and \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) is sensitive to cloud radiative effects. Using the aqua-planet simulation results produced by Albern et al.,33 we have computed the correlations between the ascent rate and ascent area changes for the 8 models with and without cloud-radiative interactions. It is striking that the correlation is –0.92 when cloud radiation is active but becomes –0.37 when cloud radiation is turned off (Supplementary Figure 7). Obviously, the large model spread in ascent rate change caused by the diverse cloud radiative effects is essential to the strong anti-correlation between \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\) and \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}.\) "The wetter the narrower" ITCZ changes The strong anti-correlation between the changes of Wu and Au has close relevance to precipitation change over the ITCZ. Previous studies showed that the circulation-driven dynamic component of precipitation change dominates the model differences in regional precipitation change.1,2 Therefore, it is not surprising that the inter-model spread in mean precipitation change over the ITCZ (Pu) is closely related with the Wu change (R = 0.79, Fig. 6a). Pu is the average rain rate over the ascending region where monthly-mean ω500 < 0. Because of the strong anti-correlation between \(\frac{1}{{W_u}}\frac{{dW_u}}{{dT_s}}\) and \(\frac{1}{{A_u}}\frac{{dA_u}}{{dT_s}}\), the fractional change of Pu is negatively correlated with that of Au with R = –0.85 (Fig. 6b) for all the model simulations or within each type of the experiments (R = –0.86, –0.66, and –0.68 for the AMIP, historical and RCP4.5 experiments, respectively). The model simulations with a greater strengthening (or less weakening) of the ascent rate have a larger increase of precipitation over the ITCZ but the widths of the ITCZ in these models are narrower. The wetter the narrower precipitation change over the tropical ascending region. The fractional change of the precipitation averaged over the tropical ascending region scattered against a the fraction change of mean ascent rate and b the fractional change of the ascent area across the model simulations. All fractional changes are normalized by global-mean surface temperature trends over the same period. Each symbol represents a model experiment. 
The black line marks the least squares linear fit across all model experiments with correlation coefficient (R) shown "The wetter the narrower" model spread in tropical rain belt change is a direct consequence of the compensating ascent rate and area changes constrained by the weakening of the descent rate in response to surface warming, a robust result across all models. Model constraints in other physical processes such as surface evaporation may also lead to "the wetter the narrower" ITCZ precipitation change as thermodynamics and dynamics in the tropics are closely coupled. The strong anti-correlation between the changes of Pu and Au across the models can serve as a powerful constraint on the model simulations of regional precipitation change. There has been observational evidence for the narrowing of the meridional width of the ITCZ in the past few decades40; however, observational metrics that represent the holistic 2-dimensional extent of the ascending region would be more appropriate for comparison with the model results presented here. Quantification of the long-term trends in the observational proxies of the ascent area is imperative to assess the model performance in capturing "the wetter the narrower" ITCZ change signal. This study shows that the predominant constraint on the fractional change of the tropical ascent area is the differential responses of the descent and ascent rates to surface warming. Because the descent rate response is dominated by the increase of dry static stability with warming, a universal weakening of the descent rate is simulated in the models regardless of the warming patterns, although the magnitude of the weakening is influenced by SST distributions and convergence as wave dynamics effectively transport convection-induced tropospheric temperature anomalies throughout the tropics.30,41 Across the models, the magnitudes of descent rate change have much smaller inter-model spread than those of ascent rate change. The disparate model responses of descent and ascent rate to surface warming have important implications. First, a weaker descent tends to be associated with a tighter ascent because of the mass balance constraint unless a greater weakening of ascent rate happens (Eq. 1). When ascent rate intensifies, a pronounced narrowing of the ascent area occurs. This argument is not limited to the Hadley Circulation. Decadal changes of the Walker Circulation could also contribute significantly to the changes of ascent area, such as in the post-1979 period. Su et al.9 showed that the tightening of the tropical ascent is correlated with a decrease of tropical-mean high altitude cloud fraction, which affects OLR and global-mean hydrological sensitivity. 
Thus, we infer that the tightening of ascent area can promote a macro-physical "iris" effect,42,43 in addition to the weakening of upper tropospheric radiative flux divergence discussed in Bony et al.34 The macro-physical "iris" effect is different from the original "iris" hypothesis that focused on the microphysical processes inside convective clouds concerning precipitation and detrainment efficiencies.44 It may not have a strong negative climate feedback as the original "iris" effect because the reduction of high cloud fraction does not necessarily lead to a net radiative cooling effect on the surface-atmosphere system.9 Byrne and Schneider43 showed that the macro-physical "iris" effect is not likely to exert a strong influence on global climate sensitivity because the circulation-induced TOA radiative perturbations are constrained to be small on global averages and global temperature is relatively insensitive to tropical TOA perturbations. Second, the dominance of the inter-model spread in the ascent rate change over that of the descent rate change results in a strong anti-correlation between the ascent rate and area changes across the models, leading to "the wetter the narrower" ITCZ precipitation responses. As it is possible to measure the tropical ascent area using various space-borne observational proxies, the strong correlation between the ascent area and the ITCZ precipitation enables a useful constraint on the precipitation changes. Moreover, we show that the ascent rate change varies substantially across the models due to the model differences in simulating the radiative effects of clouds, water vapor and aerosols. Accordingly, the magnitudes of the tightening of the ascent area vary drastically between the models. To reduce the uncertainties in tropical circulation and regional precipitation projections, improving the model physics related to deep convective clouds is critically needed. We employ 77 climate model simulations available at the CMIP5 archive (https://cmip.llnl.gov/cmip5/data_portal.html). Three types of model simulations are analyzed: 28 AMIP-type runs driven by the observed SST and sea ice from 1979 to 2005, 24 atmosphere-ocean coupled historical runs from 1979 to 2005, and 25 RCP4.5 runs from 2006 to 2100. The historical runs are driven by natural and anthropogenic forcings, universal across the models. The RCP4.5 runs project climate changes under a scenario with moderate mitigation of greenhouse gases (GHG). All analyses are conducted over the tropical domain (30°S–30°N) including both land and ocean, except for the global surface temperature trends. The annual-mean area coverage of the tropical ascent is based on the monthly mean data where ω500 < 0 Pa s−1. The linear trends in all variables are computed using the least square linear regression. Moist static energy budget framework We consider the tropical circulation consisting of two boxes of horizontally uniform properties in each box: one is moist and cloudy and the other is dry and clear. Deep convection and large-scale ascent prevail in the moist box while subsidence dominates in the dry box. 
Combing the energy and moisture budgets together, the moist static energy (MSE) budget in the flux form24 for the moist box is expressed as $$\partial _t\left\langle h \right\rangle + \left\langle {\nabla \cdot {\mathbf{v}}h} \right\rangle + \left\langle {\partial _p\omega h} \right\rangle = F_{net},$$ where the angle brackets ⧼*⧽ represent the mass-weighted vertical integral from the surface to the top of atmosphere (TOA) and h denotes MSE, i.e., h = CpT + Lvq + Φ with Φ being geopotential. The variables T, q, v and ω are atmospheric temperature, moisture, horizontal winds and vertical pressure velocity, and Cp and Lv are specific heat at constant pressure and latent heat of condensation. The Fnet on the r.h.s. is the net energy flux into the atmospheric column. It includes the net downward radiative flux at the TOA (Ft↓) and the net upward radiative flux at the surface (Fs↑) plus latent (E) and sensible (H) heat fluxes, i.e., $$F_{net} = F_t^ \downarrow + F_s^ \uparrow + E + H = (S_t^ \downarrow - S_t^ \uparrow - L_t^ \uparrow ) + (S_s^ \uparrow - S_s^ \downarrow + L_s^ \uparrow - L_s^ \downarrow ) + E + H,$$ where S and L denote shortwave and longwave radiative fluxes, respectively, with the flux directions indicated by the arrows, and the subscripts t and s mark the TOA and surface fluxes, respectively. Considering that tropical tropospheric temperature closely follows moist adiabats under quasi-equilibrium and the vertical velocity has a baroclinic structure with a maximum in the middle troposphere,24,28,29 we assume ω(x, y, z, t) = −Ω1(p)Wu, where Ω1(p) is the vertical profile of the first baroclinic mode of the vertical pressure velocity and Wu is the maximum strength of the vertical motion. The gross moist stability (GMS), which considers the effect of moisture on net atmospheric instability, can be defined as GMS = 〈Ω1(−∂ph)〉. Thus, the dominant control on the tropical ascent rate is Wu = \(F_{net}^u\)/GMS. For the descent box, radiative cooling balances against adiabatic warming so that Wd = \(F_{net}^d\)/DSS. The dry static stability DSS = − \(\frac{T}{\theta }\frac{{\partial \theta }}{{\partial P}}\) at 500 hPa is used, where T is temperature and θ is potential temperature. We derive DSS using monthly T and θ at 500 hPa, and \(\frac{{\partial \theta }}{{\partial P}}\) is computed by center differencing of θ at 400, 500, and 600 hPa. The relationships between the changes of Wu, Wd, \(F_{net}^u\) and DSS averaged over the ascent (ω500 < 0) and descent (ω500 > 0) regions within 30°S-30°N are analyzed in the study. Significance tests for the correlation coefficients For 77 independent samples, the 2-sided student-t test requires that the magnitude of correlation coefficient R ≥ 0.22 for the 95% significance level and R ≥ 0.29 for the 99% significance level. In this study, most correlation coefficients are statistically significant at the 99% significance level, unless noted otherwise. The analyses were conducted using MATLAB. All programing codes are available upon request from the corresponding author. The CMIP5 model simulations used in this study are all publicly available at https://cmip.llnl.gov/cmip5/data_portal.html. All analysis results and the code used during the study are available on request from the corresponding author. Bony, S. et al. Robust direct effect of carbon dioxide on tropical circulation and regional precipitation. Nat. Geosci. 6, 447–451 (2013). Xie, S. P. et al. Towards predictive understanding of regional climate change. Nat. Clim. 
Change 5, 921–930 (2015). Held, I. M. & Soden, B. J. Robust responses of the hydrological cycle to global warming. J. Clim. 19, 5686–5699 (2006). Knutson, T. R. & Manabe, S. Time-mean response over the tropical Pacific to increase CO2 in a coupled ocean–atmosphere model. J. Clim. 8, 2181–2199 (1995). Vecchi, G. A. & Soden, B. J. Global warming and the weakening of the tropical circulation. J. Clim. 20, 4316–4340 (2007). Su, H. et al. Weakening and strengthening structures in the Hadley Circulation change under global warming and implications for cloud response and climate sensitivity. J. Geophys. Res.: Atmospheres 119, 5787–5805 (2014). Lau, W. K.-M. & Kim, K.-M. Robust Hadley Circulation changes and increasing global dryness due to CO2 warming from CMIP5 model projections. Proc. Nat. Acad. Sci. 112, 3630–3635 (2015). Byrne, M. P. & Schneider, T. Narrowing of the ITCZ in a warming climate: Physical mechanisms. Geophys. Res. Lett. 43, 350–11,357 (2016). Su, H. et al. Tightening of tropical ascent and high clouds key to precipitation change in a warmer climate, Nature. Communications 8, 15771 (2017). Fu, Q., Johanson, C. M., Wallace, J. M. & Reichler, T. Enhanced mid-latitude tropospheric warming in satellite measurements. Science 312, 1179 (2006). Seidel, D. J., Fu, Q., Randel, W. J. & Reichler, T. J. Widening of the tropical belt in a changing climate. Nat. Geosci. 1, 21–24 (2008). Tanaka, H. L., Ishizaki, N. & Kitoh, A. Trend and interannual variability of Walker, monsoon and Hadley circulations defined by velocity potential in the upper troposphere. Tellus 56A, 250–269 (2004). Quan, X. W., Diaz, H. F. & Hoerling M. P. Change in the tropical Hadley Circulation since 1950, in The Hadley Circulation: Past, Present, and Future (eds H. F. Diaz & R. S. Bradley) pp. 85–120, (Cambridge Univ. Press, New York, 2004). Mitas, C. M. & Clement, A. Has the Hadley Circulation been strengthening in recent decades? Geophys. Res. Lett. 32, L03809 (2005). Sohn, B. J. & Park, S.-C. Strengthened tropical circulations in past three decades inferred from water vapor transport. J. Geophys. Res. 115, D15112 (2010). Solomon, A. & Newman, M. Reconciling disparate 20th century Indo-Pacific ocean temperature trends in the instrumental record. Nat. Clim. Change 2, 691–699 (2012). L'Heureux, M. L., Lee, S. & Lyon, B. Recent multidecadal strengthening of the Walker circulation across the tropical Pacific. Nat. Clim. Change 3, 571–576 (2013). Sohn, B. J., Yeh, S. W., Schmetz, J. & Song, H. J. Observational evidences of Walker circulation change over the last 30 years contrasting with GCM results. Clim. Dyn. 40, 1721–1732 (2013). Vecchi, G. A., Soden, B. J., Wittenberg, A. T., Held, I. M., Leetmaa, A. & Harrison, M. J. Weakening of tropical Pacific atmospheric circulation due to anthropogenic forcing. Nature 441, 73–76 (2006). Bellomo, K. & Clement, A. C. Evidence for weakening of the Walker circulation from cloud observations. Geophys. Res. Lett. 42, 7758–7766 (2015). Meng, Q. et al. (2012) Twentieth century Walker Circulation change: data analysis and model experiments. Clim. Dyn. 38, 1757–1773 (2012). Sandeep, S., Stordal, F., Sardeshmukh, P. D. & Compo, G. P. Pacific Walker Circulation variability in coupled and uncoupled climate models. Clim. Dyn. 43, 103–117 (2014). Ma, S. & Zhou, T. Robust Strengthening and Westward Shift of the Tropical Pacific Walker Circulation during 1979–2012: A Comparison of 7 Sets of Reanalysis Data and 26 CMIP5 Models. J. Clim. 29, 3097–3118 (2016). Pierrehumbert, R. T. 
Thermostats, radiator fins, and the run-away greenhouse. J. Atmos. Sci. 52, 1784–1806 (1995). Larson, K., Hartmann, D. L. & Klein, S. A. The role of clouds, circulation, and boundary layer structure in the sensitivity of the tropical climate. J. Clim. 12, 2359–2374 (1999). Byrne, M. P., A. G. Pendergrass, A. D. Rapp, K. R. Wodzicki. Response of the intertropical convergence zone to climate change: Location, Width and strength, current climate. Change Reports, 2198-6061, https://doi.org/10.1007/s40641-018-0110-5, (2018). Neelin, J. D. & Held, I. M. Modeling tropical convergence based on the moist static energy budget. Mon. Wea. Rev. 115, 3–12 (1987). Chou, C. & Neelin, J. D. Mechanisms of global warming impacts on regional tropical precipitation. J. Clim. 17, 2688–2701 (2004). Su, H. & Neelin, J. D. The scatter in tropical average precipitation anomalies. J. Clim. 16, 3966–3977 (2003). Sobel, A. H., Nilsson, J. & Polvani, L. M. The weak temperature gradient approximation and balanced tropical moisture waves. J. Atmos. Sci. 58, 3650–3665 (2001). Neelin, J. D. & Zeng, N. A quasi-equilibrium tropical circulation model—Formulation. J. Atmos. Sci. 57, 1741–1766 (2000). Wills, R. C., Levine, X. J. & Schneider, T. Local energetic constraints on Walker cir-culation strength. J. Atmos. Sci., 2017. https://doi.org/10.1175/JAS-D-16-0219.1. (2017). ISSN 0022-4928. Albern, N., Voigt, A., Buehler, S. A., & Grützun, V. Robust and nonrobust impacts of atmospheric cloud-radiative interactions on the tropical circulation and its response to surface warming. Geophysical Research Letters, 45, https://doi.org/10.1029/2018GL079599 (2018). Bony., S. et al. Thermodynamic control of anvil cloud amount. Proc. Nat. Acad. Sci. 113, 8927–8932 (2016). Lindzen, R. S. & Nigam, S. On the role of sea surface temperature gradients in forcing low-level winds and convergence in the tropics. J. Atmos. Sci. 44, 2418–2436 (1987). Voigt, A., and T. A. Shaw, Circulation response to warming shaped by shaped by radiative changes of clouds and water vapour. Nature Geoscience, 8, https://doi.org/10.1038/NGEO2345 (2015). DeAngelis, A. M., Qu, X., Zelinka, M. D. & Hall, A. An observational radiative constraint on hydrologic cycle intensification. Nature 528, 249–253 (2015). Pendergrass, A. G. & Hartmann, D. L. Global-mean precipitation and black carbon in AR4 simulations. Geophys. Res. Lett. 39, L01703 (2012). Neelin, J. D., Chou, C. & Su, H. Tropical drought regions in global warming and El Nino teleconnections. Geophys. Res. Lett. 30, 2275 (2003). Wodzicki, K. R. & Rapp, A. D. Long-term characterization of the Pacific ITCZ using TRMM, GPCP, and ERA-Interim. J. Geophys. Res. Atmos. 121, 3153–3170 (2016). Su, H., Neelin, J. D. & Meyerson, J. E. Sensitivity of tropical tropospheric temperature to sea surface temperature forcing. J. Clim. 16, 1283–1301 (2003). Choi, Y. et al. Revisiting the iris effect of tropical cirrus clouds with trmm and a-train satellite data. J. Geophys. Res.: Atmospheres 122, 5917–5931 (2017). Byrne, M. P. & Schneider, T. Atmospheric dynamics feedback: concept, simulations and climate implications. J. Clim. 31, 3249–3264 (2018). Lindzen, R. S., Chou, M. ‑D. & Hou, A. U. Does the Earth have an adaptive infrared iris? Bull. Am. Meteorol. Soc. 82, 417–432 (2001). We acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups for producing and making available their model output. 
On nonlocal symmetries generated by recursion operators: Second-order evolution equations

M. Euler, N. Euler and M. C. Nucci

Division of Mathematics, Department of Engineering Sciences and Mathematics, Luleå University of Technology, SE-971 87 Luleå, Sweden; Dipartimento di Matematica e Informatica, Università di Perugia, 06123, Perugia, Italy

Received February 2017. Revised May 2017. Published April 2017.

We introduce a new type of recursion operator suitable to generate a class of nonlocal symmetries for those second-order evolution equations in $1+1$ dimension which allow the complete integration of their time-independent versions. We show that this class of evolution equations is $C$-integrable (linearizable by a point transformation). We also discuss some applications.

Keywords: Nonlocal symmetries, recursion operators, evolution equations. Mathematics Subject Classification: Primary: 35G20, 35A30, 58J70.

Citation: M. Euler, N. Euler, M. C. Nucci. On nonlocal symmetries generated by recursion operators: Second-order evolution equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (8): 4239-4247. doi: 10.3934/dcds.2017181
npj Systems Biology and Applications

A personalized, multiomics approach identifies genes involved in cardiac hypertrophy and heart failure

Marc Santolini1,2,3,4, Milagros C. Romay5, Clara L. Yukhtman6, Christoph D. Rau5,7, Shuxun Ren7, Jeffrey J. Saucerman8, Jessica J. Wang7, James N. Weiss7, Yibin Wang7, Aldons J. Lusis7,9 & Alain Karma1

npj Systems Biology and Applications volume 4, Article number: 12 (2018)

A traditional approach to investigate the genetic basis of complex diseases is to identify genes with a global change in expression between diseased and healthy individuals. However, population heterogeneity may undermine the effort to uncover genes with significant but individual contribution to the spectrum of disease phenotypes within a population. Here we investigate individual changes of gene expression when inducing hypertrophy and heart failure in 100+ strains of genetically distinct mice from the Hybrid Mouse Diversity Panel (HMDP). We find that genes whose expression fold-change correlates in a statistically significant way with the severity of the disease are either up or down-regulated across strains, and therefore missed by a traditional population-wide analysis of differential gene expression. Furthermore, those "fold-change" genes are enriched in human cardiac disease genes and form a dense co-regulated module strongly interacting with the cardiac hypertrophic signaling network in the human interactome. We validate our approach by showing that the knockdown of Hes1, predicted as a strong candidate, induces a dramatic reduction of hypertrophy by 80–90% in neonatal rat ventricular myocytes. Our results demonstrate that individualized approaches are crucial to identify genes underlying complex diseases as well as to develop personalized therapies.

Contrary to "Mendelian" diseases where causality can be traced back to strong effects of a single gene, common diseases result from modest effects of many interacting genes.1 Understanding which genes are involved and how they affect diseases is a major challenge for designing appropriate therapies. Heart failure (HF) is a well-studied example of a genetically complex disease involving multiple processes that eventually lead to a common phenotype of abnormal ventricular function and cardiac hypertrophy.2 Numerous studies have attempted to pinpoint differentially expressed genes (DEGs) to find biomarkers for the prognosis of the disease and the design of appropriate drugs,3 as well as to explore the underlying affected signaling pathways.4 Such studies typically compare the average gene expression between samples in healthy and diseased states, such as non-failing vs failing hearts in murine,5 canine,6 or human samples (see7 for a broad review). Genes are ranked by the strength of their differential expression, and top-ranking genes are further investigated for pathway enrichment and biomarker potential. However, because of the different genetic backgrounds of the surveyed individuals, as well as different severities of HF, those studies show very limited overlap of DEGs. While separate studies typically identify tens to hundreds of DEGs, not a single DEG is common to all studies.7 Moreover, it is unclear whether the healthy state is itself a well-defined unique state.
In particular, several studies have shown that, due to compensatory mechanisms involved in homeostasis, different combinations of ion channel conductances in neurons and cardiac cells can lead to a normal electrophysiological phenotype, e.g., a similar bursting pattern of motor neurons or a similar cardiac action potential and calcium transient.8,9 This has led to the concept that genetically distinct individuals represent different "Good Enough Solutions" corresponding to distinct gene expression patterns underlying a healthy phenotype. Different combinations of gene expression in a healthy state resulting from genetic variations would be expected to yield different DEGs in a diseased state. Thus, small numbers of DEGs that are only shared by a subset of individuals, and would be missed by a standard population-wide DEG analysis, could in principle have a causal role. Identifying these genes remains a central challenge in personalized medicine.10,11 In order to explore the variability of individual trajectories leading to hypertrophy and HF, we leverage the Hybrid Mouse Diversity Panel (HMDP), a model system consisting of >100 genetically diverse strains of mice that we described previously12,13 (see Methods). Gene expression and phenotypic data are acquired before and 3 weeks after implantation of a pump delivering isoproterenol (ISO). This pathological stressor induces a global response characterized mainly by cardiac hypertrophy along with more marginal changes in chamber dilation and contractile function at the population level.12 As a result, we primarily focus on the identification of genes relevant for cardiac hypertrophy. Expression data is collected at the whole heart level and the Total Heart Weight is used to quantify the degree of cardiac hypertrophy. Importantly, the severity of the hypertrophic response is highly variable among strains, ranging from almost no hypertrophy to up to an 80% increase of heart mass. Our study is directed at understanding why certain individuals are more susceptible to or protected against cardiac hypertrophy due to their genetic backgrounds. Because mice from the same strains are isogenic and renewable, the HMDP offers the possibility to analyze differential gene expression and phenotype change in a unique setting where subjects in the control population can be matched to a subject with the same genetic background in the treated population. In that setup, one can correlate the stressor-related gene expression change with the corresponding phenotype change (in our case, heart mass increase) while controlling for genetic background, thereby disentangling intra-strain (stressor-induced) and inter-strain (genetics-induced) variations. In the specific case of HF onto which we focus here, such data could not be obtained in human studies where heart tissue biopsies are extracted from either healthy donor hearts or explanted hearts of late stage HF patients in a genetically diverse population.14 One would indeed require a population of identical twins in which one twin for each pair of twins is a heart donor and the other twin is a late stage HF patient. As such, gene expression data obtained from those biopsies can only be used to perform a population-level differential gene expression analysis. 
In contrast, here we identify relevant genes by correlating strain-specific temporal changes of gene expression, i.e., differential expression between a post-ISO mouse and another pre-ISO mouse from the same strain, with the corresponding strain-specific changes of phenotype, i.e., the ratio of heart mass between the post- and pre-ISO mice of the same strain. Concretely (see Methods), we calculate the Pearson coefficient of correlation \(C_j\) between the strain-specific fold-change of expression of gene \(j\) among \(N\) different strains

$$F_j = \left( \log_2\frac{E_1^\prime(j)}{E_1(j)},\ \log_2\frac{E_2^\prime(j)}{E_2(j)},\ \ldots,\ \log_2\frac{E_N^\prime(j)}{E_N(j)} \right)$$

where \(E_i(j)\) and \(E_i^\prime(j)\) are the expression levels of gene \(j\) for two isogenic mice of the \(i\)th strain before and after ISO treatment, respectively, and the strain-specific fold-change of heart mass among different strains

$$F_m = \left( \log_2\frac{m_1^\prime}{m_1},\ \log_2\frac{m_2^\prime}{m_2},\ \ldots,\ \log_2\frac{m_N^\prime}{m_N} \right)$$

where \(m_i\) and \(m_i^\prime\) are the total heart masses of isogenic mice of the \(i\)th strain before and after ISO treatment, respectively; we use log2 of the expression fold-change to normalize microarray data and log2 of the heart mass fold-change for consistency (Methods). This correlation method of differential gene expression analysis identifies a set of DEGs, referred to hereafter as "fold-change" (FC) genes, for which the absolute value of \(C_j\) is above a threshold of statistical significance determined by randomization of the data, as detailed further in the article and the Methods. The ability to study a large number (\(N\sim 100\)) of strains using the HMDP is essential to have enough statistical power to establish such a correlation, a power that has been lacking from previous studies limited to small numbers of strains.15,16,17,18 Moreover, the correlation coefficients \(C_j\) cannot be calculated in the setting of traditional clinical studies, since the fold change of gene expression or heart mass between subjects with different genetic backgrounds is meaningless. Conversely, it is possible to analyze the HMDP data set using the same type of population-level differential gene expression analysis used in clinical studies, such as SAM (Significance Analysis of Microarrays).19 Applied to the HMDP data set, a method like SAM identifies a gene \(j\) as differentially expressed if the expression data in the control population \(\left( \log_2 E_1(j), \log_2 E_2(j), \ldots, \log_2 E_N(j) \right)\) and the treated population \(\left( \log_2 E_1^\prime(j), \log_2 E_2^\prime(j), \ldots, \log_2 E_N^\prime(j) \right)\) have statistically distinguishable mean values, irrespective of the individual reaction to the stressor \(F_m\).
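A minimal sketch of this computation is given below, using hypothetical variable names (expr_pre and expr_post for strain × gene matrices of linear-scale expression, mass_pre and mass_post for strain-level heart masses); it is not the authors' pipeline, only an illustration of how \(C_j\) and the shuffled-phenotype null used to set the significance threshold could be obtained.

```python
# Minimal sketch (assumed variable names, not the authors' code): per-gene
# Pearson correlation C_j between strain-specific expression fold-changes F_j
# and the heart-mass fold-change F_m, plus a shuffled-phenotype null.
import numpy as np

def fc_correlations(expr_pre, expr_post, mass_pre, mass_post):
    """expr_* are (n_strains, n_genes) linear-scale expression arrays,
    mass_* are length n_strains heart masses; returns C_j for every gene."""
    F = np.log2(expr_post / expr_pre)          # F_j, one column per gene
    Fm = np.log2(mass_post / mass_pre)         # F_m, one entry per strain
    Fc = F - F.mean(axis=0)
    Fmc = Fm - Fm.mean()
    return (Fc.T @ Fmc) / np.sqrt((Fc ** 2).sum(axis=0) * (Fmc ** 2).sum())

def shuffled_null(expr_pre, expr_post, mass_pre, mass_post, n_perm=100, seed=0):
    """Correlations obtained after randomizing which strain each heart-mass
    fold-change belongs to; used to set the significance threshold on |C_j|."""
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(mass_pre))  # shuffle strain labels of the phenotype
        null.append(fc_correlations(expr_pre, expr_post,
                                    mass_pre[perm], mass_post[perm]))
    return np.concatenate(null)
```

Thresholding the observed |C_j| against the shuffled distribution reproduces, in spirit, the enrichment analysis described later in the Results.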
As further detailed in the methods, SAM genes do not consider the strength of phenotypic change \(F_m\) but rely on the average gene expression change \(\langle {F_j} \rangle\), while FC genes consider both expression and phenotypic changes through an interaction term \(\langle {F_jF_m} \rangle.\) Based on our computation of the \(C_j\) correlation coefficients, we find a small set of 36 FC genes and compare them to a larger set of genes identified with SAM (referred to hereafter as SAM genes). Interestingly, the sets of FC and SAM genes have negligible overlap. The FC genes are not identified as significantly changed at the population level because they typically have opposite fold changes in low and high hypertrophy strains that cancel each other when averaged over all strains in the population-wide case. We show that the FC genes are strongly enriched in cardiac disease genes from previous Genome-Wide Association Studies (GWAS), while SAM genes are in contrary enriched in fibrosis genes. We then show that those two sets form two distinct communities in the co-expression network among healthy as well as ISO-injected strains and we identify potential transcription factors (TFs) to explain the observed co-regulation of FC genes. Moreover, we find that the proteins encoded by the FC genes, but not the SAM genes, interact predominantly with proteins belonging to a cardiac hypertrophic signaling network (CHSN) that has been shown to provide a predictive model of hypertrophy in relation to multiple stressors including ISO.20 Interestingly, we find that one of the FC genes, namely Hes1, is also a predicted TF and an important interactor with the CHSN. Using a knockdown approach, we find that it plays a major role in cardiac hypertrophy, allowing us to validate our personalized, multiomics approach. Two types of responses to stressor-induced cardiac hypertrophy and heart failure We begin with an example showing two distinct ways to describe the response to ISO in the HMDP (see Fig. 1 and Methods). First, one can note that ISO induces a global response across all strains, resulting in cardiac hypertrophy. This is seen in Fig. 1a, where the distribution of heart mass among the post-ISO strains can clearly be distinguished from the pre-ISO distribution (p < 2.2e-16 under Student t-test). At the gene level, such a response is typically analyzed by looking for DEGs at the population level, i.e., genes for which the change in average expression with the stressor is significantly greater than the variability with and without the stressor (Fig. 1b). Typical tools include t-test,21 SAM,19 or LIMMA.22 Genes found with these methods have a differential expression profile at the population level and are therefore potential biomarkers of the trait of interest (see microarray data for Serpina3n, an example high-ranking SAM gene, in Fig. 1c). However, despite the global response in the level of gene expression to ISO, the degree of hypertrophy among individual strains is highly variable, from almost none to an 80% increase of heart weight (Fig. 1d). This calls for an evaluation of the strength of differential gene expression at the individual level. In particular, a whole new class of genes becomes available for analysis. Indeed, even if a gene does not show population-wide average differential expression, it can show extensive variation at the individual, strain-specific level (Fig. 1e). 
This is the case for the gene Kcnip2 encoding the protein KChIP2, which interacts with pore forming subunits (Kv4.2 and Kv4.3) of the transient outward current Ito expressed in heart, and which has been implicated in cardiac hypertrophy.23,24,25 Though not showing population-wide differential expression (Fig. 1f), its individual fold-change of expression can vary drastically from 2-fold decrease to a 2-fold increase depending on the considered strain (Fig. 1g). Interestingly, when comparing the individual variations of those two types of genes with the degree of hypertrophy (Fig. 1h, i, k), one can see that global DEGs are not necessarily good descriptors of the individual changes of phenotype (Fig. 1j), unlike the second type of genes missed by a traditional population-wide method (Fig. 1l). In particular, in the case of Kcnip2, we observe a significant positive correlation with the severity of hypertrophy (r = 0.4, p = 1.5e-4). This is particularly interesting since Kcnip2 has previously been shown to be down-regulated during cardiac hypertrophy24,26 in the strain 129 × 1/SvJ. While we confirm this finding, we also observe that it is unusual in a broader context, and that Kcnip2 is most of the time up-regulated in strains with marked hypertrophy. Two types of responses to stressor-induced heart failure. a Histograms of the pre-ISO (blue) and post-ISO (red) heart masses of the HMDP strains. b Typical Differentially Expressed Genes (DEGs) show clear population-average fold-change allowing distinguishing the two populations of strains. c An example of such strong DEG, namely Serpina3n. d Histogram of the heart mass fold-change (FC) computed for each strain from the HMDP. e Expression FC at the individual level can lead to cases were the population-average FC is null while the individual FCs are not. f Kcnip2 is a good example of a gene with no population-wide average FC. g However, at the individual level, Kcnip2 shows strong variations, as seen in the histogram of individual FCs at the strain level (log2 of post over pre-ISO expression ratio). In particular, some strains have a 4-fold decrease of expression (−2 in log2), while others have a 4-fold increase (+2). h For better visualization, the strain-specific heart mass FC is shown by decreasing strength. Red bars indicate increase and blue bars decrease in value. i Serpina3n log FC is shown with the same strain ordering than in (h). Its population-wide FC is high (3.9), with most strains showing a strong positive FC (red bars). j However, the correlation of Serpina3n FC with the heart mass FC is not significant (r = −0.09, p = 0.43). k On the other hand, Kcnip2 shows a weak population-wide FC (FC = 0.85). In particular, some strains show an increased expression (red bars) while others show a decreased expression (blue bars). The red arrow indicates the 129 × 1/SvJ strain in which Kcnip2 has previously been shown to be down-regulated during cardiac hypertrophy.24 l Contrary to Serpina3n, Kcnip2 FC is significantly correlated to heart mass FC (r = 0.4, p = 1.5e-4), with increased expression corresponding to high hypertrophy and decreased expression corresponding to low hypertrophy In the following, we generalize these observations to identify a larger set of genes that, like Kcnip2, have an individual FC correlated to the severity of hypertrophy, and we compare this set to the complete set of DEGs identified by the population-level SAM method. 
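The contrast between Serpina3n-like and Kcnip2-like behavior can be reproduced with a toy simulation (entirely synthetic numbers, not HMDP data): a gene with a uniform, phenotype-independent up-regulation is flagged by a population-level test on its mean fold-change, whereas a gene with a near-zero mean fold-change that tracks the heart-mass fold-change is only flagged by the correlation criterion.

```python
# Toy illustration with synthetic numbers (not HMDP data): a population-level
# test on the mean fold-change versus a correlation with the phenotype change.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_strains = 100
mass_fc = rng.uniform(0.0, 0.85, n_strains)                 # log2 heart-mass fold-change

sam_like = 2.0 + rng.normal(0, 0.3, n_strains)              # uniform shift, unrelated to phenotype
fc_like = 1.5 * (mass_fc - mass_fc.mean()) + rng.normal(0, 0.2, n_strains)  # mean ~0, tracks phenotype

for name, fc in [("population-level gene", sam_like), ("fold-change gene", fc_like)]:
    t_stat, p_shift = stats.ttest_1samp(fc, 0.0)            # is the mean fold-change non-zero?
    r, p_corr = stats.pearsonr(fc, mass_fc)                 # does the fold-change track hypertrophy?
    print(f"{name}: mean FC = {fc.mean():+.2f} (p = {p_shift:.1e}), "
          f"r with mass FC = {r:+.2f} (p = {p_corr:.1e})")
```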
Identification of genes associated with the severity of hypertrophy

Here we develop a method to determine which genes show individual, strain-specific expression FCs significantly correlated to the individual hypertrophic response measured by the individual fold-change of heart mass. We use microarray and phenotype expression data described in.13 Since our methodology is based on correlations, we choose to select those genes that belong to the giant component of the gene co-expression network above a certain correlation cutoff (see Methods and Figures S1 and S2). The advantage of such a filter compared to one based on absolute expression levels is that it yields a clear, well-defined cutoff (Figure S1b) while also rejecting genes having high expression but artefactual correlations (e.g., hitting the microarray saturation level in Figure S1c). We obtain a filtered set of 11,279 high-confidence genes. We then compute for all genes the absolute Pearson correlation between the gene expression fold-change and the individual hypertrophic response (Fig. 2a, blue histogram). To control for False Positives, we compute the expected correlations when randomizing the phenotype by shuffling strain labels (see Methods and Fig. 2a, red histogram). One can see significant enrichment in genes with high correlation to the trait. To quantify this enrichment, we compute the proportion of observed (blue) correlations divided by the proportion of correlations in the randomized cases (red) above various correlation cutoffs. Figure 2b shows this enrichment as a function of the gene rank, ordered by decreasing absolute value of the correlation with hypertrophy. The enrichment shows a peak at 36 genes, followed by a plateau until ~500 genes, and a subsequent decrease. We define these 36 genes as our candidates to describe the hypertrophic spectrum. These genes are listed in Table 1, along with references supporting the involvement of several of them in cardiac hypertrophy and HF. In the following, we refer to this set of genes as the "FC" set.

Identification of genes associated with the severity of cardiac hypertrophy. a Histogram of the absolute values of the correlations between the FCs of gene expression and hypertrophy for all genes (blue, observed; red, randomized phenotype). Genes' individual FCs are more correlated to hypertrophy than expected. Inset plot corresponds to the best observed correlation. b The previous enrichment is assessed by computing the ratio of the area under the observed and randomized curves as a function of correlation cutoffs. Cutoffs are matched to the genes' correlations ranked in decreasing order. The enrichment peaks at N = 36 genes, which defines the set of "FC" genes. c Boxplot comparing the values of the absolute correlation with hypertrophy for the 2538 SAM genes resulting from a population-wide DEG study (see main text) and for the 36 identified individual FC genes. FC genes have significantly higher correlation. d Heatmap showing the 36 genes' (columns) log fold-changes across strains (rows). The left column shows the degree of hypertrophy (yellow = low, dark blue = high). Hierarchical clustering shows a natural grouping of the strains by the severity of hypertrophy. e Enrichment of the 36 best FC genes in human disease genes from GWAS studies. The 15 most enriched sets are shown. Red arrows indicate cardiac diseases (11/15). The enrichment of the 36 best SAM genes is shown for comparison, with low enrichment in the found sets. f Similar to (e), for the 36 SAM genes.
These genes show enrichment in "Fibrosis", a feature of structural remodeling during cardiac hypertrophy.

Table 1 List of genes predicted with the individual fold-change analysis

As a comparison, we compute the population-wide DEGs using Significance Analysis of Microarrays or SAM.19 This yields 2538 DEGs at a False Discovery Rate of 1e-3 (see Methods). Interestingly, we find no significant overlap (p = 0.68, hypergeometric test) between these SAM genes and the FC set, with seven genes common to both sets (Tspan17, Ppp1r9a, Bclaf1, AW549877, Gss, 2310022B05Rik, and 9430041O17Rik). In general, correlations between the individual fold-changes of the SAM genes and the degree of hypertrophy are found to be quite low (Fig. 2c). This shows that population-wide analyses do not naturally yield genes associated with the individual strength of phenotypic change, calling for a specific method to uncover them. The 36 FC genes are shown in Fig. 2d. As expected from the absence of overlap with SAM genes, the FC genes have both negative (blue) and positive (red) fold-change across the different strains, meaning that they have negligible average fold-change at the population level.

A question that arises is whether the variability observed in the individual fold-changes of gene expression across strains is a consequence of genetic variability, or merely reflects spurious environmental or experimental effects. To investigate this question, we take advantage of the fact that gene expression has been replicated in nine strains post-ISO. Since mice from the same strain have a similar genetic background, they should therefore show very comparable individual fold-changes. Expression fold-change is shown for the 36 FC genes for the replicated strains in Figure S3a. We assess the replicability by computing the Spearman rank correlation of the 36 FC genes' fold-change profiles between mice from replicated strains. We find a large mean correlation of 0.76, compared to 0.14 for pairs of strains taken at random among the non-replicated pool, with a statistically very significant p-value (p = 1.6e-7, Wilcoxon test, see Figure S3b). This result shows that individual fold-changes are tightly controlled at the genetic level and that the ranking of the genes by FC is preserved for approximately 2/3 of the cases. We also assessed replicability by making a scatter plot of the log2 expression fold-change computed with the original and replicated ISO treated hearts compared to the same control heart for the 36 FC genes and 9 strains (Figure S3c). A correlation analysis of this scatter plot yields a correlation of 0.57 and a very low p-value (p < 2.2e-16), confirming that individual fold-changes are predominantly genetically determined. In the following, we wish to evaluate further the biological signal carried by these FC genes missed by population-wide methods.

Biological relevance of the identified FC genes

Given the importance of the genetic control of those genes, they must be more susceptible to genetic variations. To explore that idea, we look at the enrichment in disease genes coming from previous GWAS. We use the HuGE database of human genes associated with 2711 different diseases (see Methods). First, we convert the mouse gene names to human as described in the Methods. Then, we rank the diseases according to their enrichment in the 36 FC (resp. 36 best SAM) genes using a hypergeometric test assuming as null hypothesis a uniform distribution of the genes across diseases. Results are shown in Figs.
2e, f for the 15 most enriched diseases in each case. We observe that FC genes are strongly enriched in heart diseases (11 in the 15 most enriched diseases) while SAM genes are only enriched in two cardiac diseases and in fibrosis, a feature characteristic of the structural remodeling taking place during HF.27 Those findings exhibit two distinct roles of FC and SAM genes in the progression of cardiac hypertrophy. While the cross-talk between cardiac fibroblasts and myocytes during cardiac hypertrophy has been studied previously,28 here we disentangle their relative contributions into a shared, population-wide fibroblastic component, and a fine-tuned, individualized component capable of explaining the severity of cardiac hypertrophy. Moreover, the enrichment of FC genes in human GWAS genes also highlights the relevance of the present HMDP data analysis to human cardiac hypertrophy and HF. Co-expression and co-regulation The identified sets of population-level and individual FC genes have until now been considered as collections of independent genes. However, in the cell, genes function together to achieve higher-order physiological functions. Such a collective behavior can be assessed in the framework of co-expression networks, where genes are related by the similarity of their profile of expression across different conditions. In the context of the HMDP, we investigate whether the predicted sets of genes show evidence of co-regulation in healthy and post-ISO hypertrophic strains. To that extent, we compute the squared Pearson correlations (r2) between the 36 best genes of both the FC and SAM sets. Correlation matrices are then cut off at r2 > 0.1 to keep significant interactions. We show in Fig. 3a and b the resulting co-expression networks in pre and post-ISO conditions. We clearly see that the two sets of genes form dense modules, and are disconnected from each other, with only few links between the two sets. Interestingly, we see that the biomarker and modulator of hypertrophy Nppb29 acts as a bridge between the two modules in pre-ISO condition (Fig. 3a, top), and is even found strongly co-expressed with the SAM genes in post-ISO mice (Fig. 3b). This suggests a role for Nppb in driving a cross-talk between FC genes and SAM genes. Finally, to quantify the relative density of the modules, we compared them to 1000 sets of a similar number of randomly selected genes. We show the resulting Z scores in Fig. 3c. Both SAM and FC sets show much stronger co-expression than randomly expected, with the SAM module being even denser under ISO condition. On the contrary, the density of links between the two modules is significantly smaller than expected by chance, indicating that the two sets of genes are disjoint sets in the co-expression network. Overall, these results show that the FC and SAM genes form two tight, disjoint communities in the co-expression network, both in pre-ISO and post-ISO mice. FC genes are co-regulated and significantly connected to the cardiac hypertrophy signaling network (CHSN). a Co-expression networks of the 36 best FC and SAM genes in healthy and post-ISO hypertrophic strains. Edges are drawn between two genes if the square Pearson correlation is greater than 0.1 (r2 > 0.1). The two modules segregate naturally using a force layout algorithm, showing that the modules have high clustering but only few links between themselves. Interestingly, Nppb (purple arrow) segregates with SAM genes, especially in ISO condition. 
b The edge density of the FC module, the SAM module, and the FC to SAM edges is computed and compared to the density expected for random sets of nodes of the same size (see Methods). The corresponding Z scores are significantly high (Z > 2) for both modules, indicating high co-expression. However, there are significantly fewer links than expected between the two modules (Z < −2), indicating that they are disjoint in the co-expression network, (c) List of the 6 most enriched TF motifs in the ±20 kb regions around the 36 FC genes TSSs predicted using iRegulon.30 Interestingly, Snai3 (blue arrow) is a SAM gene and Hes1 (red arrow) a FC gene, suggesting a crosstalk between the two modules at the gene regulatory level. d Proportion of neighbors in the interactome that belong to the Cardiac Hypertrophy Signaling Network or CHSN20 for different gene sets: the FC set (red arrow), the 36 best SAM genes (blue arrow) and 1000 realizations of random nodes in the interactome with the same size as the FC set (gray histogram). Z-scores are computed relative to the gray distribution. The FC set is significantly connected to the CHSN, while the SAM genes are not significantly different than a random set. e Network visualization of the CHSN,56 along with neighbors from the 36 best FC genes (red nodes). A more detailed interaction network is shown in Figure S5 The finding that the FC genes are strongly co-expressed suggests that they are co-regulated. To explore this possibility, we look for enrichment in common TF binding sites in the vicinity of the 36 FC genes. To compute the enrichment, we use iRegulon, a recent algorithm integrating different TF motifs databases and using phylogenic conservation to identify overrepresented binding sites in the −20/ +20 kb regions around the Transcription Start Sites of genes of interest (see Methods).30 The identified motifs are then ranked by target enrichment among selected genes, and are associated with a list of putative TFs that can bind them (Fig. 3c). We find that the best-ranked motif is associated with repressor TFs Scrt1 and Scrt2, known to modulate the action of basic helix-loop-helix TFs.31 Interestingly, the corresponding PWM motif is also matched to Snai3 TF, a gene ranked 3rd among SAM genes. The 2nd motif, VDR, is known to be involved in heart failure and cardiac hypertrophy.32 Finally, the sixth predicted TF is associated with Hes1, which ranks 10th among the FC genes. This indicates that there is a cross-talk between the two modules at the gene regulatory level, with both FC and SAM genes being involved in the regulation of the expression of the FC genes. Exploration of the neighborhood in the interactome While useful to detect gene regulatory changes involved in the disease process, gene expression does not capture post-translational changes and interactions that occur at the protein level. To explore the potential involvement of the predicted sets of genes at the protein level, we use a previously published human interactome combining high-throughput and literature curated protein–protein, metabolic, kinase–substrate, signaling and to a lesser extent regulatory interactions.33 After converting to human gene symbols (see Methods), the proteins encoded by the 36 best FC and SAM genes have respectively 364 and 346 interacting partners. We then compute pathway enrichment for these neighbors (see Methods). 
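The set-level statistics used throughout this section reduce to two generic calculations, sketched below under assumed inputs (gene sets as Python collections, `statistic` as any user-supplied function); this is a generic re-implementation, not the authors' code. The first is a one-sided hypergeometric enrichment p-value, of the kind used for the pathway and GWAS disease enrichments; the second is an empirical Z-score against randomly drawn gene sets of matched size, of the kind used for the module densities and for the fraction of interactome neighbors falling in the CHSN.

```python
# Generic sketches of the two set-level statistics used here (assumed inputs,
# not the authors' code): hypergeometric enrichment and a random-set Z-score.
import numpy as np
from scipy.stats import hypergeom

def enrichment_pvalue(query_genes, annotated_genes, universe_genes):
    """One-sided P(X >= overlap) under a hypergeometric null in which the
    query genes are drawn uniformly from the universe."""
    universe = set(universe_genes)
    query = set(query_genes) & universe
    annotated = set(annotated_genes) & universe
    hits = len(query & annotated)
    return hypergeom.sf(hits - 1, len(universe), len(annotated), len(query))

def random_set_zscore(observed, statistic, universe_genes, set_size,
                      n_rand=1000, seed=0):
    """Z-score of an observed statistic (e.g., co-expression edge density or
    fraction of interactome neighbours in the CHSN) against the same statistic
    computed on n_rand random gene sets of identical size."""
    rng = np.random.default_rng(seed)
    universe = np.asarray(list(universe_genes))
    null = np.array([statistic(rng.choice(universe, set_size, replace=False))
                     for _ in range(n_rand)])
    return (observed - null.mean()) / null.std()
```

For example, the comparison in Fig. 3d would correspond to `observed` being the fraction of FC-gene neighbors that are CHSN members, with `statistic` computing the same fraction for a random set of genes of the same size.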
Interestingly, we find that the second most enriched pathway for FC neighbors is a previously published Cardiac Hypertrophy Signaling Network (CHSN) containing 106 nodes (corresponding to 218 genes) giving a predictive model of hypertrophy in response to multiple stressors including ISO20 (Figure S4). Indeed, about 14% of FC neighbors are components of this network, compared to a predicted random association of 4% (Z = 4, Fig. 3d). The other most highly enriched pathway is linked to NFAT signaling, known to be important in HF.34 The CHSN is shown in Fig. 3e and in more detail in Figure S5, along with FC nodes directly interacting with CHSN nodes. In particular, we find that Hes1 interacts with several nodes of the CHSN at different levels of the hierarchy, namely FAK, JAK, STAT, CamK, PKC, and HDAC.

Experimental validation of Hes1

The previous results point toward a role for Hes1 in cardiac hypertrophy and heart failure. Indeed, Hes1 was found to be a FC gene, an upstream regulator of FC genes, and an interactor with several components of the CHSN. To determine the function of Hes1 in the context of cardiac hypertrophy and heart failure, we performed siRNA knockdown in neonatal rat ventricular myocytes followed by treatment with media containing the beta-adrenergic agonist ISO or the alpha-adrenergic agonist phenylephrine (PE). Both agents induce hypertrophy through different molecular pathways, as can be seen in the CHSN (see Fig. 3e). Using siRNA to silence Hes1 expression, we achieved a 20–40% decrease in Hes1 expression when compared to transfection control (Fig. 4a and Table S3). At the molecular level, treatment with either ISO or PE containing media drastically increases the expression of the HF markers Nppa and Nppb, which rose 3.5-fold and 7.9-fold, respectively, under ISO treatment and 11-fold and 13-fold under PE treatment in cells transfected with the control siRNA. Strikingly, knockdown of Hes1 expression strongly impaired the induction of these two markers under both treatment conditions. Nppa induction was reduced by up to 110 and 88% under ISO and PE treatment while Nppb induction was reduced by up to 66 and 91% under ISO and PE treatment, respectively. In addition to these molecular changes, we investigated the role of Hes1 in modulating the increase in cardiomyocyte cell cross-sectional area upon treatment with ISO and/or PE. As expected, following ISO/PE treatment, cells transfected with the control siRNA doubled in cellular cross-sectional area (Fig. 4c, Figure S6, and Table S4). In comparison, cells transfected with the Hes1 siRNA showed up to 87 and 79% reduction in cell cross-sectional area increase following treatment with ISO and PE, respectively. This effect is consistent with the fact that HMDP strains showing no or mild hypertrophy exhibit a strong negative fold-change of Hes1 (Figure S7). Taken together, these findings strongly suggest a role for Hes1 as a regulator of cardiac hypertrophy in vitro.

Validation of Hes1 as a cardiac hypertrophy regulator. a Hes1 mRNA expression 48 h after siRNA transfection in a control, isoproterenol or phenylephrine medium. Three siRNAs were used: a scrambled, control one and two Hes1-specific siRNAs. Both Hes1 siRNAs show systematic downregulation of Hes1 mRNA in all conditions. b Effect of Hes1 knockdown on the known hypertrophic markers Nppa and Nppb. In both cases, Hes1 knockdown leads to a significant change in biomarker activation in isoproterenol and phenylephrine conditions (*p < 0.05, ***p < 1e-3, Student t-test).
c Effect of Hes1 knockdown on neonatal rat ventricular myocyte size relative to control medium cell cross-sectional area. Both siRNAs lead to a drastic 80–90% decrease in hypertrophy in both isoproterenol and phenylephrine media.

In the present study, we investigated the spectrum of cardiac hypertrophy and HF development in 100+ genetically diverse mice from the HMDP when subjected to chronic ISO infusion. We have analyzed two types of responses. First, the global response at the population level with a large number (1000+) of genes involved, as detected by the SAM algorithm. Their global fold-change is representative of the global hypertrophy observed across all strains. However, the magnitude of their fold-change at the individual level does not predict the degree of individual hypertrophy. Using a correlation-based method, we found another group of ~40 genes that predicts the degree of hypertrophy. We named these the "FC" genes in reference to the fact that we found them using their individual, strain-specific fold-change. Surprisingly, these genes have a near zero fold-change at the population level due to the canceling contributions of up and down-regulation in different strains, so that they are not detected using classical differential expression tools. While several FC genes have previously been implicated in cardiac hypertrophy and HF (see Table 1), their high variability in such a controlled setup has not been explored previously. We showed that these genes are enriched for heart failure gene candidates previously described in the literature, as well as for human cardiac disease genes. On the other hand, the best SAM genes are enriched in fibrosis disease genes. ISO has been shown to first induce myocardial fibrosis concomitantly with myocyte necrosis, followed by myocyte hypertrophy on a longer time scale,35 and fibrosis is also known to be an early manifestation of hypertrophic cardiomyopathy.36 Our results suggest that population-level SAM genes are predominantly associated with the early fibroblast response. On the other hand, since the change of heart mass is primarily determined by myocyte growth, our results suggest that FC genes are associated with the strain-specific degree of myocyte growth induced by beta-adrenergic stimulation.

We further investigated the roles of these genes in different biological networks. We found that both FC and SAM genes form distinct co-expressed modules. Interestingly, Nppb (encoding the BNP protein), a widely used biomarker and modulator29 of HF, belongs to the FC set but is co-expressed with SAM genes in healthy mice, providing a unique bridge between the two sets. We note that this result is consistent with the previous finding that Nppb is an antifibrotic hormone produced by myocytes with an important role as a local regulator of ventricular remodeling in mice.37 Indeed, Nppb is correlated to the fibrotic SAM genes in healthy mice, consistent with a regulatory homeostatic behavior, but is found among FC genes after beta-adrenergic stimulation, consistent with a response proportionate to myocyte hypertrophy. It is also interesting to note that the SAM module overlaps significantly (p = 3.4e-6, hypergeometric test) with a co-expression module previously found in post-ISO mice and shown to be involved in cardiac hypertrophy.38 Indeed, it shares the genes Timp1, Tnc, Mfap5, Col14a1 and Adamts2, the latter of which was validated experimentally as a regulator of cardiac hypertrophy. We then predicted several TFs to study this co-regulation.
Interestingly, among the top TFs predicted as regulators of the FC genes, one of them, Hes1, belongs to the FC genes, and another one, Snai3, belongs to the SAM genes. We note that both inhibitory (Snai3, Hes1) and activatory (Vdr, Srebf1) TFs were found to have enriched binding sites around the FC genes' TSSs. This suggests a potential regulatory balance that could explain the up and down-regulation observed for these genes across strains. We then looked at potential post-translational effects at the protein level by using an integrated interactome. We found that FC genes were strongly interacting with a CHSN previously shown to be predictive of cardiac hypertrophy in response to ISO and other stressors.20 This may indicate that several of those genes are upstream of a causal chain of events at the post-translational level that controls myocyte growth. We note that the FC gene Nppb is present both as an input and an output of the CHSN. This exemplifies an interesting feedback architecture where downstream effects can causally affect upstream regulation. Overall, the FC genes constitute a HF "disease module" formed of co-regulated genes connected to the CHSN at the protein level.

A key finding of our study is that there is strong strain-to-strain variation in response to a stressor under similar well-controlled environmental conditions. This variation is largely explained by the different genetic backgrounds, as shown by the consistent responses in mice from the same strains (Figure S3) and the strong enrichment in heart disease GWAS (Fig. 2e). For example, Kcnip2 is known to be downregulated concomitantly with a reduction of Ito magnitude in cardiac hypertrophy.24,26 Our results are consistent with this finding for the previously studied 129 × 1/SvJ strain,24 but show that Kcnip2 is upregulated in many strains with pronounced hypertrophy, leading to an overall positive correlation between Kcnip2 expression and heart mass FC. This indicates that there are multiple possible compensatory mechanisms underlying a similar patho-phenotype. Similarly, we observed strong variation in the fold-change of Nppb. It was previously shown to be over-expressed during cardiac hypertrophy as an anti-fibrotic factor.29 Using our multiple-strain setup, we observed a positive correlation between the Nppb change of expression and the degree of hypertrophy. However, we also observed some cases where hypertrophic strains exhibit down-regulation of Nppb, including the widely used C57BL/6J and 129 × 1/SvJ strains (see Fig. 2d and S3).

Finally, our approach was validated by testing Hes1's role in cardiac hypertrophy. Hes1 was chosen because of its involvement at different levels: found as a FC gene, Hes1 is also a predicted TF regulating the FC genes and a key interactor of the CHSN. Hes1 is part of the Notch signaling pathway, which is highly conserved and involved in cell-cell communication between adjacent cells.39 This pathway is well known to play a crucial role in cardiac development and disease. Notch activity is required in complex organs like the heart that necessitate the coordinated development of multiple parts.40 Specifically, functional studies have shown that Notch activity is required for cardiovascular development and that Notch signaling causes downstream effects such as cell fate specification, cell proliferation, progenitor cell maintenance, apoptosis, and boundary formation.39 In previous studies, Hes1 expression was observed to increase following myocardial infarction and other ischemic cardiomyopathies.
Increased expression of Hes1 was also shown to inhibit apoptosis of cardiomyocytes and instead promote their viability. However, whether Hes1 acts as a regulator of heart failure markers has remained unclear.41 Here, we show that Hes1 knockdown induces a dramatic reduction of hypertrophy by 80–90% (Fig. 4c), identifying for the first time Hes1 as a key regulator of cardiac hypertrophy. Importantly, this result is consistent with the HMDP, where strains with no or mild hypertrophy have a 20–50% decrease in Hes1 after ISO injection (Figure S7b).

Overall, we have explored the individual, strain-specific responses to stressor-induced HF and identify 36 FC genes that are missed by traditional population-wide methods of DEG analysis. We have shown that these FC genes provide a completely distinct, albeit complementary, picture of HF from that given by population-wide DEGs. In particular, FC genes are enriched in human cardiac disease genes and hypertrophic pathways. This is important since previous studies that use population-level methods to identify DEGs have concluded that murine models are of limited relevance to human HF.42,43 In contrast, our findings show that FC genes, identified by a personalized differential expression analysis in a genetically diverse population of mice, are relevant to human HF. By linking those genes both to upstream regulators and to a signaling network predictive of cardiac hypertrophy, we provide new insights into the regulation of the severity of and resistance to cardiac hypertrophy at the individual level, and validate Hes1 as a regulator of cardiac hypertrophy in vitro. We believe this approach to be critically important for the appropriate design of upcoming experiments directed at unraveling causal genes in complex diseases.

Overview of the HMDP

The HMDP consists of a population of over 100 inbred mouse strains selected for use in systematic genetic analyses of complex traits. Strains were selected to increase the resolution of genetic mapping with a renewable resource that is available to all investigators worldwide, as well as to create a shared data repository that would allow the integration of data across multiple scales, including genomic, transcriptomic, metabolomic, proteomic, and clinical phenotypes. The core of our panel for association mapping44,45,46 consists of 29 classic parental inbred strains which are a subset of a group of mice commonly called the mouse diversity panel. HMDP strains were chosen by eliminating closely related strains and removing wild-derived strains. The decision to remove wild-derived strains reflects a tradeoff between statistical power and genetic diversity. While leaving out wild-derived strains sacrifices genetic diversity to some degree, the HMDP increased the statistical power (assuming the same number of animals) to identify genetic variants polymorphic among the classical inbred strains which affect traits. These variants yield a tremendous amount of phenotypic diversity among the classical inbred strains.

ISO treatment

As previously described,13,47 30 mg per kg body weight per day of ISO was administered for 21 days in 8–10 week old female mice using ALZET osmotic mini-pumps, which were surgically implanted intraperitoneally. All animal experiments were conducted following guidelines established and approved by the University of California, Los Angeles Institutional Animal Care and Use Committee (IACUC), and all animals were housed in an IACUC-approved vivarium with daily monitoring by vivarium personnel.
Hypertrophy measurement

As mice within a strain are genetically identical, we used several mice from the same strain for measuring the cardiac hypertrophic response to ISO treatment. More specifically, we used on average three untreated mice serving as control hearts and about three ISO treated mice of the same strain to measure the cardiac hypertrophic response. This response was studied in a total of 104 genetically different strains, with the precise number of control and treated hearts for each strain given in Table S1. The average number of untreated control hearts per strain was 2.75. The average number of ISO treated hearts per strain was 3.5. At sacrifice, hearts were excised, drained of excess blood and weighed. Each of the four chambers of the heart (left ventricle with inter-ventricular septum, right-ventricular free wall, right and left atria) was isolated and subsequently weighed. Cardiac hypertrophy for a given strain was calculated as the increase in average total heart weight after ISO treatment compared to control mice.

Heart biopsy for microarray analysis

As for the hypertrophy measurement, we exploited the fact that each mouse in a strain is genetically identical to extract heart tissue for microarray analysis in both untreated and ISO treated mice from the same strain. The left ventricle of each heart was cut into quarters, with each piece weighing on average about 25 mg ± a few mg depending on the amount of hypertrophy, and two pieces were used for microarray data analysis. Due to the large number of strains analyzed and the cost of microarray data analysis, we used one untreated control heart and one ISO treated heart per strain for about 90% of the strains. However, since mice of a given strain are renewable, the HMDP offers the possibility to use triplets, quadruplets, and higher multiples of isogenic subjects for experimentation. This feature was used to measure gene expression in replicates (e.g., two hearts in control or two hearts after ISO treatment) to test for replicability in ~10% of the strains (9 strains analyzed in Figure S3).

Microarray data analysis

Following homogenization of left ventricular tissue samples in QIAzol, RNA was extracted using the Qiagen miRNeasy extraction kit, and verified as having a RIN > 7 by Agilent Bioanalyzer. Two RNA samples were pooled for each strain and experimental condition and arrayed on Illumina Mouse Reference 8 version 2.0 chips. Analysis was conducted using the neqc algorithm included in the limma R package48 and batch effects were addressed using ComBat.49 In designing our study, we were careful to distribute the treated and control conditions evenly across our three batches, as well as endeavoring to include a diverse set of genetic backgrounds in each batch. Thus, we do not believe that our data suffer from the potential batch artifacts as reported in.50

Overview of the gene correlation method

Traditional analyses of differential gene expression for complex diseases rely on gene expression data for two populations: a control population and a diseased (or drug treated) population. For example, in the case of HF, the control population consists of N donors with healthy hearts intended to be used for transplantation, which are biopsied for gene expression analysis when left unused, and the diseased population consists of M late stage heart failure patients whose hearts are explanted and then biopsied for gene expression analysis. Importantly, the subjects in the control and diseased population are all genetically different.
Hence, if we label the subjects by \(S_i\), where the index \(i\) refers to subject \(i\) with its own genetic background distinct from all other subjects, the \(N\) subjects in the control population are \((S_1,S_2,....,S_N)\) (control subjects) and the \(M\) subjects in the diseased population are \((S_{N + 1},S_{N + 2},....,S_{N + M})\) (diseased subjects). The data sets used for the differential gene expression analysis consists then of the expression level (log2 mRNA number) of a large number of K genes for each subject. K is typically in the range of several thousands, and thus much larger than the number of control or diseased subjects (N or M, respectively) that are at most a few hundreds in the most extensive studies to date,51 and only a few subjects in each population in earlier studies.7 Let us label the expression levels by \(E_i(j)\) where the subscript i refers to subject i and the index \(j = 1,K\) refers to gene j. To find out if a given gene j among the K genes is differentially expressed, it suffices to use a standard statistical test analogous to a student t-test to decide if the gene expression data for the control group \(\left( {\log _2E_1\left( j \right),\log _2E_2\left( j \right),\, \ldots \,,\log _2E_N\left( j \right)} \right)\) (expression data for gene j in control population) and for the diseased group \((\log _2E_{N + 1}\left( j \right),\log _2E_{N + 2}(j),\, \ldots \,,\log _2E_{N + M}(j))\) (expression data for gene j in genetically distinct diseased population) have statistically distinguishable mean values. We note that we use the log2 of gene expression here. Indeed, raw gene expression levels measured from microarray fluorescence intensity typically have a skewed log-normal distribution resulting from a multiplicative error during the amplification process. The log transformation allows to normalize the data distribution and use classical parametric statistics such as the t-test for analysis. This test is carried out for all K genes and differentially expressed genes are then ranked in order of statistical significance (e.g., with increasing p-value less than some threshold of statistical significance). This approach is well-established and can be performed using existing bioinformatics tools such as SAM (Statistical Analysis of Microarrays).19 Because mice from the same strains are isogenic and renewable, the HMDP offers the possibility to analyze differential gene expression in a different and unique setting where subjects in the control and diseased populations have the same genetic background. The control population consists of one mouse per strain (for N strains) before treatment with a beta-adrenergic agonist isoproterenol (ISO) inducing cardiac hypertrophy and heart failure. Since all strains are genetically distinct the subjects in the control population are genetically distinct and can be labeled as \((S_1,S_2,...,S_N)\) (genetically identical control and diseased populations in the HMDP). Hearts from those subjects before ISO treatment are biopsied and used for microarray analysis. Biopsy requires sacrificing the animals that cannot be ISO treated. However, another mouse from the same strain can be ISO treated and similarly for all N strains. Therefore, the diseased/treated population is genetically identical to the control population and has the same degree of genetic diversity. 
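The population-wide test described above can be sketched in a few lines of code. This is only an illustration of the idea: SAM itself uses the moderated "relative difference" statistic and permutation-based significance discussed next, and the array names and shapes here are assumptions made for the sketch, not the authors' released code.

```python
import numpy as np
from scipy import stats

def population_wide_degs(log2_ctrl, log2_dis, alpha=1e-3):
    """Per-gene two-sample t-test on log2 expression levels.

    log2_ctrl : (N, K) array, N control subjects x K genes
    log2_dis  : (M, K) array, M diseased/treated subjects x K genes
    Returns gene indices with p-value below `alpha`, ordered by significance.
    """
    _, p = stats.ttest_ind(log2_ctrl, log2_dis, axis=0, equal_var=False)
    order = np.argsort(p)               # rank genes by increasing p-value
    return order[p[order] < alpha]      # a fixed threshold stands in for SAM's FDR control
```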
From the gene expression data alone, we can then perform the standard SAM type of differential gene expression analysis that consists of deciding if the gene expression data before \(\left( {\log _2E_1\left( j \right),\log _2E_2\left( j \right),\, \ldots \,,\log _2E_N\left( j \right)} \right)\) (expression data for gene \(j\) in control strains) and after \(\left( {\log _2E_1^\prime \left( j \right),\log _2E_2^\prime \left( j \right),\, \ldots \,,\,\log _2E_N^\prime \left( j \right)} \right)\) (expression data for gene \(j\) in genetically identical treated strains) treatment have statistically distinguishable mean values, where \(E_i\left( j \right)\) and \(E_i^\prime \left( j \right)\) are the expression levels of gene \(j\) for the isogenic subjects \(S_i\) before (in control) and after ISO treatment, respectively. To do so, SAM uses a statistics based on the ratio of change in gene expression to standard deviation in the data for that gene, yielding the "relative difference":19 $${\boldsymbol{d}}\left( {\boldsymbol{j}} \right) = \frac{{{\boldsymbol{\mu }}_{\boldsymbol{j}}^\prime - {\boldsymbol{\mu }}_{\boldsymbol{j}}}}{{{\boldsymbol{s}}\left( {\boldsymbol{j}} \right) + {\boldsymbol{s}}_0}}$$ where \(\mu _j\) and \(\mu _j^\prime\) are defined as the average levels of expression for gene j in control and ISO treatment, respectively, and the denominator \(s\left( j \right) + s_0\) is the gene expression scatter as defined in.19 Genes that show a difference of average expression levels across both conditions that is significantly larger than their condition-specific scatters are selected and referred to as SAM genes. One can also perform an entirely different type of differential gene expression analysis owing to the fact that, in addition to control and treated subjects belonging to the same strain having the same genetic background, the change of heart mass in response to ISO, i.e., the ratio \(m_i^\prime /m_i\) of total heart mass before (\(m_i\)) and after ISO treatment (\(m_i^\prime\)) for strain i, can be measured for all strains \((i = 1,2,...,N)\) to assess the degree of hypertrophy among different strains. This ratio is calculated by measuring total heart mass for several mice from the same strain before and after ISO treatment and averaging measured values before and after ISO treatment prior to taking their ratio. Importantly, values of \(m_i^\prime /m_i\) range continuously from about 1 (no change of heart mass) to 2 (two-fold change of heart mass) among strains. Differential gene expression can then be examined by asking whether a given gene \(j\) contributes to the severity of cardiac hypertrophy. This can be readily done by calculating the coefficient of correlation .. (e.g., Pearson or Spearman) between the strain-specific fold change of expression of gene \(j\) in response to ISO treatment among different strains \(F_j = \left( {\log _2\frac{{E_1^\prime \left( j \right)}}{{E_1\left( j \right)}},\log _2\frac{{E_2^\prime \left( j \right)}}{{E_2\left( j \right)}}, \ldots ,\log _2\frac{{E_N^\prime \left( j \right)}}{{E_N\left( j \right)}}} \right)\) and the strain-specific change of heart mass among different strains \(F_m = \left( {\log _2\frac{{m_1^\prime }}{{m_1}},{\mathrm{log}}_2\frac{{m_2^\prime }}{{m_2}}, \ldots ,{\mathrm{log}}_2\frac{{m_N^\prime }}{{m_N}}} \right)\). We note that for consistency with the gene expression we also used the log-ratio of phenotypic change. 
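A minimal sketch of the FC-gene statistic just described is given below, assuming the per-strain expression and heart-mass measurements are held in NumPy arrays. The variable names are illustrative only; the authors' own implementation is the released code referred to in the data-availability statement further below.

```python
import numpy as np

def fc_gene_correlations(expr_ctrl, expr_iso, mass_ctrl, mass_iso):
    """Pearson correlation of each gene's per-strain expression fold change (F_j)
    with the per-strain heart-mass fold change (F_m).

    expr_ctrl, expr_iso : (N, K) arrays, N strains x K genes, linear-scale expression
    mass_ctrl, mass_iso : (N,) arrays, average heart mass per strain before/after ISO
    """
    Fj = np.log2(expr_iso / expr_ctrl)   # strain-specific expression fold changes, (N, K)
    Fm = np.log2(mass_iso / mass_ctrl)   # strain-specific heart-mass fold changes, (N,)
    Fj_c = Fj - Fj.mean(axis=0)
    Fm_c = Fm - Fm.mean()
    return (Fj_c * Fm_c[:, None]).sum(axis=0) / (
        np.sqrt((Fj_c ** 2).sum(axis=0)) * np.sqrt((Fm_c ** 2).sum()))
```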
In our case, we use the Pearson correlation and compute: $${\boldsymbol{C}}_{\boldsymbol{j}} = \frac{\langle {\boldsymbol{F}}_{\boldsymbol{j}} {\boldsymbol{F}}_{\boldsymbol{m}} \rangle - \langle {\boldsymbol{F}}_{\boldsymbol{j}} \rangle \langle {\boldsymbol{F}}_{\boldsymbol{m}} \rangle}{{\boldsymbol{\sigma }}_{{\boldsymbol{F}}_{\boldsymbol{j}}} {\boldsymbol{\sigma }}_{{\boldsymbol{F}}_{\boldsymbol{m}}}}$$ where σ denotes the standard deviation and 〈 〉 the average. Using that language, we note that the relative difference used for SAM genes can be rewritten as: $${\boldsymbol{d}}\left( {\boldsymbol{j}} \right) = \frac{\langle {\boldsymbol{F}}_{\boldsymbol{j}} \rangle}{{\boldsymbol{s}}\left( {\boldsymbol{j}} \right) + {\boldsymbol{s}}_0}$$ This readily shows that the SAM statistic does not take the strength of the phenotypic change into account, but relies on the average gene expression change \(\langle {F_j} \rangle\), while FC genes reflect how gene expression change affects phenotype change through the interaction term \(\langle {F_jF_m} \rangle.\) Clearly, this correlation coefficient cannot be calculated in the setting of traditional clinical studies, since the fold change of gene expression or heart mass of subjects with different genetic backgrounds is meaningless. Calculating this correlation would require using a population of identical twins in which, for each pair, one twin is a heart donor and the other twin is a late-stage HF patient, and donor and explanted hearts could be biopsied. The HMDP provides the experimental tool to carry out this identical-twins experiment to measure expression data and trait (heart mass) for the same genetic background under different conditions (before and after ISO treatment). The correlation coefficients \(C_j\) can be positive or negative, and the magnitude of \(C_j\) can be used to identify genes and classify them in order of statistical significance, assessed by comparing \(C_j\) values computed with actual data to those computed with a randomized data set (e.g., a set obtained by permuting the strain labels). We refer to genes identified by this method as FC genes to reflect the fact that they are obtained by correlating the individual fold-change of gene expression for all strains (\(F_j\)) with the individual fold-change of heart mass for all strains (\(F_m\)). In this conceptual "identical twin" experiment, only two mice per strain are used for microarray data analysis in 90% of the strains (one control mouse and one treated mouse). This experimental limitation stems from the large number of hearts (over 200) that need to be biopsied and analyzed for gene expression. However, since mice of a given strain are renewable, the HMDP offers the possibility to use triplets, quadruplets, and higher multiples of isogenic subjects for experimentation. This feature was used to measure gene expression in replicates (e.g., two hearts in control or two hearts after ISO treatment) to test for replicability in ~10% of the strains. The results of this replicability analysis show that genetics play a dominant role in controlling gene expression and that using two mice per strain (one in the control group and one in the treated group) is sufficient to identify FC genes.
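The significance assessment against a randomized data set mentioned above amounts to permuting the heart-mass fold changes among strains and recomputing the correlations. A sketch of this idea, with assumed array names and the Pearson helper restated for completeness, is given below; the Z score defined in the statistics section further below then follows directly.

```python
import numpy as np

def pearson_vec(Fj, Fm):
    """Pearson correlation of each column of Fj (one gene per column) with Fm."""
    Fj_c = Fj - Fj.mean(axis=0)
    Fm_c = Fm - Fm.mean()
    return (Fj_c * Fm_c[:, None]).sum(axis=0) / (
        np.sqrt((Fj_c ** 2).sum(axis=0)) * np.sqrt((Fm_c ** 2).sum()))

def fc_gene_z_scores(Fj, Fm, n_perm=1000, seed=0):
    """Z score of each observed correlation C_j against a null distribution
    obtained by shuffling the heart-mass fold changes among strains."""
    rng = np.random.default_rng(seed)
    observed = pearson_vec(Fj, Fm)
    null = np.array([pearson_vec(Fj, rng.permutation(Fm)) for _ in range(n_perm)])
    return (observed - null.mean(axis=0)) / null.std(axis=0)
```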
This conclusion is further supported by the fact that, remarkably, the FC genes turn out to be for the most part completely different from the traditional SAM genes, and causally related to hypertrophy as assessed by further analysis of pathway enrichment and direct experimental validation of the role of one FC gene.

Pre-filtering of the data

In order to reduce false positive predictions and computational time, we first filtered the expression data for the 25,697 genes. Instead of setting an arbitrary cutoff based on the level of expression, as is commonly done, we decided to use a network approach that is consistent with the correlation-based methods used in this study. The idea is that the different genotypic backgrounds across strains lead to global gene expression modulation, thus creating correlation between expressed genes. Genes not associated with the core of varying genes should be the ones that carry too much experimental noise due to low expression or systematic biases. We first computed the absolute Pearson correlation of gene expression fold-change between all pairs of genes. This creates a complete weighted network containing all genes. We then reasoned that genes for which expression is noisy because of low expression or experimental artifacts should have a low association to the other genes. We therefore looked at the size of the Largest Connected Component (LCC) of the network when hard-thresholding with several correlation cutoffs (Figure S1a). We observed a fast decrease of the LCC size at low thresholds of 0.35–0.45, followed by a milder, steady decrease. The derivative of this curve is presented in Figure S1b, showing a strong initial trough corresponding to noisy "satellite" nodes being cut from the LCC, followed by stabilization. We chose a cutoff of 0.5, corresponding to that stabilization plateau, and kept the 11,279 genes in the LCC. The effect of this filter is made clear by looking at a selection of functional genes linked to the electromechanical coupling in heart cells (Figure S1c). The rejected genes (gray bars) have either low expression (e.g., Calm4, Kcnd3) or display systematic saturation effects inherent to the microarray assay, which results in noisy correlations (e.g., Tnnc1, Atp2a2). More generally, we show in Figure S2 that filtered-out genes show a correlation profile with hypertrophy similar to the one expected at random. In this paper, we use these 11,279 genes as input to the different methods.

Computation of randomized correlations

To compute the expected correlations of Fig. 2a, we first shuffle the heart mass fold-changes among strains. We then compute the correlations between all gene FCs and this randomized phenotype. We repeat that step 1000 times. The final histogram is the average over the 1000 randomizations.

Computation of population-wide DEGs

The population-wide DEGs are computed by using Significance Analysis of Microarrays, or SAM,19 between the post-ISO and the pre-ISO expression data. Using a False Discovery Rate of 1e-3, we find 2538 significant DEGs.

Conversion from mouse symbols to human entrez IDs

In order to compute pathway and disease gene enrichment, we first needed to compute a table converting mouse gene symbols to human entrez IDs. We used UCSC genome browser mm9.kgXref, mm9.hgBlastTab and hg19.kgXref conversion tables available on the mySQL host genome-mysql.cse.ucsc.edu. The kgXref tables were used for conversion between symbols and entrez IDs while the Blast table was used to get the human orthologs of mouse genes.
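The LCC-based pre-filtering described above can be sketched with networkx, assuming the fold-change matrix fits in memory (for the full 25,697-gene set the gene-by-gene correlation matrix is large, so a practical implementation might compute it block-wise; the code below is illustrative only, not the released pipeline).

```python
import numpy as np
import networkx as nx

def lcc_sizes(fold_changes, cutoffs):
    """Largest-connected-component size of the gene-gene network obtained by
    hard-thresholding the absolute Pearson correlation of expression fold changes.

    fold_changes : (N, K) array, N strains x K genes (log2 fold changes)
    cutoffs      : iterable of correlation thresholds to scan
    """
    corr = np.abs(np.corrcoef(fold_changes, rowvar=False))  # (K, K) |Pearson| between genes
    sizes = []
    for c in cutoffs:
        adj = corr >= c
        np.fill_diagonal(adj, False)
        g = nx.from_numpy_array(adj)
        sizes.append(len(max(nx.connected_components(g), key=len)))
    return sizes

# The genes kept for downstream analysis are then the members of the LCC at the
# chosen cutoff (0.5 in the text), rather than genes passing an expression cutoff.
```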
HuGE database

Disease genes were taken from the HuGE database of published GWAS genes,52 with a total of 2711 diseases. HF-related diseases were selected by filtering with the keywords "heart," "cardi," "hypert," "aort," "fibro." Pathways were taken from MSigDB v3.153 and Wikipathways,54 with a total of 8690 sets of genes. A group of 106 genes corresponding to a previously published CHSN20 was added under the name "SAUCERMAN_cardiac_hypertrophy_pathway."

TF enrichment

The cytoscape plugin iRegulon30 was used to predict putative upstream TFs regulating the studied sets of genes. Default parameters were used: 9713 PWMs scanning 20 kb centered around the TSS.

Computation of statistics

All statistics (correlations, t-test, Wilcoxon test, hypergeometric test) were computed using R. Hierarchical clustering was performed using default parameters of the R hclust function. Z scores correspond to the number of standard deviations a given observation is away from the mean of the null (random) distribution and are computed as follows: $${\boldsymbol{Z}} = \frac{{\boldsymbol{x}} - \langle {\boldsymbol{X}} \rangle}{\sqrt{\langle \left( {\boldsymbol{X}} - \langle {\boldsymbol{X}} \rangle \right)^2 \rangle}}$$ where x is the observed value, X is a set of random predictions, and ⟨ · ⟩ denotes the average.

Cell Culture and Treatments

Right ventricular myocytes were isolated and cultured, as reported,55 using 2–4 day old rats. Myocytes and fibroblasts were separated with a Percoll density gradient. For knockdown experiments, cells were transfected with Hes1 siRNA using Lipofectamine RNAiMAX (Life Technologies).

RNA Isolation and qPCR

RNA isolation from cells was performed using QIAzol lysis reagent. cDNA synthesis was performed using the High Capacity Reverse Transcription cDNA Kit (Life Technologies). qPCR was performed using the LightCycler 480 (Roche). The number of replicates per condition is shown in Supplementary Table S2, with values ranging from 6 to 9.

Quantification of cardiomyocyte cell cross-sectional area

Quantification of cardiomyocyte cell cross-sectional area was done following transfection with either control or Hes1 siRNA and a 48 h treatment with control, isoproterenol or phenylephrine containing media. Images were taken on a Nikon Eclipse TE2000-U microscope. Images were analyzed using the Nikon Imaging System (NIS). A total of 150 cells were used to compute the SEM.

Source codes are available for the community: https://github.com/msantolini/FC. Microarray data may be accessed at the Gene Expression Omnibus using accession ID: GSE48760. All phenotypic and expression data may also be accessed at https://systems.genetics.ucla.edu/data/hmdp_hypertrophy_heart_failure

References

Albert, F. W. & Kruglyak, L. The role of regulatory variation in complex traits and disease. Nat. Rev. Genet. (2015). Bui, A. L., Horwich, T. B. & Fonarow, G. C. Epidemiology and risk profile of heart failure. Nat. Rev. Cardiol. 8, 30–41 (2011). Cambronero, F. et al. Biomarkers of pathophysiology in hypertrophic cardiomyopathy: implications for clinical management and prognosis. Eur. Heart J. 30, 139–151 (2009). Heineke, J. & Molkentin, J. D. Regulation of cardiac hypertrophy by intracellular signalling pathways. Nat. Rev. Mol. Cell Biol. 7, 589–600 (2006). Blaxall, B. C., Spang, R., Rockman, H. A. & Koch, W. J. Differential myocardial gene expression in the development and rescue of murine heart failure. Physiol. Genom. 15, 105–114 (2003). Gao, Z. et al. Key pathways associated with heart failure development revealed by gene networks correlated with cardiac remodeling. Physiol.
Genom. 35, 222–230 (2008). Asakura, M. & Kitakaze, M. Global gene expression profiling in the failing myocardium. Circ. J. 73, 1568–1576 (2009). Weiss, J. N. et al. "Good enough solutions" and the genetics of complex diseases. Circ. Res 111, 493–504 (2012). Taylor, A. L., Hickey, T. J., Prinz, A. A. & Marder, E. Structure and visualization of high-dimensional conductance spaces. J. Neurophysiol. 96, 891–905 (2006). Salari, K., Watkins, H. & Ashley, E. A. Personalized medicine: hope or hype? Eur. Heart J. 33, 1564–1570 (2012). Creemers, E. E., Wilde, A. A. & Pinto, Y. M. Heart failure: advances through genomics. Nat. Rev. Genet 12, 357–362 (2011). Ghazalpour, A. et al. Hybrid mouse diversity panel: a panel of inbred mouse strains suitable for analysis of complex genetic traits. Mamm. Genome 23, 680–692 (2012). Rau, C. D. et al. Mapping genetic contributions to cardiac pathology induced by Beta-adrenergic stimulation in mice. Circ. Cardiovasc Genet 8, 40–49 (2015). Lin, H. et al. Gene expression and genetic variation in human atria. Heart Rhythm 11, 266–271 (2014). van den Borne, S. W. et al. Mouse strain determines the outcome of wound healing after myocardial infarction. Cardiovasc Res 84, 273–282 (2009). Shah, A. P. et al. Genetic background affects function and intracellular calcium regulation of mouse hearts. Cardiovasc Res 87, 683–693 (2010). Barrick, C. J., Rojas, M., Schoonhoven, R., Smyth, S. S. & Threadgill, D. W. Cardiac response to pressure overload in 129S1/SvImJ and C57BL/6J mice: temporal and background-dependent development of concentric left ventricular hypertrophy. Am. J. Physiol. Heart Circ. Physiol. 292, H2119–H2130 (2007). Kiper, C., Grimes, B., Van Zant, G. & Satin, J. Mouse strain determines cardiac growth potential. PLoS ONE 8, e70512 (2013). Tusher, V. G., Tibshirani, R. & Chu, G. Significance analysis of microarrays applied to the ionizing radiation response. Proc. Natl. Acad. Sci. USA 98, 5116–5121 (2001). Ryall, K. A. et al. Network reconstruction and systems analysis of cardiac myocyte hypertrophy signaling. J. Biol. Chem. 287, 42259–42268 (2012). Callow, M. J., Dudoit, S., Gong, E. L., Speed, T. P. & Rubin, E. M. Microarray expression profiling identifies genes with altered expression in HDL-deficient mice. Genome Res 10, 2022–2029 (2000). Ritchie, M. E. et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res 43, e47 (2015). Jin, H. et al. KChIP2 attenuates cardiac hypertrophy through regulation of Ito and intracellular calcium signaling. J. Mol. Cell Cardiol. 48, 1169–1179 (2010). Kuo, H. C. et al. A defect in the Kv channel-interacting protein 2 (KChIP2) gene leads to a complete loss of I(to) and confers susceptibility to ventricular tachycardia. Cell 107, 801–813 (2001). Grubb, S. et al. Loss of K+ currents in heart failure is accentuated in KChIP2 deficient mice. J. Cardiovasc Electrophysiol. 25, 896–904 (2014). Bignolais, O. et al. Early ion-channel remodeling and arrhythmias precede hypertrophy in a mouse model of complete atrioventricular block. J. Mol. Cell Cardiol. 51, 713–721 (2011). Fan, D., Takawale, A., Lee, J. & Kassiri, Z. Cardiac fibroblasts, fibrosis and extracellular matrix remodeling in heart disease. Fibrogenes. Tissue Repair 5, 15 (2012). Baudino, T. A., Carver, W., Giles, W. & Borg, T. K. Cardiac fibroblasts: friend or foe? Am. J. Physiol. Heart Circ. Physiol. 291, H1015–H1026 (2006). Gardner, D. G. Natriuretic peptides: markers or modulators of cardiac hypertrophy? 
Trends Endocrinol. Metab. 14, 411–416 (2003). Janky, R. et al. iRegulon: from a gene list to a gene regulatory network using large motif and track collections. PLoS Comput. Biol. 10, e1003731 (2014). Paul, V. et al. Scratch2 modulates neurogenesis and cell migration through antagonism of bHLH proteins in the developing neocortex. Cereb. Cortex 24, 754–772 (2014). Wu-Wong, J. R. Vitamin D therapy in cardiac hypertrophy and heart failure. Curr. Pharm. Des. 17, 1794–1807 (2011). Menche, J. et al. Disease networks. Uncovering Dis.-Dis. Relatsh. incomplete Inter. Sci. 347, 1257601 (2015). Molkentin, J. D. Calcineurin-NFAT signaling regulates the cardiac hypertrophic response in coordination with the MAPKs. Cardiovasc Res 63, 467–475 (2004). Benjamin, I. J. et al. Isoproterenol-induced myocardial fibrosis in relation to myocyte necrosis. Circ. Res 65, 657–670 (1989). Ho, C. Y. et al. Myocardial fibrosis as an early manifestation of hypertrophic cardiomyopathy. N. Engl. J. Med 363, 552–563 (2010). Tamura, N. et al. Cardiac fibrosis in mice lacking brain natriuretic peptide. Proc. Natl. Acad. Sci. USA 97, 4239–4244 (2000). Rau, C. D. et al. Systems genetics approach identifies gene pathways and Adamts2 as drivers of isoproterenol-induced cardiac hypertrophy and cardiomyopathy in mice. Cell Syst. 4, 121–128 e4 (2017). de la Pompa, J. L. Notch signaling in cardiac development and disease. Pediatr. Cardiol. 30, 643–650 (2009). de la Pompa, J. L. & Epstein, J. A. Coordinating tissue interactions: Notch signaling in cardiac development and disease. Dev. Cell 22, 244–254 (2012). Zhou, X. L., Zhao, Y., Fang, Y. H., Xu, Q. R. & Liu, J. C. Hes1 is upregulated by ischemic postconditioning and contributes to cardioprotection. Cell Biochem Funct. 32, 730–736 (2014). Gao, Z. et al. Transcriptomic profiling of the canine tachycardia-induced heart failure model: global comparison to human and murine heart failure. J. Mol. Cell Cardiol. 40, 76–86 (2006). Ruiz, P. & Witt, H. Microarray analysis to evaluate different animal models for human heart failure. J. Mol. Cell Cardiol. 40, 13–15 (2006). Bennett, B. J. et al. A high-resolution association mapping panel for the dissection of complex traits in mice. Genome Res 20, 281–290 (2010). Cervino, A. C., Darvasi, A., Fallahi, M., Mader, C. C. & Tsinoremas, N. F. An integrated in silico gene mapping strategy in inbred mice. Genetics 175, 321–333 (2007). Grupe, A. et al. In silico mapping of complex disease-related traits in mice. Science 292, 1915–1918 (2001). Wang, J. J. et al. Genetic dissection of cardiac remodeling in an isoproterenol-induced heart failure mouse model. PLoS Genet 12, e1006038 (2016). Smyth, G. K. Limma: linear models for microarray data. in Bioinformatics and computational biology solutions using R and Bioconductor 397–420 (Springer, New York, NY, 2005). Johnson, W. E., Li, C. & Rabinovic, A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics 8, 118–127 (2007). Nygaard, V., Rodland, E. A. & Hovig, E. Methods that remove batch effects while retaining group differences may lead to exaggerated confidence in downstream analyses. Biostatistics 17, 29–39 (2016). Cordero, P. et al. A community overlap strategy reveals central genes and networks in heart failure. bioRxiv. 038174. https://doi.org/10.1101/038174 (2016). Yu, W., Gwinn, M., Clyne, M., Yesupriya, A. & Khoury, M. J. A navigator for human genome epidemiology. Nat. Genet 40, 124–125 (2008). Liberzon, A. et al. 
Molecular signatures database (MSigDB) 3.0. Bioinformatics 27, 1739–1740 (2011). Kelder, T. et al. WikiPathways: building research communities on biological pathways. Nucleic Acids Res 40, D1301–D1307 (2012). Brown, D. A. et al. Modulation of gene expression in neonatal rat cardiomyocytes by surface modification of polylactide-co-glycolide substrates. J. Biomed. Mater. Res A 74, 419–429 (2005). Gopalakrishnan, K. et al. Augmented rififylin is a risk factor linked to aberrant cardiomyocyte function, short-QT interval and hypertension. Hypertension 57, 764–771 (2011). Yuan, B. et al. A cardiomyocyte-specific Wdr1 knockout demonstrates essential functional roles for actin disassembly during myocardial growth and maintenance in mice. Am. J. Pathol. 184, 1967–1980 (2014). Wallen, T., Landahl, S., Hedner, T., Nakao, K. & Saito, Y. Brain natriuretic peptide predicts mortality in the elderly. Heart 77, 264–267 (1997). Wei, Z. et al. A common genetic variant in the 3′-UTR of vacuolar H+-ATPase ATP6V0A1 creates a micro-RNA motif to alter chromogranin A processing and hypertension risk. Circ. Cardiovasc Genet 4, 381–389 (2011). Bogomolovas, J. et al. Induction of Ankrd1 in dilated cardiomyopathy correlates with the heart failure progression. Biomed. Res Int 2015, 273936 (2015). Iwamoto, R. et al. Heparin-binding EGF-like growth factor and ErbB signaling is essential for heart function. Proc. Natl. Acad. Sci. USA 100, 3221–3226 (2003). Rochais, F. et al. Hes1 is expressed in the second heart field and is required for outflow tract development. PLoS ONE 4, e6267 (2009). de Villiers, C. P. et al. AKAP9 is a genetic modifier of congenital long-QT syndrome type 1. Circ. Cardiovasc. Genet. 7, 599–606 (2014). Meune, C. et al. Blood glutathione decrease in subjects carrying lamin A/C gene mutations is an early marker of cardiac involvement. Neuromuscul. Disord. 22, 252–257 (2012). Damy, T. et al. Glutathione deficiency in cardiac patients is related to the functional status and structural cardiac abnormalities. PLoS ONE 4, e4871 (2009). Adamy, C. et al. Tumor necrosis factor alpha and glutathione interplay in chronic heart failure. Arch. Mal. Coeur Vaiss. 98, 906–912 (2005). Zhao, Y. Y. et al. Defects in caveolin-1 cause dilated cardiomyopathy and pulmonary hypertension in knockout mice. Proc. Natl. Acad. Sci. USA 99, 11375–11380 (2002). Laurell, T. et al. Identification of three novel FGF16 mutations in X-linked recessive fusion of the fourth and fifth metacarpals and possible correlation with heart disease. Mol. Genet. Genom. Med 2, 402–411 (2014). Gudmundsson, H. et al. EH domain proteins regulate cardiac membrane protein targeting. Circ. Res. 107, 84–95 (2010). Lopes, L. R. et al. Genetic complexity in hypertrophic cardiomyopathy revealed by high-throughput sequencing. J. Med. Genet. 50, 228–239 (2013). Nakamura, T., Nakamura, T. & Matsumoto, K. The functions and possible significance of Kremen as the gatekeeper of Wnt signalling in development and pathology. J. Cell Mol. Med. 12, 391–408 (2008). van de Schans, V. A. et al. Interruption of Wnt signaling attenuates the onset of pressure overload-induced cardiac hypertrophy. Hypertension 49, 473–480 (2007). Wang, W. et al. Salt-sensitive hypertension and cardiac hypertrophy in transgenic mice expressing a corin variant identified in blacks. Hypertension 60, 1352–1358 (2012). This research was supported by NIH/NHLBI grants 5R01HL114437-02 and 5R01HL05242 and by the Laubisch and Kawata Endowments. 
Center for Interdisciplinary Research on Complex Systems, Department of Physics, Northeastern University, Boston, MA, USA
Marc Santolini & Alain Karma
Center for Complex Network Research, Department of Physics, Northeastern University, Boston, MA, USA
Marc Santolini
Center for Cancer Systems Biology (CCSB) and Department of Cancer Biology, Dana-Farber Cancer Institute, 450 Brookline Ave., Boston, MA, 02215, USA
Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
Department of Microbiology, Immunology and Molecular Genetics, University of California, Los Angeles, CA, 90095, USA
Milagros C. Romay & Christoph D. Rau
Department of Molecular, Cell, and Developmental Biology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
Clara L. Yukhtman
Departments of Anesthesiology, Physiology and Medicine, Cardiovascular Research Laboratories, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
Christoph D. Rau, Shuxun Ren, Jessica J. Wang, James N. Weiss, Yibin Wang & Aldons J. Lusis
Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, 22908, USA
Jeffrey J. Saucerman
Departments of Medicine and Human Genetics, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
Aldons J. Lusis

M.S. and A.K. designed the study. M.S. did the analysis. M.C.R. and C.L.Y. did the experimental work. M.S. and A.K. wrote the manuscript. C.D.R. and A.J.L. produced the HMDP expression data. C.D.R., S.R., J.J.S., J.J.W., J.N.W., Y.W., and A.J.L. contributed to the conception and interpretation of the work. Correspondence to Alain Karma.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Table S1

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Santolini, M., Romay, M.C., Yukhtman, C.L. et al. A personalized, multiomics approach identifies genes involved in cardiac hypertrophy and heart failure. npj Syst Biol Appl 4, 12 (2018). https://doi.org/10.1038/s41540-018-0046-3 Revised: 14 December 2017

Editorial Summary
Personalized medicine: uncovering missed disease genes
A multitude of genes associated with complex diseases are revealed by a novel personalized, as opposed to population-level, analysis of differential gene expression. While traditional investigations of the genetic basis of complex diseases assume homogeneity across individuals and identify genes differentially expressed between a diseased and a healthy population, Northeastern University and University of California Los Angeles researchers have identified a different class of disease genes that exhibit heterogeneous up and down-regulation across 100 genetically distinct mouse strains subject to a stressor inducing heart failure, but show no significant change of expression at the population level. The results, validated by in vitro knockdown, demonstrate that individualized approaches are crucial to unmask all genes involved in complex diseases, opening new avenues for the development of personalized therapies.
2019 Wastewater treatment survey – the results

Simon Judd1 and Claire Judd2
1Professor of Membrane Technology; 2Manager of The MBR Site

If you're a regular visitor to The MBR Site, you'll be aware that we've carried out a number of short surveys over the last few years. This year, we widened the scope beyond just MBRs. So this time, we asked the question: What influences the selection of wastewater treatment technologies for water reuse? Please find the outcomes of our 2019 survey below.

Our 2019 survey was rather different from our preceding surveys − this time the subject was not MBRs, but wastewater treatment in general. The purpose was to try and establish what features or attributes of a process technology are considered to be the most important with reference to its selection for wastewater treatment duties. The survey was conducted using SurveyMonkey during the period February−April 2019, and potential respondents were encouraged to engage via the LinkedIn networking platform. The survey was announced on most of the 24-or-so water-related groups on LinkedIn. We received 65 responses in total, of which 57 were usable: a key condition of the survey was that both the 'rating' question (Q.1) and the 'ranking' question (Q.4) had to be answered in full. These two questions covered the same six technology aspects, the key difference being that Question 1 required respondents to rate the importance of each aspect out of a maximum score of 10, whereas Question 4 required these aspects to be ranked in order of importance. You can read the survey questions in Annex 1 below.

2. How the data was processed

Since different respondents will assign different absolute values to a qualitative parameter like 'importance', the data were all normalised to give relative values for both the rating and the ranking data, where: \begin{equation*} relative\ value=\ \frac{value-minimum}{maximum-minimum} \end{equation*} The 'relative value' is therefore given as a percentage, where 100% is assigned to the most highly rated or ranked response and 0% to the least. This is more appropriate than the absolute values which, in the case of the rating data, lie within a fairly narrow range. For the ranking data from Q.4 the 'value' term in the numerator is given by 'value = 6 − ranking'. The above equation then yields a relative value which increases with increasing ranking in the same way as the relative rating value from Q.1. So, a high value is associated with increased importance.

3. Respondents

The survey originally identified five different groups of stakeholders (or 'cohorts'), these being:

OEM/Technology supplier
Water company employee
Consultant
Contractor and
Academic.

The respondents were predominantly from the 'OEM/Technology supplier' and 'Water company employee' cohorts (Fig 1). Some re-categorising of respondents who had placed themselves in the 'Other' cohort was necessary. This affected four responses in total. Also, in order to provide a reasonably statistically significant number within each cohort, the 'Consultant' and 'Contractor' cohorts were combined. This then yielded nine respondents in this cohort, with 10 in the 'Academic' cohort and 19 each in the remaining two cohorts.

Fig 1. Distribution of responses between cohorts

4. Survey outcomes

The outcomes for the 'Technology suppliers' vs 'Water company employees', and the 'Academics' vs the 'Consultants/Contractors' are given in Figures 2 and 3 respectively and indicate some interesting trends.
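As a concrete illustration of the normalisation described in Section 2 above, the sketch below applies the relative-value formula to a set of six aspect scores. The aspect order and the example numbers are made up for the illustration and are not taken from the survey data.

```python
import numpy as np

ASPECTS = ["Life cycle cost/NPV", "Energy efficiency", "Process robustness",
           "Environmental impact", "Process flexibility", "Water recovery/waste min."]

def relative_values(scores):
    """(value - minimum) / (maximum - minimum), expressed as a percentage, so the
    highest-scoring aspect maps to 100% and the lowest to 0%."""
    scores = np.asarray(scores, dtype=float)
    return 100.0 * (scores - scores.min()) / (scores.max() - scores.min())

def ranking_value(rank):
    """Convert a rank (1 = most influential ... 6 = least) into a value that
    increases with importance, as in Section 2: value = 6 - ranking."""
    return 6 - rank

# Example: average ratings out of 10 for one hypothetical cohort
avg_ratings = [9.1, 7.0, 9.2, 6.5, 7.8, 6.4]
print(dict(zip(ASPECTS, np.round(relative_values(avg_ratings), 1))))
```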
Firstly, comparing the 'rating' and 'ranking' responses reveals that there is greater consistency between these pairs of data for 'Technology suppliers' and 'Water company employees' (Fig. 2) than there is for 'Academics' and the 'Consultants/Contractors' (Fig. 3). For example, in Fig. 2, NPV*/Life cycle cost is both rated and ranked at 90−100% on average by both cohorts, and Water reuse/waste minimisation ranked 0−22% on average. The average disparity between the rating and the ranking is 21% for the 'Suppliers' and 17% for the 'Water utility employee' cohorts. Compare this with the responses from the 'Academics' and 'Consultant/Contractors', where very significant differences were evident between the rated and ranked data. For example, the 'Academic' cohort rated NPV/Life cycle cost at 90% on average and 0% when ranked. Energy efficiency was rated and ranked at 0% and 59−64% respectively by both the 'Academic' and 'Consultant/Contractor' cohorts. The average disparity between the rating and the ranking is 47% for the 'Academics' and 52% for the 'Consultant/Contractors' − more than double the corresponding figures for the other two cohorts. *NPV: Net present value Figure 2. Rated and ranked data, 'Technology supplier' and 'Water utility employee' cohorts Figure 3. Rated and ranked data, 'Academic' and 'Consultant/Contractor' cohorts Secondly, a comparison of the overall ranking data across all four cohorts (Fig. 4) suggests that, in general, the 'Suppliers' and 'Water utility employees' place the greatest emphasis on NPV/Life cycle cost and Process robustness. Energy efficiency is also identified as being quite important − presumably due to its direct relation to operating expenditure (OPEX). Against this, Water recovery/waste minimisation is ranked the lowest by both these cohorts, and both Environmental Impact and Process flexibility are ranked below 40% by both these cohorts. Different trends are evident from the 'Academic' and 'Consultant/Contractor' cohort data. For the 'Academic' cohort, both Process flexibility and Water recovery/waste minimisation are ranked above 82%, with Environmental Impact at almost 60% and, conversely, NPV/Life cycle cost at 0%. For the 'Consultant/Contractor' cohort the highest ranked aspect is Environmental Impact. Figure 4. Ranked data, all cohorts It's important to take care when interpreting data from surveys of this sort. The number of respondents is small − fewer than 20 in each of the cohorts. For a cohort of fewer than 10, a significant change in the calculated percentage figures can arise from a single anomalous entry. Against this, the inconsistency between the rating and ranking data has been noted in a previous survey conducted by the Gas Processing Center at Qatar University in 2018, based on the same questions and directed at produced water treatment stakeholders. Interestingly, in that survey, the rating vs. ranking inconsistency was also greatest for the 'Consultant' cohort, and for a much larger cohort. However, this perhaps reflects as much on the close relationship between some of the aspects as anything else, and whether these are causal. It could, for example, be argued that the cost of a process is directly linked to the requirement to reduce the environmental impact (i.e. the purity) of the treated effluent. It might have been expected that the trends in the 'rating' and 'ranking' data would be the same, which is clearly not the case within each cohort. 
Having said this, the top two technology aspects are the same for both the rated and the ranked data when the averaged responses of the 57 respondents are taken as a whole (Table 1). This is, of course, because the respondents are predominantly technology suppliers and water company employees. Process robustness and Cost are both rated at around 100%, and ranked at 76% and 100% respectively. Against this, overall the Environmental impact and Water recovery/waste minimisation are both scored at 37% or less according to both the rating and ranking measures. For the remaining two aspects of Process flexibility and Energy efficiency there was no consistency in the two sets of data, probably reflecting the ambiguous nature of these terms.

There are some important caveats to be made concerning cost and plant size. Total cost includes capital and operating expenditure, and the largest contribution to OPEX is usually energy. But the absolute energy cost, and the associated carbon footprint, become secondary to process reliability for small plants − as pointed out by two of the respondents. The cost of unscheduled maintenance in terms of $/m3 treated water, on the other hand, becomes hugely significant for small plants. In such cases, reliability becomes crucial.

Table 1. Summary of overall rating and ranking positions

Position | Aspect, rating | Aspect, ranking
1 | Process robustness 100% | Life cycle cost/NPV 100%
2 | Life cycle cost/NPV 98% | Process robustness 76%
3 | Process flexibility 87% | Energy efficiency 54%
4 | Environ. impact 37% | Process flexibility 30%
5 | Water recov./waste min. 36% | Environ. impact 23%
6 | Energy efficiency 0% | Water recov./waste min. 0%

In the UK municipal water sector, the total cost (or NPV, which amounts to the same thing) is the main contributing factor to decision making − as evidenced by the survey responses and by some of the comments. But, of course, the end users always want it all: low-cost and fit-and-forget solutions. This normally means reliable performance under variable conditions of flow and load, again highlighted by some of the respondents. Finally, location and available space can also be key factors determining process selection.

So where does this leave MBR technology? As a membrane technology, the product water quality is reliably high provided the membrane is unbreached. MBRs are also very compact and, for large installations at least, the NPV works out pretty much the same as for a conventional process delivering a comparable treated water quality (i.e. conventional activated sludge with some sort of downstream polishing). Operationally, MBRs are certainly more complex and, in the experience of some, demand more unscheduled manual intervention than would a conventional process. But it is argued, particularly by the technology suppliers, that in many cases operational issues arise from an insufficiently conservative design, and specifically the pretreatment − both the screening and the degritting. It is worth stressing, not for the first time, that the finest, most fouling-resistant and super-strong membrane material in the world behaves pretty much in the same way as a more modestly specified material when the channels between the membranes are ram-filled with sludge.

In summary, what this survey illustrates is what pretty much everyone would have suspected: cost, and in particular whole-life cost, is the key criterion in decision making.
Perhaps worryingly, this trumps environmental considerations. On the other hand, environmental factors are taken care of by the legislation, and the technology is then duty-bound to meet the stringent environmental criteria. It's pretty much a given that the technology will meet the water quality discharge criteria, but the total cost incurred in doing this is the number one concern.

6. Annex 1 Survey questions

Question 1. How important is each of the following six factors in selecting a wastewater treatment technology? Assign a score out of 10 (10 being most influential) to each of the following six factors influencing wastewater treatment technology selection.

Life cycle cost (also called Net Present Value − NPV)
Energy efficiency (or lowest CO2 emissions) per volume water treated
Process robustness (avoiding incidents demanding unscheduled manual intervention or unexpected additional cost)
Environmental impact (with reference to environmentally negative discharges other than CO2)
Process flexibility (greatest ability to handle high variation in water quality and quantity while still meeting treated water quality objective)
Water recovery/lowest waste volume generated.

Question 2. Which type of effluent does your response refer to?

Question 3. Any comments regarding Question 1?

Question 4. Rank the six factors in the order they influence wastewater treatment technology selection. When deciding your ranking, take into account any comments you made in Question 3. 1 = the most influential factor, 6 = the least influential factor.

Question 5. Which of the following most closely describes your role? Water/wastewater municipality/company employee

Thank you to all who contributed to our 2019 wastewater survey and also for providing useful feedback on The MBR Site. Regarding the latter, we will digest all your comments and suggestions which will help inform our decision-making in developing this website in the future.
Harish Chandra Rajpoot

He is pursuing a Master's in Production Engineering at Indian Institute of Technology Delhi. He did a B. Tech. (Hons) in Mechanical Engineering from M.M.M. University of Technology, Gorakhpur (UP). He authored his first book, Advanced Geometry, based on research articles in Applied Mathematics & Radiometry for higher education, which was first published by Notion Press, Chennai, India in April 2014. He also authored a new book, Electro-Magnetism in Theoretical Physics, in Feb 2020.

Published papers of the author in international journals of mathematics:
1. "HCR's Rank or Series Formula, (1) & (2)" IJMPSR March, 2014
2. "HCR's Series (Divergence)" IOSR March-April, 2014
3. "HCR's Infinite-series" IJMPSR Oct, 2014
4. "HCR's Theory of Polygon" IJMPSR Oct, 2014

He derived a formula for all five platonic solids using his Theory of Polygon. It is the simplest & most versatile formula to analytically compute all the important parameters of all five platonic solids. He derived HCR's Theorem & Corollary and applied them to the mathematical analysis and modeling of pyramidal flat containers with regular n-gonal base, n-gonal right pyramids and polyhedrons. He analysed Archimedean solids, Goldberg polyhedra, truncated & expanded polyhedra using his formula for regular polyhedra, and generalized a formula for the n-trapezohedron with congruent right kite faces. Mr H.C. Rajpoot is originally from a rural village, Buraura, in District Mahoba (Bundelkhand) of U.P. state of India-210429. hcrajpoot.jimdo.com

Last seen yesterday

Reputation: Mathematics 35.3k (18 gold badges, 67 silver badges, 102 bronze badges); Physics 2.1k (6 gold badges, 15 silver badges, 33 bronze badges); Space Exploration 511 (3 silver badges, 13 bronze badges); Chemistry 217 (2 silver badges, 9 bronze badges); Engineering 187 (2 silver badges, 8 bronze badges)

Top questions:
90 Calculate $\frac{1}{\sin(x)} +\frac{1}{\cos(x)}$ if $\sin(x)+\cos(x)=\frac{7}{5}$
41 V.I. Arnold says Russian students can't solve this problem, but American students can -- why?
40 An integral for the New Year 2016
36 How can I integrate $\int\frac{e^{2x}-1}{\sqrt{e^{3x}+e^x} } \mathop{dx}$?
33 How to prove that perpendicular from right angled vertex to the hypotenuse is at most half the length of hypotenuse of a right triangle?
29 If $3x^2 -2x+7=0$ then $(x-\frac{1}{3})^2 =$?
27 How to evaluate the following limit: $\lim_{x\to 0}\frac{12^x-4^x}{9^x-3^x}$?

deep-space orbital-mechanics
33 Can't astronauts use the ball point pens in space? Jun 29 '20
4 How is the space probe powered to travel a huge distance in a deep space mission? Aug 4 '20
Dear Professor Greitzer - The First Identity

Joe Richards and Don Crossfield

Calculus teachers, and their students, know that the relationship $${{\pi}\over{4}}=\arctan{\frac{1}{2}}+\arctan{\frac{1}{3}}$$ is historically famous, as well as a quickly converging way to calculate the digits of \(\pi\), by using the Taylor series for the arctangent function: $$\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \cdots ,$$ but our Geometry classes know only that the two angles that go with slopes of 1/2 and 1/3 have a sum of 45 degrees. That's OK ... we're just glad they're looking for patterns. Here is another one they noticed.

Hey, it looks like the angles that go with slopes of 1/5 and 1/8 (11.3° and 7.1°) have a sum of 18.4 degrees, the angle that goes with the slope of 1/3. Slightly varying Figure 2 into Figure 3 gives us assurance that this relationship is also true. This triangle is also clearly a right triangle, since the slopes of the legs are -1/5 and 5. The short leg is 1/3 the length of the long leg, since we drew three 5 x 1 segments, and then turned 90 degrees and drew only one 5 x 1 segment. The angle at the origin must, therefore, be the one associated with a slope of 1/3, and it has also been split into two angles by the x-axis. The segment below, connecting A(0,0) and B(15,-3), has a slope of -1/5. The segment above, connecting A(0,0) and C(16,2), has a slope of 1/8. Again, done.

Professor Greitzer, our Geometry kids may not understand anything about the radius of convergence of the Taylor series for the arctan function, but based on their exploring skills, we're thinking that any mathematician of Dase's era could have measured with a protractor, built a table, conjectured our conjectures, and built the coordinate plane arguments to verify that $${{\pi}\over{4}}=\arctan{\frac{1}{2}}+\arctan{\frac{1}{3}}$$ and that $$\arctan{\frac{1}{3}}=\arctan{\frac{1}{5}} +\arctan{\frac{1}{8}},$$ hence, $${{\pi}\over{4}}=\arctan{\frac{1}{2}}+\arctan{\frac{1}{5}} +\arctan{\frac{1}{8}}\quad {\rm !}$$ so we conclude that the single exclamation mark identity is decidedly within the range of a mathematician (or a Geometry class), using only elementary mathematical tools. Cool, huh?

Joe Richards and Don Crossfield, "Dear Professor Greitzer - The First Identity," Convergence (July 2010)
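An algebraic cross-check, for anyone who prefers symbols to protractors: the tangent addition formula confirms both sums used above directly (each sum of two small positive angles stays in the first quadrant, so reading the angle back off the tangent is safe):

$$\tan\left(\arctan\tfrac{1}{2}+\arctan\tfrac{1}{3}\right)=\frac{\tfrac{1}{2}+\tfrac{1}{3}}{1-\tfrac{1}{2}\cdot\tfrac{1}{3}}=\frac{5/6}{5/6}=1, \quad\text{so}\quad \arctan\tfrac{1}{2}+\arctan\tfrac{1}{3}=\frac{\pi}{4},$$

$$\tan\left(\arctan\tfrac{1}{5}+\arctan\tfrac{1}{8}\right)=\frac{\tfrac{1}{5}+\tfrac{1}{8}}{1-\tfrac{1}{5}\cdot\tfrac{1}{8}}=\frac{13/40}{39/40}=\frac{1}{3}, \quad\text{so}\quad \arctan\tfrac{1}{5}+\arctan\tfrac{1}{8}=\arctan\tfrac{1}{3}.$$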
A square footing $$2m \times 2m$$ is built in a homogeneous bed of sand of density $$\frac{1.9t}{m^{3}}$$ and having an angle of shearing resistance of $$38^\circ$$. The depth of the base of the footing is 0.8 m below the ground surface. What is the safe load according to Terzaghi analysis which can be carried by the footing with a factor of safety of 3 against complete shear failure? Assume $$\psi=38^\circ$$; $$N_{q}=65$$, $$N_{r}=80$$.

An elastic medium carries at its surface a uniform load of 10 $$\frac{t}{m^{2}}$$ ($$\approx$$ 100 kPa) covering a rectangular area of $$4m \times 3m$$. Find the vertical pressure at a depth of 5 m below the center and corner of the loaded area. Assume the influence factor of an equal quadrant is 0.0474 and the influence factor for the point located at the corners of the loaded area is 0.1247.
1.896 $$\frac{t}{m^{2}}$$; 1.247 $$\frac{t}{m^{2}}$$
18.96 $$\frac{t}{m^{2}}$$; 12.47 $$\frac{t}{m^{2}}$$
189.6 $$\frac{t}{m^{2}}$$; 124.7 $$\frac{t}{m^{2}}$$
0.1896 $$\frac{t}{m^{2}}$$; 0.1247 $$\frac{t}{m^{2}}$$

A retaining wall, 4.5 m high, has a smooth vertical back. The backfill has a horizontal surface in level with the top of the wall. There is a uniformly distributed surcharge load of $$\frac{2t}{m^{2}}$$ intensity over the backfill. The density of the soil is $$\frac{1.9t}{m^{3}}$$, its angle of shearing resistance is $$30^\circ$$ and cohesion is zero. Determine the magnitude of the total active pressure per metre length of wall.

The shape factor of a rectangular section is

At two points '1' and '2' in a pipeline, the velocities of fluid are v and 3v respectively. Both points are at the same elevation. The flow can be assumed to be incompressible, inviscid, steady and irrotational. The difference in pressure $$P_{1}$$ and $$P_{2}$$ at points 1 and 2 is
0.5 $$\rho v^{2}$$
2 $$\rho v^{2}$$

The top width and depth of flow in a rectangular channel were measured as 4 m and 1 m respectively. The measured velocities on the center line at the water surface, 0.2 m and 0.8 m below the surface are 0.7 $$\frac{m}{s}$$, 0.8 $$\frac{m}{s}$$, 0.6 $$\frac{m}{s}$$ respectively. Using the two-point method of velocity measurement, the discharge $$(in \frac{m^{3}}{s})$$ in the channel is

A rectangular open channel of width 5 m is carrying a discharge of $$\frac{100 m^{3}}{s}$$. The Froude number of the flow is 0.8. The depth of flow in the channel is

A concrete floor slab 140 mm thick is reinforced by 16 mm dia steel rods placed 38 mm above the lower face of the slab and spaced 150 mm on center. The distance from the upper face of the slab to the steel is 100 mm. The modulus of elasticity is 25 GPa for concrete and 200 GPa for steel. Knowing that a bending moment of 4.5 kN-m is applied to each 0.30 m width of slab, determine the maximum stress in the concrete and steel respectively.
12.9 MPa & 177.8 MPa
1.29 MPa & 1.778 MPa
129 MPa & 17.78 MPa
0.129 MPa & 0.177 MPa

The Poisson ratios of soil samples 1 & 2 are $$\mu_{1}$$ and $$\mu_{2}$$ respectively and the coefficients of earth pressure at rest for soil samples 1 and 2 are $$K_{1}$$ and $$K_{2}$$ respectively. If $$\frac{\mu_{1}}{\mu_{2}}$$ = 1.5 and $$\frac{1-\mu_{1}}{1-\mu_{2}}$$ = 0.875, then $$\frac{K_{1}}{K_{2}}$$ will be

Maximum cement content, maximum water cement ratio and minimum grade of concrete with nominal weight of aggregate of 20 mm size for very severe exposure condition as per IS456-2000 are respectively
340; 0.45; $$M_{35}$$
400; 0.5; $$M_{40}$$
Mathematicians' intuitions - a survey I'm passing this on from Mark Zelcer (CUNY): A group of researchers in philosophy, psychology and mathematics are requesting the assistance of the mathematical community by participating in a survey about mathematicians' philosophical intuitions. The survey is here: http://goo.gl/Gu5S4E. It would really help them if many mathematicians participated. Thanks! Published by Richard Pettigrew at 10:47 am No comments: Abstract Structure Draft of a paper, "Abstract Structure", cleverly called that because it aims to explicate the notion of "abstract structure", bringing together some things I mentioned a few times previously. Interview at 3am magazine Here is the shameless self-promotion moment of the day: the interview with me at 3am magazine is online. I mostly talk about the contents of my book Formal Languages in Logic, and so cover a number of topics that may be of interest to M-Phi readers: the history of mathematical and logical notation, 'math infatuation', history of logic in general, and some more. Comments are welcome! Published by Catarina at 12:54 pm 1 comment: Methodology in the Philosophy of Logic and Language This M-Phi post is an idea Catarina and I hatched, after a post Catarina did a couple of weeks back at NewAPPS, "Searle on formal methods in philosophy of language", commenting on a recent interview of John Searle, where Searle comments that "what has happened in the subject I started out with, the philosophy of language, is that, roughly speaking, formal modeling has replaced insight". I commented a bit underneath Catarina's post, as this is one thing that interests me. I'm writing a more worked-out discussion. But because I tend to reject the terminology of "formal modelling" (note, British English spelling!), I have to formulate Searle's objection a bit differently. Going ahead a bit, his view is that: the abstract study of languages as free-standing entities has replaced study of the psychology of actual speakers and hearers. This is an interesting claim, impinging on the methodology of the philosophy of logic and language. I think the clue to seeing what the central issues are can be found in David Lewis's 1975 article, "Languages and Language" and in his earlier "General Semantics", 1970. 1. Searle To begin, I explain problems (maybe idiosyncratic ones) I have with both of these words "formal" and "modelling". 1.a "formal" By "formal", I normally mean simply "uninterpreted". So, for example, the uninterpreted first-order language $L_A$ of arithmetic is a formal language, and indeed a mathematical object. Mathematically speaking, it is a set $\mathcal{E}$ of expressions (finite strings from a vocabulary), with several distinguished operations (concatenation and substitution) and subsets (the set of terms, formulas, etc). But it has no interpretation at all. It is therefore formal. On the other hand, the interpreted language $(L_A, \mathbb{N})$ of arithmetic is not a "formal" language. It is an interpreted language, some of whose strings have referents and truth values! Suppose that $v$ is a valuation (a function from the variables of $L_A$ to the domain of $\mathbb{N}$), that $t$ is a term of this language and $\phi$ is a formula of this language. Then $t$ has a denotation $t^{\mathbb{N},v}$ and $\phi$ has a truth value $\mid \mid \phi \mid \mid_{\mathbb{N},v}$. 
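To make the contrast concrete, here is a toy sketch (a made-up miniature of my own, not anything from Lewis or Searle): the terms of a tiny arithmetic language are just nested tuples, pure syntax, and they acquire denotations only once an interpretation over $\mathbb{N}$ and a valuation of the variables are supplied.

```python
Term = tuple  # e.g. ("var", "x"), ("zero",), ("succ", t), ("plus", t1, t2)

def denotation(t: Term, valuation: dict) -> int:
    """Denotation of term t in the standard model (N, 0, successor, +),
    relative to a valuation of the variables. The bare syntactic term has
    no denotation at all until an interpretation like this one is fixed."""
    tag = t[0]
    if tag == "var":
        return valuation[t[1]]
    if tag == "zero":
        return 0
    if tag == "succ":
        return denotation(t[1], valuation) + 1
    if tag == "plus":
        return denotation(t[1], valuation) + denotation(t[2], valuation)
    raise ValueError(f"not a term: {t!r}")

# the uninterpreted term 'succ(x) + 0' is just a nested tuple ...
t = ("plus", ("succ", ("var", "x")), ("zero",))
# ... and only acquires a denotation once interpreted over N with a valuation:
print(denotation(t, {"x": 6}))   # 7
```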
This distinction corresponds to what Catarina calls "de-semantification" in her article "The Different Ways in which Logic is (said to be) Formal" (History and Philosophy of Logic, 2011). My use of "formal" is always "uninterpreted". So, $L_A$ is a formal language, while $(L_A, \mathbb{N})$ is not a "formal" language, but is rather an interpreted language, whose intended interpretation is $\mathbb{N}$. (The intended interpretation of an interpreted language is built into the language by definition. There is no philosophical problem of what it means to talk about the intended interpretation of an interpreted language. It is no more conceptually complicated than talking about the distinguished order $<$ in a structure $(X,<)$.)

1.b "modelling"

But my main problem is with this Americanism, "modelling", which I seem to notice all over the place. It seems to me that there is no "modelling" involved here, unless it is being used to involve a translation relation. For modelling itself, in physics, one might, for example, model the Earth as an oblate spheroid $\mathcal{S}$ embedded in $\mathbb{R}^3$. That is modelling. Or one might model a Starbucks coffee cup as a truncated cone embedded in $\mathbb{R}^3$. Etc. But, in the philosophy of logic and language, I don't think we are "modelling": languages are languages, are languages, are languages ... That is, languages are not "models" in the sense used by physicists and others -- for if they are "models", what are they models of? A model $\mathcal{A} = (A, \dots)$ is a mathematical structure, with a domain $A$ and some bunch of defined functions and relations on the domain. One can probably make this precise for the case of an oblate spheroid or a truncated cone; this is part of modelling in science. But in the philosophy of logic and language, when describing or defining a language, we are not modelling.

But: I need to add that Catarina has rightly reminded me that some authors do often talk about logic and language in terms of "modelling" (now I should say "modeling" I suppose), and think of logic as being some sort of "model" of the "practice" of, e.g., the "working mathematician". A view like this has been expressed by John Burgess, Stewart Shapiro and Roy Cook. I am sceptical. What is a "practice"? It seems to be some kind of supra-human "normative pattern", concerning how "suitably qualified experts would reason", in certain "idealized circumstances". Personally, I find these notions obscure and unhelpful; and it all seems motivated by a crypto-naturalistic desire to remain in contact with "practice"; whereas, when I look, the "practice" is all over the place. When I work on a mathematics problem, the room ends up full of paper, and most of the squiggles are, in fact, wrong. So, I don't think a putative logic is somehow to be thought of as "modelling" (or perhaps to be tested by comparing it with) some kind of "practice". For example, consider the inference,

$\forall x \phi \vdash \phi^x_t$

Is this meant to "model" a "practice"? If so, it must be something like this:

The practice wherein certain humans $h_1, \dots$ tend to "consider" a string $\forall x \phi$ and then "emit" a string $\phi^x_t$

And I don't believe there is such a "practice". This may all be a reflection of my instinctive rationalism and methodological individualism. If there are such "practices", then these are surely produced by our inner cognition. Otherwise, I have no idea what the scientifically plausible mechanism behind a "practice" is.
Noam Chomsky of course long ago distinguished performance and competence (and before him, Ferdinand de Saussure distinguished parole and langue), and has always insisted that generative grammars somehow correspond to competence. If what is meant by "practice" is competence, in something like the Chomskyan sense, then perhaps that is the way to proceed in this direction. But in the end, I suspect that brings one back to the question of what it means to "speak/cognize a language", which is discussed below.

1.c Über-language

On the other hand, when Searle mentions modelling, it is likely that he has the following notion in mind:

A defined language $L$ models (part of) English.

In other words, the idea is that English is basic and $L$ is a "tool" used to "model" English. But is English basic? I am sceptical of this, because there is a good argument whose conclusion denies the existence of English. Rather, there is an uncountable infinity of languages; many tens of millions of them, $L_1, L_2, \dots, L_{1,000,000}, \dots$, are mutually similar, albeit heterogeneous, idiolects, spoken by speakers, who succeed to a high degree in mutual communication. None of these $L_1, L_2, \dots, L_{1,000,000}, \dots$ spoken by individual speakers is English. If one of these is English, then which one? The idiolect spoken by The Queen? Maybe the idiolect spoken by President Barack Obama? Michelle Obama? Maybe the idiolect spoken by the deceased Christopher Hitchens? Etc. The conclusion is that, strictly speaking, there is no such thing as English. It seems the opposite is true: there is a heterogeneous speech community $C$ of speakers, whose members speak overlapping and similar idiolects, and these are to a high degree mutually interpretable. But there is no single "über-language" they all speak. By the same reasoning, one may deny altogether the existence of so-called "natural" languages. (Cf., methodological individualism in social sciences; also Chomsky's distinction between I-languages and E-languages.) There are no "natural" languages. There are languages; and there are speakers; and speakers speak a vast heterogeneous array of varying and overlapping languages, called idiolects.

1.d Methodology

Next Searle moves on to his central methodological point:

Any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers. And that doesn't happen now. What happens now is that many philosophers aim to build a formal model where they can map a puzzling element of language onto the formal model, and people think that gives you an insight. …

The point of disagreement here is again with the phrase "formal model", as the languages we study aren't formal models! The entities involved when we work in these areas are sometimes pairs of languages $L_1$ and $L_2$ and the connection is not that $L_1$ is a "model" of $L_2$ but rather that "$L_1$ has certain translational relations with $L_2$". And translation is not "modelling". A translation is a function from the strings of $L_1$ to the strings of $L_2$ preserving certain properties. Searle illustrates his line of thinking by saying:

And this goes back to Russell's Theory of Descriptions. … I think this was a fatal move to think that you've got to get these intuitive ideas mapped on to a calculus like, in this case, the predicate calculus, which has its own requirements. It is a disastrously inadequate conception of language.

But this seems to me an inadequate description of Russell's 1905 essay.
Russell was studying the semantic properties of the string "the" in a certain language, English. (The talk of a "calculus" loads the deck in Searle's favour.) Russell does indeed translate between languages. For example, the string

(1) The king of France is bald

is translated to the string

(2) $\exists x(\text{king-of-Fr.}(x) \wedge \text{Bald}(x) \wedge \forall y(\text{king-of-Fr.}(y) \to y = x)).$

But this latter string (2) is not a "model", either of the first string (1), or of some underlying "psychological mechanism".

… That's my main objection to contemporary philosophy: they've lost sight of the questions. It sounds ridiculous to say this because this was the objection that all the old fogeys made to us when I was a kid in Oxford and we were investigating language. But that is why I'm really out of sympathy. And I'm going to write a book on the philosophy of language in which I will say how I think it ought to be done, and how we really should try to stay very close to the psychological reality of what it is to actually talk about things.

Having got this far, we reach a quite serious problem. There is, currently, no scientific understanding of "the psychological reality of what it is to actually talk about things". A cognitive system $C$ may speak a language $L$. How this happens, though, is anyone's guess. No one knows how it can be that Prof. Gowers uses the string "number" to refer to the abstract object $\mathbb{N}$. Prof. Dutilh Novaes uses the string "Aristotle" to refer to Aristotle. SK uses the string "casa" to refer to his home. Mr. Salmond uses the string "the referendum" to refer to the future referendum on Scottish independence. The problem here is that there is no causal connection between Prof. Gowers and $\mathbb{N}$! Similarly, a (currently) future referendum (18 Sept 2014) cannot causally influence Mr. Salmond's present (10 July 2014) mental states. So, it is quite a serious puzzle.

2. Lewis

Methodologically, on such issues -- that is, in the philosophy of logic and language -- the outlook I adhere to is the same as Lewis's, whose view echoes that of Russell, Carnap, Tarski, Montague and Kripke. Lewis draws a crucial distinction:

(A) Languages (a language is an "abstract semantic system whereby symbols are associated with aspects of the world").
(B) Language as a social-psychological phenomenon.

With Lewis, I think it's important not to confuse these. In an M-Phi post last year (March 2013), I quoted Lewis's summary from his "General Semantics" (1970):

My proposals will also not conform to the expectations of those who, in analyzing meaning, turn immediately to the psychology and sociology of language users: to intentions, sense-experience, and mental ideas, or to social rules, conventions, and regularities. I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics.

I will just call them (A) and (B). See also Lewis's "Languages and Language" (1975) for this distinction. Most work in what is called "formal semantics" is (A)-work. One defines a language $L$ and proves some results about it; or one defines two languages $L_1, L_2$ and proves results about how they're related. But this is (A)-work, not (B)-work.

3.
(Syntactic-)Semantic Theory and Conservativeness For example, suppose I decided I am interested in the following language $\mathcal{L}$: this language $\mathcal{L}$ has strings $s_1, s_2$, and a meaning function $\mu_{\mathcal{L}}$ such that, $\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$ $\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$ Then this is in a deep sense logically independent of (B)-things. And one can, in fact, prove this! First, let $L_O$ be an "empirical language", containing no terms for syntactical entities or semantic properties and relations. $L_O$ may contain terms and predicates for rocks, atoms, people, mental states, verbal behaviour, etc. But no terms for syntactical entities or semantic relations. Second, we extend this observation language $L_O$ by adding: the unary predicate "$x$ is a string in $\mathcal{L}$" (here "$\mathcal{L}$" is not treated as a variable), the constants "$s_1$", "$s_2$", the unary function symbol "$\mu_{\mathcal{L}}(-)$", the constants "the proposition that Oxford is north of Cambridge" and "the proposition that Oxford is north of Birmingham". Third, consider the following six axioms of semantic theory $ST$ for $\mathcal{L}$: (i) $s_1$ is a string in $\mathcal{L}$. (ii) $s_2$ is a string in $\mathcal{L}$. (iii) $s_1 \neq s_2$. (iv) the only strings in $\mathcal{L}$ are $s_1$ and $s_2$. (v) $\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$ (vi) $\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$ Then, assuming $O$ is not too weak ($O$ must prove that there are at least two objects), for almost any choice of $O$ whatsoever, $O+ST$ is a conservative extension of $O$. To prove this, I consider any interpretation $\mathcal{I}$ for $L_O$, and I expand it to a model $\mathcal{I}^+ \models ST$. There are some minor technicalities, which I skirt over. Consequently, the semantic theory $ST$ is neutral with respect to any observation claim: the semantic description of a language $\mathcal{L}$ is consistent with (almost) any observation claim. That is, the semantic description of a language $\mathcal{L}$ cannot be empirically tested, because it has no observable consequences. (There are some further caveats. If the strings actually are physical objects, already referred to in $L_O$, then this result may not quite hold in the form stated. Cf., the guitar language.) 4. The Wittgensteinian View Lewis's view can be contrasted with a Wittgensteinian view, which aims to identify $(A)$ and $(B)$ very closely. But, since this is a form of reductionism, there must be "bridge laws" connecting the (A)-things and the (B)-things. But what are they? They play a crucial methodological role. I come back to this below. Catarina formulates the view like this: I am largely in agreement with Searle both on what the ultimate goals of philosophy of language should be, and on the failure of much (though not all!) of the work currently done with formal methods to achieve this goal. Firstly, I agree that "any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers". Language should not be seen as a freestanding entity, as a collection of structures to be investigated with no connection to the most basic fact about human languages, namely that they are used by humans, and an absolutely crucial component of human life. 
(I take this to be a general Wittgensteinian point, but one which can be endorsed even if one does not feel inclined to buy the whole Wittgenstein package.) In short, I think this is a deep (but very constructive!) disagreement about ontology: what a language is. On the Lewisian view, a language is, roughly, "a bunch of syntax and meaning functions"; and, in that sense, it is indeed a "free-standing entity". (Analogously, the Lie group $SU(3)$ is a free-standing entity and can be studied independently of its connection to quantum particles called gluons (gluons are the "colour gauge field" of an $SU(3)$-gauge theory, which explains how quarks interact together). So, e.g., one can study Latin despite there being no speakers of the language; one can study infinitary languages, despite their having no speakers. One can study strings (e.g., proofs) of length $>2^{1000}$ despite their having no physical tokens. The contingent existence of one, or fewer, or more, speakers of a language $L$ has no bearing at all on the properties of $L$. Similarly, the contingent existence or non-existence of a set of physical objects of cardinality $2^{1000}$ has no bearing on the properties of $2^{1000}$. It makes no difference to the ontological status of numbers.) Catarina continues by noting the usual way that workers in the (A)-field generally keep (A)-issues separate from (B)-issues: I also agree that much of what is done under the banner of 'formal semantics' does not satisfy the requirement of sticking as closely as possible to the psychology of actual human speakers and hearers. In my four years working at the Institute for Logic, Language and Computation (ILLC) in Amsterdam, I've attended (and even chaired!) countless talks where speakers presented a sophisticated formal machinery to account for a particular feature of a given language, but the machinery was not intended in any way to be a description of the psychological phenomena underlying the relevant linguistic phenomena. I agree - this is because when such a language $L$ is described, it is being considered as a free-standing entity, and so is not intended to be a "description". Catarina continues then: It became one of my standard questions at such talks: "Do you intend your formal model to correspond to actual cognitive processes in language users?" More often than not, the answer was simply "No", often accompanied by a puzzled look that basically meant "Why would I even want that?". My general response to this kind of research is very much along the lines of what Searle says. I think that the person working in the (A)-field sees that (A)-work and (B)-work are separate, and may not have any good idea about how they might even be related. Finally, Catarina turns to a positive note: However, there is much work currently being done, broadly within the formal semantics tradition, that does not display this lack of connection with the 'psychological reality' of language users. Some of the people I could mention here are (full disclosure: these are all colleagues or former colleagues!) Petra Hendriks, Jakub Szymanik, Katrin Schulz, and surely many others. (Further pointers in comments are welcome.) In particular, many of these researchers combine formal methods with empirical methods, for example conducting experiments of different kinds to test the predictions of their theories. In this body of research, formalisms are used to formulate theories in a precise way, leading to the design of new experiments and the interpretation of results. 
Formal models are thus producing new insights into the nature of language use (pace Searle), which are then put to test empirically. The methodological issue comes alive precisely at this point. How are (A)-issues related to (B)-issues? The logical point I argued for above was that a semantic theory $ST$ for a fixed well-defined language $L$ makes no empirical predictions, since the theory $ST$ is consistent with any empirical statement $\phi$. I.e., if $\phi$ is consistent, then $ST + \phi$ is consistent. 5. Cognizing a Language On the other hand, there is a different empirical claim: (C) a speaker $S$ speaks/cognizes $L$. This is not a claim about $L$ per se. It is cognizing claim about how the speaker $S$ and $L$ are related. This is something I gave some talks about before, and also wrote about a few times before here (e.g., "Cognizing a Language"), and also wrote about in a paper, "There's Glory for You!" (actually a dialogue, based on a different Lewis - Lewis Carroll) that appeared earlier this year. A cognizing claim like (C) might yield a prediction. Such a claim uses the predicate "$x$ speaks/cognizes $y$", which links together the agent and the language. But without this, there are no predictions. The methodological point is then this: any such prediction from (C) can only be obtained by bridge laws, invoking this predicate linking the agent and language. But these bridge laws have not been stated at all. Such a bridge law might take the generic form: Psycho-Semantic Bridge Law If $S$ speaks $L$ and $L$ has property P, then $S$ will display (verbal) behaviour B. Typically, such psycho-semantic laws are left implicit. But, in the end, to understand how the (A)-issues are connected to the (B)-issues, such putative laws need to be made explicit. Methodologically, then, I say that all of the interest lies in the bridge laws. So, that's it. I summarize the three main points: 1. Against Searle and with Lewis: languages are free-standing entities, with their own properties, and these properties aren't dependent on whether there are, or aren't, speakers of the language. 2. The semantic description of a language $L$ is empirically neutral (indeed, the properties of a language are in some sense modally intrinsic). 3. To connect together the properties of a language $L$ and the psychological states or verbal behaviour of an agent $S$ who "speaks/cognizes" $L$, one must introduce bridge laws. Usually they are assumed implicitly, but from the point of view of methodology, they need to be stated clearly. 7. Update: Addendum I hadn't totally forgotten -- I sort of semi-forgot. But Catarina wrote about these topics before in several M-Phi posts, so I should include them too: Logic and the External Target Phenomena (2 May 2011) van Benthem and System Imprisonment (5 Sept 2011) Book draft: Formal Languages in Logic (19 Sept 2011) (Probably some more, that I actually did forget...) And these raise many questions related to the methodological one here. Published by Jeffrey Ketland at 3:36 am 21 comments:
CommonCrawl
Application of evolutionary and swarm optimization in computer vision: a literature survey

Takumi Nakane1, Naranchimeg Bold2, Haitian Sun2, Xuequan Lu3, Takuya Akashi2 & Chao Zhang ORCID: orcid.org/0000-0002-0845-92171 na1

IPSJ Transactions on Computer Vision and Applications volume 12, Article number: 3 (2020) Cite this article

Evolutionary algorithms (EAs) and swarm algorithms (SAs) have shown their usefulness in solving combinatorial and NP-hard optimization problems in various research fields. However, in the field of computer vision, related surveys have not been updated during the last decade. In this study, inspired by the recent development of deep neural networks in computer vision, which embed large-scale optimization problems, we first describe a literature survey conducted to compensate for the lack of relevant research in this area. Specifically, applications related to the genetic algorithm and differential evolution from EAs, as well as particle swarm optimization and ant colony optimization from SAs and their variants, are mainly considered in this survey.

Many computer vision tasks can be regarded and formulated as a convex optimization, which allows a global optimum to be mathematically computed [110–112]. However, most of these tasks can be highly non-convex and even ill-posed. As a result, there may exist numerous optima, with no solution, a non-unique solution, or an unstable solution, particularly under real-world settings that involve noisy or missing data. Regarding the non-convexity, for example, segmentation problems (Section 5) in computer vision can be cast as an energy minimization problem, in which an energy function is formulated over the labels of pixels, such that the best solution can be obtained by minimizing the amount of energy. However, when the given energy function is complex, finding the exact energy minimum is NP-hard, and convex solvers are unable to explore the exponential number of local optima efficiently without adding additional constraints or hypotheses. Regarding the ill-posed problem, many tasks require optimizing the parameters of a certain mathematical model to reproduce the observations. For example, in face recognition problems (Section 9), there are various parameters that need to be tuned to model a "face likeness." Depending on the amount and quality of the training samples, finding a parameter setting that can reproduce the training labels could be extremely difficult.

By contrast, evolutionary algorithms (EAs) and swarm algorithms (SAs) are powerful metaheuristic tools used to search for solutions within a potentially huge solution space or to provide approximate solutions for combinatorial constraints that may not admit stable solutions. To avoid being trapped in the local optima and provide a satisfactory solution, EAs and SAs have been successfully adopted to solve various computer vision tasks, which are listed and classified in this survey. To the best of our knowledge, there have been no other studies specifically providing a comprehensive survey of EAs and SAs adopted for solving computer vision problems. Despite the many recent applications in computer vision combining deep neural networks with evolutionary optimization, we are interested in how the EAs and SAs for computer vision-related tasks have evolved. The main purpose of this paper is to present a comprehensive understanding of the existing research on EAs and SAs for solving computer vision tasks.
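As a concrete illustration of the energy-minimization formulation mentioned above, the following is a minimal sketch (not from the survey itself; the Potts-style smoothness term and all names are illustrative assumptions) of an energy defined over pixel labels. Minimizing such an energy over all possible label images is exactly the kind of combinatorial, non-convex problem for which exhaustive or convex solvers struggle and metaheuristics become attractive.

```python
import numpy as np

def segmentation_energy(labels, unary, smoothness=1.0):
    """Potts-style energy of a label image (to be minimized).

    labels: (H, W) integer label image.
    unary:  (H, W, K) per-pixel cost of assigning each of the K labels.
    The pairwise term adds a fixed penalty for every pair of 4-connected
    neighboring pixels that carry different labels.
    """
    h, w = labels.shape
    data_term = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    pairwise = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return data_term + smoothness * pairwise
```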
The remainder of this paper is organized as follows. In Section 2, we briefly introduce the characteristics of the algorithms focused upon in this survey. In Section 3, we discuss why EAs and SAs are needed for computer vision applications based on a simple example. In Section 4 through Section 11, we explain how EAs and SAs have been applied to eight different computer vision tasks. For clarity, the summary of contents of this paper is shown in Table 1. Finally, we summarize the contents of this paper in Section 12.

Table 1 Summary of contents from Section 4 to Section 11. Background colors represent different categories of algorithms: GA, DE, PSO, and ACO

Evolutionary and swarm algorithms

EAs and SAs are two important research fields belonging to the nature-inspired metaheuristics known as evolutionary computation (EC). These metaheuristics share the following two characteristics: a population-based representation of the candidate solutions, and an iterative procedure with a stochastic exploration. A significantly important factor in a population-based optimization method is the balance between exploration and exploitation capabilities. Exploration is the ability to search over a wide range of the solution space by uniformly distributing the population (i.e., the population maintains its diversity). This brings robustness for a non-convex function landscape to a population. Even if some individuals fall into local optima, others may still be able to find a promising solution. By contrast, exploitation is the ability to concentrate a population at a promising solution based on information that has been acquired thus far. Exploitation is necessary to obtain a converged population. The more valid and reliable the information shared within a population is, the faster a convergence can be achieved. The successes of EAs and SAs are derived from the nature-inspired operations potentially having a mechanism to adjust the above two abilities. These algorithms start from a state in which individuals are randomly distributed, i.e., in the most diverse state, and operations are designed to encourage convergence within the population and achieve balance between the two abilities automatically.

In this survey, we concentrate on studying approaches relevant to the following four representative algorithms: the genetic algorithm (GA) and differential evolution (DE) from the EAs, and particle swarm optimization (PSO) and ant colony optimization (ACO) from the SAs. For a simple comparison, brief flowcharts of these algorithms are shown in Fig. 1. More detailed procedures can be found in the pseudo-codes of Appendix A. In the following, we review the algorithms involved by analyzing their features and differences.

Fig. 1 Comparison of flowcharts between GA, DE, PSO, and ACO. Interested readers can refer to Appendix A for detailed pseudo-codes and explanations

EAs are optimization algorithms inspired by Darwin's evolutionary theory. This generic category mainly consists of the GA, genetic programming (GP), evolution strategy (ES), and evolutionary programming (EP). It also includes algorithms that have similar frameworks, such as the DE algorithm in a broad sense. Each iteration in an EA (i.e., a generation) is composed of parent selection, recombination (i.e., a crossover), mutation, and survivor selection, as shown in Fig. 2. The two selections operate according to the evaluation values (i.e., fitness), which brings about a strong force of exploitation.
However, a crossover and mutation are responsible for the exploration within, and sometimes outside of, the distribution of the population. These operations together form a simulation of evolution for individuals, which leads the population to the desired solutions. Under the usual settings, an individual represents a single solution candidate.

Fig. 2 Example of a one-generation cycle of the GA. Modified genes in each step are shown in red

The GA is the most well-known algorithm in both EAs and EC. An individual is a group of chromosomes, which are typically encoded by a binary code with a fixed length. The parent selection takes the fitness value of all individuals into account, which is implemented probabilistically. The selected parents produce the same number of offspring by a crossover and mutation. These two operations are a partial bit (i.e., gene) manipulation. A crossover produces new individuals by swapping the genes of the parent pairs. That is, a new individual is composed of partial blocks of genes of the parents, which implies the inheritance of the parental characteristics. The purpose of a mutation is to introduce an impact into a population that cannot be acquired by inheritance, which is achieved by changing genes in a completely independent and random manner. A mutation helps the individuals escape from the local optima. Alternation of generations (i.e., survivor selection) is realized by entirely replacing a population of parents with a population of offspring. Because the GA is designed for general purposes, it is often intuitive and simple to apply to real problems. In addition, numerous researchers have been working on developing real-value coded GAs with improved genetic operators, which enables the GA to be applied to not only combinatorial optimization but also continuous optimization problems.

DE is one of the most popular EA optimization algorithms. An individual is termed a parameter vector and composed of real-valued parameters, which allows the algorithm to solve continuous optimization problems. The most significant characteristic of a DE algorithm is the existence of a donor vector constructed during the mutation step from a parameter vector (i.e., a base vector) and a difference vector of two parameter vectors. These three parameter vectors are randomly selected from the current population. The difference vector represents the direction and magnitude of the change caused by a mutation. In addition, selection from the population can reflect valid information from the distribution fitted into the functional landscape. That is, the donor vector is an indicator of the search with an automatic scaling adjustment, which improves the convergence of the algorithm. The survivor selection step in the DE algorithm is a competitive process between the target vector (i.e., parent) and trial vectors (i.e., offspring created from the target vector and donor vectors) based on the fitness values. Unlike the GA, which preserves all offspring until the next generation without exception, the offspring in the DE algorithm must be equal to or outperform the corresponding parent to survive. This strategy implies the preservation of best-so-far solutions individually, which can make the population maintain its diversity and improve its convergence over the long term. GA and DE repeat the common steps, although the actual implementation of each step differs, as shown in Table 2. Note that this is an example of a simple implementation, and many variants exist.
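To make the shared EA skeleton concrete, the following is a minimal sketch (it is not taken from the survey's Appendix A; the control parameters pc, pm, F, and CR and all function names are illustrative) of one GA generation with roulette-wheel selection, one-point crossover, and bit-flip mutation, together with one DE/rand/1/bin generation built around the donor vector described above.

```python
import random

def ga_generation(pop, fitness, pc=0.9, pm=0.01):
    """One GA generation over binary individuals (assumes non-negative fitness, genes of length >= 2)."""
    fits = [fitness(ind) for ind in pop]
    total = sum(fits)

    def select():                               # roulette-wheel (fitness-proportionate) parent selection
        r, acc = random.uniform(0, total), 0.0
        for ind, f in zip(pop, fits):
            acc += f
            if acc >= r:
                return ind
        return pop[-1]

    offspring = []
    while len(offspring) < len(pop):
        p1, p2 = select(), select()
        if random.random() < pc:                # one-point crossover: swap gene blocks of the parents
            cut = random.randrange(1, len(p1))
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        else:
            c1, c2 = p1[:], p2[:]
        for c in (c1, c2):                      # bit-flip mutation, independent per gene
            offspring.append([b ^ 1 if random.random() < pm else b for b in c])
    return offspring[:len(pop)]                 # generational replacement of the whole parent population

def de_generation(pop, cost, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation over real-valued vectors (assumes at least 4 individuals)."""
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        # donor = base vector + F * (difference vector of two other randomly chosen vectors)
        donor = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(len(target))]
        jrand = random.randrange(len(target))   # binomial crossover producing the trial vector
        trial = [donor[d] if (random.random() < CR or d == jrand) else target[d]
                 for d in range(len(target))]
        # survivor selection: the trial must be at least as good as the target to survive
        new_pop.append(trial if cost(trial) <= cost(target) else target)
    return new_pop
```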
Table 2 Differences between the implementations of GA and DE with respect to the four common steps of EAs Swarm algorithms SAs, inspired by the collective behavior of social animals and insects, are optimization algorithms belonging to metaheuristics called swarm intelligence (SI). A swarm includes multiple agents, and the behavior of each agent is extremely simple, local, and stochastic. Despite a single swarm not having a centralized structure to control the rule of the agent behavior, interactions between agents introduce global swarms and intelligent behavior. The local behavior of each agent and the interactions shared within the swarm correspond to an exploitation and exploration respectively, and are combined as agent movements within a simple implementation. PSO is a continuous optimization algorithm inspired by the collective behavior of flocking birds. All individuals (particles) composing the population (swarm) fly around the search space based on the corresponding velocity vector. The most attractive point of PSO is the preservation of two important elements: the global best (gbest) and the personal best (pbest). These are the memory of the positions where the best fitness values can be observed until the current iteration, with respect to the swarm and each particle, respectively. Here, gbest is an element that promotes the convergence of the swarm to the proper locations, whereas pbest contributes to the maintenance of the swarm diversity by generating unique behaviors for each particle. Both gbest and pbest are mainly used for a velocity update by considering the inertia. The velocity update function is similar to the target-to-best (type of base vector)/1 (number of difference vectors) scheme of the mutation step in the DE algorithm, which means that PSO also benefits from the difference vector. By contrast, PSO does not have a selection step like an EA, and an iteration only consists of a self-update of the velocity and position. The simple composition of this algorithm allows for an easy coding and efficient computations. ACO is a metaheuristic mainly designed for combinatorial optimization problems, inspired by the behaviors of ants. The task of the artificial ants is to construct a candidate solution by adding unused solution components to the current partial solution iteratively. The ants probabilistically choose a solution component based on the pheromone intensity and heuristic information (if available). The pheromone intensity reveals the validity of the corresponding choice, which is updated after the artificial ants construct a candidate solution. The update of the pheromone consists of two mechanisms: deposit and evaporation. Artificial ants increase the pheromones on their own trail according to the evaluation value, and the pheromones will decrease over time. If the choice is optimal, it attracts more artificial ants because the deposit exceeds the evaporation; otherwise, the choice will soon become uncompetitive. The update of the pheromone is a reflection of the experience accumulated by the artificial ant colony, which will improve the quality of the following candidate solutions. Algorithms that share the framework described above are generally referred to as ACO algorithms. The key point of the SAs is the information shared within the swarm, which can directly influence the movement of each agent. The differences between PSO and ACO are summarized in Table 3. 
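The corresponding SA updates can be sketched just as compactly. The following illustrative fragment (not from Appendix A; the inertia/acceleration coefficients, the evaporation rate, and the dictionary-based pheromone bookkeeping are assumptions) shows the PSO velocity and position update driven by pbest and gbest, and the ACO component choice plus pheromone evaporation/deposit described above.

```python
import random

def pso_step(xs, vs, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO step: velocity update from inertia, pbest, and gbest, then position update."""
    for i in range(len(xs)):
        for d in range(len(xs[i])):
            r1, r2 = random.random(), random.random()
            vs[i][d] = (w * vs[i][d]
                        + c1 * r1 * (pbest[i][d] - xs[i][d])   # cognitive term (personal memory)
                        + c2 * r2 * (gbest[d] - xs[i][d]))     # social term (swarm memory)
            xs[i][d] += vs[i][d]
    return xs, vs

def aco_choose(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Probabilistic ACO choice of one solution component, weighted by pheromone^alpha * heuristic^beta."""
    weights = [pheromone[c] ** alpha * heuristic[c] ** beta for c in candidates]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for c, wgt in zip(candidates, weights):
        acc += wgt
        if acc >= r:
            return c
    return candidates[-1]

def update_pheromone(pheromone, solutions, quality, rho=0.1):
    """Evaporate all pheromones, then deposit on the components used by each constructed solution."""
    for c in pheromone:
        pheromone[c] *= (1.0 - rho)
    for sol in solutions:
        for c in sol:
            pheromone[c] += quality(sol)
    return pheromone
```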
Table 3 Differences between the information shared within the swarms of PSO and ACO

Comparison of algorithm characteristics

Although the above algorithms have a common framework of population-based iterative processing, there are various differences in their specific implementations. In this subsection, we discuss the characteristics of each algorithm, which may provide an indicator to the question of which algorithm is appropriate to exploit. One advantage of the GA is its flexible gene representation. Owing to its long history and popularity, various gene representations (e.g., binary, real-value, and graph) and corresponding genetic operators have been devised. Owing to the accumulation of these abundant implementations, the GA is widely used in various fields, including computer vision. The DE algorithm is simple to implement, but achieves a high optimization capability. This fact has been proven through numerous competitions on real parameter optimization [113]. Its effectiveness is expected to make it a powerful tool in computer vision as well. PSO has attracted the interest of researchers owing to its simple implementation. The fast operators are effective for applications that require a high-speed performance. In addition, unlike the crossover operators in the GA and DE algorithm, the majority of PSO processing requires no interactions between particles. This fact shows that PSO is compatible with parallel processing. A characteristic of ACO is a graph exploration for making probabilistic decisions. This unique process is extremely effective in problems that can be modeled using graphs.

Applications in computer vision

Computer vision aims to extract and understand meaningful information from images and videos. Various processes for performing such tasks are often interspersed with situations that require optimization, and the solution spaces usually constitute a vast and complex landscape. As a simple example, we demonstrate a simple object detection using a sliding window method, as shown in Fig. 3. The reference image (Fig. 3a) is slid from the top left of the target image (Fig. 3b), and the sum of absolute differences (SAD) of the pixels at each position is calculated. That is, detection is achieved by finding the position where the SAD is 0 (a brute-force version of this search is sketched in the code below). From the plot of the SAD at each position shown in Fig. 3c, we can observe a non-convex functional landscape. The presence of many small valleys makes optimization through a deterministic method difficult to achieve. In addition, the landscape becomes more complex if we must consider the rotation and scaling of the reference image. Therefore, EAs and SAs are expected to be powerful tools for solving the optimization problems occurring in computer vision.

Fig. 3 Demonstration of a simple object detection by minimizing the SAD score. The plot of the SAD at each detection window is shown in c, where it can be observed that there exist many local optima

We systematically summarize the studies in which four selected algorithms are involved with respect to different computer vision tasks: a neural network (Section 4), image segmentation (Section 5), feature detection and selection (Section 6), image matching (Section 7), visual tracking (Section 8), face recognition (Section 9), human action recognition (Section 10), and a few other studies (Section 11). The timeline of the literature summarized in this paper is listed in Table 4, and the statistics of the literature in terms of applications and algorithms are shown in Fig. 4.
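The sliding-window SAD search of Fig. 3 can be written in a few lines. The following brute-force sketch (illustrative only; it assumes 8-bit grayscale NumPy arrays, and the function and variable names are not from the survey) makes the exhaustively evaluated landscape explicit; an EA or SA would instead sample candidate positions (and possibly rotation and scale parameters) from this landscape rather than enumerating them all.

```python
import numpy as np

def sad_map(target, reference):
    """Exhaustive sliding-window SAD between a reference patch and a target grayscale image."""
    th, tw = target.shape
    rh, rw = reference.shape
    scores = np.empty((th - rh + 1, tw - rw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            window = target[y:y + rh, x:x + rw].astype(np.int64)
            scores[y, x] = np.abs(window - reference.astype(np.int64)).sum()
    return scores

# The best match is the window with the smallest SAD (0 for an exact match):
# scores = sad_map(target, reference)
# y, x = np.unravel_index(np.argmin(scores), scores.shape)
```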
In addition, we also present a summary table at the end of each section to categorize the related studies.

Fig. 4 Statistics of the literature in terms of applications and algorithms. The vertical axis indicates the number of related papers

Table 4 Timeline (2000∼2018) and categorization of research works summarized in this survey. Background colors represent algorithm categories GA, DE, PSO, and ACO

Neural networks

During the last two decades, deep neural networks (DNNs) have achieved a state-of-the-art performance on a variety of computer vision tasks, for instance, in object recognition, where problem-specific features can be automatically learned. However, designing and learning optimal network structures and their parameters are challenging tasks, requiring expert knowledge and significant trial and error. Therefore, the development of automated neural architecture search (NAS) methods is an attractive field of research. There are numerous different strategies used by an NAS, including gradient-based methods, a random search, Bayesian optimization, and reinforcement learning. In particular, the strategy of using EAs and SAs, called NeuroEvolution, has received attention since the introduction of this field. Although gradient-based NAS methods (e.g., [115, 116]) are much faster than evolutionary-based NAS methods in many cases, the gradient-free exploration of EAs and SAs is useful for tasks for which gradient-based methods are not typically applicable, such as learning building blocks and architectures of neural networks [117]. Interested readers are also referred to survey papers [118] and [117] for the details of NAS strategies and NeuroEvolution approaches, respectively.

Research on NeuroEvolution began in the 1990s with many interesting approaches [119], which were originally used to evolve the weights of a fixed architecture. In 2002, Stanley and Miikkulainen proposed the NEAT algorithm [120] to evolve the structure and connection weights of a small-scale neural network. After the NEAT algorithm, there has been a surging interest in using algorithms such as EAs to automatically design DNNs along with the connection weights and hyperparameters. However, with the dramatically increasing scale of DNNs, it has become difficult for even EAs and SAs to adjust the architectures and weights simultaneously. To address this issue, recent NeuroEvolution approaches again incorporate gradient-based methods to optimize weights [13, 118]. Through a series of efforts, DNNs designed by EAs and SAs achieve competitive performance for reinforcement learning [121] and image classification tasks [1]. Nonetheless, for supervised learning tasks, gradient-based optimization is by far the most common approach.

This section reviews NeuroEvolution approaches, which optimize the DNN structure, connection weights and hyperparameters with respect to computer vision tasks (particularly image classification tasks). We first describe several studies on discovering the structure of neural networks for large-scale image classification benchmarks using EAs and SAs in Section 4.1. Next, some studies on the evolving structure for image restoration are elaborated in Section 4.2. Finally, EAs/SAs-based optimization of other aspects of neural networks is discussed in Section 4.3.

Evolving DNNs for image classification

In recent years, image classification has become one of the most investigated tasks in computer vision, which has been brought about by the development of DNNs, particularly convolutional neural networks (CNNs).
A typical CNN consists of multiple building blocks, and the order of placement affects the performance. This characteristic makes it difficult to adopt certain NAS methods that have been successfully applied to DNNs, such as a random search and Bayesian optimization [2]. In this subsection, we place emphasis on studies that aim to automatically design optimum DNNs for large-scale image classification benchmarks using the GA and PSO. The recent active development of NeuroEvolution for large-scale image classification began in 2017.

LEIC Real et al. [1] employ a GA at unprecedented scales to discover models for large-scale image classification benchmarks by using large computational resources (e.g., running on 250 GPUs for approximately 10 days), the results of which demonstrate that NeuroEvolution can achieve a performance competitive with hand-crafted models. In their study, they developed CNN structures/models, where every individual (i.e., model) is evolved from scratch and encoded as a graph. Through the evolution process, different types of layers can be incorporated into the individuals through specific mutations (e.g., an insert convolution, remove convolution, or alter stride). Weight evolution is also considered in this study; they used backpropagation to allow the trained weights to be inherited by the children whenever possible. Specifically, if a layer has matching shapes, the weights are preserved. In addition, a binary tournament selection [122] is used to perform pairwise comparisons of random individuals, and the worse of the pair is immediately removed from the population. This study is important because it shows that NeuroEvolution can be used for large-scale image classification with a simple algorithm. However, such success requires an enormous amount of computational resources, which has been one of the challenges for later studies.

EvoCNN Sun et al. [2] proposed EvoCNN, a GA-based approach to automatically evolving the architecture and initial weights of a CNN. Because the optimal depth of a CNN is unknown, a variable-length gene encoding strategy is employed in EvoCNN. EvoCNN is composed of three different building blocks, a convolutional layer, a pooling layer, and a full connection layer, which are encoded in parallel into one chromosome for evolution. Therefore, each chromosome is separated into two parts. The first part includes a convolutional layer and a pooling layer, and the other part contains a full connection layer based on the convention of a CNN. Two statistical real numbers, the standard deviation and the mean value of the connection weights, are used to represent the numerous weight parameters, which eases the implementation of the GA. When the optimum mean value and standard deviation are achieved, the weight values are then sampled from the corresponding Gaussian distribution. A slack version of a binary tournament selection is used to select the parent solutions for the crossover operations. The generated offspring conduct mutation by addition, deletion, and modification, with respect to the parent solution. In the fitness evaluation process, every individual is trained by a small number of epochs to speed up the training. Based on their structure and initialized weights, the mean value and standard deviation of the classification error are calculated on the validation set for the fitness of every individual. These elements indicate the performance tendency, which is sufficient information for evaluation.
As a result, this process dramatically speeds up the evaluation by avoiding a thorough training as conducted in [1]. CoDeepNEAT CoDeepNEAT [3] is an extension of the NEAT algorithm, which is dedicated to the evolving network structure and hyperparameters of a DNN. The key idea of CoDeepNEAT is the coevolution of the modules and blueprints. The blueprint chromosome is a graph where each node contains a pointer to a particular module species, and each module chromosome is a graph that represents a small DNN. During a fitness evaluation, the modules and blueprints are combined to create a larger assembled network, which is further decoded into a phenotype (DNN) and then trained for a fixed number of epochs. This coevolution strategy allows efficiently acquiring an iterative modular structure, which is a common feature in many successful DNNs. Each node (layer) in the module chromosome contains a table of real and binary valued hyperparameters that are mutated through a uniform Gaussian distribution and random bit-flipping, respectively. Over the generations, a structure (i.e., a layer) is added to the graph incrementally through a mutation. To ensure that the parent layer's output is the same size as the current layer's input, the adjustment process is conducted through a concatenation or element-wise sum operation. CGP-CNN In the study by Suganuma et al. [4], Cartesian genetic programming (CGP) is used in the evolution of a CNN architecture and connectivity, where the hyperparameters and connections of each layer along with the total number of layers are optimized. The architecture of a CNN is represented as a directed acyclic graph with a two-dimensional grid. The genotype consists of integers with a fixed length, and each gene has information regarding the type and connections of the node. Referring to the modern CNN architectures, highly functional modules such as ConvBlock, ResBlock (consisting of convolution processing, batch normalization, ReLU, and a summation), and pooling are selected as node functions. The 1+λ evolutionary strategy is employed to conduct a search within the architecture solution space, which means that λ children are generated from a single parent at each generation by applying a mutation, and the best performing child compared to the parent is updated as the new parent for the next generation. The node type and connections of each node are randomly changed according to the mutation rate. Genetic CNN Xie and Yuille [5] encoded each network structure into a fixed-length binary string and applied the GA to automatically learn the structure of a deep CNN. The search space is restricted by imposing constraints on the network structures such that a network is composed of a limited number of stages, and each stage is defined as a set of predefined building blocks (convolution and pooling). The Russian roulette process [123] is used for the selection. In each generation, the standard genetic operations, for example, a crossover and mutation, are conducted to generate competitive individuals. HREAS Liu et al. [6] proposed a GA-based structure search method using multi-level hierarchical representations of DNNs, allowing flexible network structures (directed acyclic graphs) at each level of the hierarchy. The key idea of a hierarchical representation is to have several graphs (or motifs) at different levels of the hierarchy, and the lower-level graphs (such as a graph of primitive operations, e.g., convolution, pooling, etc.) 
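Two of the encodings described above can be sketched compactly. The following toy fragment (it is not the actual LEIC or Genetic CNN code; the layer types, mutation names, filter/kernel choices, and bit-ordering convention are all illustrative assumptions) shows (i) a variable-length layer-list genome with insert/remove/alter-stride mutations in the spirit of [1], and (ii) the decoding of a fixed-length binary string into intra-stage node connections in the spirit of [5]. In a full NAS loop, each decoded network would be trained briefly and its validation accuracy used as the fitness.

```python
import copy, random

def random_conv():
    """A single layer gene with a few hyperparameters (values are illustrative)."""
    return {"type": "conv", "filters": random.choice([16, 32, 64]),
            "kernel": random.choice([3, 5]), "stride": random.choice([1, 2])}

def mutate(genome):
    """Apply one architecture mutation: insert/remove a convolution or alter a stride."""
    g = copy.deepcopy(genome)
    op = random.choice(["insert_conv", "remove_conv", "alter_stride"])
    if op == "insert_conv" or not g:            # always insert when the genome is empty
        g.insert(random.randrange(len(g) + 1), random_conv())
    elif op == "remove_conv":
        convs = [i for i, layer in enumerate(g) if layer["type"] == "conv"]
        if convs:
            g.pop(random.choice(convs))
    else:
        random.choice(g)["stride"] = random.choice([1, 2])
    return g

def decode_stage(bits, num_nodes):
    """Decode a fixed-length bit string into the connections of one stage.

    Bit b_(i,j) (i < j) indicates whether node i feeds node j, so a stage with
    num_nodes ordered nodes needs num_nodes * (num_nodes - 1) // 2 bits.
    """
    assert len(bits) == num_nodes * (num_nodes - 1) // 2
    edges, k = [], 0
    for j in range(1, num_nodes):
        for i in range(j):
            if bits[k]:
                edges.append((i, j))
            k += 1
    return edges

# Example: a 4-node stage encoded by 6 bits.
print(decode_stage([1, 0, 1, 0, 1, 1], 4))      # -> [(0, 1), (1, 2), (1, 3), (2, 3)]
```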
are used as building blocks during the construction of the higher-level graphs. During the generation, a hierarchical genotype has mutated a sequence of actions that include selecting the hierarchy level, selecting the target graph at the target level, and modifying the target graph using add, alter, and remove operations. Similar to the approach by Real et al. [1], the evolutionary search algorithm is based on a queue-based tournament selection, which is implemented in an asynchronous distributed manner, consisting of a single controller responsible for performing mutations over the genotype and a set of workers responsible for their evaluations. DENSER Assunccao et al. [7, 8] proposed a two-level representation. The outer level, i.e., GA-level, encodes the general structure of the network and is responsible for representing the sequence of layers. The inner level, i.e., dynamic structured grammatical evolution (DSGE), encodes the parameters associated with the layers. Because there is a one-to-one mapping between the layers and their parameters, the evolution of the networks keeps the genetic material of each layer together. This makes the manipulation of the solution easier. Two crossover operations are developed, acting on both levels of the genotype. A one-point crossover is used to exchange the layers within the same module. A module is a set of layers that belongs to the same GA structure index, such as the features (convolution or pooling) and classification (fully connected). A bit-mask crossover is used to exchange modules between two parents. In a mutation, they used two sets of mutation operations that act at the GA and DSGE levels, respectively. For example, the addition, replication, and removal are at the GA level, and the grammatical mutation and integer/float mutation are at the DSGE level. In addition, Kramer [9] utilized a (1+1)-EA for optimization of the structure and hyperparameters of convolutional highway networks, which are methods for constructing networks with a large number (hundreds and even thousands) of layers. The convolutional highway network is represented as a bit string. Several studies have adopted a PSO, taking advantage of its easy implementation and lower computational cost. PSOAO The authors of EvoCNN [2] proposed a flexible convolutional auto-encoder (CAE). This flexible CAE aims to overcome the constraints of the classical CAE, which has only one convolutional layer and one pooling layer in the encoder. Its architecture optimization is achieved by a PSO consisting of variable-length particles, called PSOAO. A variable-length encoding strategy is applied to the PSOAO algorithm, where each particle contains different numbers of layers with different parameters (such as the filter width/height, stride width/height, convolutional type, number of feature maps, and pooling type). The main flow of the PSOAO algorithm follows the simple PSO algorithm. One challenge resulting from the adoption of variable-length particles is the need to calculate the gbest. To this end, the padding and truncation operations are used to keep the length of the layers unchanged in the global best and the reference (current) particle. In addition, the reconstruction error is taken as the fitness. IPPSO In a study by Wang et al. [11], the PSO is utilized to search the optimal architectures of a CNN for image classification tasks. 
In their approach, a new encoding scheme is proposed, which defines a "network interface" containing the IP address and its corresponding subnet to carry the configurations of a CNN layer. The network IP address can be divided into numerous subsets, each of which can be used to define a specific type of CNN layer (convolution, pooling, or fully connected). This means that a high-dimensional particle vector (i.e., the entire IP address) can be divided into several parts (i.e., CNN layers), which facilitates the convergence of the PSO. To attain variable-length CNN architectures, a new layer called a disabled layer is defined to disable some of the layers in the fixed-length IP address encoding. As a quantitative summary, the classification performances of the discovered models on large-scale image classification benchmarks such as MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and ImageNet are listed in Table 5. Table 5 Classification accuracy of discovered models by evolutionary approaches on different datasets Evolving DNNs for image restoration Image restoration, which recovers a given corrupted image to the original clean image, is an important task of computer vision along with image classification. There are several studies that have addressed this task for networks designed using NeuroEvolution. DPPN Fernando et al. [12] proposed a differentiable pattern producing network (DPPN), which combines the evolution of a network structure and learned weights using a Lamarckian approach for an auto-encoder neural network. With DPPN, every individual is encoded using a connection matrix and a node list. During each generation, the auto-encoder is trained through a gradient descent approach, and the learned weights are inherited by the offspring. Two evolutionary algorithms (a microbial genetic algorithm (mGA) and an asynchronous binary tournament selection) are used to select the parent solutions, where two random individuals and random pairs (whenever more than two workers are working simultaneously) are chosen, respectively. The chosen individuals are then trained and their fitness is evaluated. The mutated copy of the winner overwrites the loser. Three types of mutations are applied to generate the network structure: the addition of a random node, the removal of a random edge, and the addition of a random edge. During a crossover operation, hidden units (nodes) of both parents are combined. The mean squared error is used as a fitness function. E-CAE Suganuma et al. [13] introduced an evolutionary algorithm that searches the optimum architecture of the CAEs for an image restoration. The CAEs in this study are built using only standard ConvNet building blocks (i.e., convolutional layers with an optional downsampling and skip connections) that involve symmetric encoder-decoder structures. Nevertheless, the results show that the CAEs generated by an EA can achieve a competitive performance compared to hand-crafted models for image inpainting and denoising tasks. The representation and evolutionary strategy for the CAEs are the same as those described by Suganuma et al. [4]. At each generation, λ children are generated by applying mutations to the parent and are trained to minimize a standard l2 loss. The fitness of every individual is measured using the peak signal-to-noise ratio (PSNR) between the restored and ground truth images on the validation set. The genotype is updated to maximize the fitness as the generation proceeds. 
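The PSNR fitness used to compare restored and ground-truth images follows directly from its standard definition. A minimal sketch is given below (the 8-bit peak value of 255 and the function name are assumptions; the survey itself does not provide an implementation).

```python
import numpy as np

def psnr(restored, ground_truth, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher is better, so it can be used directly as a fitness."""
    mse = np.mean((restored.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```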
Evolving DNNs for other tasks

In [14] and [15], the GA is employed to evolve the weights of a fixed CNN and escape local optima, moving toward the global optimum during the training. A method presented in [15] shows that this can improve the performance of a pure CNN. To find the best weights for a CNN, the authors used a crossover operation exchanging the layer weights and threshold values between two chromosomes and a mutation operation changing the layer weights and threshold values. In [14], a standard GA is employed to train the weights of a CNN for crack detection on images. However, the authors reported that the results are no better than when training the CNN through backpropagation. In short, Table 6 summarizes the information from the literature reviewed in this section.

Table 6 Brief information of the literature summarized in Section 4

Image segmentation

Image segmentation aims at partitioning a digital image into multiple segments according to the information extracted from the pixels. Many computer vision approaches employ segmentation as a pre-processing step to easily understand the parts that construct the image. Informative segmentation, such as semantic segmentation and instance segmentation, is now active in the field of image segmentation, which is typically powered by high-level deep features with DNNs. On the other hand, most existing segmentation works using EAs and SAs focus on only classical tasks. That is, segmentation is achieved by dividing pixels based on low-level intensity information. The difficulty of an accurate segmentation typically increases as the number of segments increases. In addition, a determination of the optimal number of segments is also a challenging task. In this section, we mainly describe the typical thresholding (in Section 5.1) and clustering (in Section 5.2) approaches used in image partitioning. Other approaches, such as contour-based methods, are described in Section 5.3.

Thresholding approaches

Thresholding is a simple and popular technique used in image segmentation. This approach typically divides a histogram of the pixel intensities. As a simple example, a demonstration of two-level segmentation is shown in Fig. 5. The pixel intensity regarded as the boundary is determined according to the distribution of the histogram. There are two representative thresholding methods: the fuzzy partition and the Otsu method.

Fig. 5 Demonstration of a simple two-level segmentation. Pixels in the original image a are segmented into two levels with an intensity threshold of 200. As a consequence, the letters are extracted in b

Fuzzy partition

The fuzzy partition is a probabilistic representation of the likelihood that each pixel intensity belongs to a class. The probability of belonging to each class is defined by the membership function, and the threshold between two classes is set at the intersection of the membership functions, as shown in Fig. 6.

Fig. 6 Illustration of a fuzzy partition in the case of three-level segmentation. The probability of each pixel intensity belonging to a class is defined by the corresponding membership functions (colored curves). Each threshold between two classes (vertical dotted line) is set at the intersection of two membership functions

EAs and SAs are exploited in tuning the parameters of the membership functions. Tao et al. [16] optimized six integer parameters using the GA to segment a gray-level image into three clusters. Each parameter is encoded as a simple 8-bit string.
Tao et al. [16] optimized six integer parameters using the GA to segment a gray-level image into three clusters. Each parameter is encoded as a simple 8-bit string, and the parameters are tuned such that the fuzzy entropy [124] is maximized. Later, Tao et al. [17] proposed a fuzzy entropy maximization method using ACO and applied it to the two-level segmentation of infrared images. The initial positions of the ants are randomly chosen from all possible solutions, and the ants then search for more attractive solutions in the neighborhood according to their transition probabilities. Puranik et al. [18] presented a modified PSO to select the fuzzy logic rules for color image segmentation. Each color class is described by several fuzzy sets in the HSL color space, specifically ten sets for hue, five for saturation, and four for lightness. The task of the PSO is to produce a smaller number of fuzzy rules while preserving a low error rate. In the velocity update, each dimension of a particle can be updated by learning from the pbest of other particles, including particles from different generations; the algorithm is thus called comprehensive learning PSO (CLPSO).

Otsu method

The Otsu method selects feasible thresholds from gray-level histograms alone, without any prior knowledge. It considers a threshold that maximizes the between-class variance to be a reasonable one. However, an exhaustive search for the optimal threshold is a time-consuming procedure, and the extension to multi-level thresholds incurs additional computational costs. EAs and SAs have therefore attracted attention as feasible search methods. Liang et al. [19] utilized a simple ACO in combination with Otsu thresholding (ACO-Otsu) for image segmentation, which is much faster for 2- to 4-level segmentation than the Otsu method with an exhaustive search. Ghamisi et al. [20] introduced an improved PSO, called fractional-order Darwinian PSO (FODPSO), to tackle hyperspectral image segmentation. The two main changes from a traditional PSO are that several swarms of traditional PSOs are run in parallel to enhance the ability to escape from local optima, and that a fractional calculus term controlling the convergence rate is added. Particles are encoded with the thresholds, and the final optimal thresholds are combined with the results from other methods through a voting procedure. As with the Otsu method, the between-class variance is used as the fitness function. One of the challenging tasks in multi-level thresholding is determining the number of thresholds; automatically providing the optimal number of thresholds makes the methods applicable to more practical situations. To improve the ACO-Otsu approach [19], Liang et al. [21] proposed an ant colony system (ACS) using the Otsu method (ACS-Otsu), introducing a hierarchical search range and a uniformity measure to automatically determine the search ranges and the number of thresholds, respectively. ACS-Otsu is also combined with a local search process for the best ant when informative heuristics cannot be defined. The method was later combined with an expectation-maximization method [22], which initializes the ACS-Otsu method and in turn obtains refined parameters from it.
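As a reference for how these searches score a candidate set of thresholds, the sketch below (an illustrative implementation, not code from the cited papers) computes the Otsu between-class variance for an arbitrary number of thresholds from a 256-bin gray-level histogram; an EA or SA can use this value directly as the fitness to maximize.

```python
import numpy as np

def between_class_variance(hist: np.ndarray, thresholds: list[int]) -> float:
    """Otsu criterion for multi-level thresholding: the weighted variance of the
    class means around the global mean, computed from a 256-bin gray-level histogram."""
    p = hist.astype(np.float64) / hist.sum()          # intensity probabilities
    levels = np.arange(len(p))
    global_mean = np.sum(levels * p)

    bounds = [0] + sorted(thresholds) + [len(p)]
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):       # one class per threshold interval
        w = p[lo:hi].sum()                            # class probability
        if w > 0:
            mean = np.sum(levels[lo:hi] * p[lo:hi]) / w
            variance += w * (mean - global_mean) ** 2
    return variance

# Example: score two candidate threshold sets on a synthetic bimodal histogram.
hist = np.zeros(256)
hist[40:60] = 100    # dark mode
hist[180:200] = 100  # bright mode
print(between_class_variance(hist, [120]) > between_class_variance(hist, [50]))  # True
```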
With the numbers and positions of the thresholds determined through iterative Otsu thresholding, Chander et al. [23] introduced a "momentum" and "social" PSO to refine the thresholds. A particle encoded with such thresholds uses "momentum" and "social" weights in its velocity update, and these weights are variables depending on the fitness of the particle. The "momentum" weight emphasizes the influence of the previous iteration, whereas the "social" weight stresses the relationship with gbest. Together, the two weights help each particle move toward the global optimum.

Clustering approaches

Clustering is also an important technique in image segmentation. Thresholding-based segmentation determines the boundaries between classes, whereas clustering-based segmentation deals with the centroids of the classes. The positions of the cluster centroids are adjusted by minimizing a distance, defined between pixels and centroids, based on certain features. Omran et al. [24] proposed optimizing the cluster centroids with a fixed number of clusters to segment an image using PSO. The particles are evaluated according to three criteria: (1) minimizing the intra-cluster distance between pixels and their cluster means, (2) maximizing the inter-cluster distance between any pair of clusters, and (3) minimizing the quantization error. The fitness function is the weighted sum of these objectives, which avoids the need for a dedicated multi-objective PSO formulation. In addition, the pheromone matrix of ACO is useful for image segmentation. Instead of segmenting an image using image primitives such as intensity and color, Malisia et al. [25] proposed clustering the pheromone matrix of ACO into two clusters using a k-means approach. The ants move to neighboring pixels and drop their pheromones there. After the ACO iterations are completed, the normalized pheromone matrix is combined with the original normalized gray-level image, and k-means clustering classifies the values of the combined dataset as black or white. Determining the optimal number of clusters is an important task in a clustering-based approach, and numerous studies have aimed at developing methods that provide this number automatically. Maulik et al. [26] proposed a pixel classification method using a variable string length genetic algorithm (VGA), where each chromosome consists of cluster centroids encoded by real numbers, and the number of clusters (i.e., the length of the chromosome) is variable. The crossover guarantees that there are more than two clusters owing to the constraints on the range of crossover points. Omran et al. [27] proposed dynamic clustering using PSO (DCPSO). With this method, the position of each particle is a binary representation, and a value of 1 in the binary code means that the corresponding element in the pool of cluster centroids is chosen. The best set of centroids is then refined using the k-means approach. The process is repeated a user-defined number of times with an updated centroid pool, which is the union of the previous results and randomly chosen centroids. Awad et al. [28] proposed a hybrid GA (HGA), which includes a hill-climbing algorithm in the update to quickly find local optima, for satellite image segmentation. The chromosomes are encoded with the features of self-organizing maps of the full image, which avoids determining the number of clusters and allows the result of the evolution to be the final segmentation.
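As an illustration of the kind of fitness such clustering searches optimize, the following sketch (a simplified reading of the three criteria used by Omran et al. [24], not their exact formulation) scores a candidate set of centroids by a weighted sum of the maximum intra-cluster distance, the negative minimum inter-cluster separation, and the quantization error; lower is better, so an EA or SA would minimize it.

```python
import numpy as np

def clustering_fitness(pixels: np.ndarray, centroids: np.ndarray,
                       w1: float = 1.0, w2: float = 1.0, w3: float = 1.0) -> float:
    """Weighted-sum fitness (lower is better) combining intra-cluster compactness,
    inter-cluster separation, and quantization error for a candidate centroid set."""
    # Assign each pixel (feature vector) to its closest centroid.
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

    quantization_error = dists[np.arange(len(pixels)), labels].mean()
    intra = max(dists[labels == k, k].mean() for k in range(len(centroids))
                if np.any(labels == k))
    inter = min(np.linalg.norm(a - b) for i, a in enumerate(centroids)
                for b in centroids[i + 1:])
    return w1 * intra - w2 * inter + w3 * quantization_error

# Example: 1D gray values scored with two candidate centroid sets.
pixels = np.array([[10.], [12.], [15.], [200.], [210.], [205.]])
print(clustering_fitness(pixels, np.array([[12.], [205.]])) <
      clustering_fitness(pixels, np.array([[50.], [150.]])))  # True: the first set fits better
```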
Later, Awad et al. [29] presented a hybrid dynamic GA (HDGA), which has the advantages of both HGA [28] and VGA [26], for solving the segmentation problem. On the one hand, HDGA employs the hill-climbing algorithm in the update, demonstrating the "hybrid" aspect of the approach. On the other hand, a chromosome, encoded with cluster centroids and their pixel values, is set to a fixed length with an ending mark so that the effective length of the chromosome remains flexible, illustrating the "dynamic" aspect. A crossover occurs only at the cluster centroid bits, rather than at the pixel value bits or the bits after an ending mark. Bansal et al. [30] proposed an approach that focuses on the pheromone matrix, as in [25]. Each ant marks (updates the pheromone of) and merges similar traveled pixels until all pixels have been marked. The image is treated as a fully connected graph, i.e., all pixels are connected to each other such that the ants can travel toward unmarked pixels at every step. The number of clusters is automatically calculated based on the CMC distance, which is applied as a similarity measure. Halder et al. [31] proposed a GA-based clustering method for gray-level images. They first apply fuzzy c-means (FCM) and encode its result as an individual; this process is repeated until the population pool is filled, and a simple GA framework is then applied. To investigate the appropriate number of clusters, the GA is run multiple times, increasing the number of clusters up to a predefined maximum, and the results for each number of clusters are evaluated using a validity index. The FCM-GA framework was also applied to tumor detection in the brain [32]. Among the different methods available, a major difference is whether the process of finding the optimal number of clusters is built into the EAs and SAs. In [26, 27, 29], EAs and SAs optimize the number of clusters and their centroids simultaneously. Within this setting, the methods can be further categorized according to whether the length of the candidate solutions is fixed. Although a variable-length representation (e.g., [26]) is more natural, fixed-length representations (e.g., [27, 29]) have the advantage that traditional operators can be directly applied. However, in [28, 30, 31], the EAs and SAs are not involved in optimizing the number of clusters: [28, 30] applied other methods, and [31] adopted a simple approach in which the results of all settings are compared.

Other approaches

Ouadfel et al. [33] proposed a Markov random field (MRF)-based image segmentation using ACO. The ants trace over a solution space whose components are pixel and label pairs and attempt to construct a solution that minimizes the posterior energy function. The search process adopts ACS, an implementation of ACO that incorporates two-step (local and global) pheromone updating. Pignalberi et al. [34] applied the GA to existing methods for parameter tuning. One hindrance to adopting the GA is the fact that some of the parameters are real numbers; thus, they adopted an extended logical binary coding that uses the symbol set {0, 1, dot}, allowing real numbers to be represented with a fixed precision. The fitness function is defined as a weighted sum of four components representing pixel- and cluster-level errors. The studies described below are similar in that they accurately extract the contours of objects. Jiang et al. [35] proposed a cell image segmentation method using a parallel GA. The GA adjusts the parameters of the cell boundary model, which is designed based on prior knowledge about the cell shape. The parallel GA divides the population into multiple sub-populations, which evolve independently in parallel, and elite individuals are randomly migrated between the sub-populations.
In this way, diversity is preserved. Feng and Wang [36] derived a method for searching the solution space of an active contour model using ACO. To reduce the computational cost of the pheromone updates, a finite grade ACO (FGACO) is proposed, which classifies the pheromones into finite grades. Pheromone updates are realized by changing the grades, which requires only addition and subtraction operations and is independent of the objective function value. Ma et al. [37] proposed a texture segmentation and representation scheme based on ACO. They first proposed an ACO-based image processing framework and applied it to image segmentation and texture representation. The difference between the two applications lies in the design of the direction probability vector and the difficulty of movement, which affect the transition probability and the pheromone update, respectively. With the ACO image segmentation algorithm (ACO-ISA), the direction probability vector considers two additional similarity factors, the gray-level similarity between cells and the texture similarity between sub-images, and the difficulty of movement is designed to reduce the pheromone intensity at edge cells. By contrast, the ACO-based texture representation algorithm (ACO-TRA) requires the ants to be sensitive to local changes in the gray levels: the two elements added in ACO-ISA are changed in the direction probability vector to emphasize differences in gray level, and the difficulty of movement is designed to increase the pheromone intensity at edge cells according to changes in texture. In summary, Table 7 shows a brief outline of the studies described in this section.

Table 7 Brief information on the literature referenced in Section 5

Feature detection and selection

Analyzing the content of an image to detect an object or region of interest is highly dependent on the features, which provide rich information about the image. Extracting features from images is fundamental in many computer vision applications, e.g., recognition, detection, matching, and reconstruction. Detecting and selecting high-quality features are challenging tasks owing to the large search space. A variety of methods have been applied to solve the feature detection (Section 6.1) and selection (Section 6.2) problems, among which EA and SA techniques have received significant attention and achieved remarkable success. Feature detection aims to find or locate features (e.g., edges, shapes, and interest points). One of the main contributions of EAs and SAs is to reduce the computation time through a parallel and efficient search; conventional methods typically involve heavy computational processing, such as the linear filtering operations of a Canny edge detector for edge detection or accumulating a histogram over the transform space of a Hough transform for circle detection. In addition, several studies have aimed at improving interest point descriptors, and the operators synthesized by EAs and SAs have shown desirable properties. Several ant-based algorithms have been proposed to solve the problem of edge detection. The method proposed by Nezamabadi-pour et al. [38] is one of the earliest approaches employing an ant algorithm to detect edges by formulating the image as a directed graph. In [39], Baterina and Oppus introduced the concept of a pheromone matrix that reflects the edge information at each pixel based on the routes formed by the ants, where the movement of the ants is guided by the local variation of the pixel intensity values.
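The following is a minimal sketch of this ant-based edge detection idea (an illustrative simplification, not the algorithm of [38] or [39]): ants walk over the pixel grid, preferring neighbors with high local intensity variation, and deposit pheromone there; thresholding the accumulated pheromone matrix then yields a rough edge map. All parameter values are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_variation(img: np.ndarray) -> np.ndarray:
    """Heuristic information: gradient magnitude approximated by neighbor differences."""
    v = np.zeros_like(img, dtype=float)
    v[1:-1, 1:-1] = (np.abs(img[2:, 1:-1] - img[:-2, 1:-1]) +
                     np.abs(img[1:-1, 2:] - img[1:-1, :-2]))
    return v / (v.max() + 1e-9)

def ant_edge_map(img, n_ants=50, steps=200, evaporation=0.05, alpha=1.0, beta=2.0):
    h, w = img.shape
    eta = local_variation(img)                     # heuristic (intensity variation)
    tau = np.full((h, w), 1e-4)                    # pheromone matrix
    ants = rng.integers(0, [h, w], size=(n_ants, 2))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(steps):
        for a in range(n_ants):
            r, c = ants[a]
            neigh = [(r + dr, c + dc) for dr, dc in offsets
                     if 0 <= r + dr < h and 0 <= c + dc < w]
            weights = np.array([(tau[p] ** alpha) * (eta[p] ** beta) + 1e-12 for p in neigh])
            nr, nc = neigh[rng.choice(len(neigh), p=weights / weights.sum())]
            tau[nr, nc] += eta[nr, nc]             # deposit pheromone on the visited pixel
            ants[a] = (nr, nc)
        tau *= (1.0 - evaporation)                 # global evaporation
    return tau > tau.mean()                        # simple threshold on the pheromone matrix

# Example: a synthetic image with a bright square; the edges accumulate the most pheromone.
img = np.zeros((32, 32))
img[8:24, 8:24] = 255
print(ant_edge_map(img).sum(), "pixels marked as edges")
```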
For shape detection, Cuevas et al. [40] introduced a circle detection method based on the DE algorithm. This approach encodes three edge points to represent a candidate circle on the edge image of a scene. Guided by the value of an objective function that evaluates whether a candidate is actually present in the edge image, the set of candidates is evolved using the discrete DE (DDE) algorithm. Dong et al. [41] introduced a combined evolutionary search method for circle detection, called the chaotic hybrid algorithm (CHA). The authors combined the strengths of PSO and the GA by joining the standard velocity and position update rules of PSO with the ideas of selection, crossover, and mutation from the GA. Specifically, in each generation, after the fitness values of the individuals are calculated, a proportion of the bottom-ranked individuals undergoes breeding (selection, crossover, and mutation). The velocities of all individuals are then updated, and new information is acquired from the population to update the positions. During the mutation process, the chosen individual is reinitialized through a chaos initialization method. Interest point detection can also be formulated as an optimization problem, and Trujillo and Olague [42] solved this problem using GP. In their study, GP was used to synthesize low-level image operators that detect interest points in digital images. In a newer version [43], the authors improved the performance of the previously proposed detectors by considering the operators' geometric stability (presenting 15 new operators) and the global separability of the detected points. Following the same philosophy, Perez and Olague presented several methods [44, 45] in which GP is used as a strategy to evolve image descriptors for object detection. For example, in [44], the authors used GP to synthesize mathematical formulas that improve the scale-invariant feature transform (SIFT) image descriptor. They further extended their study in [45] by presenting an optimization-based approach using GP and a hill-climbing algorithm, which creates composite image operators for improving the SIFT descriptor. Feature selection [125] is an important task in machine learning and computer vision that reduces the dimensionality of the data by removing irrelevant and redundant features. In the computer vision community, feature selection targets constructing or choosing important visual content (features, e.g., pixels, edges, color, texture, shape, and other problem-specific items) for the interpretation of the image content. Owing to its importance, the problem of feature selection has been extensively investigated by researchers from both the machine learning and computer vision communities. To the best of our knowledge, almost all major EC paradigms have been applied to feature selection in the field of computer vision; studies related to the GA and the DE algorithm from EAs, and PSO and ACO from SAs, are mainly discussed in this section. The GA is the earliest EC technique applied widely to feature selection problems. In [46], a GA with a binary representation is employed for feature selection to enhance the performance of hyperspectral data classification, and the experimental results show that the number of selected features can be decreased over the generations.
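A minimal sketch of this binary-chromosome feature selection idea follows (illustrative only; it uses scikit-learn, the Iris dataset, and a k-NN classifier as stand-ins rather than the hyperspectral data and classifiers of the cited studies): each bit of a chromosome switches one feature on or off, and the fitness is the validation accuracy obtained with the selected columns.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Validation accuracy with the feature subset encoded by the binary mask."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr[:, mask == 1], y_tr)
    return clf.score(X_va[:, mask == 1], y_va)

# Tiny generational GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[[max(rng.choice(len(pop), 2), key=lambda i: scores[i]) for _ in range(20)]]
    cross = rng.integers(0, 2, size=parents.shape).astype(bool)
    children = np.where(cross, parents, np.roll(parents, 1, axis=0))   # uniform crossover
    flip = rng.random(children.shape) < 0.05                           # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", fitness(best))
```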
Treptow and Zell [47] showed that the GA can be used within the Adaboost framework to find features, resulting in better classifiers for the detection of objects such as faces and soccer balls. The chromosome encodes the parameters of the features using a string of up to 13 integer variables. The results demonstrate that, instead of an exhaustive search over all features, an evolutionary search can speed up the training and effectively find good features in a large feature pool within a reasonable time. The DE algorithm was introduced to feature selection problems in 2008, when Khushaba et al. [48] proposed a method called DE-based feature subset selection (DEFS), which applies DE optimization to the feature selection problem. An improved version was presented in [49], where a new feature distribution factor is introduced to aid in the replacement of duplicated features by utilizing a roulette wheel weighting scheme. Experiments show that the proposed DEFS algorithm outperforms GA/PSO-based algorithms and other traditional feature selection algorithms on brain-computer-interface tasks. Gosh et al. [50] applied a self-adaptive DE algorithm for feature selection in hyperspectral images. In their study, the self-adaptive DE algorithm outperforms the GA [46], ACO, the DE algorithm, and a combined ACO and DE-based method in terms of classification accuracy and the Kappa coefficient. Ghamisi et al. [51] exploited FODPSO to solve feature selection problems for hyperspectral data. Each particle uses a binary representation for the selection problem, and the authors used the overall accuracy of a support vector machine (SVM) classifier on the validation set as the fitness function to evaluate the goodness of the selected features. Because the SVM is capable of handling the curse of dimensionality, the proposed approach can handle extremely high-dimensional data even with a limited number of training samples. In the following year, Ghamisi et al. [52] proposed a PSO-based CNN method for the classification of hyperspectral data. To tackle the imbalance between the high spectral dimensionality and the limited number of training samples available for a CNN, a FODPSO-based feature selection method is employed to find the most informative bands in the hyperspectral data. Al-Ani [53] applied the ACO algorithm to feature selection and claimed that it can perform better than the GA in a texture classification scenario. The algorithm utilizes both the local importance of the features and the overall performance of the feature subsets to search the feature space for optimal solutions. Chen et al. [54] proposed an efficient ACO-based feature selection algorithm for image classification by introducing a new representation scheme (a directed graph) to reduce the size of the search space. Each node/feature is linked by two distinct edges indicating whether the node/feature is selected. This representation scheme significantly reduces the total number of edges that the artificial ants need to traverse. In summary, Table 8 shows a brief overview of the studies discussed in this section.

Table 8 Brief information on the literature summarized in Section 6

Image matching

The purpose of image matching is to superimpose the common parts of multiple images. Matching is typically conducted by transforming the reference image into the coordinate system of the target image. Therefore, image matching is essentially an optimization problem whose goal is to find the transformation parameters that maximize the similarity. Template matching and image registration, which are typical applications of image matching, are described in Sections 7.1 and 7.2, respectively.
In addition, this section deals with the jigsaw-puzzle-like problems of aligning given parts to restore an original image (described in Section 7.3), and methods for matching the features extracted from an image are described in Section 7.4.

The purpose of template matching is to find the region that is most comparable to a template in the target image; an illustration is shown in Fig. 7. There are two main categories of methods used to search the target image: feature- and pixel-based approaches. In the former case, the transformation matrix between the template and the target image is estimated from feature descriptors such as SIFT. However, situations occasionally occur in which it is difficult to detect the key points, e.g., blurry and texture-less images [56]. The latter category of methods, such as those based on the sum of absolute differences (SAD), is robust to such situations, although an efficient method to search the target image is required. In particular, as the degrees of freedom (DoF) of the template transformation increase, an exhaustive search over the target image becomes increasingly impractical.

Fig. 7 Illustration of template matching. The similarity between the template and candidate regions (black rectangles) is computed over the target image. The candidate region with the highest similarity is the matching result (red rectangle)

EAs and SAs are effective choices for exploring an extensive and complex solution space such as the one above. The three studies described below address matching at different DoFs (specifically, 5, 6, and 8, respectively), and they commonly incorporate strategies for more efficient exploration into the GA. Zhang and Akashi [55] proposed a simplified GA for template matching. In this case, the GA is simplified by replacing crossover and mutation with global and local sampling, where global sampling controls the high-order bits of the chromosome and local sampling controls the low-order bits. Although the simplified GA is more efficient and accurate in simulated template matching, its operation in real-world cases and in cases with large variations, and its ability to find the global optimum without a mutation, remain open challenges. Zhang and Akashi [56] introduced level-wise adaptive sampling (LAS) based on the GA to solve affine template matching over a Galois field. As the number of computations increases, the Galois field representation can narrow the search range in the target image and finally locate the matching area. To reduce the number of computations, the researchers presented LAS under the GA framework, which preserves genetic variety by selecting individuals uniformly from each fitness level, rules out inferior individuals using learning thresholds, and reduces the computational complexity of evaluating each individual by inspecting only a small fraction of the pixels. The method has turned out to be robust and efficient, but questions remain regarding how effective it is in cases with large variations, and no theory proves that the method will not converge to a local optimum without a mutation. After that, Zhang and Akashi [57] extended [56] to projective template matching using a binary finite field that can deal with a large DoF. Although LAS under the GA framework saves considerable computational costs while retaining accuracy, the algorithm is still far from achieving real-time capability for a large DoF. In addition, it may fail when the template image has large variations.
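To ground the pixel-based similarity evaluation these GA variants rely on, here is a minimal sketch (not the authors' code) that scores a candidate placement of a template in the target image by the sum of absolute differences, optionally over a random fraction of the template pixels, echoing the idea of inspecting only a subset of pixels to cut the per-individual cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def sad_fitness(target: np.ndarray, template: np.ndarray, top: int, left: int,
                pixel_fraction: float = 1.0) -> float:
    """Negative sum of absolute differences between the template and the candidate
    region at (top, left); higher is better. Optionally inspect only a random
    subset of template pixels to reduce the cost of evaluating one individual."""
    h, w = template.shape
    region = target[top:top + h, left:left + w].astype(np.float64)
    diff = np.abs(region - template.astype(np.float64))
    if pixel_fraction < 1.0:
        mask = rng.random(diff.shape) < pixel_fraction
        return -diff[mask].sum() / max(pixel_fraction, 1e-9)   # rescale to the full-template range
    return -diff.sum()

# Example: the best-scoring placement is the true location of the extracted template.
target = rng.integers(0, 255, size=(64, 64)).astype(np.float64)
template = target[20:30, 40:50].copy()
print(sad_fitness(target, template, 20, 40) >= sad_fitness(target, template, 5, 5))  # True
```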
Considering a more practical situation, matching is also needed when there are multiple detection targets in the target image. Sato and Akashi [58] proposed a method for distributing the population in deterministic crowding (DC), which is derived from the GA, to deal with multi-object template matching. The crossover in DC involves an interaction between parents and children (which can be mutated), possibly leading to multiple local optima; this is handled by a procedure that loops between selecting the best-fit individual and performing a local search. The method has been successful in multi-object template matching against a simple background, but its accuracy decreases as the background complexity increases. In addition, matching is not limited to objects that look exactly alike (e.g., those produced in factories), and several studies use template models based on prior knowledge of the target. Lee et al. [59] proposed the application of GA-based template matching to lung nodule detection in computed tomography (CT) images. They employ GA-based template matching to detect the approximate locations of nodules and conventional template matching to detect the nodules accurately. The template images used in lung nodule detection are spherical/circular nodular models. Ugolotti et al. [60] compared PSO with the DE algorithm for the object detection problem, and validated the methods in two real-world computer vision problems: hippocampus localization in histological images and human pose estimation in image sequences. Their method requires that the object follow certain models, which are defined to transform the problem into an optimization problem that can be searched using PSO or the DE algorithm individually. In addition, to accelerate the method, they take advantage of a GPU for parallel computations.

The task of image registration is to convert multiple images into a unified coordinate system that allows the common parts to overlap. One of the most popular applications is the overlaying of multiple images taken from different viewpoints or at different times by remote sensors, as shown in Fig. 8. De Falco et al. [61] transformed satellite image registration into the problem of optimizing an affine transformation according to the mutual information between images, and solved the problem using the DE algorithm. Ma et al. [62] proposed an orthogonal learning DE (OLDE), which combines the orthogonal learning (OL) strategy with the DE algorithm, for remote sensing image registration. During the crossover step, multiple candidate vectors are generated from the parent vectors based on the OL strategy, and the vectors with higher fitness are selected as offspring. The incorporation of the OL strategy enhances the ability to select promising search directions toward the global optimum. The two methods above were compared in the experiments described in [62] using the Ottawa and Yellow River datasets, and the results demonstrated that OLDE outperforms a simple DE method.

Fig. 8 Illustration of image registration. Two given images are aligned such that the common parts overlap
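Since both [61] and [62] drive the search with mutual information, the following sketch (a generic illustration, not their implementation) computes the mutual information between a reference image and a candidate image from their joint gray-level histogram; a DE individual encoding affine parameters would be scored by this value after warping the moving image.

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information between two images of the same size, estimated from the
    joint histogram of their gray levels; higher means better alignment."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Example: an image is far more informative about itself than about a scrambled copy.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64)).astype(float)
scrambled = rng.permutation(img.ravel()).reshape(img.shape)
print(mutual_information(img, img) > mutual_information(img, scrambled))  # True
```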
In this subsection, we also cover the registration between 2D images and 3D objects. Wachowiak et al. [63] applied a modified PSO to single-slice 3D-to-3D biomedical image registration. The authors assumed that the users of their proposed system are skilled clinical experts who can provide an accurate initial orientation, which is an important advantage given the complexity of medical image registration. Therefore, they added a term based on the initial orientation to the velocity update, which is expected to prevent falling into local optima. This modified velocity update is incorporated into three modified PSO approaches (e.g., a hybrid PSO with crossover) selected through preliminary experiments. Liebelt and Schertler [64] addressed the registration of 3D models to images. The six parameters of the 3D model are optimized using a simple PSO, and to accelerate the algorithm, the authors run the inherently parallel optimization in different threads of a GPU. The similarity measure uses mutual information, a typical similarity metric in this field that represents the relative entropy of two images, with improved robustness owing to its fusion with edge-based measurements.

Jigsaw-puzzle-like problems

Jigsaw puzzles are popular all around the world. The player must reconstruct the original image from the given non-overlapping pieces, as shown in Fig. 9. Automatic jigsaw puzzle solvers can solve puzzles with an extremely large number of pieces, and such techniques can also be applied to reconstruction tasks such as restoring archeological artifacts and torn documents. Sholomon et al. [65] proposed a GA-based jigsaw puzzle solver for puzzles of known size and piece orientation. The pairwise compatibility of adjacent pieces is evaluated based on color similarity along their abutting edges. A chromosome is represented by a matrix of the same size as the puzzle, and each element is assigned a piece number. This simple encoding causes a serious problem: offspring yielded by a traditional crossover may contain duplicate and/or missing pieces. Thus, the authors proposed a novel crossover operator based on a kernel-growing technique, which starts with a single piece and gradually joins other pieces at the available boundaries. The selection and assignment of the pieces to be joined are conducted using a three-phase process over a bank of available pieces, which ensures that every piece appears only once. The development of an applicable crossover operator enabled the introduction of the GA into the jigsaw puzzle solver field and brought about significant improvements in solving power. Specifically, the proposed method achieves an accurate reconstruction of 22,834 pieces, which is more than twice the size handled by existing results. After that, Sholomon et al. [66] experimentally confirmed the effectiveness of each phase in the crossover, as well as the robustness of the objective function, and also accelerated the crossover of [65] using multiple threads.
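As a concrete reading of the pairwise compatibility measure these solvers build on, the sketch below (illustrative, not the exact dissimilarity used in [65]) scores how well piece B abuts the right edge of piece A by the summed color difference along the shared boundary; lower values indicate more plausible neighbors.

```python
import numpy as np

def right_left_dissimilarity(piece_a: np.ndarray, piece_b: np.ndarray) -> float:
    """Dissimilarity when piece_b is placed immediately to the right of piece_a:
    the sum of squared RGB differences between A's rightmost and B's leftmost columns."""
    edge_a = piece_a[:, -1, :].astype(np.float64)   # right edge of A (H x 3)
    edge_b = piece_b[:, 0, :].astype(np.float64)    # left edge of B  (H x 3)
    return float(np.sum((edge_a - edge_b) ** 2))

# Example: a smooth horizontal gradient split into two halves; the true neighbor
# scores far lower than an unrelated random piece.
rng = np.random.default_rng(0)
gradient = np.repeat(np.linspace(0, 255, 32)[None, :, None], 16, axis=0).repeat(3, axis=2)
left, right = gradient[:, :16], gradient[:, 16:]
random_piece = rng.integers(0, 255, size=(16, 16, 3)).astype(float)
print(right_left_dissimilarity(left, right) < right_left_dissimilarity(left, random_piece))  # True
```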
In addition to type 1 puzzles (puzzles whose piece locations are unknown), Sholomon et al. [67] extended the GA-based solver in [65] to solve type 2 puzzles (puzzles whose piece locations and orientations are unknown) and type 4 puzzles (two-sided puzzles whose piece locations, orientations, and faces are unknown). To consider the orientation, the authors adopt a graph representation in which each node corresponds to a piece and each edge corresponds to a joint edge of two adjacent pieces. The crossover operator is similar to that in [65], i.e., it is applied based on a kernel-growing technique. In addition, for type 4 puzzles, a constraint is added to maintain geometrical validity: the flip-side edge of an already joined edge is not selected. This method outperforms the existing method on type 2 puzzles and was the first to successfully solve type 4 puzzles. The experiments in these studies were conducted in a common format; Table 9 summarizes the results with the largest number of pieces under the neighbor comparison, which measures the fraction of correct neighbors.

Fig. 9 Illustration of the jigsaw puzzle problem. The given non-overlapping pieces are correctly rearranged to construct the original image

Table 9 A brief summary of the experimental results of the jigsaw puzzle solvers. The results for each image are the average of multiple runs using different random seeds, and the average best is the largest score among them

Wall painting reconstruction is similar to, but more complex than, a jigsaw puzzle: it is not limited to rectangular pieces, and fragments can be eroded or missing. Sizikova and Funkhouser [114] proposed solving wall painting reconstruction using a modified GA, modifying the selection into two steps (fragment- and binary-based selection) and the crossover into two categories (crossover by fragmentation and crossover by matching). The GA framework starts with one or two fragments, grows while optimizing the orientation and translation of the merges, and ends after a set number of iterations or the completion of all fragments.

Feature matching

Graph representations are useful for representing local features in an image along with their spatial relationships (i.e., nodes and edges represent local features and relationships, respectively) [126, 127]; hence, graph matching, which aims at finding the similarity between graphs, is used as one of the techniques for image matching. Myers and Hancock [68] proposed a multimodal GA for graph matching. They avoid the extra computations caused by selection without replacement through a biased selection scheme, thereby reducing the computational cost. Similarly, points are also important features in an image. Zhang et al. [69] presented a GA-based, incomplete (not one-to-one mapping), unlabeled (using no other information, e.g., color) point pattern matching method. They select several sets of triple points in two images, maximizing the partial bidirectional Hausdorff distance between the triple-point sets in the different images. The GA-based point pattern matching is more efficient than traditional optimization methods such as geometric hashing [128]. In summary, Table 10 shows a brief overview of the studies analyzed in this section.

Table 10 Brief information on the literature referenced in Section 7

The purpose of visual tracking is to find a target object in each frame of a video sequence. Visual tracking can be regarded as a sequential detection problem when considering the variation in the object state, and it is essentially equivalent to a dynamic optimization problem. Most tracking algorithms can be classified as deterministic or stochastic methods. Deterministic methods, such as mean shift, are computationally efficient but suffer from local optima. By contrast, stochastic methods, such as condensation and particle filters, can provide robust tracking but incur high computational costs. In addition, these methods may suffer degradation in long-term tracking. EAs and SAs are a rational choice for alleviating these weaknesses.
For instance, [72] showed that the iteration process of PSO is helpful for restoring particles sampled from inappropriate transition models to the appropriate (i.e., higher observation likelihood) region. We categorize the visual tracking problem into single-object tracking and multiple-object tracking in Sections 8.1 and 8.2, respectively.

Single object tracking

Several studies have addressed block matching, where the entire image is divided into non-overlapping blocks and the difference in position over successive frames is computed for each block. This difference in position is called the motion vector, and the motion vectors of blocks containing the object of interest are useful for tracking. Bhaskar et al. [70] proposed a motion estimation algorithm with variable-size block matching using the GA. Block-based motion estimation is accomplished by finding, in the next frame, the same region for each block that represents a segmented region of the image. Variable-size block division is achieved through a quad-tree decomposition, which recursively divides the image into four equally sized regions. The GA is executed on all blocks and moves the centroid of each block to match the block in the successive frame. Of the genetic operators, only mutation is applied: individuals whose fitness is below the mean are the targets of mutation, and the others are carried into the next generation directly. Although the experimental results show better performance than other methods, the combination of recursive division and the GA is time consuming. Cuevas et al. [71] attempted to speed up fixed-size block matching using the DE algorithm by reducing the number of similarity evaluations. During a DE search, all fitness values are stored in a history array, and most individuals are evaluated through a nearest-neighbor-interpolation estimate based on the stored fitness of nearby individuals. Compared with exhaustively evaluating every position in the search area, this fitness estimation strategy substantially reduces the number of evaluations while maintaining accuracy. In the following tracking methods, the target is represented by a rectangle. The candidates have rectangular parameters, such as location, rotation angle, and scaling, and search for the most similar region based on an appearance model. Because the rectangular parameters are real values, PSO and the DE algorithm are preferred for the optimization. Zhang et al. [72] proposed a sequential PSO that incorporates sequential information into the traditional PSO. The attractive points of the sequential PSO are the introduction of a re-diversification mechanism using previous results and an adaptive parameter tuning. In addition, a spatially constrained Gaussian mixture model (GMM) of the appearance of the tracked object is used to evaluate each particle. From a Bayesian inference perspective, sequential PSO is a combination of multi-layer importance sampling and a particle filter, which can avoid the sample impoverishment problem of a particle filter. Cheng et al. [73] proposed a visual tracking technique that utilizes a fragment-based appearance model, which provides robustness to target occlusion. The target state is divided into rectangular fragments, and a saliency based on the SIFT feature is assigned to each fragment. Particles of the PSO carry affine transform parameters and are evaluated using the saliencies and an HSV color histogram.
In addition, the initialization of the particles in each frame uses a Gaussian distribution constructed from the previous results to maintain diversity. Lin and Zhu [74] proposed an improved fast DE (IF-DE) algorithm to alleviate evolution stagnation. The IF-DE algorithm focuses on the inferior parents and trial individuals that were discarded in the previous generation. These individuals are reintroduced as a difference vector during the mutation stage, which serves to extend the search space. Three scaling parameters of the mutation operation are changed dynamically based on the best individual or diversity information. The evaluation of each individual for the tracking process utilizes a GMM, similar to that described in [72]. Nenavath and Jatoth [75] introduced a hybrid SCA-DE, which is a combination of the sine-cosine algorithm (SCA) and the DE algorithm. The flow of the hybrid SCA-DE is simple: after every individual is updated by the SCA, DE operations including mutation, crossover, and survivor selection are applied. Whereas the SCA conducts a global exploration with a large step size, the DE algorithm is in charge of the local search that encourages the population to reach the best solution, which enables the hybrid SCA-DE algorithm to balance global and local searches. The tracking method using the hybrid SCA-DE approach optimizes a state vector consisting of the location, speed, and scaling, with a kernel-based spatial color histogram as the observation model. In the four studies above, the initialization of the candidates in each frame exploits the previous result. To achieve diversity, the authors adopt a Gaussian distribution [72–74] or a random walk (RW) model [75]. This commonality is unique to dynamic optimization problems. Most tracking performance results are given as graphs plotting the accuracy per frame; in particular, comparisons with [72] can be found in [73] and [74].

Multiple object tracking

As an extension of single-object tracking, multiple-object tracking requires managing multiple objects, which is difficult particularly owing to the occlusions that occur between objects in close proximity. An occlusion may hinder the detection of the foreground regions and partially or fully hide the objects; therefore, it is essential to overcome the occlusion problem to achieve stable tracking. Huang and Essa [76] proposed a two-level approach, consisting of a region-level association process and an object-level localization process, for tracking with a stationary camera. In the region-level tracking process, associations of the foreground regions from successive frames are characterized by a binary correspondence matrix. Rows and columns correspond to existing and new foreground regions, respectively, and a non-zero element represents an association between the corresponding regions. Association events can be analyzed from the matrix; e.g., if a column has two or more non-zero entries, the corresponding regions will be merged into one region in the current frame. Constructing reliable matrices is cast as an optimization problem solved by the GA with the likelihood of the associations as the objective, and the initialization, crossover, and mutation operations are customized based on certain heuristics. During the object-level tracking process, objects are labeled with the correct regions and localized based on the association events.
The object model used for localization adopts an appearance model and a spatial distribution, as well as an occlusion relationship, to deal with splitting events. Zhang et al. [77] proposed a species-based PSO, which divides a swarm into several species according to the objects. To overcome occlusions between different objects, competition and repulsion models for the species are introduced. A species with a higher competitive ability indicates that the corresponding object is more likely to occlude other objects, and its repulsive force, defined as the product of its competitive ability and velocity, is stronger. A species in which an occlusion has occurred repels its opponent by inserting a repulsion force term into the opponent's velocity update. Moreover, the authors presented an annealed Gaussian-based PSO (AGPSO) approach, which enables a parameter reduction and fast convergence. AGPSO introduces zero-mean Gaussian perturbation noise into the velocity update procedure, and the elements of its covariance matrix decrease exponentially as the iterations progress, which enables a fast convergence rate. From these studies, we can see differences in the roles served by the EAs and SAs. In [76], the GA concentrates on reasoning about the complex interrelationships between objects using a binary correspondence matrix. By contrast, in [77], PSO is in charge of the entire tracking framework, and occlusion resolution is well integrated into the general procedure. In short, Table 11 gives a brief overview of the studies summarized in this section.

Table 11 Brief information on the literature summarized in Section 8

Face recognition is an important technology in security systems and human-computer interaction. The goal of face recognition is to identify the individual in a database who matches the input face image. This classification is conducted using a model trained with feature sets extracted from the face images in the database. An illustration of the face recognition procedure is shown in Fig. 10.

Fig. 10 Illustration of the face recognition procedure. A model for each individual is created by extracting features from the training images and stored in the database. The test image is identified by finding the model that is most similar to it

In many cases, EAs and SAs are exploited to select or weight the features, as described in Section 9.1. Moreover, fusion methods for features extracted from visible and infrared (IR) images are introduced in Section 9.2. In addition, other methods, including the localization, detection, and tracking of faces and eyes, are described in Section 9.3; these are relevant to the pre-processing of an automatic face recognition framework.

Feature selection/weighting

The quality of the feature set extracted from a face image has a significant impact on the performance of face recognition. However, the feature set typically includes noisy, irrelevant, or redundant data. The task of the EAs and SAs is to reduce a feature set of size n to a subset of size m (m<n) to improve the face recognition accuracy. Owing to the large number of possible feature subsets, many existing feature selection methods employ a heuristic or random search strategy [82], including sequential search, tabu search, and greedy algorithms. By contrast, population-based searches using EAs and SAs contribute significantly to finding high-quality subsets in an extensive search space.
Applying the GA to feature selection is a natural idea because the GA uses bit strings to represent chromosomes; a binary bit can indicate whether the corresponding element is selected. Liu and Wechsler [78] proposed an evolutionary pursuit to find an optimal basis onto which faces can be projected. Specifically, the GA is applied to search the rotation angles of pairwise axes and the combination of basis vectors within a given whitened principal component analysis (PCA) space. The evaluation of every individual is based on the recognition rate and a scatter index, and the GA can improve the results by balancing classification accuracy and generalization. Zheng et al. [79] proposed the GA-Fisher algorithm, which combines GA-PCA for dimension reduction with a linear discriminant analysis (LDA). The GA-PCA approach searches for the optimal principal components based on the PCA dimension reduction theorem (PCA-DRT), which claims that some of the small principal components may carry useful information. The crossover and mutation operators are modified to retain the number of selected principal components, and each chromosome is evaluated using a fitness function consisting of three terms based on the PCA-DRT. The GA-Fisher approach integrates GA-PCA with a whitening transformation into the LDA. Vignolo et al. [80] applied the GA with multiple objective functions for feature selection. An aggregative fitness function integrates two types of evaluation functions with relevant parameters. In addition, a multi-objective GA is used to search for the optimal Pareto front over three types of objective functions. The introduction of multiple objective functions provides a more flexible classification (e.g., considering not only the accuracy but also class overlap), and the GA is an effective tool to accomplish this. Regarding methods using SAs, Kanan et al. [81] presented a feature selection method based on ACO. In this case, the ants travel on a complete graph whose nodes represent the features. Every time an ant chooses a node through a probabilistic transition rule, the current subset is evaluated based on the mean square error (MSE) of the classifier. If the MSE cannot be decreased within several steps, the exploration terminates and the subset is output as a candidate. Recognition is achieved using the nearest neighbor classifier, and the obtained MSE is further used to update the pheromone in the ACS or rank-based ant system (ASrank). Ramadan and Abdel-Kader [82] utilized a binary PSO whose particles are represented using a binary string similar to a chromosome in the GA. A binary bit indicates whether the corresponding feature is selected, and the velocity is used to define a probability distribution for updating the particle position. Each particle is evaluated by a class separation term that includes the scatter index, and the search targets the optimal subset of discrete cosine transform (DCT) or discrete wavelet transform (DWT) features. Comparison experiments with the GA showed that the binary PSO can acquire smaller feature vectors at the expense of training time. Even with the common goal of face recognition, there are many choices of features to target. EAs and SAs are useful because they only need to encode the target of the selection directly on the gene or graph. In addition, the GA and ACO, which use a binary (two-option) representation, are preferred for selection tasks.
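The binary PSO update mentioned above can be sketched as follows (an illustrative, generic implementation rather than the exact scheme of [82]): velocities are passed through a sigmoid to give the probability that each bit is set, so the particle position remains a binary selection mask.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One binary PSO iteration: standard velocity update, then each bit is
    resampled with probability sigmoid(velocity), keeping positions binary."""
    r1, r2 = rng.random(positions.shape), rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (pbest - positions)
                  + c2 * r2 * (gbest - positions))
    prob = 1.0 / (1.0 + np.exp(-velocities))          # sigmoid transfer function
    positions = (rng.random(positions.shape) < prob).astype(int)
    return positions, velocities

# Example: 5 particles selecting among 8 features.
pos = rng.integers(0, 2, size=(5, 8))
vel = np.zeros((5, 8))
pbest, gbest = pos.copy(), pos[0].copy()
pos, vel = binary_pso_step(pos, vel, pbest, gbest)
print(pos)
```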
Feature weighting

Feature weighting assigns a real value to each feature element. It allows the contribution of each feature element to be determined in a more detailed way, which can be expected to improve the quality of face recognition. Senaratne et al. [83] proposed EBGMPSO, an extension of the elastic bunch graph matching (EBGM) [129] algorithm that includes PSO during various phases. In the face graph matching procedure, the face graph, which consists of an interior-node grid and a head-boundary-node grid, is represented by particles. The location and size of each grid can vary (each particle has six parameters), and the PSO searches for the optimal parameters that maximize the graph similarity. During the recognition phase, recognition-phase landmark weights (RPLWs) optimized by the PSO are used to compute a similarity score. In addition, Gabor wavelet features hybridized with eigenface features are adopted, and PSO plays a role in tuning the hybridization weights. The decision to adopt PSO is based on the few restrictions it places on the objective function (e.g., it can be non-linear and discontinuous) and on its fast convergence. Bhatt et al. [84] introduced a multi-objective evolutionary granular algorithm to recognize surgically altered face images. Each face image is divided into non-disjoint face granules at three levels of granularity. Because the granules include diverse features, each granule should be assigned a suitable feature extractor and a weight. The GA achieves these two objectives simultaneously using two populations. One population consists of bit-string chromosomes for selecting the feature extractors (values of 0 and 1 correspond to SIFT and extended uniform circular local binary patterns (EUCLBP) [130], respectively). The other population consists of chromosomes with real-valued genes representing the weights. The two populations evolve independently and are combined when computing the fitness function. For optimizing two objectives with different parameter types simultaneously, the GA is a reasonable choice owing to its encoding flexibility. For a brief performance comparison, the recognition rates of the studies described in this subsection are listed in Table 12.

Table 12 Recognition rates of the studies summarized in Section 9.1 on different databases

Fusion of visible and IR features

Variations in illumination are a significant problem in face recognition from visible images. IR images can be exploited to overcome this problem; however, IR images also have several drawbacks, such as sensitivity to the temperature of the surrounding environment and occlusions by eyeglasses. Because the advantages and disadvantages of the two modalities are complementary, combining visible and IR images is considered to allow more accurate recognition performance to be achieved. Methods of fusing visible and IR images using the GA have been proposed in several studies. Typically, each gene of the GA represents the weight of the corresponding feature component, and the optimal fusion is sought under the framework of the GA. Bebis et al. [85] used a bit string to represent a chromosome and select the feature components from either the visible or the IR image. They used the GA for two different fusion schemes, i.e., pixel- and feature-based fusion. The first assigns wavelet coefficients to each gene to obtain a fused image, and the second assigns eigenfeatures to each gene to obtain a fused eigenspace. The gene length corresponds to the number of wavelet coefficients or eigenfeatures, and the value of each gene determines whether the corresponding coefficient is selected from the IR or the visible spectrum.
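A minimal sketch of this bit-string fusion idea follows (illustrative only; the random feature vectors and the nearest-neighbor fitness are stand-ins, not the pipeline of [85]): each gene picks the corresponding coefficient from either the visible or the IR feature vector, and a chromosome is scored by the recognition accuracy obtained with the fused vectors.

```python
import numpy as np

def fuse(visible: np.ndarray, infrared: np.ndarray, chromosome: np.ndarray) -> np.ndarray:
    """Per-coefficient fusion: gene value 0 takes the visible coefficient,
    gene value 1 takes the IR coefficient (same length as the feature vectors)."""
    return np.where(chromosome == 1, infrared, visible)

def fitness(chromosome, vis_gallery, ir_gallery, labels, vis_probe, ir_probe, probe_labels):
    """Recognition accuracy of a 1-nearest-neighbor matcher on the fused features."""
    gallery = np.array([fuse(v, i, chromosome) for v, i in zip(vis_gallery, ir_gallery)])
    probes = np.array([fuse(v, i, chromosome) for v, i in zip(vis_probe, ir_probe)])
    dists = np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=2)
    predicted = labels[dists.argmin(axis=1)]
    return float(np.mean(predicted == probe_labels))

# Example with random stand-in features: 4 identities, 8-dimensional feature vectors.
rng = np.random.default_rng(0)
labels = np.arange(4)
vis_gallery, ir_gallery = rng.random((4, 8)), rng.random((4, 8))
vis_probe = vis_gallery + 0.01 * rng.random((4, 8))   # probes close to the gallery
ir_probe = ir_gallery + 0.01 * rng.random((4, 8))
chromosome = rng.integers(0, 2, size=8)
print(fitness(chromosome, vis_gallery, ir_gallery, labels, vis_probe, ir_probe, labels))
```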
Desa and Hati [86] improved the feature-based fusion scheme of [85] using kernel-based face subspaces and real-number chromosomes. A chromosome represents a weight vector over the extracted features. The weights given to the corresponding visible and IR features by a gene are complementary, i.e., if an IR feature is weighted as α, the corresponding visible feature is weighted as (1−α), and the fused features consist of the sum of the weighted visible and IR features. Similarly, Hermosilla et al. [87] used real-number chromosomes to represent weights for the descriptors. However, they assign the weights independently to the visible and IR descriptors, and thus the genetic coding consists of the two corresponding types of weights. The face images are divided into small regions, and the histogram of each region is obtained by the descriptor. The similarities between the probe image and the gallery image are then evaluated based on the sum of the histogram intersections weighted by the corresponding gene values over all regions. Despite requiring genes for both visible and IR features, this region-based method allows the gene length to be reduced more than in the approaches of [85] and [86]. The three methods detailed in this subsection can be seen as natural extensions, over time, of the complexity of the gene representation, and an experiment using the Equinox database in [87] confirms that these extensions directly contribute to an improved recognition rate.

In this subsection, we discuss detection and tracking associated with the face, eyes, and facial expressions, which are important technologies for applications such as human-machine interfaces and surveillance systems. Wong et al. [88] proposed a face detection and facial feature extraction method for gray-level images using the GA. The key idea of this approach is that the location of the face can be inferred from the locations of both eyes, based on the fact that the size of a human face is proportional to the distance between the eyes. Therefore, the task of the GA is to select the appropriate pair from the detected eye candidates. This limitation of the search space, together with the search capability of the GA, allows the high computational cost, which is a challenge for existing methods, to be overcome. A chromosome contains two indexes into a buffer that stores the candidate eye regions, and eigenfaces are used to evaluate the fitness. Akashi et al. [89] proposed a size- and orientation-invariant eye tracking method for real-time video processing through template matching with the GA. The GA optimizes the parameters of the geometric transformations of the template, consisting of the coordinates, scaling, and rotation. To achieve real-time tracking, they proposed the use of evolutionary video processing: to utilize information between video frames, the genetic information of the previous frame is inherited by the current frame (i.e., the evolution continues throughout the entire video sequence), which enables exploration with a small population size. Perez et al. [90] derived a template generation method using PSO for face localization. The positions of the vector components of the PSO correspond to pixels of the template, which represent angles in directional images.
Only the vector components within the allowed range are used as the template, and the evaluation of each particle is defined as a line integral of the template over the face directional image. This method and the iris anthropometry template proposed by [131] are applied to a video sequence to localize the face and iris. The template generated through PSO improves the localization accuracy compared with an anthropometric template and reduces the computational time because of the decreased number of pixels in the template. For an application operating in a real environment, the front of the target face is not always clearly visible. A head in 3D space may take an arbitrary pose, which has a significant adverse effect on a face recognition system. Several studies have thus addressed the treatment of 3D face models. Mpiperis et al. [91] proposed a 3D facial expression recognition approach that classifies expressions based on rules discovered by the PSO or Ant-Miner (a variant of ACO) framework. ACO explores a graph whose nodes represent the attributes, whereas PSO controls particles having two parameters per attribute (a lower and an upper bound). The difference between the two swarm intelligence approaches lies in the representation of the attributes: PSO can handle continuous attributes, whereas ACO requires a discretization of the attributes so that each can be assigned to a node. A facial surface is represented as a deformation of a generic 3D mesh, and its facial expression is classified based on the discovered rules. Chandar and Savithri [92] introduced an algorithm for estimating a 3D face model from a face with a non-frontal view. This algorithm casts the estimation as an optimization problem that searches for the pose parameters of the non-frontal face, consisting of the angles around the x-, y-, and z-axes, and for the depth values of the facial feature points of a frontal-view 3D face model. They used a two-step DE optimization, abbreviated DE2: in the first step, the pose parameters are optimized, and in the second step, the results of the first step are used to update the depth values. The two-step optimization enhances the accuracy of the estimated depth values of the facial feature points. In addition, when real-time processing is considered, such applications are constrained by the computational cost. As one solution to this issue, several studies have used approximate 2D multiview face models. Sato and Akashi [93] introduced a high-speed multiview face localization and tracking method using template matching and the GA. This method approximates a human head with a cylinder, which allows a multiview face to be represented by developing the lateral surface of the cylinder. Although this approximation avoids the computational cost of using a 3D head model, it increases the number of template matching parameters needed to create the template from the cylinder head model. The GA makes a significant contribution by performing the matching (i.e., the optimization) at a feasible speed over this extended search space. You and Akashi [94] put forward a multiview face detection algorithm using an existing frontal face detector that requires a training process for frontal faces only. The main idea of this algorithm is a flipping scheme that utilizes mirror reversal: a proper horizontal reversal of the candidate regions makes it possible to generate frontal faces from multiview faces.
The GA is applied to search for the candidate region parameters, which consist of a center point, scale factors for the x- and y-axes, and rotation angles. Because a speeded-up robust features (SURF) cascade is adopted for the frontal face detection, the fitness function for the GA comprises the number of stages passed and the probability output at the exit stage. When this algorithm is applied to video frames, it can further use the evolutionary video processing proposed by [89]. The two multiview face detection methods described above are designed to operate in real time. The GA provides solutions of sufficient quality at high speed, thereby enabling real-time processing. In summary, Table 13 provides a brief overview of the studies discussed in this section.

Human action recognition

Vision-based human action recognition is an essential part of the development of human-computer interaction technologies. To accurately analyze a complex human body structure, many studies use 3D data with depth information as input. In particular, Kinect has made it possible to easily handle RGB-D information. EAs and SAs are effective for processing 3D data, which involves a high computational cost. Moreover, their affinity for parallel processing is often useful for implementations with practical processing times. We categorize the parts of interest, and provide discussions of each, as follows: body (Section 10.1), hands (Section 10.2), and head (Section 10.3). Human body posture estimation generally aims to fit a human body model, e.g., a skeleton as shown in Fig. 11, to the observed data. This is important for many computer vision applications and is also an input for action recognition, which is described in the second half of this subsection. Because human models usually have numerous DoFs, the fitting problem forms a high-dimensional, nonlinear solution space. Many of the existing methods employ Bayesian approaches, such as Kalman filtering and particle filtering, but have difficulties in terms of incorporating anatomical constraints and devising realistic body movement models [95].

Fig. 11 Illustration of fitting an upper-body skeleton model to the observation data. The fitting is achieved by adjusting the joint positions indicated by the red circles

EAs and SAs can be effective at overcoming the above weaknesses without prior knowledge. Robertson and Trucco [95] proposed an upper-body posture estimation system from multi-view markerless point cloud sequences scanned using a laser scanner. The posture estimation is accomplished by fitting a 24-DoF skeleton model to the given point cloud. The system exploits PSO by comparing the input point cloud with the positions of the point pairs describing each limb of the skeleton, using absolute distances. They also applied a hierarchical fitting scheme to the PSO, which first handles the easily predictable parts of the human body. Furthermore, they proposed a parallel version of PSO across multiple CPUs to increase the efficiency. Their method is highly accurate and can be applied in real time. Zhang et al. [96] presented a search strategy for 3D human body tracking using an annealed-PSO-based particle filter (APSOPF). Compared with standard PSO, they incorporated the annealing strategy into the velocity-update equation of the PSO, which includes a sampling covariance and annealing factors. The annealing strategy gradually confines the search area.
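At its core, the point-cloud fitting fitness used by approaches such as [95] measures the distance between points on the posed skeleton's limbs and the scanned points. The sketch below illustrates such an objective for a single limb segment; the two-joint parameterization, the synthetic point cloud, and the KD-tree nearest-neighbour query are simplifying assumptions, not the 24-DoF model of [95].

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
point_cloud = rng.normal(size=(500, 3))      # stand-in for the scanned upper body
tree = cKDTree(point_cloud)

def limb_points(params, samples=20):
    # Sample points along a single 'limb' segment defined by two 3D joints.
    j1, j2 = params[:3], params[3:6]
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return (1.0 - t) * j1 + t * j2

def fitness(params):
    # Sum of absolute distances from the posed limb points to the nearest scanned points.
    d, _ = tree.query(limb_points(params))
    return np.sum(np.abs(d))

print("fitness at zero pose:", fitness(np.zeros(6)))

A PSO loop such as the one in Appendix A would then minimize this fitness over the joint coordinates; [95] extends the idea to a full skeleton with hierarchical and parallel search.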
Experimental results show that this strategy can alleviate the inconsistencies between the observation model and the ground truth caused by self-occlusion. The tracking results are represented as a 3D 31-DoF kinematic tree. Panteleris and Argyros [97] proposed a moving object tracking system with an RGB-D camera in a static environment through simultaneous localization and mapping. The RGB-D camera provides a dense point cloud of the environment, which is registered to another 3D point cloud representing a frame (the cloud containing the object). The registration based on structural and color information is formulated as an optimization problem and solved using PSO. The system achieves real-time capability owing to the efficiency of the PSO. Their tracking results are not accompanied by a human model and are used for a cognitive navigation prosthesis, i.e., the safe navigation of cognitively impaired people in public spaces. Action recognition is a fundamental technology for applications such as sports analysis and video surveillance. In many cases, the classification of the input data is achieved using machine learning algorithms. EAs and SAs are engaged in supporting such algorithms, and their domains can be categorized into training data [98, 99] and parameter tuning [100, 101]. Although machine learning can solve the problem of vision-based human action recognition, Chaaraoui and Flórez-Revuelta [98] argue that the learning remains insufficient unless the training data are complete. Thus, based on the GA, the authors proposed an evolving bag of key poses. When a new pose is input, it can be added to the training dataset for learning. When the new pose belongs to an existing pose class, it undergoes crossover and, with a certain probability, mutation. The results of the evolving bag of key poses on both RGB-D and RGB images demonstrate the feasibility of incremental learning. Chaaraoui et al. [99] utilized an evolutionary algorithm to determine the optimal subset of human joints in bag-of-key-poses-based human action recognition with RGB-D cameras. The proposed evolutionary algorithm is comparable to, and based on, the GA but differs in terms of crossover. Because a human pose can be regarded as a tree topology, their crossover is aware of such a topology. Their method simultaneously achieves better results and higher efficiency than other competitive methods. Ijjina and Chalavadi [100] combined the GA with a CNN to deal with human action recognition. They trained the initial weights of the CNN using global and local optimization by applying the GA and the gradient descent algorithm, respectively. During the fitness evaluation step, the CNN classifier is trained using the decoded weights and the gradient descent algorithm, and its classification accuracy is regarded as the fitness value. That is, the GA identifies several local basins, and the gradient descent algorithm quickly finds the optimum within a basin. The human action recognition framework introduced by Nunes et al. [101] involves the DE algorithm. Their framework consists of feature extraction from the input skeleton data and classification using a random forest (RF). The DE algorithm is employed to find the best splitting condition at each node of the decision trees.
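The global-plus-local scheme of [100] above, a GA that explores basins while gradient descent refines within them, can be sketched as follows. The toy loss stands in for the CNN training error, and the population size, truncation selection, and number of descent steps are assumptions made for the sketch rather than the settings of [100].

import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy multimodal 'training loss' standing in for a CNN's error surface.
    return np.sum(w ** 2) + 2.0 * np.sum(np.sin(3.0 * w) ** 2)

def grad(w):
    return 2.0 * w + 12.0 * np.sin(3.0 * w) * np.cos(3.0 * w)

def refine(w, steps=50, lr=0.05):
    # Local (gradient descent) optimization inside the basin found by the GA.
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def evaluate(w):
    # Fitness = loss after a short gradient-descent refinement (lower is better).
    return loss(refine(w))

dim, pop_size = 8, 20
population = rng.uniform(-3, 3, size=(pop_size, dim))
for generation in range(30):
    fitness = np.array([evaluate(w) for w in population])
    parents = population[np.argsort(fitness)[: pop_size // 2]]       # truncation selection
    children = parents + rng.normal(scale=0.3, size=parents.shape)   # Gaussian mutation
    population = np.vstack([parents, children])

best = refine(population[np.argmin([evaluate(w) for w in population])])
print("best loss:", loss(best))

In [100] the fitness is the classification accuracy of the CNN trained from the decoded weights; here a smooth toy loss plays that role so the sketch stays self-contained.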
An RF using the DE algorithm has no thresholds to tune, and the DE parameters are well established from past studies (i.e., they are controllable and independent of the other parameters and of the input and output data). For these action recognition studies, the performance on different datasets is listed in Table 14.

Table 14 The performance of the action recognition approaches on different datasets

Human hands

Similar to body models, hand models also suffer from a complex configuration owing to their numerous DoFs. In addition, the presence of similar parts and severe self-occlusions makes hand models more difficult to handle. Ye et al. [102] put forward a hand pose estimation method for depth images by incorporating a CNN with PSO in each layer. The results predicted by a CNN often incur a kinematic error, which is resolved by the PSO in their research. Each hierarchy of the hand structure is predicted by a refined CNN, in which the refinement is achieved using PSO. The refinement is treated as another optimization problem that takes kinematic constraints into account. Because the next layer is based on the current layer, refining each layer increases the accuracy but decreases the efficiency. Panteleris and Argyros [103] introduced stereo RGB images (using two monocular cameras) into a hand tracking method. Unlike a naive method that tracks after recovering the depth information and achieves low accuracy, they transform hand tracking into maximizing the color consistency between the stereo RGB images through PSO. Particles take the parameters of the observed or tracked hands and are initialized around the center upon the first iteration. Although their method must process multiple images, it still achieves a real-time implementation. As a more challenging situation, studies have attempted to track both hands at the same time. The more severe occlusions caused by complex hand interactions have shown that simple extensions of single-hand tracking are insufficient to ensure accuracy [104]. Oikonomidis et al. [104] proposed a two-hand tracking method using PSO and RGB-D data. PSO searches a 54-dimensional solution space to construct an articulation hypothesis. The objective is a penalty function consisting of two terms: a prior term that penalizes invalid articulation hypotheses and a data term that quantifies the incompatibility of the observation with an articulation hypothesis. The proposed method achieves a frame rate of 4 Hz through a parallel implementation on a GPU. Subsequently, Oikonomidis et al. [105] proposed a novel evolutionary quasi-random search method to achieve faster processing. The key to this method is the use of a Sobol sequence [132], which allows a more uniform coverage of the sample space. The method defines a center position at each step of the iteration and generates candidates around it based on the Sobol sequence. All candidates are recorded, and a new center position is determined according to the best candidates and their fitness. This method has eight parameters, which are tuned using PSO. For two-hand tracking, the new method achieves an 8-fold speedup over the existing method while maintaining the same level of accuracy.

Human head

Head pose estimation is a subproblem of human posture recognition. Padeleris et al. [106] formulated head pose estimation for depth images as an optimization problem, which they solve using PSO.
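The quasi-random search of [105] repeatedly draws low-discrepancy candidates around a current center and narrows the search as it proceeds. Below is a minimal sketch of that sampling loop using SciPy's Sobol generator; the objective, search radius, and shrink factor are illustrative assumptions rather than the hand-articulation scoring of [105].

import numpy as np
from scipy.stats import qmc

def sphere(x):
    # Toy objective standing in for the hand-articulation discrepancy score.
    return np.sum(x ** 2, axis=1)

dim, n_candidates, radius = 6, 64, 1.0
center = np.full(dim, 2.0)
sampler = qmc.Sobol(d=dim, scramble=True, seed=0)

for step in range(20):
    # Sobol points in [0, 1)^d, rescaled to a box of the given radius around the center.
    u = sampler.random(n_candidates)
    candidates = center + (2.0 * u - 1.0) * radius
    scores = sphere(candidates)
    best = candidates[np.argmin(scores)]
    if sphere(best[None, :])[0] < sphere(center[None, :])[0]:
        center = best            # move the center to the best candidate
    radius *= 0.9                # shrink the search box as the iteration proceeds

print("final center:", np.round(center, 3))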
The search targets are six pose parameters that represent a particular view. The surface model obtained from the depth camera is rendered from the candidate views, and its similarity to the reference range image (the frontal face range image obtained at initialization) is measured. By combining the parallel structure of the PSO with the GPU, the method achieves a frame rate of 10 fps. Table 15 displays a brief overview of the studies analyzed in this section.

Table 15 Brief information on the literature referenced in Section 10

This section introduces some studies that fall outside the above categories. Rodehorst and Hellwich [107] proposed GASAC, which is a robust parameter estimation approach using the GA. The task of GASAC is to estimate correct projective transformation parameters while avoiding outliers, which are undesirable correspondences. A chromosome is a tuple of homologous point indices, whose length is defined as the minimum number of points required to construct the transformation model. The genetic operations are designed to avoid duplicated indices within a chromosome. The parallel evaluation using the GA makes it possible to improve the estimation accuracy. Moreover, a subsequent non-linear optimization can provide more desirable results because small measurement errors are eliminated, although the computational cost increases. Ghosh et al. [108] introduced a moving object detection method that solves the task by integrating spatial and temporal segmentation. In the spatial segmentation, segmentation is regarded as a pixel-labeling problem, which is solved by maximum a posteriori (MAP) estimation of a multi-layer compound MRF. The authors proposed a distributed DE (DDE) algorithm for the MAP estimation. Each parameter vector in the DDE algorithm corresponds to a pixel in the targeted video frame and consists of a segmented output for each RGB channel. Therefore, the size of the population is equal to the number of pixels in the video frame, and the result of the evolution is output as the complete population. To increase the convergence speed, the authors adopted a neighborhood-based mutation. They defined a neighborhood using a small window centered at the target vector to maintain the spatial regularity. A donor vector is generated from three parameter vectors chosen within the window. Moreover, they use a randomly chosen index in the crossover operation, which ensures that the trial vector includes at least one parameter from the donor vector. The segmented frames are used by the temporal segmentation, which classifies whether a region has changed (i.e., is a moving object). Kumar et al. [109] derived the DE for image enhancement (DE-IE) algorithm. They first design a 2D histogram, which reveals the existence of homogeneous regions based on its diagonal values. Because larger diagonal values require a higher intensity enhancement, color enhancement is achieved by smoothing the 2D histogram. The authors consider a probability distribution, in which each probability density is the pixel length of the gray-level image, as the population for the DE algorithm. The DE algorithm minimizes the difference between the probability distributions of the input image and a satisfactory output image. They give the scaling factor F a different role than in a conventional DE algorithm; its value changes based on the difference between the probability distributions of the input and output images. An adaptive scheme can thus be adopted during the mutation operation.
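GASAC [107], discussed above, is essentially a GA whose chromosomes are index tuples of correspondences and whose fitness rewards models supported by many inliers. The sketch below illustrates this sample-consensus idea on robust line fitting rather than projective transformations, so the model, inlier threshold, and GA operators are simplifying assumptions and not the formulation of [107].

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: points on a line plus gross outliers (undesirable correspondences).
x = np.linspace(0, 10, 80)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=80)
y[rng.choice(80, 20, replace=False)] += rng.uniform(-20, 20, 20)
points = np.column_stack([x, y])

MODEL_SIZE = 2          # minimum number of points needed to define a line
POP, GENS, THRESH = 30, 40, 0.5

def fit_model(idx):
    (x1, y1), (x2, y2) = points[idx[0]], points[idx[1]]
    slope = (y2 - y1) / (x2 - x1 + 1e-12)
    return slope, y1 - slope * x1

def fitness(idx):
    # Number of inliers supporting the model built from the chosen indices.
    slope, intercept = fit_model(idx)
    residuals = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
    return np.sum(residuals < THRESH)

def random_chromosome():
    return rng.choice(len(points), MODEL_SIZE, replace=False)

population = [random_chromosome() for _ in range(POP)]
for _ in range(GENS):
    scores = np.array([fitness(c) for c in population])
    parents = [population[i] for i in np.argsort(scores)[-POP // 2:]]
    children = []
    for p in parents:
        child = p.copy()
        child[rng.integers(MODEL_SIZE)] = rng.integers(len(points))   # index mutation
        # Re-draw if the mutation produced a duplicated index within the chromosome.
        children.append(child if len(np.unique(child)) == MODEL_SIZE else random_chromosome())
    population = parents + children

best = max(population, key=fitness)
print("estimated (slope, intercept):", np.round(fit_model(best), 2))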
Table 16 provides a brief overview of the studies summarized in this section. This literature survey extensively summarized various computer vision applications employing EAs and SAs developed since 2000. First, we briefly introduced EAs and SAs, focusing particularly on their characteristics and differences. Next, we analyzed and discussed studies applying EAs and SAs to solve computer vision tasks. The different computer vision tasks are described in Section 4 through Section 11, and each general task was classified into subsections according to the problem setting and the types of solutions. The vast number of references considered in this paper demonstrates that EAs and SAs are powerful tools for computer vision applications. Among the four algorithms focused on in this paper, the GA and PSO are the more popular optimization tools in computer vision applications, because they are the most representative algorithms of EAs and SAs, respectively. Many characteristics of the GA and PSO, and much related research, have been well verified and studied, such as parameter tuning, efficiency improvements, and use in practical applications. Such accumulated experience in the community makes the adoption of the GA and PSO in computer vision applications easier. Since DE is a relatively new algorithm, its application is not as active as that of the GA. However, the powerful optimization capability of the DE algorithm may have the potential to attract more researchers in the future. ACO is employed in limited (especially graph-based) topics, such as those in Sections 5 and 6. Based on the taxonomy applied in this paper, it can be seen that whether EAs and SAs should be directly or indirectly adopted varies with the task. For instance, because the tasks described in Sections 7 and 8 are essentially similarity maximization problems between images or models, EAs and SAs play the role of fundamental tools in this process. As described in Section 5 and Section 6, EAs and SAs have frequently been applied directly. However, for recognition problems, such as those described in Sections 9 and 10, classification using machine learning methods is the basis of the processing, and EAs and SAs are mainly engaged in boosting the performance. In particular, it can be seen from Table 4 that the development of NAS methods using EAs and SAs, described in Section 4, has been an extremely attractive field in recent years. There are many foreseeable challenges in applying EAs and SAs to computer vision tasks. First, owing to the variety of algorithms, the optimal choice of algorithm for a particular problem remains an open question. Also, the tuning of hyperparameters and the combination of appropriate operators require human experience [30, 60, 87, 114]. Second, many methods include time-consuming processes, especially with expensive fitness functions, which makes real-time applications difficult to implement [32, 51, 57, 106]. Finally, many real-world problems embed multi-objective optimization, which requires the EAs and SAs to find solutions on the Pareto front [80, 98, 133, 134]. Such challenges need to be faced in order to achieve a breakthrough in applying EAs and SAs to computer vision tasks. Moreover, since DNNs have drawn much attention in computer vision in recent years, the combination of EAs/SAs and DNNs, such as the NAS described in Section 4, is one of the promising research directions in this field.
Our study can provide a comprehensive reference to the use of EAs and SAs in helping solve various computer vision problems. In addition, we expect that this paper will help broaden the perspective and motivate new insights and research in the relevant fields in the future.

Appendix A: Pseudo-codes

In this section, the pseudo-codes of GA, DE, PSO, and ACO for solving a minimization problem are introduced. Before we get into the description of each algorithm, the notation of the common variables is defined. A population or swarm, i.e., a pool of NP candidate solutions, is denoted by X={x1,...,xNP}, where xi is the ith candidate solution, called an individual, particle, ant, etc., depending on the algorithm. Each candidate solution consists of D elements, denoted as xi={xi,1,...,xi,D}, where xi,j is the jth element of the ith candidate solution. The domain of the elements depends on each algorithm. The evaluation value (fitness) of xi obtained by the fitness function is denoted as f(xi), and the function evaluate() calculates the fitness of all input candidate solutions. In addition, considering that these algorithms require iterative processes, the current number of iterations is denoted as t and given as a superscript (i.e., the tth X is Xt). In the following subsections, the pools of intermediate solution candidates generated during the iteration processes use the same notation rule as X. Also, the letters assigned to the variables are only valid within each subsection. Note that the following pseudo-codes are the traditional processes, and various improved versions exist.

GA

The pseudo-code of GA is described in Algorithm 1. Each individual xi has a chromosome encoded by a binary string, i.e., xi,j∈{0,1}. O represents the pool of offspring. selectParents() (Algorithm 1, line 6) Two individuals are selected as parents from the input population according to their fitness. For instance, the roulette wheel selection defines the probability of each individual being selected: $$ p_{i} = \frac{f(\boldsymbol{x}_{i})}{\sum_{j=1}^{NP} f(\boldsymbol{x}_{j})}{,} $$ where pi is the probability that the ith individual is selected. crossover() (Algorithm 1, line 7) The elements (i.e., genes) of the two input parents xp1 and xp2 are probabilistically exchanged. The two individuals after the operation become part of O as offspring. This operation is performed with a constant probability called the crossover rate pc, and the parents are directly regarded as offspring if crossover is not performed. mutate() (Algorithm 1, line 10) A bit swap (0 ↔ 1) is executed for each gene in each individual with a constant probability called the mutation rate pm. pm is generally set to a small value to prevent the information inherited from the parents from being destroyed excessively.

DE

The pseudo-code of DE is described in Algorithm 2. DE generally assumes a continuous optimization problem, i.e., \(\boldsymbol {x}_{i} \in \mathbb {R}^{D}\). During the iteration, the corresponding donor vector vi and trial vector ui are generated for each parent (target vector) xi. mutate() (Algorithm 2, line 5) The ith donor vector is created using three randomly chosen individuals xr1,xr2, and xr3 from the input population as follows: $$ \boldsymbol{v}_{i} = \boldsymbol{x}_{r1} + F(\boldsymbol{x}_{r2} - \boldsymbol{x}_{r3}){,} $$ where F is a scaling factor. The indices i,r1,r2, and r3 are mutually distinct integers in the range [1,NP].
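For concreteness, a compact, runnable Python counterpart of Algorithm 1 is sketched below. The toy objective (minimizing the number of ones in the binary string) and the transformation of the minimized fitness into larger-is-better roulette scores are assumptions made for the sketch.

import numpy as np

rng = np.random.default_rng(0)
NP, D, PC, PM, GENS = 20, 16, 0.9, 0.02, 100

def f(x):
    # Toy minimization objective: number of ones in the binary string.
    return np.sum(x)

def select_parent(pop, scores):
    # Roulette wheel on larger-is-better scores derived from the (minimized) fitness.
    p = scores / scores.sum()
    return pop[rng.choice(NP, p=p)]

pop = rng.integers(0, 2, size=(NP, D))
for t in range(GENS):
    fitness = np.array([f(x) for x in pop])
    scores = fitness.max() - fitness + 1e-9          # convert to larger-is-better
    offspring = []
    while len(offspring) < NP:
        p1, p2 = select_parent(pop, scores), select_parent(pop, scores)
        c1, c2 = p1.copy(), p2.copy()
        if rng.random() < PC:                         # one-point crossover
            cut = rng.integers(1, D)
            c1[cut:], c2[cut:] = p2[cut:], p1[cut:]
        for c in (c1, c2):                            # bit-swap (0 <-> 1) mutation
            mask = rng.random(D) < PM
            c[mask] = 1 - c[mask]
            offspring.append(c)
    pop = np.array(offspring[:NP])

print("best fitness:", min(f(x) for x in pop))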
crossover() (Algorithm 2) Similar to the crossover of GA, the input target vector and donor vector probabilistically exchange their components. Only one trial vector is generated, which is a competitor to the corresponding target vector.

PSO

The pseudo-code of PSO is described in Algorithm 3. PSO also assumes a continuous optimization problem, similar to DE. A candidate solution xi represents the position of the corresponding particle in the solution space, which is modified by the particle's velocity vi. The pbest xpi and gbest xg are the best positions found so far by the individual particle and by the swarm, respectively. adjustVelocity() (Algorithm 3, line 8) The velocity is updated using pbest and gbest as follows: $$ \boldsymbol{v}_{i}^{t+1} = \omega\boldsymbol{v}_{i}^{t} + c_{1}r_{1}(\boldsymbol{x}_{pi} - \boldsymbol{x}_{i}^{t}) + c_{2}r_{2}(\boldsymbol{x}_{g} - \boldsymbol{x}_{i}^{t}){,} $$ where ω is the inertia weight, c1 and c2 are the acceleration coefficients, and r1 and r2 are uniformly random values in the range [0.0,1.0]. adjustPosition() (Algorithm 3, line 9) Based on the updated velocity, a position update (in other words, the generation of the new candidate solution) is executed as follows: $$ \boldsymbol{x}_{i}^{t+1} = \boldsymbol{x}_{i}^{t} + \boldsymbol{v}_{i}^{t+1}. $$

ACO

The pseudo-code of ACO is described in Algorithm 4. ACO assumes a combinatorial optimization problem, which is solved by a feasible walk on a graph whose nodes are solution components (a solution component is denoted as ck) and whose edges are connections between components. For instance, if the ith ant is on ck as its initial position and moves to cl (this movement is denoted as \(c_{k}^{l}\)), the elements of the corresponding candidate solution are constructed as xi,0=ck and xi,1=cl. Every edge is assigned a pheromone (the pheromone on the edge between ck and cl is denoted as τkl, and the set of all pheromones as T) that indicates the validity of selecting the corresponding component. constructSolution() (Algorithm 4, line 4) A feasible solution is constructed by a probabilistic walk of the ant on the graph. Let the current partial solution of xi be sp and the set of feasible solution components be \(\mathcal {N}(s_{p})\); the probability that a solution component is chosen is then defined as follows: $$ p(c_{k}^{l} \mid s_{p}) = \frac{\tau_{kl}^{\alpha}\eta_{kl}^{\beta}}{\sum_{c_{k}^{m} \in \mathcal{N}(s_{p})}^{} \tau_{km}^{\alpha}\eta_{km}^{\beta}}, \forall c_{k}^{l} \in \mathcal{N}(s_{p}){,} $$ where η is heuristic information, and α and β are parameters that adjust the influences of the pheromone and the heuristic information, respectively. The choice according to Eq. (5) is repeated until the construction of xi is complete. daemonActions() (Algorithm 4, line 6) Optional local search operations, called daemon actions, can be applied to the constructed solutions. These operations are generally centralized actions that cannot be performed by individual ants. updatePheromones() (Algorithm 4, line 8) The pheromone update is performed by two mechanisms: evaporation and deposit. While the former applies an equal change to all pheromones, the latter applies a change that depends on the fitness of the ants whose paths include the corresponding edge. These mechanisms are implemented as follows: $$ \tau_{kl}^{t+1} = (1 - \rho)\tau_{kl}^{t} + \sum_{\boldsymbol{x} \in \boldsymbol{X}^{t} \mid c_{k}^{l} \in \boldsymbol{x}}^{} \frac{1}{f(\boldsymbol{x})}{,} $$ where ρ is a parameter in the range (0.0,1.0] called the evaporation rate.
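The velocity and position updates above translate directly into a short PSO loop. The following sketch minimizes a toy sphere function; the parameter values (ω = 0.7, c1 = c2 = 1.5), population size, and iteration count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
NP, D, ITERS = 30, 5, 200
OMEGA, C1, C2 = 0.7, 1.5, 1.5

def f(x):
    # Toy minimization objective (sphere function).
    return np.sum(x ** 2, axis=-1)

x = rng.uniform(-5, 5, size=(NP, D))        # particle positions
v = np.zeros((NP, D))                        # particle velocities
pbest = x.copy()                             # personal best positions
pbest_val = f(x)
gbest = pbest[np.argmin(pbest_val)]          # swarm best position

for t in range(ITERS):
    r1, r2 = rng.random((NP, D)), rng.random((NP, D))
    v = OMEGA * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)   # adjustVelocity()
    x = x + v                                                        # adjustPosition()
    vals = f(x)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())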
SA: Swarm algorithm
EC: Evolutionary computation
GA: Genetic algorithm
DE: Differential evolution
PSO: Particle swarm optimization
ACO: Ant colony optimization
GP: Genetic programming
ES: Evolution strategy
EP: Evolutionary programming
SI: Swarm intelligence
SAD: Sum of absolute differences
DNN: Deep neural network
NAS: Neural architecture search
CNN: Convolutional neural network
CGP: Cartesian genetic programming
CAE: Convolutional auto-encoder
PSNR: Peak signal-to-noise ratio
ACS: Ant colony system
FCM: Fuzzy c-means
MRF: Markov random field
SIFT: Scale-invariant feature transform
SVM: Support vector machine
DoF: Degrees of freedom
DC: Deterministic crowding
OL: Orthogonal learning
GMM: Gaussian mixture model
SCA: Sine-cosine algorithm
RW:
IR: Infrared
PCA: Principal component analysis
LDA: Linear discriminant analysis
MSE: Mean square error
DCT: Discrete cosine transform
DWT: Discrete wavelet transform
SURF: Speeded-up robust features
RF: Random forest
MAP: Maximum a posteriori
Real E, Moore S, Selle A, Saxena S, Suematsu YL, Tan J, Le QV, Kurakin A (2017) Large-scale evolution of image classifiers In: International Conference on Machine Learning (ICML), 2902–2911.. JMLR.org. https://dl.acm.org/doi/10.5555/3305890.3305981. Sun Y, Xue B, Zhang M, Yen GG (2020) Evolving Deep Convolutional Neural Networks for Image Classification. IEEE Trans Evol Comput 24(2):394–407. https://doi.org/10.1109/TEVC.2019.2916183. Miikkulainen R, Liang J, Meyerson E, Rawal A, Fink D, Francon O, Raju B, Shahrzad H, Navruzyan A, Duffy N, Hodjat B (2019) Chapter 15 - Evolving Deep Neural Networks. In: Robert Kozma, Cesare Alippi, Yoonsuck Choe, Francesco Carlo Morabito (eds)Artificial Intelligence in the Age of Neural Networks and Brain Computing, 293–312.. Academic Press. https://doi.org/10.1016/B978-0-12-815480-9.00015-3. Suganuma M, Shirakawa S, Nagao T (2017) A genetic programming approach to designing convolutional neural network architectures In: Genetic and Evolutionary Computation Conference (GECCO), 497–504.. ACM. https://doi.org/10.1145/3071178.3071229. Xie L, Yuille A (2017) Genetic cnn In: International Conference on Computer Vision (ICCV).. IEEE. https://doi.org/10.1109/ICCV.2017.154. Liu H, Simonyan K, Vinyals O, Fernando C, Kavukcuoglu K (2018) Hierarchical representations for efficient architecture search In: International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=BJQRKzbA-. Assunção F, Lourenço N, Machado P, Ribeiro B (2018) Evolving the topology of large scale deep neural networks In: European Conference on Genetic Programming (EuroGP), 19–34.. Springer. https://doi.org/10.1007/978-3-319-77553-1_2. Assunçao F, Lourenço N, Machado P, Ribeiro B (2019) DENSER: deep evolutionary network structured representation. Genet Program Evolvable Mach 20(1):5–35. https://doi.org/10.1007/s10710-018-9339-y. Kramer O (2018) Evolution of convolutional highway networks In: International Conference on the Applications of Evolutionary Computation, 395–404.. Springer. https://doi.org/10.1007/978-3-319-77538-8_27. Sun Y, Xue B, Zhang M, Yen GG (2019) A Particle Swarm Optimization-Based Flexible Convolutional Autoencoder for Image Classification. IEEE Trans Neural Netw Learn Syst 30(8):2295–2309. https://doi.org/10.1109/TNNLS.2018.2881143. Wang B, Sun Y, Xue B, Zhang M (2018) Evolving deep convolutional neural networks by variable-length particle swarm optimization for image classification In: 2018 IEEE Congress on Evolutionary Computation (CEC), 1–8. https://doi.org/10.1109/CEC.2018.8477735.
Fernando C, Banarse D, Reynolds M, Besse F, Pfau D, Jaderberg M, Lanctot M, Wierstra D (2016) Convolution by evolution: differentiable pattern producing networks In: Genetic and Evolutionary Computation Conference (GECCO), 109–116.. ACM. https://doi.org/10.1145/2908812.2908890. Suganuma M, Ozay M, Okatani T (2018) Exploiting the potential of standard convolutional autoencoders for image restoration by evolutionary search In: International Conference on Machine Learning (ICML).. PMLR. http://proceedings.mlr.press/v80/suganuma18a.html. Oullette R, Browne M, Hirasawa K (2004) Genetic algorithm optimization of a convolutional neural network for autonomous crack detection In: IEEE Congress on Evolutionary Computation (CEC), vol 1, 516–521.. IEEE. https://doi.org/10.1109/CEC.2004.1330900. Zhining Y, Yunming P (2015) The genetic convolutional neural network model based on random sample. Int J U- E-Serv Sci Technol (UNESST) 8(11):317–326. Tao W-B, Tian J-W, Liu J (2003) Image segmentation by three-level thresholding based on maximum fuzzy entropy and genetic algorithm. Pattern Recogn Lett 24(16):3069–3078. Tao W, Jin H, Liu L (2007) Object segmentation using ant colony optimization algorithm and fuzzy entropy. Pattern Recogn Lett 28(7):788–796. Puranik P, Bajaj P, Abraham A, Palsodkar P, Deshmukh A (2009) Human perception-based color image segmentation using comprehensive learning particle swarm optimization In: International Conference on Emerging Trends in Engineering and Technology (ICETET), 630–635.. IEEE. https://doi.org/10.1109/ICETET.2009.116. Liang Y-C, Chen AH-L, Chyu C-C (2006) Application of a hybrid ant colony optimization for the multilevel thresholding in image processing In: International Conference on Neural Information Processing (ICONIP), 1183–1192.. Springer. https://doi.org/10.1007/11893257_129. Ghamisi P, Couceiro MS, Martins FM, Benediktsson JA (2014) Multilevel image segmentation based on fractional-order darwinian particle swarm optimization. IEEE Trans Geosci Remote Sens (TGRS) 52(5):2382–2394. Liang Y-C, Yin Y-C (2011) Optimal multilevel thresholding using a hybrid ant colony system. J Chin Inst Ind Eng 28(1):20–33. Liang Y, Yin Y (2013) Int J Innov Comput Inf Control (IJICIC) 9(1):319–337. Chander A, Chatterjee A, Siarry P (2011) A new social and momentum component adaptive pso algorithm for image segmentation. Expert Syst Appl 38(5):4998–5004. Omran M, Engelbrecht AP, Salman A (2005) Particle swarm optimization method for image clustering. Int J Patt Recog Artif Intell (IJPRAI) 19(03):297–321. Malisia AR, Tizhoosh HR (2006) Image thresholding using ant colony optimization In: Conference on Computer and Robot Vision (CRV), 26–26.. IEEE. https://doi.org/10.1109/CRV.2006.42. Maulik U, Bandyopadhyay S (2003) Fuzzy partitioning using real coded variable length genetic algorithm for pixel classification. IEEE Trans Geosci Remote Sens (TGRS) 41(5):1075–1081. Omran MG, Salman A, Engelbrecht AP (2006) Dynamic clustering using particle swarm optimization with application in image segmentation. Pattern Anal Appl 8(4):332. Awad M, Chehdi K, Nasri A (2007) Multicomponent image segmentation using a genetic algorithm and artificial neural network. IEEE Geosci Remote Sens Lett (GRSL) 4(4):571–575. Awad M, Chehdi K, Nasri A (2009) Multi-component image segmentation using a hybrid dynamic genetic algorithm and fuzzy c-means. IET Image Process 3(2):52–62. Bansal S, Aggarwal D (2011) Color image segmentation using cielab color space using ant colony optimization.
Int J Comput Appl (IJCA) 29(9):28–34. Halder A, Pramanik S, Kar A (2011) Dynamic image segmentation using fuzzy c-means based genetic algorithm. Int J Comput Appl (IJCA) 28(6):15–20. Halder A, Pradhan A, Dutta SK, Bhattacharya P (2016) Tumor extraction from mri images using dynamic genetic algorithm based image segmentation and morphological operation In: International Conference on Communication and Signal Processing (ICCSP), 1845–1849.. IEEE. https://doi.org/10.1109/ICCSP.2016.7754489. Ouadfel S, Batouche M (2003) MRF-based image segmentation using ant colony system. Electronic Letters on Computer Vision and Image Analysis (ELCVIA) 2(1):12–24. Pignalberi G, Cucchiara R, Cinque L, Levialdi S (2003) Tuning range image segmentation by genetic algorithm. EURASIP J Adv Signal Process 2003(8):683043. Tianzi J, Faguo Y, Yong F, David JE (2001) A parallel genetic algorithm for cell image segmentation. Electron Notes Theor Comput Sci (ENTCS) 46:214–224. Wang X-N, Feng Y. -j., Feng Z-R (2005) Ant colony optimization for image segmentation In: International Conference on Machine Learning and Cybernetics (ICMLC), vol 9, 5355–5360.. IEEE. https://doi.org/10.1109/ICMLC.2005.1527890. Ma L, Wang K, Zhang D (2009) A universal texture segmentation and representation scheme based on ant colony optimization for iris image processing. Comput Math Appl 57(11-12):1862–1868. Nezamabadi-Pour H, Saryazdi S, Rashedi E (2006) Edge detection using ant algorithms. Soft Comput 10(7):623–628. Baterina AV, Oppus C (2010) Image edge detection using ant colony optimization. WSEAS Trans Signal Process 6(2):58–67. Cuevas E, Zaldivar D, Pérez-Cisneros M, Ramírez-Ortegón M (2011) Circle detection using discrete differential evolution optimization. Pattern Anal Appl 14(1):93–107. Dong N, Wu C-H, Ip W-H, Chen Z-Q, Chan C-Y, Yung K-L (2012) An opposition-based chaotic GA/PSO hybrid algorithm and its application in circle detection. Comput Math Appl 64(6):1886–1902. Trujillo L, Olague G (2006) Using evolution to learn how to perform interest point detection. Int Conf Pattern Recog (ICPR) 1:211–214. Trujillo L, Olague G (2008) Automated Design of Image Operators That Detect Interest Points In: Evolutionary Computation, vol. 16, 483–507.. MIT Press. https://doi.org/10.1162/evco.2008.16.4.483. Perez CB, Olague G (2009) Evolutionary learning of local descriptor operators for object recognition In: Genetic and Evolutionary Computation Conference (GECCO), 1051–1058.. ACM. https://doi.org/10.1145/1569901.1570043. Perez CB, Olague G (2013) Genetic programming as strategy for learning image descriptor operators. Intell Data Anal 17(4):561–583. Yu S, De Backer S, Scheunders P (2002) Genetic feature selection combined with composite fuzzy nearest neighbor classifiers for hyperspectral satellite imagery. Pattern Recogn Lett 23(1-3):183–190. Treptow A, Zell A (2004) Combining adaboost learning and evolutionary search to select features for real-time object detection In: IEEE Congress on Evolutionary Computation (CEC), vol 2, 2107–2113.. IEEE. https://doi.org/10.1109/CEC.2004.1331156. Khushaba RN, Al-Ani A, Al-Jumaily A (2008) Differential evolution based feature subset selection In: International Conference on Pattern Recognition (ICPR), 1–4.. IEEE. https://doi.org/10.1109/ICPR.2008.4761255. Khushaba RN, Al-Ani A, Al-Jumaily A (2011) Feature subset selection using differential evolution and a statistical repair mechanism. Expert Syst Appl 38(9):11515–11526.
Ghosh A, Datta A, Ghosh S (2013) Self-adaptive differential evolution for feature selection in hyperspectral image data. Appl Soft Comput 13(4):1969–1977. Ghamisi P, Couceiro MS, Benediktsson JA (2015) A novel feature selection approach based on FODPSO and SVM. IEEE Trans Geosci Remote Sens Lett (TGRS) 53(5):2935–2947. Ghamisi P, Chen Y, Zhu XX (2016) A self-improving convolution neural network for the classification of hyperspectral data. IEEE Geosci Remote Sens Lett (GRSL) 13(10):1537–1541. Al-Ani A (2005) Feature subset selection using ant colony optimization. Int J Comput Intell (IJCI) 2(1):53–58. Chen B, Chen L, Chen Y (2013) Efficient ant colony optimization for image feature selection. Signal Process 93(6):1566–1576. Zhang C, Akashi T (2015) Simplifying genetic algorithm: a bit order determined sampling method for adaptive template matching In: Irish Machine Vision and Image Processing Conference (IMVIP).. Irish Pattern Recognition & Classification Society. http://www.tara.tcd.ie/handle/2262/74714. Zhang C, Akashi T (2015) Fast affine template matching over Galois field In: British Machine Vision Conference (BMVC), 121–112111, BMVA Press. https://dx.doi.org/10.5244/C.29.121. Zhang C, Akashi T (2016) Inst Electron Inf Commun Eng (IEICE) 99(9):2341–2350. Sato J, Akashi T (2018) Deterministic crowding introducing the distribution of population for template matching. IEEJ Trans Electr Electron Eng 13(3):480–488. Lee Y, Hara T, Fujita H, Itoh S, Ishigaki T (2001) Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique. IEEE Trans Med Imaging (T-MI) 20(7):595–604. Ugolotti R, Nashed YS, Mesejo P, Ivekovič Š, Mussi L, Cagnoni S (2013) Particle swarm optimization and differential evolution for model-based object detection. Appl Soft Comput 13(6):3092–3105. De Falco I, Della Cioppa A, Maisto D, Tarantino E (2008) Differential evolution as a viable tool for satellite image registration. Appl Soft Comput 8(4):1453–1462. Ma W, Fan X, Wu Y, Jiao L (2014) An orthogonal learning differential evolution algorithm for remote sensing image registration. Math Probl Eng 2014:1–11. Wachowiak MP, Smolíková R, Zheng Y, Zurada JM, Elmaghraby AS (2004) An approach to multimodal biomedical image registration utilizing particle swarm optimization. IEEE Trans Evol Comput (TEVC) 8(3):289–301. Liebelt J, Schertler K (2007) Precise registration of 3D models to images by swarming particles In: Computer Vision and Pattern Recognition (CVPR), 1–8.. IEEE. https://doi.org/10.1109/CVPR.2007.383167. Sholomon D, David O, Netanyahu NS (2013) A genetic algorithm-based solver for very large jigsaw puzzles In: Computer Vision and Pattern Recognition (CVPR), 1767–1774.. IEEE. https://doi.org/10.1109/CVPR.2013.231. Sholomon D, David OE, Netanyahu NS (2016) An automatic solver for very large jigsaw puzzles using genetic algorithms. Genet Program Evolvable Mach 17(3):291–313. Sholomon D, David OE, Netanyahu NS (2014) A generalized genetic algorithm-based solver for very large jigsaw puzzles of complex types In: Association for the Advancement of Artificial Intelligence (AAAI), 2839–2845. https://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8650. Myers R, Hancock ER (2001) Least-commitment graph matching with genetic algorithms. Patt Recogn 34(2):375–394. Zhang L, Xu W, Chang C (2003) Genetic algorithm for affine point pattern matching. Pattern Recogn Lett 24(1-3):9–19. 
Bhaskar H, Kingsland R, Singh S (2006) Multi-resolution based motion estimation for object tracking using genetic algorithm In: 2006 IET International Conference on Visual Information Engineering, 583–588.. IET. https://doi.org/10.1049/cp:20060596. Cuevas E, Zaldivar D, Pérez-Cisneros M, Oliva D (2013) Block matching algorithm based on differential evolution for motion estimation. Eng Appl Artif Intell 26(1):488–498. Zhang X, Hu W, Maybank S, Li X, Zhu M (2008) Sequential particle swarm optimization for visual tracking In: Computer Vision and Pattern Recognition (CVPR), 1–8.. IEEE. https://doi.org/10.1109/CVPR.2008.4587512. Cheng X, Li N, Zhang S, Wu Z (2014) Robust visual tracking with sift features and fragments based on particle swarm optimization. Circ Syst Signal Process 33(5):1507–1526. Lin L, Zhu M (2018) Efficient tracking of moving target based on an improved fast differential evolution algorithm. IEEE Access 6:6820–6828. Nenavath H, Jatoth RK (2018) Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl Soft Comput 62:1019–1043. Huang Y, Essa I (2005) Tracking multiple objects through occlusions In: Computer Vision and Pattern Recognition (CVPR), vol 2, 1051–1058.. IEEE. https://doi.org/10.1109/CVPR.2005.350. Zhang X, Hu W, Qu W, Maybank S (2010) Multiple object tracking via species-based particle swarm optimization. IEEE Trans Circ Syst Video Technol 20(11):1590–1602. Liu C, Wechsler H (2000) Evolutionary pursuit and its application to face recognition. IEEE Trans Patt Anal Mach Intell (TPAMI) 22(6):570–582. Zheng W-S, Lai J-H, Yuen PC (2005) IEEE Trans Syst Man Cybern B (Cybernet) 35(5):1065–1078. Vignolo LD, Milone DH, Scharcanski J (2013) Feature selection for face recognition based on multi-objective evolutionary wrappers. Expert Syst Appl 40(13):5077–5084. Kanan HR, Faez K, Hosseinzadeh M (2007) Face recognition system using ant colony optimization-based selected features In: IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), 57–62.. IEEE. Ramadan RM, Abdel-Kader RF (2009) Face recognition using particle swarm optimization-based selected features. Int J Signal Process Image Process Patt Recogn (IJSIP) 2(2):51–65. Senaratne R, Halgamuge S, Hsu A (2009) Face recognition by extending elastic bunch graph matching with particle swarm optimization. J Multimed 4(4):204–214. Bhatt HS, Bharadwaj S, Singh R, Vatsa M (2013) Recognizing surgically altered face images using multiobjective evolutionary algorithm. IEEE Trans Inf Forensics Secur (TIFS) 8(1):89–100. Bebis G, Gyaourova A, Singh S, Pavlidis I (2006) Face recognition by fusing thermal infrared and visible imagery. Image Vis Comput 24(7):727–742. Desa SM, Hati S (2008) IR and visible face recognition using fusion of kernel based features In: International Conference on Pattern Recognition (ICPR), 1–4.. Citeseer. https://doi.org/10.1109/ICPR.2008.4761862. Hermosilla G, Gallardo F, Farias G, Martin CS (2015) Fusion of visible and thermal descriptors using genetic algorithms for face recognition systems. Sensors 15(8):17944–17962. Wong K-W, Lam K-M, Siu W-C (2001) An efficient algorithm for human face detection and facial feature extraction under different conditions. Patt Recogn 34(10):1993–2004. Akashi T, Wakasa Y, Tanaka K, Karungaru S, Fukumi M (2007) Using genetic algorithm for eye detection and tracking in video sequence. J System Cybern Inform (JSCI) 5(2):72–78. 
Perez CA, Aravena CM, Vallejos JI, Estevez PA, Held CM (2010) Face and iris localization using templates designed by particle swarm optimization. Patt Recogn Lett 31(9):857–868. Mpiperis I, Malassiotis S, Petridis V, Strintzis MG (2008) 3D facial expression recognition using swarm intelligence In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2133–2136.. IEEE. Chandar KP, Savithri TS (2015) 3D face model estimation based on similarity transform using differential evolution optimization. Procedia Comput Sci 54:621–630. Sato J, Akashi T (2017) High-speed multiview face localization and tracking with a minimum bounding box using genetic algorithm. IEEJ Trans Electr Electron Eng (TEEE) 12(5):736–743. You M, Akashi T (2018) Multi-view face detection using frontal face detector. IEEJ Trans Electr Electron Eng (TEEE) 13(7):1011–1019. Robertson C, Trucco E (2006) Human body posture via hierarchical evolutionary optimization In: Br Mach Vis Conf (BMVC), 999. http://www.macs.hw.ac.uk/bmvc2006/volume3.html. Zhang X, Hu W, Wang X, Kong Y, Xie N, Wang H, Ling H, Maybank S (2010) A swarm intelligence based searching strategy for articulated 3D human body tracking In: Comput Vis Patt Recogn Workshops (CVPRW), 45–50.. IEEE. https://doi.org/10.1109/CVPRW.2010.5543804. Panteleris P, Argyros AA (2014) Vision-based slam and moving objects tracking for the perceptual support of a smart walker platform In: European Conference on Computer Vision (ECCV), 407–423.. Springer. https://doi.org/10.1007/978-3-319-16199-0_29. Chaaraoui AA, Florez-Revuelta F (2014) Adaptive human action recognition with an evolving bag of key poses. IEEE Trans Auton Mental Dev (TAMD) 6(2):139–152. Chaaraoui AA, Padilla-López JR, Climent-Pérez P, Flórez-Revuelta F (2014) Evolutionary joint selection to improve human action recognition with RGB-D devices. Expert Syst Appl 41(3):786–794. Ijjina EP, Chalavadi KM (2016) Human action recognition using genetic algorithms and convolutional neural networks. Patt Recogn 59:199–212. Nunes UM, Faria DR, Peixoto P (2017) A human activity recognition framework using max-min features and key poses with differential evolution random forests classifier. Patt Recogn Lett 99:21–31. Ye Q, Yuan S, Kim T-K (2016) Spatial attention deep net with partial pso for hierarchical hybrid hand pose estimation In: European Conference on Computer Vision (ECCV), 346–361.. Springer. https://doi.org/10.1007/978-3-319-46484-8_21. Panteleris P, Argyros A (2017) Back to RGB: 3D tracking of hands and hand-object interactions based on short-baseline stereo. IEEE Int Conf Comput Vis Workshops (ICCVW) 2(63):39. Oikonomidis I, Kyriazis N, Argyros AA (2012) Tracking the articulated motion of two strongly interacting hands In: Computer Vision and Pattern Recognition (CVPR).. IEEE. https://doi.org/10.1109/CVPR.2012.6247885. Oikonomidis I, Lourakis MI, Argyros AA (2014) Evolutionary quasi-random search for hand articulations tracking In: Computer Vision and Pattern Recognition (CVPR), 3422–3429.. IEEE. https://doi.org/10.1109/CVPR.2014.437. Padeleris P, Zabulis X, Argyros AA (2012) Head pose estimation on depth data based on Particle Swarm Optimization In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 42–49.. IEEE. https://doi.org/10.1109/CVPRW.2012.6239236. 
Rodehorst V, Hellwich O (2006) Genetic Algorithm SAmple Consensus (GASAC) - A Parallel Strategy for Robust Parameter Estimation In: 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06), 103–103.. IEEE. https://doi.org/10.1109/CVPRW.2006.88. Ghosh A, Mondal A, Ghosh S (2014) Moving object detection using Markov random field and distributed differential evolution. Appl Soft Comput 15:121–136. Kumar S, Pant M, Ray AK (2018) DE-IE: differential evolution for color image enhancement. Int J Syst Assur Eng Manag 9(3):577–588. Chen Q, Koltun V (2015) Robust nonrigid registration by convex optimization In: Proceedings of the IEEE International Conference on Computer Vision, 2039–2047.. IEEE. https://doi.org/10.1109/ICCV.2015.236. Zhou X, Leonardos S, Hu X, Daniilidis K (2015) 3D shape estimation from 2D landmarks: a convex relaxation approach In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4447–4455.. IEEE. https://doi.org/10.1109/CVPR.2015.7299074. Cheng Y, Lopez JA, Camps O, Sznaier M (2015) A convex optimization approach to robust fundamental matrix estimation In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2170–2178.. IEEE. https://doi.org/10.1109/CVPR.2015.7298829. Das S, Suganthan PN (2011) Differential evolution: a survey of the state-of-the-art. IEEE Trans Evol Comput 15(1):4–31. Sizikova E, Funkhouser T (2016) Wall painting reconstruction using a genetic algorithm. EUROGRAPHICS Work Graph Cult Herit (GCH) 11(1):3. Nayman N, Noy A, Ridnik T, Friedman I, Jin R, Zelnik L (2019) Xnas: neural architecture search with expert advice In: Advances in Neural Information Processing Systems, 1977–1987.. Curran Associates, Inc.https://doi.org/http://papers.nips.cc/paper/8472-xnas-neural-architecture-search-with-expert-advice.pdf. Cai H, Zhu L, Han S (2019) ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware In: International Conference on Learning Representations. Stanley KO, Clune J, Lehman J, Miikkulainen R (2019) Designing neural networks through neuroevolution. Nat Mach Intell 1(1):24–35. Elsken T, Metzen JH, Hutter F (2019) Neural architecture search: a survey. J Mach Learn Res 20(55):1–21. Xin Y (1999) Evolving artificial neural networks. Proc IEEE 87(9):1423–1447. Stanley KO, Miikkulainen R (2002) Evolving neural networks through augmenting topologies. Evol Comput 10(2):99–127. Chrabaszcz P, Loshchilov I, Hutter F (2018) Back to basics: benchmarking canonical evolution strategies for playing atari In: Proceedings of the 27th International Joint Conference on Artificial Intelligence, 1419–1426.. AAAI Press. https://dl.acm.org/doi/10.5555/3304415.3304617. Goldberg DE, Deb K (1991) A comparative analysis of selection schemes used in genetic algorithms. Found Genet Algoritm 1:69–93. Carlson SE (1996) Genetic algorithm attributes for component selection. Res Eng Des 8(1):33–51. Zhao M, Fu AM, Yan H (2001) A technique of three-level thresholding based on probability partition and fuzzy 3-partition. IEEE Trans Fuzzy Syst 9(3):469–479. Xue B, Zhang M, Browne WN, Yao X (2016) A survey on evolutionary computation approaches to feature selection. IEEE Trans Evol Comput 20(4):606–626. Zhang D-Q, Chang S-F (2004) Detecting image near-duplicate by stochastic attributed relational graph matching with learning In: Proceedings of the 12th Annual ACM International Conference on Multimedia, 877–884.. Association for Computing Machinery. https://doi.org/10.1145/1027527.1027730. 
Dasigi P, Jawahar CV (2008) Efficient graph-based image matching for recognition and retrieval In: Proceedings of National Conference on Computer Vision, Pattern Recognition. http://web2py.iiit.ac.in/publications/default/download/inproceedings.pdf.f4613d92-8ea2-4905-b7fb-08702c4b301d.pdf. Lamdan Y, Schwartz JT, Wolfson HJ (1988) Object recognition by affine invariant matching In: Computer Vision and Pattern Recognition (CVPR), 335–344.. IEEE. https://doi.org/10.1109/CVPR.1988.196257. Wiskott L, Fellous J-M, Kruger N, Von Der Malsburg C (1997) Face recognition by elastic bunch graph matching In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol 19, 775–779.. IEEE. https://doi.org/10.1109/34.598235. Bhatt HS, Bharadwaj S, Singh R, Vatsa M (2010) On matching sketches with digital face images In: 2010 Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 1–7.. IEEE. https://doi.org/10.1109/BTAS.2010.5634507. Perez CA, Lazcano VA, Estevez PA (2007) Real-time iris detection on coronal-axis-rotated faces. IEEE Trans Syst Man Cybern C (Appl Rev) 37(5):971–978. Sobol' IM (1967) On the distribution of points in a cube and the approximate evaluation of integrals. USSR Comput Math Math Phys 7(4):86–112. Sarkar S, Das S (2013) Multi-level image thresholding based on two-dimensional histogram and maximum tsallis entropy - a differential evolution approach. IEEE Trans Image Process 22(12):4788–4797. Maulik U, Saha I (2010) Automatic fuzzy clustering using modified differential evolution for image classification. IEEE Trans Geosci Remote Sens (TGRS) 48(9):3503–3510. This work is supported by the Foundation for the Fusion of Science and Technology, JSPS KAKENHI Grant Number [JP20K19568], and Tateisi Science and Technology Foundation. Takumi Nakane, Naranchimeg Bold and Haitian Sun contributed equally to this work. Department of Engineering, University of Fukui, Fukui, Japan Takumi Nakane & Chao Zhang Department of Electrical Engineering and Computer Science, Iwate University, Iwate, Japan Naranchimeg Bold, Haitian Sun & Takuya Akashi Department of School of Info Technology, Deakin University, Waurn Ponds, Australia Xuequan Lu Correspondence to Chao Zhang. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Nakane, T., Bold, N., Sun, H. et al. Application of evolutionary and swarm optimization in computer vision: a literature survey. IPSJ T Comput Vis Appl 12, 3 (2020). https://doi.org/10.1186/s41074-020-00065-9
Kinetic control concept for the diffusion processes of paracetamol active molecules across affinity polymer membranes from acidic solutions

Sanae Tarhouchi ORCID: orcid.org/0000-0003-3658-95771, Rkia Louafy1, El Houssine El Atmani1 & Miloudi Hlaïbi1

BMC Chemistry volume 16, Article number: 2 (2022)

The paracetamol compound remains the most widely used pharmaceutical as an analgesic and antipyretic for pain and fever, and it is often identified in aquatic environments. The elimination of this compound from wastewater is one of the critical operations carried out by advanced industries. The objective of our work was to assess studies based on membrane processes, using two membranes, a polymer inclusion membrane and a grafted polymer membrane containing gluconic acid as the extractive agent, for extracting and recovering the paracetamol compound from aqueous solutions. The elaborated membranes were characterized using Fourier-transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). Kinetic and thermodynamic models were applied to determine the values of the macroscopic (P and J0), microscopic (D* and Kass), activation, and thermodynamic parameters (Ea, ΔH≠, ΔS≠, ΔH≠diss, and ΔH≠th). All results showed that the PVA–GA membrane performed better than its GPM–GA counterpart, with apparent diffusion coefficient values (10^7 D*) of 41.807 and 31.211 cm2 s−1, respectively, at T = 308 K. In addition, the extraction process for these membranes was more efficient at pH = 1. The relatively low values of the activation energy (Ea), activation association enthalpy (ΔH≠ass), and activation dissociation enthalpy (ΔH≠diss) indicated that the oriented processes studied across the adopted membranes are controlled kinetically much more than energetically. The results presented for the quantification of these oriented membrane processes support clean, sustainable, and environmentally friendly methods for the extraction and recovery of the paracetamol molecule as a high-value substance.

In the last few decades, increasing attention has been paid to pharmaceutical industries that generate liquid wastes containing several pollutants and toxic substances [1,2,3,4]. These pollutants induce undesirable effects on the ecosystem and can cause unexpected consequences and unintended effects on living organisms [5,6,7,8]. Consequently, treating these wastes has become a major environmental issue for modern pharmaceutical industries and scientific research institutions. New technologies for the extraction, separation, and elimination of organic or inorganic substances, and for the recovery of value-added molecules from these effluents, must be developed [9,10,11,12] to minimize and reduce the formation rate of toxic products [13]. Paracetamol is the raw material of many pharmaceutical products. Owing to its commercial and medical uses, modern industries use special methods to produce this active ingredient, which is not effectively removed by conventional methods during wastewater treatment. Thus, this pharmaceutical compound remains in municipal effluents, and different paracetamol concentrations have been detected in various parts of the world [14, 15]. Long-term exposure to drugs containing this active pharmaceutical ingredient can cause severe damage to humans and other animals [16,17,18,19]. Therefore, its recovery and extraction from industrial liquid waste is the need of the hour.
In recent years, membrane processes have been reported for various applications [20,21,22], such as the removal, purification, recovery, and extraction of organic compounds present in liquid wastewater. Membrane-based technology has become critical and has attracted much attention as a valuable technology for many industries due to its distinctive capability for the selective and efficient extraction of target species (e.g., ions/small molecules). It is an environmentally friendly alternative that considerably reduces the volume of chemical products used, and minimal energy is consumed during the process. These methods are successfully applied in several fields, such as the environment, energy, health, water treatment, and the cosmetic, food, chemical, and pharmaceutical industries. Depending on their structure, composition, and morphology, a wide range of membranes (including organic polymer membranes) can be developed for use in different fields. These favorable properties and functionalities exhibit clear and important advantages compared to other separation and extraction techniques such as resin separation, liquid–liquid extraction (ELL), solid-phase extraction (EPS), and chromatography [23,24,25,26,27]. These properties help in determining the selectivity parameters in particular. In general, the extraction mechanism through a membrane is based on facilitated diffusion. These oriented membranes that promote facilitated extraction are now the subject of several studies. Facilitated extraction membranes employ chemicals (hereafter denoted the extractive agent) that specifically and reversibly react with the target species to form (substrate–extractive agent) complexes, then transport the complexes from the feed phase to the receiving phase, allowing the regeneration of the substrate via the reverse reactions. The separation efficiency of facilitated extraction through polymer membranes is principally governed by the reaction kinetics at the membrane/aqueous solution interfaces, together with the extraction rate of the (substrate–extractive agent) complexes through the membrane matrix. According to the mobility of the extractive agent and the physicochemical properties of the facilitated extraction membrane, an extraction mechanism based on successive jumps of the substrate via semi-mobile and fixed extractive agent sites has been proposed [28]. The studied membranes, polymer inclusion membranes (PIMs) and grafted polymer membranes (GPMs), as two major types of facilitated extraction membranes, have attracted much attention in fundamental studies and practical applications [29,30,31,32]. Due to their simple preparation steps, stability, good chemical resistance, good mechanical properties, and particularly the stable integration of the extractive agent into the polymer support, special attention is paid to PIMs and GPMs [33,34,35,36]. This study highlights the development of a clean and sustainable treatment process for the pharmaceutical industry. Accordingly, in our laboratory, experiments related to the facilitated extraction of paracetamol, which is used here as a model drug to evaluate the extraction capabilities of the membrane process, were carried out to extract the active substance from the liquid solution. Our challenge was to determine a proper and selective extractive agent and examine its effectiveness in developing a stable and efficient membrane for extracting paracetamol.
We also aimed to evaluate the parameters needed to achieve high recovery, high throughput, and short processing time. This extraction process was performed using a PIM and a GPM containing gluconic acid (GA) as the extractive agent. The prepared membranes were characterized by two techniques, (i) Fourier-transform infrared spectroscopy (FTIR) and (ii) scanning electron microscopy (SEM), to confirm the presence of the extractive agent in the polymeric support. The developed membranes were used to perform oriented processes of facilitated extraction and recovery of the paracetamol substrate under the influence of the initial substrate concentration, acidity, and temperature of the medium. The dynamics of mass transfer and the effect of the different factors on the extraction of the paracetamol substrate are discussed. The kinetically determining step, which controls the rate of paracetamol extraction when PIMs and GPMs are used, has been elucidated by analyzing the kinetic data.

Methods/experimental

Chemicals and reagents

Paracetamol was purchased from ICN Biomedicals. All polymers, polyvinyl alcohol (PVA) (Mw = 72,000 g mol−1), polysulfone (PSU) (Mw = 35,000 g mol−1), polyvinyl-pyrrolidone (PVP) (Mw = 45,000 g mol−1), and GA (Mw = 218.2 g mol−1), as well as the solvents N,N-dimethylformamide (DMF, 99.8%) and dimethyl sulfoxide (DMSO > 99.8%), are commercial products (Aldrich, Fluka). Double distilled water was used in all experiments. The pH of the aqueous solutions was adjusted with an analytical grade solution of hydrochloric acid (HCl) from Sigma.

Instruments and apparatus

The acidities of the aqueous solutions (feed phase and receiving phase) were measured using a pH meter (HANNA Instruments HI 8519N). A UV–visible spectrophotometer (Rayleigh UV-2601) was used to determine the paracetamol concentration (CR) in the receiving phase. Two infrared spectrophotometers, AVATAR 360 FTIR ESP and JASCO model 4600, were used to record the FTIR spectra and identify the presence of the extractive agent in the polymer matrix. Similarly, scanning electron microscopy (SEM) was used to produce micrographs and study the morphology and porosity of the developed membranes (ZEISS EVO40 EP and JEOL NeoScope JCM-500). Their thicknesses were measured using an electronic micrometer (Mitutoyo).

Membrane preparation

To conduct the oriented processes of the facilitated extraction of paracetamol, we prepared two types of polymer membranes, a PIM and a GPM, based on polyvinyl alcohol and polysulfone as polymer supports, with the same extractive agent (GA). The adopted GPM was developed according to the following experimental protocol [37]: 3 g of polysulfone dissolved in 13 cm3 of dimethylformamide (DMF) was introduced into a closed bottle to isolate the mixture from the air. The system was stirred for 12 h until the polysulfone was completely solubilized. Next, 0.625 g of polyvinylpyrrolidone (PVP) was added to this homogeneous solution, followed by the slow addition of a mass equivalent to 3 × 10−3 mol of GA. The mixture was stirred for 3 to 4 days to solubilize the extractive agent and produce a homogeneous phase. The resulting phase was cast on a glass plate and then spread with a ruler. The glass plate was rapidly immersed in a bath containing distilled water. The DMF solvent leaves the membrane matrix, and a rigid, paper-like membrane was obtained (phase inversion method) [38, 39]. After this operation, the GPM membrane was dried, and its mass (0.030 g) and thickness (l = 162 µm) were determined.
Its total surface area (10 cm2) was measured and the concentration of the extractive agent, [T]0 = 0.20 mol L−1, was calculated. The PIM [40] was prepared by dissolving 10 g of polyvinyl alcohol (PVA) in a mixture of 20 cm3 of DMSO and 80 cm3 of distilled water. The mixture was stirred for 24 h at a temperature of 120 °C to dissolve the PVA in the solution. To this homogeneous solution, a mass equivalent to 3 × 10−3 mol of GA was added slowly under constant stirring to avoid polymer aggregation. The resulting solution was poured carefully into a Petri dish and placed in an oven at a temperature of 70 to 80 °C to evaporate the solvent completely. The heating temperature promotes solvent evaporation, allowing the polymer and extractive agent chains to come together. This step is important in the PIM development process, as it facilitates the cross-linking between the extractive agent and the polymer, inducing faster cross-linking kinetics [41]. The PIM obtained by this experimental protocol (heat vulcanization method) was homogeneous, transparent, flexible, and mechanically strong [42, 43]. Its thickness was measured (l = 228 µm) and the extractive agent concentration was calculated ([T]0 = 0.30 mol L−1).

Experimental protocol for the facilitated extraction of paracetamol

The experimental cell (Additional file 1: Fig. S1) was used to carry out the facilitated extraction processes of the paracetamol compound. It consists of two compartments of identical volume separated by the produced membrane. The feed phase (F) contained the paracetamol solution in the concentration range of 0.01 to 0.08 mol L−1, and the receiving phase (R) contained distilled water [40, 44]. The aqueous phase volume was 70 cm3 in each compartment. The system was immersed in a thermostatic bath (TB) containing water to keep the temperature constant throughout the experimental procedure. Homogeneity was ensured by using a multi-station magnetic stirrer. Samples were collected from the receiving phase every 30 min and were measured at the absorption maximum wavelength (λmax = 244 nm). The measured membrane dimensions are needed to calculate the membrane volume and hence the fixed concentration [T]0 of GA in the membrane phase. Before adopting these membranes for the facilitated extraction of the paracetamol substrate under different experimental conditions, various studies of their compositions and morphologies were performed.

Fourier transform-infrared (FTIR) analysis

After drying the samples for 48 h to remove traces of residual water and solvent, the obtained membranes (PSU–PVP) and (PSU–PVP–GA) were characterized by FTIR spectroscopy (Fig. 1) to record the vibration bands corresponding to the membrane components. The PSU–PVP/GA membrane spectrum shows that all the characteristic absorption bands of the PSU + PVP support are present. The FT-IR spectra of the PSU + PVP support membrane show that the peaks in the range of 700–1400 cm−1 correspond to the PSU fingerprint, and the two vibration peaks at 1462 and 1424 cm−1 correspond to the tertiary amine group of the PVP copolymer. The spectrum also indicates the presence, at around 3200–3600 cm−1, of a characteristic broad absorption band corresponding to the alcohol (OH) group. A peak at 1720 cm−1 was also observed, which was attributed to the vibration of the C=O group of GA. These spectral evolutions prove that the extractive agent GA has been successfully integrated into the polymer matrix of the membrane.
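As a side note on the quantification step described in the extraction protocol above, the sketch below shows one way the receiving-phase absorbance readings at λmax = 244 nm could be converted into CR values through a Beer–Lambert calibration line. This is an illustrative sketch only, not the authors' procedure; the calibration slope, intercept and dilution factor are placeholders that would have to be fitted to real paracetamol standards.

```python
# Hedged sketch: converting UV absorbance at 244 nm into paracetamol
# concentration C_R (mol/L) via a Beer-Lambert calibration A = k*C + b.
# k, b and the dilution factor below are placeholders, not values from the paper.

def absorbance_to_concentration(absorbances, k=1.0e4, b=0.0, dilution=100.0):
    """Return C_R (mol/L) for each absorbance reading, assuming each sample
    was diluted by `dilution` before measurement so that A stays in the
    linear range of the calibration."""
    return [dilution * (a - b) / k for a in absorbances]

# Example: absorbances sampled every 30 min from the receiving phase
samples = [0.05, 0.11, 0.18, 0.24]
print(absorbance_to_concentration(samples))
```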
The FT-IR spectrum for the PSU + PVP support and the PSU/PVP–GA membrane

The PIM was analyzed and characterized using FT-IR and SEM in the same manner as the membrane described in the previous section. The results confirmed that the extractive agent was trapped in the polymer matrix of the membrane, whose porosity increased with the concentration of the extractive agent. Figure 2 shows the FTIR spectra of the PVA support and PVA–GA membranes. The common stretching vibration bands of the PVA polymer are: 3283 to 3400 cm−1, attributed to the OH stretching vibration; 2850 to 3000 cm−1, associated with the asymmetric stretching vibration of CH2 or CH; and the bands at 1327 and 1424 cm−1, due to the bending vibrations of CH2 and CH3. The inclusion of the GA agent in the PVA support is expected to increase the number of hydroxyl groups. As a result, the absorbance intensity of the –OH band increases, and a new, slightly intense peak for the vibration of the C=O (carboxylic) bond appears at 1660 cm−1. A homogeneous dispersion of the extractive agent in the polymer matrix has a cross-linking effect due to the formation of covalent bonds involving chemical interactions between the polymer functional groups and the organic acid at high temperature [45, 46]. Several cross-linking methods have been published for different uses since, as a rule, all multifunctional compounds capable of reacting with hydroxyl groups can be used to obtain tridimensional networks in PVA [47, 48]. In addition, heat treatment above the glass transition temperature is also used as a means of achieving the same result [49, 50].

The FT-IR spectrum for the PVA support and the PVA–GA membrane

Scanning electron microscopy (SEM) analysis

Various samples of the elaborated membranes were visualized using the SEM technique. The samples were irradiated with an electron beam (15 kV). This study was carried out under suitable magnification, with the electrons precisely focused for better visualization of the membrane surface and to properly record SEM micrographs of the upper surface of the polymer support (PSU + PVP) and the GPM membrane (PSU + PVP + GA). SEM images of the membranes with different compositions are grouped in Fig. 3.

SEM micrographs: a support polymer cross-section (PSU/PVP), b, c membrane cross-section (PSU/PVP–GA)

The SEM micrograph presented in Fig. 3a represents the morphology of the polymer support (PSU–PVP). A considerably smooth and dense surface without apparent porosity was observed. Figure 3b, c reveal that the extractive agent was efficiently grafted onto the membrane phase and influenced the structure, morphology, and porosity of the polymeric support. The synthesized membrane contained pores along the membrane width (surface layers; Fig. 3b, c). Figure 4 shows the SEM images of the two prepared membranes, PVA and PVA–GA. These SEM micrographs generally show a remarkable change in morphology and porosity with the inclusion of the extractive agent in the polymeric support. Image (a) corresponds to the surface of the PVA support and clearly shows that the surface is homogeneous and smooth without apparent porosity. In contrast, the membrane modified by the inclusion of gluconic acid exhibits a clearly porous structure with largely homogeneous porosity (b, c) within the polymer matrix.
SEM micrographs of support polymer surface (PVA) (a) and membrane surface (PVA–GA) (b)

Degree of swelling

The degree of swelling versus time was investigated by measuring the change in weight of the membrane before and after swelling. Membrane samples of 3 × 3 cm were immersed in distilled water at pH = 1, 2 and 3 for 48 h. The membranes were taken out of the water at each time tx, carefully wiped with absorbent paper, and quickly weighed. The increase in weight of the film was determined at preset time intervals until a constant weight was observed. The experiments were performed in triplicate, and average values are reported. The degree of swelling was calculated using the following equation [51, 52]:

$$\mathrm{DS}\,(\%) = \frac{W_{t} - W_{0}}{W_{0}} \times 100,$$

where Wt is the weight of the film at time t, W0 is the weight of the film at time zero, and the value of Wt is the average of three weighings for each membrane. Additional file 1: Figure S2 depicts the degree of swelling of the PVA-based PIM and PIM–GA membranes at pH = 1, 2 and 3. The results reveal that the pH of the medium has no appreciable effect on the swelling of the PIM–GA membranes, and that cross-linking improves their mechanical properties [53, 54]. In addition, PIMs cross-linked by GA have DS ≤ 23.5%, compared to DS < 52% for the PVA-only membrane. This confirms that cross-linking with GA reduces the degree of swelling. The cross-linking efficiency and swelling ratio of the membranes are the main parameters defining their physicochemical properties. The same experiment was conducted for the GPM, and this membrane maintained practically the same weight after immersion in distilled water. The membrane has a well-aligned layer structure that does not swell. This result is probably because the hydrophobic polymer properties and the type of cross-linking play an important role in effectively stabilizing the membrane and preventing it from swelling. Furthermore, Gupta et al. [55] explained that the cross-linking factor influences the swelling behavior and hence the resistivity of membranes; the higher resistivity of both the 2% and 4% cross-linked membranes at higher graft levels is therefore due to the lower water content, as observed in the swelling behavior.

Theoretical models for quantification of processes

The facilitated extraction processes for a substrate S were conducted using an affinity polymer membrane. The process depends on the association and dissociation of the substrate–extractive agent entity (ST) at the membrane–solution interfaces and in the membrane phase during substrate diffusion. To quantify the processes carried out and to study the performances of the adopted membranes, kinetic and thermodynamic models based on Fick's first and second laws and on a saturation law of the extractive agent (T) by the substrate (S) have been developed in the laboratory [37, 40, 56,57,58]. The "association/dissociation" equilibrium is described by the following relationships:

$$P \times (t - t_{I}) = \left(\frac{l \times V}{S}\right)\left[\frac{1}{2}\ln\left(\frac{C_{0}}{C_{0} - 2C_{R}}\right)\right], \quad (1)$$

$$J_{0} = \frac{D^{*}}{l} \times \left[\frac{[T]_{0} \times K_{ass} \times C_{0}}{1 + K_{ass} \times C_{0}}\right]. \quad (2)$$

Here l is the membrane thickness (cm), S the membrane active area (cm2) and V the receiving phase volume (cm3). C0, CR, and [T]0 are the initial substrate concentration in the feed phase (mol L−1), the substrate concentration in the receiving phase at time t (mol L−1), and the extractive agent concentration in the organic phase (mol L−1), respectively. P is the membrane permeability (cm2 s−1), J0 the initial flux of the substance across the membrane (mmol s−1 cm−2), Kass the association constant of the entity ST (L mol−1), and D* the apparent diffusion coefficient of the substrate S through the membrane phase (cm2 s−1). If the kinetic model is verified, after an induction time (tI) the function − Ln (C0 − 2CR) versus time evolves linearly. The slope (a) of the obtained straight line allows the determination of the permeability parameter P according to the following equation [59, 60]:

$$P = \frac{a \times V \times l}{2S}. \quad (3)$$

The initial flux J0 can then be calculated from the permeability coefficient P:

$$J_{0} = \frac{P \times C_{0}}{l}. \quad (4)$$

To determine the nature of the movement of the substrate S during its diffusion through the membrane phase and to elucidate the mechanism that governs the studied processes, it is necessary to determine the values of the microscopic parameters D* and Kass. We used the Lineweaver–Burk (L–B) method to linearize the expression in Eq. 2, according to the following equation [44, 61]:

$$\frac{1}{J_{0}} = \frac{l}{D^{*}} \times \left[\frac{1}{[T]_{0} \times K_{ass}} \times \frac{1}{C_{0}} + \frac{1}{[T]_{0}}\right]. \quad (5)$$

A linear evolution of the term 1/J0 = f (1/C0) (Eq. 5) confirms that the thermodynamic model, based on the interaction of the substrate (S) with the extractive agent (T) in the membrane phase, is verified. The slope (p) and intercept (OO) of the obtained straight line are used to calculate the values of D* and Kass according to the following equation:

$$K_{ass} = \frac{intercept\,(OO)}{slope} \quad \text{and} \quad D^{*} = \frac{l}{OO} \times \frac{1}{[T]_{0}}. \quad (6)$$

The initial flux is related to the temperature factor by the Arrhenius law [62, 63]:

$$J_{0}(T) = A_{j}\,\exp\!\left(\frac{-E_{a}}{RT}\right), \quad (7)$$

where R is the gas constant (8.314 J mol−1 K−1), Aj a term proportional to the favorable interactions (mol−1 s−1 m2), and Ea the transition-state activation energy of the formation–dissociation reaction of the entity (ST) (J mol−1). The expression is linearized as follows:

$$\ln J_{0} = \frac{-E_{a}}{R} \times \frac{1}{T} + \ln A_{j}. \quad (8)$$

The values of the activation parameters Ea and Aj were determined from the slope and the intercept of the linear function Ln (J0) = f (1/T). According to the transition state theory (Eyring theory), these values allow the calculation of the activation enthalpy ΔH≠ (J mol−1) and entropy ΔS≠ (J K−1 mol−1) parameters from the following equation:

$$\Delta H^{\ne} = E_{a} - 2500\ (\mathrm{J\ mol^{-1}}) \quad \text{and} \quad \Delta S^{\ne} = R\,(\ln A_{j} - 30.46)\ (\mathrm{J\ K^{-1}\ mol^{-1}})\ \text{at}\ 298\ \mathrm{K}. \quad (9)$$

The thermodynamic enthalpy parameter ΔH≠th (kJ mol−1) represents the amount of energy exchanged during the equilibrium reaction related to the formation of the ST entity. The value of this parameter is determined directly from the slope of the linear representation of Van't Hoff's law (Eq. 10):

$$\ln (K_{ass}) = \frac{-\Delta H_{th}^{\ne}}{RT} + cste. \quad (10)$$

On the other hand, according to the transition state theory, for an elementary reaction this important thermodynamic parameter is related to the activation enthalpies of association, ΔH≠ass, and dissociation, ΔH≠diss (kJ mol−1), by the following relation:

$$\Delta H_{th}^{\ne} = \Delta H_{ass}^{\ne} - \Delta H_{diss}^{\ne}. \quad (11)$$

Influence of the initial substrate concentration (C0) on the performance of the developed membranes

Before adopting PIM–GA and GPM–GA, we carried out experiments related to the extraction of paracetamol through PVA and PSU–PVP membranes without the extractive agent. These membranes were found to be impermeable, which confirms that the extractive agent is essential: it is responsible for the interactions with the target species and for their diffusion through the membrane phase. In this section, we examine the effect of C0 on the evolution of the macroscopic parameters P and J0 for the facilitated extraction of paracetamol through all the developed membranes. The processes were studied at different C0 values (0.08, 0.04, 0.02, and 0.01 mol L−1) at pH = 1 and T = 298 K. At all concentrations, the kinetic model was verified, and the function − Ln (C0 − 2CR) = f (t) generated straight lines (Fig. 5). The values of P and J0 were determined from the slopes of the straight lines (according to the expressions in Eqs. 3 and 4) and are presented in Table 1.
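The parameter-estimation chain defined by Eqs. (1)–(6) reduces to two linear regressions: the slope of − Ln (C0 − 2CR) versus time gives P and J0, and the Lineweaver–Burk line gives D* and Kass. The sketch below, which is not taken from the original work, shows one way to implement it; the membrane constants (l, S, V, [T]0) are illustrative values borrowed from the experimental section, and numpy is assumed.

```python
# Hedged sketch (not the authors' code): P and J0 from the kinetic model
# (Eqs. 1, 3, 4), then D* and Kass from the Lineweaver-Burk fit (Eqs. 5, 6).
import numpy as np

l = 0.0228   # membrane thickness (cm), PIM example value
S = 10.0     # active membrane area (cm^2), assumed
V = 70.0     # receiving-phase volume (cm^3)
T0 = 0.30    # extractive-agent concentration in the membrane phase (mol/L)

def permeability(t, CR, C0):
    """t (s) and CR (mol/L), already restricted to the post-induction period.
    Fit -ln(C0 - 2*CR) = a*t + b, then P = a*V*l/(2*S) and J0 = P*C0/l."""
    y = -np.log(C0 - 2.0 * np.asarray(CR))
    a, _b = np.polyfit(t, y, 1)          # slope of the kinetic function
    P = a * V * l / (2.0 * S)            # Eq. (3)
    J0 = P * C0 / l                      # Eq. (4)
    return P, J0

def lineweaver_burk(C0_list, J0_list):
    """Fit 1/J0 against 1/C0 (Eq. 5); with slope p and intercept OO,
    Kass = OO/p and D* = l/(OO*[T]0) (Eq. 6)."""
    x = 1.0 / np.asarray(C0_list)
    y = 1.0 / np.asarray(J0_list)
    p, OO = np.polyfit(x, y, 1)
    return OO / p, l / (OO * T0)         # (Kass, D*)
```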
Evolution of the kinetic function − Ln (C0 − 2CR) = f (t) for paracetamol extraction through the developed membranes at different concentrations C0, pH = 1 and T = 298 K

Table 1 Evolution of the P and J0 parameters for the oriented extraction processes of the paracetamol substrate at T = 298 K

Analysis of the results grouped in Table 1 demonstrates that the membranes used are effective for paracetamol extraction. Based on the obtained values of the macroscopic parameters (P and J0), the PIM membrane was more efficient than its GPM counterpart. It was also noticed that the permeability P of the adopted membranes varies inversely with the initial paracetamol concentration C0 in the feed phase: an increase in the substrate concentration leads to a decrease in the parameter P. In contrast, the initial flux of paracetamol (J0) through each of the membranes increases with the substrate concentration C0. This can be explained by the fact that, during facilitated extraction of the substrate across the membrane, the association/dissociation mechanism of paracetamol with the extractive agent is faster when the initial substrate concentration is higher, owing to the larger concentration difference between the feed and receiving phases (concentration gradient). Moreover, the results obtained for P indicate that this parameter is influenced by the competition of the substrate molecules to diffuse through the membrane phase. This evolution of the values of the parameters P and J0 for this oriented process has been observed and reported in previous works on similar processes for the extraction of some organic compounds and metal ions [64,65,66,67].

Acidity factor influence on the evolution of paracetamol extraction processes

To investigate the effect of the acidity of the feed and receiving aqueous solutions on the extraction efficiency through the adopted membranes, a series of experiments was performed at different pH values (1, 2, and 3). Different substrate concentrations (0.01–0.08 mol L−1) were used for the experiments. The values of the macroscopic parameters P and J0 were determined at each pH value (Table 2). The Lineweaver–Burk (L–B) representation 1/J0 = f (1/C0) was plotted using the values of the initial fluxes. The slopes and intercepts of the straight lines are shown in Fig. 6. The D* and Kass values (microscopic parameters) were estimated, and the results are presented as histograms in Fig. 7.

Table 2 Evolution of P and J0 with pH during the extraction of paracetamol at T = 298 K

Lineweaver–Burk representations (1/J0 = f (1/C0)) for the facilitated extraction processes across the PIM and GPM membranes

Influence of acidity on the evolution of the D* and Kass parameters for paracetamol extraction through the developed PIM and GPM membranes

According to the results grouped in Table 2, it is clear that the pH of the aqueous solutions does not significantly influence the oriented extraction processes of paracetamol. On the other hand, it has been confirmed that the performance of the PIM membrane is better than that of its GPM counterpart in all three acidic media. As shown in Fig. 7, the apparent diffusion coefficient D* and the association constant Kass vary inversely. The highest values of D* and the lowest values of Kass are obtained for the most efficient membrane (PIM). These results explain the performances of the developed polymer membranes. The low values of Kass indicate that the (paracetamol–GA) entity in the membrane phase of the PIM is less stable, which is reflected in a higher diffusion rate, in contrast to the GPM.
The high values of D* suggest, first, that the diffusion of the substrate through the PIM is conditioned by successive interactions of the substrate molecules with semi-mobile interaction sites of the extractive agent in the membrane phase, and second, that the passage of paracetamol through the GPM is a diffusion movement by successive jumps of the substrate molecules from one fixed site of the extractive agent to another (Additional file 1: Fig. S3).

Temperature influence on the evolution of oriented extraction processes of paracetamol

To confirm the previous results and determine the activation and thermodynamic parameters, we examined the influence of the temperature factor on the evolution of the extraction process. The experiments were conducted at the most favorable acidity (pH = 1), with C0 varied in the range of 0.01 to 0.08 mol L−1, and at different temperatures (298, 303, and 308 K). The values of the macroscopic parameters P and J0 are summarized in Table 3. The data reveal the impact of temperature on the facilitated extraction processes employed for paracetamol extraction: an increase in temperature leads to an increase in membrane performance. It was noted that the permeability and initial fluxes through the PIM membrane were higher than those of the GPM membrane at all temperatures. To complete the study, we plotted the L–B curve (1/J0 = f (1/C0)) (Fig. 8).

Table 3 Evolution of the P and J0 parameters according to the medium temperature for the oriented extraction processes of paracetamol

The L–B representations (1/J0 = f (1/C0)) for the oriented extraction processes of paracetamol through the PIM and GPM membranes at different temperatures

The linear evolution verified the adopted thermodynamic model, and the slopes and intercepts of the straight lines were used to determine the values of the apparent diffusion coefficient D* and the association constant Kass. These two parameters relate to the movement of the paracetamol molecules as they diffuse through each membrane. The values of these specific parameters and their evolution as a function of temperature are presented by the histograms in Fig. 9.

Evolution of the specific D* and Kass parameters according to the medium temperature for the facilitated extraction process of paracetamol through the elaborated membranes

The results obtained for the microscopic parameters (Fig. 9) indicate the inverse evolution of Kass and D*, confirming that an increase in temperature leads to a decrease in the stability of the ST entity formed in the membrane phase by the interaction between the substrate S and the extractive agent T. Indeed, the low stability of the (ST) entity (translated by low values of Kass) explains the faster diffusion of the substrate (S). The high D* and lower Kass values obtained at high temperatures are consistent with the improved membrane performance. The high values of the apparent diffusion coefficient (D*) might indicate that the movement of the paracetamol molecule across the organic phase of the PIM and GPM membranes containing GA as the extractive agent is not pure diffusion. In addition, the reviews and papers published by Hlaibi et al. [68, 69] on the extraction of some organic compounds through SLM-type membranes indicate identical evolutions of the specific parameters Kass and D*, with a similar mechanism. Moreover, the values of the Kass and D* parameters show that, in the membrane phase, the interactions between the molecules of the organic compounds and the extractive agent are weak.
In contrast, the values of the apparent diffusion coefficient (D*) are high. At this stage of the studies, we confirmed that the PIM membrane is more efficient than its GPM counterpart in terms of performance.

Activation and thermodynamic parameters for the studied extraction processes

To elucidate whether an energetic or a kinetic aspect controls the mechanism of the studied processes, and to explain the performances of the prepared membranes, it is necessary to determine the values of the activation and thermodynamic parameters (Ea, ΔH≠ass, ΔS≠, ΔH≠diss, and ΔH≠th) corresponding to the transition state of the substrate diffusion step across each organic membrane phase. For this, we studied the evolution of the J0 and Kass values with the temperature factor according to the Arrhenius (Ln (J0,mean) = f (1/T)) and Van't Hoff (Ln (Kass) = f (1/T)) relationships (Eqs. 8 and 10, respectively) (Fig. 10). The slopes and intercepts determined from the obtained straight lines were used to determine the values of the activation and thermodynamic parameters.

Evolution of the Arrhenius and Van't Hoff relationships for the paracetamol extraction processes through the PIM and GPM membranes

Table 4 presents the values of all the activation and thermodynamic parameters. Analysis of the activation parameters indicates that the transition state corresponding to the diffusion step requires little energy (Ea and ΔH≠ass). On the other hand, the negative activation entropy (ΔS≠) indicates that the transition state is highly ordered and depends on the substrate and extractive agent structures and on the orientation of their interaction sites. These results indicate that a favorable orientation of the interaction sites is required to achieve a good association between the paracetamol molecules and GA in the transition state with the bidentate sites (ΔS≠ = − 300 J mol−1 K−1) (Additional file 1: Figs. S4 and S5). In addition, the low values of the important parameters ΔH≠ass and ΔH≠diss reveal the kinetic control of the mechanisms of the oriented processes, leading to good membrane performance even at low temperatures. This kinetic control of the diffusion of paracetamol molecules through affinity polymer membranes can be related to the structure of the molecules and to the pharmacological and biological activities of molecules that diffuse through cell membranes at constant temperature.

Table 4 Activation and thermodynamic parameters corresponding to the transition state of the extraction process occurring through the developed membranes

The very low values of the Ea, ΔH≠ass, and ΔH≠diss parameters for the facilitated extraction process across PIM–GA explain the good performance of this membrane type compared with its GPM–GA counterpart. Moreover, they confirm the influence of the temperature factor and the inverse evolution of the Kass and D* parameters. They also indicate that the substrate migrates through the membrane phase by a mechanism of successive jumps of the substrate molecules between semi-mobile interaction sites of the extractive agent in the PIM membrane phase, whereas the diffusion of paracetamol across the GPM is a movement of successive jumps from one fixed site of the extractive agent to another. Indeed, several studies [28, 70, 71] have confirmed these types of mechanisms, in which the substrate moves while binding successively to several semi-mobile or fixed extractive agents (considered as complexation sites).
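The activation and thermodynamic parameters discussed above follow directly from two more linear fits on the temperature series (Eqs. 7–11). The sketch below is illustrative only, with placeholder J0 and Kass values rather than the data in Table 4; the assignment of the Arrhenius enthalpy to the association step, used here to recover ΔH≠diss through Eq. (11), is an assumption made for the example.

```python
# Hedged sketch (placeholder data): activation/thermodynamic parameters
# from the Arrhenius and Van't Hoff fits of Eqs. (7)-(11).
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(T, J0):
    """ln J0 = -Ea/R * (1/T) + ln Aj (Eq. 8); then, per Eq. (9),
    dH_act = Ea - 2500 J/mol and dS_act = R*(ln Aj - 30.46)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(J0), 1)
    Ea = -slope * R
    return Ea, Ea - 2500.0, R * (intercept - 30.46)

def vant_hoff(T, Kass):
    """ln Kass = -dH_th/(R*T) + cste (Eq. 10); returns dH_th in J/mol."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(Kass), 1)
    return -slope * R

# Example with placeholder values at 298, 303 and 308 K
T = [298.0, 303.0, 308.0]
Ea, dH_ass, dS = arrhenius(T, J0=[2.1e-7, 2.6e-7, 3.2e-7])
dH_th = vant_hoff(T, Kass=[35.0, 30.0, 26.0])
dH_diss = dH_ass - dH_th   # Eq. (11) rearranged, under the assumption above
```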
Reversible association–dissociation reactions leading to the formation and decomposition of an unstable "host–guest" complex were carried out.

Test for membrane stability

The stability test of the elaborated PIM and GPM was conducted under several conditions, and high stability was observed at the tested pH values and temperatures during the extraction of paracetamol. The stability of the PIM and GPM membranes in an acidic medium was determined by repeating, every 1–3 days at the end of the workday, an extraction of paracetamol under the same conditions; during the day, the membranes were also used for other experiments. The membranes were stable for about 6 months. This result is in good accordance with the experiments described in previous studies [61, 70]. Moreover, after 6 months the membranes were used for the same experiments without losing their effectiveness: they provided practically the same results as those obtained in the first experiment (a gap of 4.2% in the case of the PIM and 3.8% for the GPM). Furthermore, no degradation of the membrane morphology occurred during the investigation. Therefore, it can be affirmed that the PIM and GPM membranes based on PVA and PSU with gluconic acid as the extractive agent were stable and reproducible over the studied period. Additional file 1: Fig. S6 presents the evolution of the permeability for the facilitated extraction processes of paracetamol at C0 = 0.08 M, pH = 1 and T = 298 K over a period of 6 months. Moreover, the membrane stability was also evaluated in terms of membrane mass change [72, 73]. Before and after the experiments, PIM and GPM membrane pieces were carefully weighed, and a mass loss of between 7 and 18% of the total weight was found for the PIM and of 5–11% for the GPM. Hence, these results support the use of these membranes, since they preserve their performance features, such as low cost and the possibility of preparing selective membranes, while providing the stability necessary for long-term experiments. The membranes obtained after the extraction process were recovered and conditioned for SEM imaging. The SEM observations shown in Additional file 1: Fig. S7 provide a qualitative view of the membrane morphology and give an idea of the stability of the PIM and GPM membranes after the extraction step. The almost identical morphology proves that the adopted membranes preserve the same characteristics before and after the extraction experiments. Furthermore, for the PIM, this confirms that the relative swelling rate does not seem to influence its morphology.

Conclusion

This work conceptualized and quantified the performances and mechanisms of oriented membrane processes for the facilitated extraction of paracetamol through a PIM and a GPM. The two membranes, PIM–GA and GPM–GA, were developed following the heat vulcanization and phase inversion methods, respectively, and characterized by FT-IR and SEM techniques. The influence of the operating conditions (substrate concentration, acidity, and temperature) on the evolution of the different parameters was investigated, and the best paracetamol extraction was obtained for C0 = 0.01 mol L−1 at pH = 1 and 308 K, with D* = 29.812 × 10−7 cm2 s−1 through the PIM membrane. Analysis of the obtained results shows good membrane performance even at low temperatures and indicates that a kinetic aspect controls the mechanism of extraction of this biologically active compound (paracetamol) through the two developed membranes.
We can conclude that the paracetamol molecules can potentially diffuse through cell walls having well-adapted structures at a constant temperature. Consequently, the kinetic control of the extraction processes of paracetamol is an original idea, the studies produced logical results, and this control can be correlated with the molecular structures of paracetamol and the extractive agent. After all these studies, we consider that the adopted membranes would be very efficient for extracting and recovering paracetamol from industrial liquid discharges, providing a clean, sustainable, and environmentally friendly method for the extraction and recovery of the paracetamol molecule as a high-value substance.

Abbreviations

[T]0: Concentration of carrier in the membrane phase (mol L−1)
C0: Initial concentration of paracetamol in the feed phase (mol L−1)
CR: Concentration of paracetamol in the receiving phase (mol L−1)
P: Permeability of the membrane (cm2 s−1)
J0: Initial flux (mmol s−1 cm−2)
D*: Apparent diffusion coefficient (cm2 s−1)
Kass: Association constant (L mol−1)
l: Membrane thickness (µm)
t: Time (s)
V: Volume of the receiving phase (cm3)
S: Active area of the membrane (cm2)
R: Ideal gas constant (J mol−1 K−1)
Ea: Activation energy (kJ mol−1)
ΔH≠ass: Activation association enthalpy (kJ mol−1)
ΔH≠diss: Activation dissociation enthalpy (kJ mol−1)
ΔS≠: Activation entropy (J K−1 mol−1)
ΔHth: Thermodynamic enthalpy (kJ mol−1)

References

Ang WL, Mohammad AW, Hilal N, Leo CP. A review on the applicability of integrated/hybrid membrane processes in water treatment and desalination plants. Desalination. 2015;363:2–18. https://doi.org/10.1016/j.desal.2014.03.008. Boleda MR, Galceran MT, Ventura F. Validation and uncertainty estimation of a multiresidue method for pharmaceuticals in surface and treated waters by liquid chromatography–tandem mass spectrometry. J Chromatogr A. 2013;1286:146–58. https://doi.org/10.1016/j.chroma.2013.02.077. Kümmerer K. The presence of pharmaceuticals in the environment due to human use—present knowledge and future challenges. J Environ Manag. 2009;90:2354–66. Shi X, Leong KY, Ng HY. Anaerobic treatment of pharmaceutical wastewater: a critical review. Bioresour Technol. 2017;245:1238–44. https://doi.org/10.1016/j.biortech.2017.08.150. Wang Y, Huang H, Wei X. Influence of wastewater precoagulation on adsorptive filtration of pharmaceutical and personal care products by carbon nanotube membranes. Chem Eng J. 2018;333:66–75. https://doi.org/10.1016/j.cej.2017.09.149. Comtois-Marotte S, Chappuis T, Vo Duy S, Gilbert N, Lajeunesse A, Taktek S, et al. Analysis of emerging contaminants in water and solid samples using high resolution mass spectrometry with a Q exactive orbital ion trap and estrogenic activity with YES-assay. Chemosphere. 2017;166:400–11. Garcia-Rodríguez A, Fontàs C, Matamoros V, Almeida MIGS, Cattrall RW, Kolev SD. Development of a polymer inclusion membrane-based passive sampler for monitoring of sulfamethoxazole in natural waters. Minimizing the effect of the flow pattern of the aquatic system. Microchem J. 2016;124:175–80. Defarges TM, Guerbet M, Massol J. Impact des médicaments sur l'environnement: état des lieux, évaluation des risques, communication. Ther Recreat J. 2011;66:335–40. Galambos I, Molina JM, Jaray P, Vatai G, Bekassy-Molnar E. High organic content industrial wastewater treatment by membrane filtration. Desalination. 2004;162:117–20. Tang Y, Liu W, Wan J, Wang Y, Yang X.
Two-stage recovery of S-adenosylmethionine using supported liquid membranes with strip dispersion. Process Biochem. 2013;48:1980–91. https://doi.org/10.1016/j.procbio.2013.09.006. Zouhri A, Ernst B, Burgard M. Bulk liquid membrane for the recovery of chromium (VI) from a hydrochloric acid medium using dicyclohexano-18-crown-6 as extractant-carrier. Sep Sci Technol. 1999;34:1891–905. Kumar S, Babu BV. Separation of carboxylic acids from waste water via reactive extraction. In: International convention on water resources development and management. 2008. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.207.1142&rep=rep1&type=pdf%0Ahttp://discovery.bits-pilani.ac.in/discipline/chemical/bvb/SK_BVB_CarboxAcid_ICWRDM_2008.pdf. LaPara TM, Nakatsu CH, Pantea LM, Alleman JE. Aerobic biological treatment of a pharmaceutical wastewater: effect of temperature on COD removal and bacterial community development. Water Res. 2001;35:4417–25. Ziylan A, Ince NH. The occurrence and fate of anti-inflammatory and analgesic pharmaceuticals in sewage and fresh water: treatability by conventional and non-conventional processes. J Hazard Mater. 2011;187:24–36. https://doi.org/10.1016/j.jhazmat.2011.01.057. Parolini M. Toxicity of the non-steroidal anti-inflammatory drugs (NSAIDs) acetylsalicylic acid, paracetamol, diclofenac, ibuprofen and naproxen towards freshwater invertebrates: a review. Sci Total Environ. 2020;740: 140043. https://doi.org/10.1016/j.scitotenv.2020.140043. Kosjek T, Heath E, Krbavcic A. Determination of non-steroidal anti-inflammatory drug (NSAIDs) residues in water samples. Environ Int. 2005;31:679–85. Andreozzi R, Raffael M, Nicklas P. Pharmaceuticals in STP effluents and their solar photodegradation in aquatic environment. Chemosphere. 2003;50:1319–30. Stumpf M, Ternes TA, Wilken R, Rodrigues SV, Baumann W. Polar drug residues in sewage and natural waters in the state of Rio de Janeiro, Brazil. Sci Total Environ. 1999;225:135–41. Comeau F, Surette C, Brun GL, Losier R. The occurrence of acidic drugs and caffeine in sewage effluents and receiving waters from three coastal watersheds in Atlantic Canada. Sci Total Environ. 2008;396:132–46. Zidi C, Tayeb R, Dhahbi M. Comparaison entre le transport facilité à travers une Membrane à Liquide Supporté (MLS) du phénol et de la vanilline extraits de milieux aqueux (Comparison between facilitated transport through a supported liquid membrane (SLM) of phenol and vanillin extr). J Mater Environ Sci. 2014;5:779–82. Senhadji OK, Sahi S, Kahloul N, Tingry S, BenAmor M, Seta P. Extraction du Cr(VI) par membrane polymere a inclusion. Sci Technol A. 2008;27:43–50. Dzygiel P, Wieczorek PP. Supported liquid membranes and their modifications: definition, classification, theory, stability, application and perspectives. In: Liquid membranes. Amsterdam: Elsevier; 2010. p. 73–140. Strathmann H. Membranes and membrane separation processes, 1. Princ Ullmann's Encycl Ind Chem. 2011. https://doi.org/10.1002/14356007.a16. Charcosset C. Procédés membranaires à application pharmaceutique et biotechnologique. ITBM-RBM. 2006;27:1–7. https://doi.org/10.1016/j.rbmret.2005.10.003. Baudot A, Floury J, Smorenburg HE. Liquid-liquid extraction of aroma compounds with hollow fiber contactor. AIChE J. 2001;47:1780–93. Liu X, Ma Y, Cao T, Tan D, Wei X, Yang J, et al. Enantioselective liquid-liquid extraction of amino acid enantiomers using (S)-MeO-BIPHEP-metal complexes as chiral extractants. Sep Purif Technol. 2019;211:189–97. https://doi.org/10.1016/j.seppur.2018.09.068. 
Shintani H. Liquid–liquid extraction vs solid phase extraction in biological fluids and drugs. Int J Clin Pharmacol Toxicol. 2013. https://doi.org/10.19070/2167-910X-130004e. Eljaddi T, Lebrun L, Hlaibi M. Review on mechanism of facilitated transport on liquid membranes. J Membr Sci Res. 2017;3:199–208. Jean E, Villemin D, Hlaibi M, Lebrun L. Heavy metal ions extraction using new supported liquid membranes containing ionic liquid as carrier. Sep Purif Technol. 2018;201:1–9. https://doi.org/10.1016/j.seppur.2018.02.033. Xie R, Chu L-Y, Deng J-G. Membranes and membrane processes for chiral resolution. Chem Soc Rev. 2008;37:1243–63. Ershad M, Almeida MIGS, Spassov TG, Cattrall RW, Kolev SD. Polymer inclusion membranes (PIMs) containing purified dinonylnaphthalene sulfonic acid (DNNS): performance and selectivity. Sep Purif Technol. 2018;195:446–52. https://doi.org/10.1016/j.seppur.2017.12.037. Galiano F, Briceño K, Marino T, Molino A, Christensen KV, Figoli A. Advances in biopolymer-based membrane preparation and applications. J Membr Sci. 2018;564:562–86. https://doi.org/10.1016/j.memsci.2018.07.059. O'Rourke M, Cattrall RW, Kolev SD, Potter ID. The extraction and transport of organic molecules using polymer inclusion membranes. Solvent Extr Res Dev. 2009;16:1–12. Nghiem LD, Mornane P, Potter ID, Perera JM, Cattrall RW, Kolev SD. Extraction and transport of metal ions and small organic compounds using polymer inclusion membranes (PIMs). J Membr Sci. 2006;281:7–41. Fontas C, Tayeb R, Dhahbi M, Gaudichet E, Thominette F, Roy P, et al. Polymer inclusion membranes: the concept of fixed sites membrane revised. J Membr Sci. 2007;290:62–72. Zaidi SMJ, Mauritz KA, Hassan MK. Membrane surface modification and functionalization. Functional Polymers. 2018;1–26. Mouadili H, Majid S, Kamal O, Elatmani ELH, Touaj K, Lebrun L, et al. New grafted polymer membrane for extraction, separation and recovery processes of sucrose, glucose and fructose from the sugar industry discharges. Sep Purif Technol. 2018;200:230–41. https://doi.org/10.1016/j.seppur.2017.12.012. Haponska M, Trojanowska A, Nogalska A, Jastrzab R, Gumi T, Tylkowski B. PVDF membrane morphology—influence of polymer molecular weight and preparation temperature. Polymers. 2017;9:718. https://doi.org/10.3390/polym9120718. Article CAS PubMed Central Google Scholar Kang GD, Cao YM. Application and modification of poly(vinylidene fluoride) (PVDF) membranes—a review. J Membr Sci. 2014;463:145–65. https://doi.org/10.1016/j.memsci.2014.03.055. El Atmani EH, Benelyamani A, Mouadili H, Tarhouchi S, Majid S, Touaj K, et al. The oriented processes for extraction and recovery of paracetamol compound across different affinity polymer membranes. Parameters and mechanisms. Eur J Pharm Biopharm. 2018;126:201–10. https://doi.org/10.1016/j.ejpb.2017.06.001. Jiang S, Ladewig BP. Green synthesis of polymeric membranes: recent advances and future prospects. Curr Opin Green Sustain Chem. 2019. https://doi.org/10.1016/j.cogsc.2019.07.002. Rafiq S, Deng L, Hägg MB. Role of facilitated transport membranes and composite membranes for efficient CO2 capture—a review. ChemBioEng Rev. 2016;3:68–85. Deng L, Kim TJ, Hägg MB. Facilitated transport of CO2 in novel PVAm/PVA blend membrane. J Membr Sci. 2009;340:154–63. Louafy R, Benelyamani A, Tarhouchi S, Kamal O, Touaj K, Hlaibi M. Parameters and mechanism of membrane-oriented processes for the facilitated extraction and recovery of norfloxacin active compound. Environ Sci Pollut Res. 2020;27:37572–80. 
https://doi.org/10.1007/s11356-020-09311-0. Heydari M, Moheb A, Ghiaci M, Masoomi M. Effect of cross-linking time on the thermal and mechanical properties and pervaporation performance of poly(vinyl alcohol) membrane cross-linked with fumaric acid used for dehydration of isopropanol. J Appl Polym Sci. 2013;128:1640–51. Işiklan N, Şanli O. Separation characteristics of acetic acid-water mixtures by pervaporation using poly(vinyl alcohol) membranes modified with malic acid. Chem Eng Process. 2005;44:1019–27. Korsmeyer RWPN. Effect of the morphology of hydrophilic polymeric. J Membr Sci. 1981;9:211–27. Giménez V, Mantecon A, Ronda JCCV. Poly (vinyl alcohol) modified with carboxylic acid anhydrides: crosslinking through carboxylic groups. J Appl Polym Sci. 1997;65:1643–51. Mallapragada SKPN. Dissolution mechanism of semicrystalline poly (viny1 alcohol) in water. J Polym Sci B Polym Phys. 1996;34:1339–46. Hassan CM, Peppas NA. Structure and applications of poly (vinyl alcohol) hydrogels produced by conventional crosslinking or by freezing/thawing methods. Adv Polym Sci. 2000;153:37–65. Peh KK, Wong CF. Polymeric films as vehicle for buccal delivery: swelling, mechanical, and bioadhesive properties. J Pharm Pharm Sci. 1999;2:53–61. Fumio U, Hiroshi Y, Kumiko N, Sachihiko N, Kenji S, Yasunori M. Swelling and mechanical properties poly (vinyl alcohol) hydrogels. Int J Pharm. 1990;58:135–42. Turković E, Vasiljević I, Drašković M, Obradović N, Vasiljević D, Parojčić J. An investigation into mechanical properties and printability of potential substrates for inkjet printing of orodispersible films. Pharmaceutics. 2021;13:468. Brant AJC, Giannini DR, Pessoa JOCP, Andrade AB. Influence of dissolution processing of PVA blends on the characteristics of their hydrogels synthesized by radiation—Part I: gel fraction, swelling, and mechanical properties. Radiat Phys Chem. 2012;81:1465–70. Gupta B, Büchi FN, Scherer GG, Chapiró A. Crosslinked ion exchange membranes by radiation grafting of styrene/divinylbenzene into FEP films. J Membr Sci. 1996;118:231–8. Hassoune H, Rhlalou T, Verchère JF. Mechanism of transport of sugars across a supported liquid membrane using methyl cholate as mobile carrier. Desalination. 2009;242:84–95. https://doi.org/10.1016/j.desal.2008.03.033. Kamal O, Eljaddi T, El Atmani EH, Touarssi I, Mourtah I, Lebrun L, et al. Process of facilitated extraction of vanadium ions through supported liquid membranes: parameters and mechanism. Adv Mater Sci Eng. 2017. https://doi.org/10.1155/2017/3425419. Eljaddi T, Hor M, Benjjar A, Riri M, Mouadili H, Mountassir Y, et al. New supported liquid membrane for studying facilitated transport of U (VI) ions using tributyl phosphate (TBP) and Tri-n-octylamine (TOA) as carriers from acid medium. BAOJ Chem. 2015;1:1–9. Touaj K, Tbeur N, Hor M. A supported liquid membrane (SLM) with resorcinarene for facilitated transport of methyl glycopyranosides: parameters and mechanism relating to the transport. J Membr Sci. 2009;337:28–38. Chaouqi Y, Ouchn R, Tarik E, Amane J, Elbouchti M, Cherkaoui O. Oriented processes for extraction and recovery of blue P3R dye across hybrid polymer membranes: parameters and mechanism. J Membr Sci Res. 2019;5:303–9. Touarssi I, Mourtah I, Chaouqi Y, Kamal O, Sefiani N, Lebrun L, et al. Conceptualization and quantification of oriented membrane processes for recovering vanadium ions from acidic industrial discharges. J Environ Chem Eng. 2019;7: 103182. Chaouqi Y, Ouchn R, Touarssi I, Mourtah I, El Bouchti M, Lebrun L, et al. 
Polymer inclusion membranes for selective extraction and recovery of hexavalent chromium ions from mixtures containing industrial blue P3R dye. Ind Eng Chem Res. 2019;58:18798–809. Eyring H. The activated complex in chemical reactions. J Chem Phys. 1935;3:107–15. Lazarova Z, Boyadzhiev L. Kinetic aspects of copper (II) transport across liquid membrane containing LIX-860 as a carrier. J Membr Sci. 1993;78:239–45. Benjjar A, Hor M, Riri M, Eljaddi T, Kamal O, Lebrun L, et al. A new supported liquid membrane (SLM) with methyl cholate for facilitated transport of dichromate ions from mineral acids: parameters and mechanism relating to the transport. J Mater Environ Sci. 2012;3:826–39. Ohashi H, Ebina S, Yamaguchi T. Logistic gate-like permeable property of gating membrane with ion-recognition polyampholyte. Polymer. 2014;55:1412–9. https://doi.org/10.1016/j.polymer.2013.11.048. Ma P, Chen XD, Hossain MM. Lithium extraction from a multicomponent mixture using supported liquid membranes. Sep Sci Technol. 2000;35:2513–33. Tbeur N, Rhlalou T, Hlaíbi M, Langevin D, Métayer M, Verchère JF. Molecular recognition of carbohydrates by a resorcinarene. Selective transport of alditols through a supported liquid membrane. Carbohydr Res. 2000;329:409–22. Hlaïbi M, Tbeur N, Benjjar A, Kamal O, Lebrun L. Carbohydrate—resorcinarene complexes involved in the facilitated transport of alditols across a supported liquid membrane. J Membr Sci. 2011;377:231–40. Kamal O, Eljaddi T, Atmani EHEL, Touarssi I, Lebrun L, Hlaïbi M. Grafted polymer membranes with extractive agents for the extraction process of VO2+ ions. Polym Adv Technol. 2017;28:541–8. Li Y, Wang S, He G, Wu H, Jiang Z. Facilitated transport of small molecules and ions for energy-efficient membranes. Chem Soc Rev. 2014;44:103–18. https://doi.org/10.1039/C4CS00215F. Anticó E, Vera R, Vázquez F, Fontàs C, Lu C, Ros J. Preparation and characterization of nanoparticle-doped polymer inclusion membranes: application to the removal of arsenate and phosphate from waters. Materials. 2021;14:1–15. Vera R, Fontàs C, Galceran J, Serra O, Anticó E. Polymer inclusion membrane to access Zn speciation: comparison with root uptake. Sci Total Environ. 2018;622–623:316–24. https://doi.org/10.1016/j.scitotenv.2017.11.316. The authors wish to thank the Ministry of Higher Education and Scientific Research (MESRSFC) and the National Center of Scientific and Technical Research (CNRST) for their financial support PPR2 Project. PPR2 Project: Ministry of Higher Education, Scientific Research and Management Training—National Center for Scientific and Technical Research. Laboratoire Génie des Matériaux pour Environnement et Valorisation (GeMEV), Faculté des Sciences Ain Chock, Hasssan II University of Casablanca (UH2C), PB 5366, Maârif, Maroc Sanae Tarhouchi, Rkia Louafy, El Houssine El Atmani & Miloudi Hlaïbi Sanae Tarhouchi Rkia Louafy El Houssine El Atmani Miloudi Hlaïbi ST did most of the experiment and wrote the manuscript. RL and EAEH did the preparation and characterization of the membrane. MH helped in editing the English language beside adding some paragraphs to the text. All authors read and approved the final manuscript. Correspondence to Sanae Tarhouchi. All authors declare that they have no competing interest, financial or personal, which may influence the work reported in this paper. Representation of the facilitated extraction cell. Figure S2. Swelling degree versus time of different membrane samples at pH = 1, 2 and 3. Figure S3. 
Mechanism of successive jumps on semi-mobile and fixed sites during the facilitated extraction process of paracetamol through the PIM–GA and GPM–GA membranes. Figure S4. Possible interaction sites between paracetamol and gluconic acid. Figure S5. Interaction sites between paracetamol and gluconic acid (ChemDraw). Figure S6. The permeability relative to the facilitated extraction processes of paracetamol at C0 = 0.08 M, pH = 1 and T = 298 K, during a period of six months. Figure S7. SEM micrographs after the extraction process of (a, b) membrane cross-section (GPM–GA), (c, d) membrane surface (PIM–GA). Tarhouchi, S., Louafy, R., El Atmani, E.H. et al. Kinetic control concept for the diffusion processes of paracetamol active molecules across affinity polymer membranes from acidic solutions. BMC Chemistry 16, 2 (2022). https://doi.org/10.1186/s13065-021-00794-7 Keywords: Facilitated extraction; Affinity membranes; Apparent diffusion coefficient; Kinetic and energetic controls
Influence of the GEO satellite orbit error fluctuation correction on the BDS WADS zone correction Binghao Wang, Jianhua Zhou, Bin Wang, Dianwei Cong & Hui Zhang. Decimeter-level service is provided by the BeiDou satellite navigation system wide area differential service (BDS WADS) for users who collect carrier phase measurements. However, fluctuations in the Geostationary Earth Orbit (GEO) satellite orbit errors reduce the spatial correlation of orbit errors. These fluctuations not only decrease the accuracy and stability of the zone correction service provided by the BDS WADS, but also shorten its effective range. In this paper, we propose an algorithm to weaken the influence of GEO satellite orbit error fluctuations and verify the method using data from eight sparsely distributed zones. The results show that orbit errors can be stabilized using orbit fluctuation corrections, and that the positioning precision and stability of the BDS WADS can be improved simultaneously. Under normal circumstances, the horizontal and vertical positioning accuracy of users within 1000 km of the center of the zone can reach 0.19 m and 0.34 m, respectively. Furthermore, the effective range is increased: the positioning performance within 1800 km can reach 0.24 m and 0.38 m for the horizontal and vertical components, respectively. The BDS space segment is a hybrid constellation of GEO, Inclined Geosynchronous Satellite Orbit (IGSO) and Medium Earth Orbit (MEO) satellites. The core constellation of the BDS-3 consists of 24 MEO satellites (Yang et al. 2019a, b) and was completed after four satellites joined the system on December 17, 2019. Soon after, the 54th BDS satellite was launched into orbit on March 9, 2020, and the final GEO satellite is scheduled for launch in June 2020. In addition, all the services designed for the BDS-3 (Yang et al. 2019a, b), namely the fundamental service, the satellite-based augmentation service, the precise point positioning service, the short message communication service, and the search and rescue service, will be available in 2020. As the BDS space segment has continuously improved, its superior service performance has gradually been highlighted and has drawn worldwide attention (Wang et al. 2019; Yang et al. 2018, 2020; Zhang et al. 2019a, b). The latest research results show that the precision of the BDS satellite orbits has been tremendously improved with the addition of an inter-satellite link (Yang et al. 2019a, b, 2020). The radial accuracy of the broadcast orbit can reach 10 cm, as evaluated by the satellite laser ranging (SLR) method, and the signal-in-space user range error (SISURE) is better than 0.5 m (Yang et al. 2020). However, precise orbit determination of the GEO satellites remains a challenge. In addition to the upgrade of the fundamental service, the augmentation service, which uses the differential method, has effectively improved the performance of GNSS positioning and navigation and has produced great social benefits (Li et al. 2020). The evolution of the differential service can be divided into three phases: (1) the pseudorange wide area differential service, based on widely and sparsely distributed ground stations, such as the Wide Area Augmentation System (WAAS) of the United States (Yang et al. 2017), the European Geostationary Navigation Overlay Service (EGNOS) of Europe (Ventura-Traveset et al. 2015), and the System for Differential Corrections and Monitoring (SDCM) of Russia (Lu et al. 2014), which is widely used in precision approaches of civil aviation because of its high integrity (Yu et al. 2019).
The BDS Satellite Based Augmentation Service (BDSBAS) is currently undergoing testing and may open soon (Li et al. 2020). (2) The high precision local area differential service, in which densely distributed monitoring stations and high-capacity communication links are required for model construction and parameter broadcasting. Network Real Time Kinematic (NRTK) systems can provide centimeter-level positioning precision within seconds and have greatly promoted industrialization. (3) Satellite-station differential services, such as Trimble RTX (Krzyżek 2014), StarFire (Dai et al. 2016), Atlas, etc., which make use of globally distributed ground monitoring networks to realize the effective separation and modeling of error sources. Using correction information broadcast by communication satellites, satellite-station services can provide global real-time precise point positioning, which has broad application prospects. To effectively improve the performance of the BDS, the operation control segment of the BDS established a wide area differential system as an alternative to the BDSBAS. The BDS Wide Area Differential Service (WADS) was released in January 2017 and declared decimeter-level positioning accuracy for users collecting dual-frequency carrier phase observations. The differential corrections provided by the BDS WADS consist of Equivalent Satellite Clock (ESC) corrections, ionosphere grid corrections, orbit corrections, and zone corrections. These corrections are generated based on the BDS-2 constellation and monitoring network (Chen et al. 2017; Yang 2017; Zhang 2017). All corrections are broadcast by the BDS GEO satellites. Users can obtain decimeter-level positioning precision by receiving and properly using the aforementioned four types of corrections (Chen et al. 2015; Wang et al. 2017). The fundamental principle of positioning enhanced by the zone correction is the spatial correlation of orbit errors, atmospheric delays, and other error sources between the user and the reference stations (Zhang 2017). Zone corrections can also be called comprehensive corrections. They are broadcast with a period of 36 s for users in the service zone to eliminate errors in carrier phase measurements, enabling quicker convergence and higher precision positioning. The zone correction is a type of Observation Space Representation (OSR) correction, and the service performance is highly related to the distance between the user and the center of the zone. For a real-time high precision augmentation service, two main aspects of improving the user positioning service can be summarized: (1) stable performance at a certain distance between the user and the center of the zone; and (2) maximization of the effective range for a given accuracy requirement. As a significant error source in wide area differential positioning, orbit error inevitably influences the performance of the zone correction service. Fluctuations greater than 10 m were observed in the GEO broadcast orbit during several periods, which may degrade the zone correction service. An algorithm aimed at eliminating orbit error fluctuations is proposed in this contribution and verified using real measurements. After correction, the influence of the GEO orbit error fluctuations on the zone correction is considerably alleviated, the linear correlation between the positioning performance attenuation and the user-reference distance is more explicit, and the effective range of the zone correction is greatly increased.
Influence on the zone correction The positioning model based on the BDS WADS zone correction was elaborately demonstrated in Chen et al. (2018) and Zhang et al. (2017), and ionospheric-free combinations (B1/B2, B1/B3 or B2/B3) are recommended. By augmentation of the zone correction, ionospheric-free phase observations on the user side can be expressed as follows (Chen et al. 2018): $$L_{u}^{\prime s} = \rho_{u}^{\prime s} + \left( {d\rho_{u}^{s} - d\rho_{r}^{s} } \right) + c \cdot \left( {dt_{u} - dt_{r} } \right) + \left( {N_{u}^{s} - N_{r}^{s} } \right) + dT_{u} - dT_{r} + \varepsilon_{L}$$ where \(\rho_{u}^{\prime s}\) is the corrected distance between the satellite \(s\) and station \(u\) after application of the zone correction. In addition, \(c\) is the speed of light in vacuum. The satellite clock has been eliminated. The clock offset of the reference station \(dt_{r}\) would be absorbed by the user clock offset \(dt_{u}\), and has no influence on positioning. The ambiguity offset of the reference station \(N_{r}^{s}\) would be absorbed into the ambiguity of the user \(N_{u}^{s}\) and would not place an extra burden on the parameter estimation process if no cycle slip occurs in the reference station carrier-phase observations. \(dT_{u} - dT_{r}\) is the difference in the troposphere delay between the user and reference station, and it can be regarded as a constant over a short period and can also be absorbed in the float ambiguities on the user's side. \(d\rho_{u}^{s}\) and \(d\rho_{r}^{s}\) are the projection of the orbit errors of the satellite \(s\) along the line of sight (LOS) for the reference station and the user, respectively, and consist of radial, along and cross components. The ESC correction performed before the application of zone correction corrects the radial component of the orbit error together with the satellite clock bias. Therefore, the residual orbit error that exists after the ESC correction is mainly composed of the along and cross components, as follows: $$d\rho_{r}^{s} = \cos \left( {\alpha_{r}^{s} } \right) \cdot a^{s} + \cos \left( {\beta_{r}^{s} } \right) \cdot c^{s}$$ where \(\alpha\) and \(\beta\) are the angles between the along and cross directions and the LOS, respectively. Subscript \(r\) is the reference station mark, while superscript \(s\) represents the satellite. \(a^{s}\) and \(c^{s}\) represent the residual orbit errors of the satellite \(s\) in the along and cross components, respectively. \(del_{orb} = d\rho_{r}^{s} - d\rho_{u}^{s}\) is set to be the difference in the orbit error projection between the user and the reference station. Similar to the troposphere delay, \(del_{orb}\) could be absorbed in the ambiguity as a constant value and introduces no disadvantage to parameter estimation when it is a stable value. However, the positioning precision and stability will deteriorate if \(del_{orb}\) is not stable. If Eq. (2) is substituted into \(del_{orb}\), the resulting equation is as follows: $$del_{orb} = d\rho_{r}^{s} - d\rho_{u}^{s} = a^{s} \cdot \left( {\cos \left( {\alpha_{r}^{s} } \right) - \cos \left( {\alpha_{u}^{s} } \right)} \right) + c^{s} \cdot \left( {\cos \left( {\beta_{r}^{s} } \right) - \cos \left( {\beta_{u}^{s} } \right)} \right)$$ With 18 zones and an effective range set to 1000 km, the WADS zone correction service can realize 100% coverage of China (Zhang 2017). At a distance of 1072 km, two stations in Beijing and Wuhan are selected to demonstrate the influence of the angles \(\alpha\) and \(\beta\) on \(del_{orb}\). 
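Before turning to that example, the geometry in Eqs. (2) and (3) can be made concrete with a short sketch. The following Python snippet computes the line-of-sight projection of the residual orbit error and the differential term \(del_{orb}\) for a single satellite; the error magnitudes and angles are illustrative values invented for the example, not data from the paper.

```python
import math

def los_projection(a_err, c_err, alpha, beta):
    """Eq. (2): project the along (a_err) and cross (c_err) residual orbit
    errors onto the line of sight; alpha and beta are in radians."""
    return math.cos(alpha) * a_err + math.cos(beta) * c_err

def del_orb(a_err, c_err, alpha_r, beta_r, alpha_u, beta_u):
    """Eq. (3): difference of the orbit error projection between the
    reference station r and the user u for one satellite."""
    return (a_err * (math.cos(alpha_r) - math.cos(alpha_u))
            + c_err * (math.cos(beta_r) - math.cos(beta_u)))

# Illustrative numbers only: a 5 m along and 20 m cross orbit error, with
# slightly different viewing geometry at the reference station and the user.
a_err, c_err = 5.0, 20.0                                   # metres
alpha_r, beta_r = math.radians(88.0), math.radians(87.0)   # reference station
alpha_u, beta_u = math.radians(89.0), math.radians(88.2)   # user

print(f"projection at reference station: {los_projection(a_err, c_err, alpha_r, beta_r):.3f} m")
print(f"del_orb (reference minus user):  {del_orb(a_err, c_err, alpha_r, beta_r, alpha_u, beta_u):.3f} m")
```

Because the cosine differences are small (Fig. 1 bounds them to about ± 0.023), \(del_{orb}\) is far smaller than the raw orbit error; what degrades positioning is its instability over time rather than its absolute size.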
Using the results of February 20, 2018 as an example, variations in \(\cos \left( {\alpha_{r}^{s} } \right) - \cos \left( {\alpha_{u}^{s} } \right)\) and \(\cos \left( {\beta_{r}^{s} } \right) - \cos \left( {\beta_{u}^{s} } \right)\) are shown in Fig. 1. Variations in \(\cos \left( {\alpha_{r}^{s} } \right) - \cos \left( {\alpha_{u}^{s} } \right)\) (top) and \(\cos \left( {\beta_{r}^{s} } \right) - \cos \left( {\beta_{u}^{s} } \right)\) (bottom) calculated using data from Beijing and Wuhan from February 20, 2018 Figure 1 shows that the coefficients composed of the angles are limited to ± 0.023, and their change over 10 min is constrained to 4 × 10−5. Therefore, the instability of \(del_{orb}\) is mainly caused by fluctuations in \(a^{s} ,c^{s}\). Extensive experiments were conducted to establish the correlation between the positioning accuracy and the orbit error stability. Assuming several GEO satellites are involved in data processing, the standard deviation (STD) of the \(del_{orb}\) series of each GEO satellite is computed as follows: $$\sigma^{j} = \sqrt {\frac{1}{N}\sum\limits_{i = 1}^{N} {\left( {del_{orb,j}^{i} - mean\left( {del_{orb,j} } \right)} \right)^{2} } }$$ where \(N\) is the length of the \(del_{orb,j}\) series and \(mean\left( * \right)\) is the averaging operation. The maximum value of \(\sigma^{j} \left( {j = 1 \ldots m} \right)\) is then taken as the index describing the orbit error stability. The variation of the accuracy of the B1/B2 dual-frequency positioning relative to the stability of \(del_{orb}\) is shown in Fig. 2. Model of the correlation between the dual-frequency precise point positioning accuracy (RMS) enhanced by the zone correction and orbit error stability Obviously, the instability of \(del_{orb}\) caused by the orbit error in the along and cross directions decreases the positioning performance of the B1/B2 dual-frequency positioning enhanced by the zone correction. Demonstration of the GEO orbit error fluctuations and the correction algorithm Due to the regional distribution of the BDS monitoring network, it has been difficult to precisely determine the orbits of the GEO satellites, especially in the along and cross components. Using the final products of GFZ as a reference, the broadcast GEO orbit error in the first half of 2018 was calculated and large fluctuations were found. Large fluctuations in the cross and along components are shown in Fig. 3. Fluctuations in the along (left) and cross (right) components found in the BDS GEO broadcast ephemeris during the first half of 2018 In Fig. 3, 006 C02 stands for C02 in DOY (day of year) 6. Fluctuations greater than 20 m are shown in the right panel of Fig. 3. In the positioning procedure, the worst measurement becomes the bottleneck of high precision differential positioning. Therefore, as shown in Fig. 3, the fluctuations in the broadcast GEO orbit will limit the improvement of the differential positioning precision. Fluctuations in the GEO broadcast orbit decrease the accuracy and stability of the BDS positioning and make the effective range of the zone correction more ambiguous. To ensure a high performance of the zone correction, fluctuations in the GEO broadcast orbit error should be reasonably corrected. We therefore propose an algorithm for the GEO orbit fluctuation correction, in which the orbit error variations in the cross and along directions between two epochs are estimated based on zone corrections and then used to compensate the GEO orbit error.
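Before detailing the correction algorithm, the stability index defined in Eq. (4) above can be sketched in a few lines. The satellite identifiers and the \(del_{orb}\) series below are synthetic, generated only to illustrate the computation.

```python
import numpy as np

def stability_index(del_orb_series):
    """Eq. (4): per-satellite STD of the del_orb series; the maximum over
    all GEO satellites is used as the orbit error stability index."""
    stds = {sat: float(np.std(values)) for sat, values in del_orb_series.items()}
    return stds, max(stds.values())

rng = np.random.default_rng(0)
series = {
    "C01": 0.02 * rng.standard_normal(144),                               # stable series
    "C02": 0.02 * rng.standard_normal(144) + np.linspace(0.0, 0.3, 144),  # drifting series
}
per_satellite, index = stability_index(series)
print(per_satellite)                         # C02 dominates because of its drift
print("stability index =", round(index, 3))
```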
The orbit error, clock bias, troposphere modeling residual and ambiguity offset are the main components of the zone correction (Zhang 2017). The difference method is used to eliminate the clock bias and ambiguity offset and to weaken the influences of the troposphere modeling residual, then orbit component in zone corrections can be extracted. Furthermore, the variations in the orbit error can be estimated using differenced zone corrections and the least square method. To conduct the difference operation, one GEO satellite is selected as the reference satellite and the double-differenced zone corrections are formed among multiple zones and satellites. After the double-differenced operation, the clock bias is eliminated, and the errors that remain in the double-differenced zone corrections are the double-differenced orbit error, troposphere residual and ambiguity offset, as follows: $$\nabla \Delta \delta L_{r,u}^{i,j} = \delta L_{u}^{j} - \delta L_{r}^{j} - \delta L_{u}^{i} + \delta L_{r}^{i} = \nabla\Delta orb_{r,u}^{i,j} + \nabla\Delta \delta T_{r,u}^{i,j} + \lambda \cdot \nabla\Delta N_{r,u,0}^{i,j} .$$ The double-differenced orbit error can be expanded as follows: $$\begin{aligned} \nabla\Delta orb_{r,u}^{i,j} & = orb_{u}^{j} - orb_{u}^{i} - orb_{r}^{j} + orb_{r}^{i} \\ & = a^{j} \cdot \left( {\cos \left( {\alpha_{u}^{j} } \right) - \cos \left( {\alpha_{r}^{j} } \right)} \right) + c^{j} \cdot \left( {\cos \left( {\beta_{u}^{j} } \right) - \cos \left( {\beta_{r}^{j} } \right)} \right) \\ & \quad - \,a^{i} \cdot \left( {\cos \left( {\alpha_{u}^{i} } \right) - \cos \left( {\alpha_{r}^{i} } \right)} \right) - c^{i} \cdot \left( {\cos \left( {\beta_{u}^{i} } \right) - \cos \left( {\beta_{r}^{i} } \right)} \right). \\ \end{aligned}$$ Furthermore, differences between the adjacent epochs (interval: 600 s) are calculated to eliminate the ambiguity offset and troposphere residual. Using the analysis shown in Fig. 1, the differences in the values of \(\cos \left( {\alpha_{r}^{s} } \right) - \cos \left( {\alpha_{u}^{s} } \right)\) and \(\left( {\cos \left( {\beta_{r}^{s} } \right) - \cos \left( {\beta_{u}^{s} } \right)} \right)\) between the epochs could be ignored. The triple-differenced zone correction can be expressed as follows: $$\begin{aligned}\Delta \nabla\Delta \delta L_{r,u}^{i,j} \left( {t,t - 1} \right) & = \nabla\Delta \delta L_{r,u}^{i,j} \left( t \right) - \nabla\Delta \delta L_{r,u}^{i,j} \left( {t - 1} \right) \\ & = a_{t,t - 1}^{i} \cdot \left( {\cos \left( {\alpha_{u}^{i} \left( t \right)} \right) - \cos \left( {\alpha_{r}^{i} \left( t \right)} \right)} \right) + c_{t,t - 1}^{i} \cdot \left( {\cos \left( {\beta_{u}^{i} \left( t \right)} \right) - \cos \left( {\beta_{r}^{i} \left( t \right)} \right)} \right) \\ & \quad - \,a_{t,t - 1}^{j} \cdot \left( {\cos \left( {\alpha_{u}^{j} \left( t \right)} \right) - \cos \left( {\alpha_{r}^{j} \left( t \right)} \right)} \right) - c_{t,t - 1}^{j} \cdot \left( {\cos \left( {\beta_{u}^{j} \left( t \right)} \right) - \cos \left( {\beta_{r}^{j} \left( t \right)} \right)} \right) \\ \end{aligned}$$ where \(a_{t,t - 1}^{i}\) and \(a_{t,t - 1}^{j}\), \(c_{t,t - 1}^{i}\) and \(c_{t,t - 1}^{j}\) stand for orbit error variations of satellite \(i\) and \(j\) in the along and cross directions between epochs \(t\) and \(t - 1\). 
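The equations that follow stack such triple-differenced residuals into a linear system and solve for the per-satellite along and cross variations by least squares. The sketch below illustrates that estimation step numerically: it assumes 3 GEO satellites and 8 zones with made-up line-of-sight direction cosines, builds the residuals directly from Eq. (7) with assumed "true" variations, and recovers them with numpy's least-squares solver. None of the numbers come from the BDS WADS system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sat, n_zone = 3, 8        # assumed: 3 GEO satellites, 8 zones (the paper uses 5 GEOs)
ref_sat, ref_zone = 0, 0    # reference satellite i and reference zone r

# Assumed direction cosines cos(alpha), cos(beta) per (zone, satellite).
cos_a = 1.0 - 0.05 * rng.random((n_zone, n_sat))
cos_b = 1.0 - 0.05 * rng.random((n_zone, n_sat))

# "True" along/cross variations between epochs t-1 and t (metres),
# ordered [a_1, c_1, a_2, c_2, ...] as in Eq. (8).
x_true = rng.uniform(-2.0, 2.0, 2 * n_sat)

rows, resi = [], []
for u in range(1, n_zone):                 # slave zones
    for j in range(n_sat):                 # slave satellites
        if j == ref_sat:
            continue
        h = np.zeros(2 * n_sat)
        # Non-zero coefficients of Eq. (9); the reference satellite carries the + sign.
        h[2 * ref_sat]     =  cos_a[u, ref_sat] - cos_a[ref_zone, ref_sat]
        h[2 * ref_sat + 1] =  cos_b[u, ref_sat] - cos_b[ref_zone, ref_sat]
        h[2 * j]           = -(cos_a[u, j] - cos_a[ref_zone, j])
        h[2 * j + 1]       = -(cos_b[u, j] - cos_b[ref_zone, j])
        rows.append(h)
        # Triple-differenced residual of Eq. (7), plus a little phase noise.
        resi.append(h @ x_true + 0.001 * rng.standard_normal())

H, y = np.array(rows), np.array(resi)
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
print("estimated variations:", np.round(x_hat, 3))
print("true variations:     ", np.round(x_true, 3))
```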
With multiple triple-differenced zone corrections, orbit error variations can be estimated as follows: $$\left[ {\begin{array}{*{20}c} {resi_{1} } \\ \vdots \\ {resi_{n} } \\ \end{array} } \right]_{n \times 1} = \left[ {\begin{array}{*{20}c} {h_{1,1} } & \ldots & {h_{1,10} } \\ \vdots & \ddots & \vdots \\ {h_{n,1} } & \ldots & {h_{n,10} } \\ \end{array} } \right]_{n \times 10} \cdot \left[ {\begin{array}{*{20}c} {x_{1} } \\ \vdots \\ {x_{10} } \\ \end{array} } \right]_{10 \times 1}$$ where \(resi_{k}\) represents the \(k{\rm th}\) triple-differenced zone correction residual whose reference and slave zones are \(r\) and \(u\), and the reference and slave satellites are \(i\) and \(j\). On the right side of Eq. (8), \(x_{2l - 1}\) and \(x_{2l}\) are the variations in the along and cross components of satellite \(l\), respectively, where \(l = 1\ldots 5\). In each row of the design matrix, coefficients in \(h_{k,1} \ldots h_{k,10}\) remain zero, except the 4 coefficients listed below: $$\begin{array}{*{20}c} \begin{aligned} h_{k,2 \times i - 1} & = \cos \left( {\alpha_{u}^{i} } \right) - \cos \left( {\alpha_{r}^{i} } \right) \\ h_{k,2 \times i} & = \cos \left( {\beta_{u}^{i} } \right) - \cos \left( {\beta_{r}^{i} } \right) \\ h_{k,2 \times j - 1} & = - \cos \left( {\alpha_{u}^{j} } \right) + \cos \left( {\alpha_{r}^{j} } \right) {,j \ne i}.\\ h_{k,2 \times j} & = - \cos \left( {\beta_{u}^{j} } \right) + \cos \left( {\beta_{r}^{j} } \right) \\ \end{aligned} & \\ \end{array}$$ The solutions of Eq. (8) could then be used to correct the GEO broadcast orbit error. Algorithm verification To verify the algorithm proposed previously, 8 sparsely distributed zones on the mainland of China are chosen to estimate the orbit error fluctuation, and the distribution of the selected zones is shown in Fig. 4. Distribution of the 8 selected zones (red triangle) and 8 positioning stations (blue point) used for verification To increase the precision and reliability of the GEO orbit error correction, 5° in longitude or latitude was set as a threshold in zone selection. The orbit error variations in the periods shown in Fig. 3 are estimated. The broadcast GEO orbit errors in the along and cross components are then corrected with the estimated results. The C03 orbit correction results of the DOY 80 and DOY 81 are shown as examples. The along and cross components of the C03 broadcast orbit error in Fig. 5 show apparent fluctuations. After correction, the fluctuations are effectively removed. A constant offset exists in the corrected orbit error, but it will transition into constant value in differential positioning and the results will not be affected. Orbit fluctuation corrected results of C03 in DOY 080 (left) and DOY 081 (right) Observation data from 8 stations are shown in Fig. 4 and corrections from the 2 zones were selected to demonstrate the influence of the orbit correction on the zone correction augmentation positioning. The B1/B2 ionospheric-free combination was used in data processing and the weights of the GEO and IGSO/MEO observations were set to be 0.5:1. The distances between the positioning stations and corresponding zones are shown in Table 1. Table 1 Stations used for verification and corresponding zones Using the results from experiment 9 as an example, the results are shown in Fig. 6. Positioning results with (corrected) and without (original) the fluctuation correction In Fig. 6, the original and corrected terms indicate without and with orbit correction during positioning, respectively. 
The results show that the precision and stability of the positioning are obviously improved when the correction is applied to the GEO broadcast orbit. However, no apparent improvement is observed in the first 2 h, when the ambiguities have not all converged. The reason is that during the convergence period the code measurements contribute more to the parameter estimation than the carrier phase observations do, so the orbit-error-correlated bias is overwhelmed by the code observation error. After convergence, however, the carrier phase observation error and the orbit-error-correlated bias have comparable magnitudes. Therefore, the orbit error fluctuation mainly affects the precision and stability after convergence. All results are summarized in Table 2. Table 2 Positioning results with and without the orbit error fluctuations correction (unit: m) The results in Table 2 show that the level of influence is correlated with the distance between the station and the center of the zone. For example, in experiments 2 and 4, the worst results are achieved. However, a much better result is obtained in experiment 8, even though the distance between station F and the center of the zone was 1780 km. The explanation for this discrepancy is that when an apparent fluctuation exists in the cross direction, the \(c^{s} \cdot \left( {\cos \left( {\beta_{r}^{s} } \right) - \cos \left( {\beta_{u}^{s} } \right)} \right)\) component in Eq. (3) plays a dominant role, which reduces the effective range in the latitude direction more than in the longitude direction. Consider the scenario in which the point 30° N, 116° E is set as the reference point and the area 20°–40° N, 100°–130° E is set as the test area. The variations in the orbit error in the periods shown in Fig. 3 are estimated. If the test area is divided into a 1° × 0.5° grid and the grid points are treated as virtual users, the STD values of \(del_{orb}\) between the virtual users and the reference point can be calculated based on the original and corrected GEO errors. The STDs are then sorted and grouped by similar distances between the user and the reference point. Figure 7 shows the change in the average STD of each group with the distance from the center of the zone. STDs of \(del_{orb}\) in the test area vary with distance from the center of the zone, before (left) and after (right) the fluctuation correction Without the orbit fluctuation correction, the STD values of \(del_{orb}\) relative to distance show poor stability, which causes a complex pattern of precision attenuation as the distance increases; this is not desirable for a wide area differential service. After correction, the stability of the orbit error is improved, which contributes to a more stable differential orbit projection error between the user and its reference station. Namely, users at the same distance will gain similar precision and stability. In addition, the STD of \(del_{orb}\) and the distance present an approximately linear relationship, which indicates that for a given accuracy demand, the effective area of the zone correction is more regular (a circle centered at the center of the zone) than in the original condition. At the same distance, the STDs of \(del_{orb}\) are much more concentrated after correction, which indicates a higher precision and a wider effective range. As mentioned above, at an effective range of 1000 km, 18 zones could realize 100% coverage of the mainland of China.
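The grid analysis behind Fig. 7 can be outlined with a short script: lay out virtual users on the 1° × 0.5° grid, compute their great-circle distances to the 30° N, 116° E reference point, and average the STDs within distance groups. The STD values below are placeholders with a roughly linear trend, since reproducing the real ones would require the orbit-error series themselves.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km on a spherical Earth (R = 6371 km)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

# Virtual users on a 1 deg (longitude) x 0.5 deg (latitude) grid over 20-40 N, 100-130 E.
lons, lats = np.meshgrid(np.arange(100.0, 131.0, 1.0), np.arange(20.0, 40.5, 0.5))
dist = haversine_km(lats, lons, 30.0, 116.0).ravel()

# Placeholder STDs of del_orb for each virtual user (toy values, not paper data).
rng = np.random.default_rng(2)
std_vals = 8e-5 * dist + 0.01 * rng.random(dist.size)

bins = np.arange(0.0, dist.max() + 200.0, 200.0)   # 200 km distance groups
group = np.digitize(dist, bins)
for k in range(1, len(bins)):
    sel = group == k
    if sel.any():
        print(f"{bins[k-1]:5.0f}-{bins[k]:5.0f} km: mean STD {std_vals[sel].mean():.3f} m")
```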
Based on the principle of proximity, the distance between the user and the center of the zone is limited to 1000 km if the service regularly operates. As shown in Fig. 7, with correction, the STDs of \(del_{orb}\) within 1000 km are up to 0.08 m. Similar to the relationship between the STD of \(del_{orb}\) and the positioning precision shown in Fig. 2, the B1/B2 dual-frequency user can achieve 0.19 m and 0.34 m positioning precisions for the horizontal and vertical components, respectively. If the distance expands to 1800 km, the STD of \(del_{orb}\) is still under 0.12 m. Compared with those in Fig. 2, the horizontal and vertical positioning accuracies can still reach 0.24 m and 0.38 m within 1800 km. The effective range is widely expanded. The release of the BDS WADS increases the precision of a user's positioning to the decimeter scale. However, fluctuations in the GEO broadcast orbit are disadvantageous for the positioning precision and stability. In this study, an algorithm to estimate fluctuations in the GEO orbit errors was proposed and verified using real measurements. With the fluctuation correction, the orbit error of GEO was stabilized and stability of differential orbit projection error between user and station was improved effectively, and higher positioning precision can be achieved. In normal service, users in 1000 km range from zone center can acquire a precision of 0.19 m, 0.34 m for horizontal and vertical components with B1/B2 dual-frequency combination observations under the augmentation of zone correction. After the GEO broadcast orbit corrections are applied, the spatial correlation of the orbit error is effectively reduced, and the relationship between the orbit projection error and the positioning accuracy tends to be linear. The pattern of accuracy attenuation with increased distance becomes more consistent. The effective range is widely expanded and decimeter-level positioning precision is available within 1800 km, which will guarantee desirable precision, stability and continuity for WADS users. The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request. Chen, J., Hu, Y., Zhang, Y., & Zhou, J. (2017). Preliminary evaluation of performance of BeiDou satellite-based augmentation system. Journal of Tongji University (Natural Science),045(007), 1075–1082. Chen, J., Zhang, Y., Yang, S., & Wang, J. (2015). A new approach for satellite based GNSS augmentation system: From sub-meter to better than 0.2 meter era. In The ION 2015 Pacific PNT Meeting, Honolulu, Hawaii (pp. 180–184). Chen, J., Zhang, Y., Zhou, J., Yang, S., Hu, Y., & Chen, Q. (2018). Zone correction: A SBAS differential correction model for BDS decimeter-level positioning. Acta Geodaetica et Cartographica Sinica,47(9), 1161–1170. Dai, L., Chen, Y., Lie, A., Zeitzew, M., & Zhang, Y. (2016). StarFire™ SF3: Worldwide centimeter-accurate real time GNSS positioning. In The International technical meeting of the Satellite Division of the Institute of Navigation, Portland (pp. 1–6). Krzyżek, R. (2014). Precision analysis of Trimble RTX surveying technology with xFill function in the context of obtained conversion observations. Reports on Geodesy & Geoinformatics,97(1), 47–70. Li, R., Zheng, S., Wang, E., Chen, J., Feng, S., Wang, D., et al. (2020). Advances in BeiDou Navigation Satellite System (BDS) and satellite navigation augmentation technologies. Satellite Navigation,1(1), 12. https://doi.org/10.1186/s43020-020-00010-2. Lu, L., Ma, Y., & Chen, H. 
(2014). The current status and development of SDCM. In The 5th China satellite navigation conference, Nanjing (pp. 1–6). Ventura-Traveset, J., Echazarreta, C. L. D., Lam, J. P., & Flament, D. (2015). An introduction to EGNOS: The European Geostationary Navigation Overlay System. Dordrecht: Springer. Wang, H., Chen, J., & Zhang, Y. (2017). Realization of BDS single-frequency point positioning of sub-meter accuracy. In The 8th China satellite navigation conference, Shanghai (pp. 1–6). Wang, M., Wang, J., Dong, D., Meng, L., Chen, J., Wang, A., et al. (2019). Performance of BDS-3: Satellite visibility and dilution of precision. GPS Solutions,23(2), 56. Yang, S. (2017). Research on BDS decimeter level SBAS and its performance assessment (pp. 1–137). Shanghai: University of Chinese Academy of Science. Yang, Y., Gao, W., Guo, S., Mao, Y., & Yang, Y. (2019a). Introduction to BeiDou-3 navigation satellite system. Annual of Navigation,66(1), 7–18. Yang, T., Li, R., Chen, J., & Gao, W. (2017). WAAS performance evaluation. In The 8th China satellite navigation conference, Shanghai (pp. 1–6). Yang, Y., Mao, Y., & Sun, B. (2020). Basic performance and future developments of BeiDou global navigation satellite system. Satellite Navigation. https://doi.org/10.1186/s43020-019-0006-0. Yang, Y., Xu, Y., Li, J., & Yang, C. (2018). Progress and performance evaluation of BeiDou global navigation satellite system: Data analysis based on BDS-3 demonstration system. Science China-Earth Sciences,61(5), 614–624. Yang, Y., Yang, Y., Hu, X., Tang, C., Zhao, L., & Xu, J. (2019b). Comparison and analysis of two orbit determination methods for BDS-3 satellites. Acta Geodaetica et Cartographica Sinica,48(7), 831–839. Yu, S., Zhang, X., Guo, F., Li, X., Pan, L., & Ma, F. (2019). Recent advances in precision approach based on GNSS. Acta Aeronautica et Astronautica Sinica,40(3), 22200-022200. Zhang, Y. (2017). Research on real-time high precision BeiDou positioning service system (pp. 1–181). Shanghai: Tongji University. Zhang, Y., Chen, J., Yang, S., & Chen, Q. (2017). Initial Assessment of BDS Zone Correction. China Satellite Navigation Conference (CSNC) 2017 Proceedings (Vol. II, pp. 271–282). Springer Singapore: Singapore. Zhang, B., Jia, X., Sun, F., Xiao, K., & Dai, H. (2019a). Performance of BeiDou-3 satellites: Signal quality analysis and precise orbit determination. Advances in Space Research,64(3), 687–695. https://doi.org/10.1016/j.asr.2019.05.016. Zhang, Z., Li, B., Nie, L., Wei, C., Jia, S., & Jiang, S. (2019b). Initial assessment of BeiDou-3 global navigation satellite system: Signal quality, RTK and PPP. GPS Solutions,23(4), 111. https://doi.org/10.1007/s10291-019-0905-4. This study is supported by the National Natural Science Funds of China (Grant No. 41604032). We sincerely thank the GFZ, which provided the BDS precise orbit products,and the authors also thank Wang Ahao and Professor Chen Junping for their helpful comments and participation. This study is supported by the National Natural Science Funds of China (Grant No. 41604032). 
Beijing Navigation Center, Beijing, 100094, China Binghao Wang & Jianhua Zhou Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai, 200030, China Bin Wang Information Engineering University, Zhengzhou, 450052, China Dianwei Cong National Defense University Joint Operations College, Shijiazhuang, 050001, China Hui Zhang Binghao Wang Jianhua Zhou BW and JZ conceived and developed the algorithms; HZ and DC performed the experiments; BW and BW analyzed the data; and BW wrote the paper. All authors read and approved the final manuscript. Correspondence to Jianhua Zhou. Wang, B., Zhou, J., Wang, B. et al. Influence of the GEO satellite orbit error fluctuation correction on the BDS WADS zone correction. Satell Navig 1, 18 (2020). https://doi.org/10.1186/s43020-020-00020-0 Wide area differential service Decimeter-level service Zone correction GEO orbit error Effective range
Research | Open | Published: 16 March 2018 Reliable design for virtual network requests with location constraints in edge-of-things computing San-mei Zhang ORCID: orcid.org/0000-0001-8458-87421 & Arun Kumar Sangaiah2 EURASIP Journal on Wireless Communications and Networkingvolume 2018, Article number: 65 (2018) | Download Citation How to efficiently map virtual networks (VNs) onto a shared physical network is a challenging issue in the field of network virtualization in edge-of-things computing. Since an efficient VN mapping approach can reduce network resource consumption, lower latency, and enhance service reliability, it is important for both customers and network service providers. In this paper, we study the problem of mapping multiple VNs with geographic location constraints onto a physical network while considering the survivability and reliability requirements of each VN request in edge-of-things based data centers. We present the model of this problem and propose a Geographic-Guided Survivable Multiple VN Mapping (GG-SMVNM) algorithm to efficiently solve this problem, which simultaneously considers resource sharing and mapping VN links and nodes in edge-of-things computing. Furthermore, we conduct a large amount of simulations to validate and evaluate our proposed approach. The simulation results show that the proposed method is superior to the existing solution. The emerging edge computing technologies [1,2,3], Internet of Things (IoT) [4, 5], and rich cloud services [6, 7] are used to create novel edge-of-things computing. In it, the data processing occurs in part at the network edge or between the cloud-to-end that can best meet customer necessities rather than entirely processing the data in a comparatively fewer number of massive clouds. Operators use the edge-of-things computing paradigm to provide network and computing services in a flexible and resource-efficient way [8,9,10,11]. Network virtualization is one of the main technologies and promoters of edge-of-things computing. Network virtualization allows multiple heterogeneous virtual networks (VNs) to share the same physical network in edge-of-things computing [12,13,14,15,16]. Due to the increasing popularity of edge-of-things computing, a great deal of research has been conducted on network virtualization and virtual network mapping technology [17,18,19,20,21,22,23,24]. A network virtualization environment (NVE) [25] is composed of shared resources (i.e., physical network with resource capacity) and virtual network (VN) requests in edge-of-things computing. A set of VN links and VN nodes makes up a VN request. Every VN node needs a fixed amount of nodal resources (i.e., storage resources, CPU and memory) to execute the edge-of-things computing services and applications, and each VN link that connects two VN nodes needs a great deal of communication bandwidth to exchange the data and information between the connected VN nodes. The progress of virtual network (VN) mapping is quite complicated because of the constraints of virtual links and nodes, despite knowing all VNs in advance. The VN mapping process is composed of two steps: the mapping of VN nodes and the mapping of VN links. Even if the mapping of all the virtual nodes is accomplished, the mapping of virtual links is still complicated. As a result, there are many VN mapping algorithms that can map as many VNs onto the physical network of edge-of-things computing as possible and minimize the VN mapping costs [26,27,28,29,30]. 
The VN mapping algorithm proposed in [26] maps the VNs under the guidance of minimizing mapping costs. However, it does not take the nodal survivability into consideration. The author comes up with the multiple VN mapping problems that consider survivability in [29], which introduces the VN mapping algorithm that considers the circumstance of physical network link failures. Research in [27] studies the VN mapping problem. It shares the backup resource among different VNs without considering the backup resource sharing of a single VN during the mapping process. The author in [28] researched the redeployment and migration problem of the dynamic VN. The authors in [29, 30] study the problem of VN mapping while considering the local failures of the physical network in edge-of-things computing. Although many algorithms for VN mapping in edge-of-things computing have been designed, few of these algorithms take the effect of VN nodes' geographic constraints into account. Moreover, no contribution has been made on the backup resource sharing among multiple VNs arriving simultaneously when node survivability is taken into consideration. For example, the 1-redundant method and the K-redundant method introduced in [31] can realize working and backup link resource sharing when mapping backup nodes and links, although they only apply for a single VN. Meanwhile, the working and backup link resource sharing can be achieved among multiple VNs arriving spontaneously when at most one physical node fails at a certain time. Moreover, nodal resource sharing can be realized thanks to the geographic constraint, which is left to further exploration and research. In this paper, we study the problem of resource efficiency and reliable VN mapping/deployment in edge-of-things computing. In our research, we take the geographic constraint of each VN node into consideration. We realize the resource sharing inside every VN and the resource sharing link among VNs while mapping multiple VNs arriving spontaneously on the premise that virtual nodes' survivability is guaranteed in edge-of-things computing applications. The main contributions of this paper are as follows: We study the problem of reliable mapping for VN requests with location constraints in edge-of-things computing. We propose the model and design an efficient algorithm for the studied problem. We conduct extensive simulations to evaluate the performance of our proposed algorithms. The remainder of the paper is arranged as follows. Section 5 outlines the problem statements and is followed by the heuristic algorithms in Section 3. Section 4 gives the detailed simulation results. Finally, Section 5 gives the conclusion of this paper. Problem statement and formulation Virtual network request In this work, we research the issue of mapping multiple VN requests that guarantees the nodes' survivability while at most only one node of physical network (PN) fails at any time in edge-of-things computing and while considering the geographic location constraints of VN nodes. Figure 1 gives an example of three VNs simultaneously arriving. Note that, in practice, the number of simultaneously arriving VNs is random. Every VN can be described as G v = (N v , L v ), where N v represents the set of virtual nodes with resource requirements corresponding to each node (such as the CPU and the memory capacity of physical nodes) and L v represents the set of virtual links. 
Every link has a bandwidth requirement for physical links to guarantee the communication between two VN nodes connected by a virtual link. Additionally, every node of the VN request has a geographic coordinate constraint. For example, the coordinate (x, y) beside the virtual node v0 in the virtual network VN0 means the geographic position of virtual node v0, thus constraining the range of physical nodes that v0 can be mapped onto. Every virtual node of each VN has its own geographic position constraint in Fig. 1, while only the coordinate of v0 is labeled in Fig. 1. An example of VN mapping Physical network in edge-of-things computing The physical network of edge-of-things computing is composed of multiple data centers that are dispersed across multiple geographical locations interconnected by a network. Like the VN request, a physical network model can be represented as G S = (N S , L S ), where N S represents the set of physical nodes of the physical network (every node of which provides a physical resource such as CPU and memory capacity with corresponding geographic coordinates) and L S represents the set of physical links of the physical network (with every physical link providing physical bandwidth resources to satisfy the communication demand between physical nodes). Furthermore, when a physical node fails, the corresponding physical links also fail. Hence, the virtual nodes mapped onto the failed physical node need to be migrated to another physical node that is not failed, and the corresponding virtual links also need to migrate. In this paper, there is at most only one physical node that fails at all times in the physical network of edge-of-things computing. Reliable virtual network provisioning A substrate edge-of-things computing physical network PN is represented as G S = (N S , L S ), and several virtual network requests arrive simultaneously. Mapping constraints They include the geographical position constraints of virtual nodes, the resource demands of virtual links, and the resource demands of virtual nodes. Precondition There is at most one physical node failure at any time, and if there is a virtual node mapped onto it, the virtual node and adjacent links need to be migrated and recovered. Under the precondition above with the mapping constraints, the problem is how to design and realize a mapping algorithm that can map several VNs arriving simultaneously onto the physical network and guarantee the survivability of virtual nodes. It must realize the resource sharing in every VN and also the resource sharing among VNs to save the physical resources of edge-of-things computing, with the aim of minimizing the mapping costs and getting a generally comparatively good multiple VN mapping result. Residual resources The residual resources include the residual bandwidth of every physical link and the residual node resource of every physical node in edge-of-things computing. The residual bandwidth capacity R l (l s ) of physical link l s represents the total amount of bandwidth available on link l s . $$ {R}_l\left({l}_s\right)=\mathrm{Cap}\left({l}_s\right)-\sum \limits_{l_v\in {S}_{l_s}}\mathrm{req}\left({l}_v\right) $$ where Cap(l s ) denotes the resource capacity of physical link l s , req(l v )represents the amount of link resource requirements of VN link l v , and \( {S}_{l_s} \)indicates the set of VN links mapped on the physical link l s . 
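As a small illustration of Eq. (1) (the residual node capacity defined next follows the same pattern), here is a sketch with hypothetical capacities and virtual-link demands:

```python
def residual_bandwidth(capacity, hosted_demands):
    """Eq. (1): remaining bandwidth of a physical link, i.e. its capacity
    minus the demands of all virtual links currently mapped onto it."""
    return capacity - sum(hosted_demands)

# Hypothetical: a physical link with 1000 units of bandwidth hosting two
# virtual links that require 60 and 75 units.
print(residual_bandwidth(1000, [60, 75]))   # -> 865
```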
The residual node resource capacity R n (n s ) of physical node n s can be computed as follows: $$ {R}_n\left({n}_s\right)=\mathrm{Cap}\left({n}_s\right)-\sum \limits_{n_v\in {S}_{n_s}}\mathrm{req}\left({n}_v\right) $$ where Cap(n s ) shows the resource capacity of physical node n s , req(n v ) represents the amount of VN node n v 's requirement for nodal resources, and \( {S}_{n_s} \) indicates the set of VN nodes that have been mapped onto physical node n s . We define the costs C(G V ) of the mapping VN request G V as all of the costs of physical resources (i.e., physical link and nodal resources) in edge-of-things computing allocated to G V . $$ C(Gv)=\sum \limits_{l_v\in {L}_V}\sum \limits_{l_s\in {L}_S}{b}_{l_v}^{l_s}p\left({l}_s\right)+\sum \limits_{n_v\in {N}_V}\sum \limits_{n_s\in {N}_S}{r}_{n_v}^{n_s}p\left({n}_s\right) $$ where \( {b}_{l_v}^{l_s} \) denotes the amount of resources that physical link l s assigned to virtual link l v and \( {r}_{n_v}^{n_s} \) denotes the amount of nodal resources that physical node n s assigned to virtual node n v . The symbols p(l s ) and p(n s ) respectively represent the costs of per unit of the physical link and nodal resources. Our objective is to minimize the VN mapping costs: $$ \operatorname{Minimize}C(Gv) $$ Algorithm design To achieve the survivable VN mapping in a reasonable time, we design an effective heuristic to solve the problem that we researched in the paper. The Geographic-Guided Survivable Multiple VN Mapping (GG-SMVNM) algorithm is detailed in this section. In spite of the fact that at most one physical node fails in the physical network of edge-of-things computing when multiple VNs simultaneously arrive, different from the K-redundant scheme, the GG-SMVNM algorithm first successively maps the spontaneously arriving VNs onto the physical network with the geographic location constraints with the objective of minimizing costs instead of enhancing each VN with additional backup nodes and links. During the successive mapping, virtual nodes from different VNs may be mapped to the same physical node. After mapping all these VNs, a new VN is generated based on the physical mapping topology of the physical network. Finally, similar to the K-redundant scheme, the GG-SMVNM algorithm enhances the new-generated VN with backup nodes and corresponding backup links. It maps the backup nodes and links to the physical network to ensure the survivability of virtual nodes while also realizing the nodal and link resource sharing among spontaneously arriving VNs and the resource sharing in VNs. The GG-SMVNM algorithm works with the following steps (Fig. 2). The framework of the GG-SMVNM algorithm Step 1: successively map all the simultaneously arriving VNs onto the physical network of edge-of-things computing When mapping the VN requests, we aim to map the virtual nodes of different VNs to the same substrate node to make the newly generated VN more concise. The mapping results satisfying the geographic location constraints of VN0, VN1, and VN2 are shown in Fig. 1. Figure 1 only depicts the mapping of virtual nodes, but the corresponding mappings of virtual links are not shown here. We map the virtual node v1 from VN0 and node v0 from VN1 to the same substrate node s4. Therefore, the capacity demand of the new virtual node generated based on s4 is the total amount of the resource demands of v1 from VN0 and v0 from VN1 when the new virtual network is generated from the physical mapping topology. 
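The bookkeeping behind this merging step is easy to sketch. The mapping and demand values below mirror the Fig. 1 example but are hypothetical numbers, since the paper does not list them; the point is only that co-located virtual nodes become one node of the new VN whose demand is the sum of theirs.

```python
from collections import defaultdict

# Hypothetical node mapping for the Fig. 1 example: (VN, virtual node) -> physical node.
node_mapping = {("VN0", "v0"): "s0", ("VN0", "v1"): "s4", ("VN0", "v2"): "s2",
                ("VN1", "v0"): "s4", ("VN1", "v2"): "s2"}
# Hypothetical node resource demands (e.g. CPU units).
demand = {("VN0", "v0"): 25, ("VN0", "v1"): 22, ("VN0", "v2"): 28,
          ("VN1", "v0"): 20, ("VN1", "v2"): 24}

# Step 2: each used physical node yields one node of the new VN, with the
# summed demand of every virtual node mapped onto it.
new_vn_nodes = defaultdict(int)
for vnode, phys in node_mapping.items():
    new_vn_nodes[phys] += demand[vnode]

print(dict(new_vn_nodes))   # -> {'s0': 25, 's4': 42, 's2': 52}
```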
Similarly, the resource demand of the new virtual node generated by physical node s2 is the total demand of the resource demands of v2 from VN0 and v2 from VN1. The corresponding links also satisfy this demand if two or more virtual links are mapped onto the same physical link. Then, the resource requirement of the new virtual link generated by the physical link is the sum of the resource requirements of these virtual links. If there are several mapping results of the virtual nodes belonging to different VNs due to the geographic constraint, then we need to choose the best one with the minimum mapping costs. For example, in Fig. 1, v1 from VN0 and v0 from VN1 can both be mapped onto the physical node s4, while node v1 of VN0 and v2 of VN1 can both be mapped onto s4 as well. Then, we chose the mapping solution with minimum mapping costs. Step 2: when all the virtual nodes are mapped, generate a new virtual network according to the mapping results The new VN generated based on the mapping result of Fig. 1 is shown in Fig. 3. The virtual node v0 in Fig. 3 is reversely generated due to the physical node s0 in Fig. 1 with the same resource requirement as the resource requirement of v0 from VN0 mapped onto s0. The virtual node v1 in Fig. 3 is generated due to the physical node s4 in Fig. 1 with the resource requirement equal to the sum of resource requirements of v1 from the original VN0 and v0 from VN1 mapped onto s4. The other virtual nodes in Fig. 3 are generated in a similar way. The virtual links in Fig. 3 are also generated according to the mapped paths on the physical network such that if two or more virtual links are mapped onto the same physical path on the physical network, then the resource demand of the virtual link above is the sum of these virtual links' resource requirements. New virtual network Step 3: enhance the newly generated virtual network This step is similar to the K-redundant scheme. The GG-SMVNM algorithm adds a backup node for each virtual node and adds the corresponding backup links as well. The enhanced newly generated virtual network with backup nodes and links is shown in Fig. 4. b0, b1, b2, b3, b4, and b5 are backup nodes, and the dashed lines represent the backup links. Enhanced new virtual network Step 4: map the extra backup links and nodes We apply the strategy similar with [31] to map backup links and nodes in order to decrease the costs of backup nodal mapping. Moreover, resource sharing of working links and backup links during the backup link and node mapping process leads to the resource sharing of links and nodes among original VNs, which will reduce the mapping costs as much as possible. The detailed pseudo code of the GG-SMVNM algorithm is described in Table 1. Table 1 Pseudo code of the GG-SMVNM algorithm Next, we provide the GG-SVNM algorithm that does not consider the resource sharing among different VNs. The detailed pseudo code of the GG-SVNM is shown in Table 2. Table 2 Pseudo code of GG-SVNM algorithm In this, CMS i means the set of virtual nodes which physical node s i can host. Simulations and results We first introduce the environment of our simulation in the section, and then we give the results. Last, we analyze the results of simulation. Simulation environment Physical network In the simulations, we use two different networks as the physical network of edge-of-things computing with 46 nodes and 55 nodes, respectively. Each node has a real geographic coordinate presented by a longitude and latitude. 
In the two different networks, we assume that the bandwidth capacities of each link follow a uniform distribution from 500 to 1000 and the resource capacities of each node are uniformly distributed from 200 and 400, respectively. The topologies of the physical networks used in our simulations are shown in Fig. 5. Physical networks used in our simulations. a Net-1: a network with 46 nodes. b Net-2: a network with 55 nodes Virtual network configuration In the case of multiple VNs mapping, the number of VNs simultaneously arriving is random in real applications. In our simulations, without the loss of generality, the number of arriving VNs is randomly distributed from 1 to 4, and each VN randomly consists of 3 or 4 nodes. The resource demand of each node is a variable that is randomly generated between 20 and 30, and its geographic coordinate is also generated randomly with the longitude and latitude whose values fall into a specific range. The possibility that there exists a virtual link between two virtual nodes is 50%. The bandwidth demand of the virtual link is randomly generated between 50 and 80. Parameter g represents the ratio of the unit node resource overhead to the unit bandwidth overhead. Different values of g can compare the effects that different ratios of the unit node resource costs to unit bandwidth costs on VN mapping costs. We set the parameter g to 5 in the simulations. In our simulations, we presume that zero or one substrate node fails at any time and several VNs arrive simultaneously. For evaluating our proposed algorithm, we compare the mapping performances of our GG-SMVNM and GG-SVNM algorithms with the EVPF approach proposed in [32] and consider the geographic location constraints of virtual nodes. Furthermore, we vary the number of simultaneously arriving VNs (the number of nodes on each VN is 3 or 4) on the premise that the physical resources are abundant and compare the total mapping costs, backup node mapping costs, and backup link mapping costs of these two algorithms. We used Microsoft Visual Studio 2008 and C++ programming language to implement the compared algorithms. We define some metrics for evaluating the performance of our proposed algorithm in the simulation. The total VN mapping cost: the total expenses of using physical network resources to provide all VN requests. It can be calculated as follows: $$ {M}_{\mathrm{cost}}^{\mathrm{total}}={\sum}_{i=1}^{\mid \mathrm{ArrivedVN}\mid }{M}_c^i, $$ where \( {M}_c^i \) represents the mapping cost of i-th VN demand and ArrivedVN denotes the set of arrived VN demand. The backup node mapping cost: the total expenses of using physical node resources to host the backup virtual nodes. It can be calculated as follows: $$ {M}_{\mathrm{cost}}^{\mathrm{bakNode}}={\sum}_{i=1}^{\mid \mathrm{bakNode}\mid }{M}_{\mathrm{node}}^i, $$ where \( {M}_{\mathrm{node}}^i \) denotes the mapping cost of the i-th backup virtual node and bakNode represents the set of backup virtual nodes. The backup link mapping cost: the total expenses of using physical link resources to host the backup links. It can be defined as follows: $$ {M}_{\mathrm{cost}}^{\mathrm{bakLink}}={\sum}_{i=1}^{\mid \mathrm{bakLink}\mid }{M}_{\mathrm{link}}^i, $$ where \( {M}_{\mathrm{link}}^i \) denotes the mapping costs of the i-th backup virtual link and bakLink represents the set of backup virtual links. We can see from Fig. 6 that the total mapping costs of our proposed GG-SMVNM and GG-SVNM algorithms are lower than that of the existing EVPF approach [32]. 
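Stepping back from Fig. 6 for a moment, the cost model of Eq. (3) and the total mapping cost metric above can be made concrete with a short sketch. The per-VN allocations are invented for illustration; only the unit-cost ratio g = 5 is taken from the simulation setup.

```python
def vn_mapping_cost(link_alloc, node_alloc, g=5.0):
    """Eq. (3): bandwidth allocated on physical links plus node resources
    allocated on physical nodes, weighted by the unit node/bandwidth cost
    ratio g (g = 5 in the simulations, unit bandwidth cost = 1)."""
    return sum(link_alloc) * 1.0 + sum(node_alloc) * g

# Invented allocations for three simultaneously arriving VN requests.
arrived = [([60, 75, 50], [25, 22, 28]),
           ([55, 80],     [20, 24, 26]),
           ([70, 65, 60], [30, 27])]

per_vn = [vn_mapping_cost(links, nodes) for links, nodes in arrived]
print("per-VN costs:", per_vn)
print("total mapping cost:", sum(per_vn))   # the metric M_cost_total
```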
Furthermore, the total mapping costs of multiple VNs of the GG-SMVNM is less than the GG-SVNM and that the advantage of the GG-SMVNM on mapping costs gets more obvious with the increase in the number of simultaneously arriving VNs. This is because when there are several VNs simultaneously arriving, one physical node may host more than one virtual node because of the geographic location constraints of virtual nodes. Therefore, while using the GG-SMVNM for mapping the multiple arrived VNs, the new VN generated according to the mapping solutions for mapping multiple VNs onto the physical network is simpler than the multiple original VNs. Furthermore, the resource sharing in every VN and the node and link resource sharing across VNs can be realized when the GG-SMVNM performs the mapping of backup nodes and links while at most one node fails in the physical network, whereas the node and link resource sharing only occurs in each VN in the GG-SVNM algorithm. Therefore, the mapping costs of multiple VNs achieved by using the GG-SMVNM algorithm are less than that of the GG-SVNM algorithm. Mapping costs vs. number of VNs. a Simulation results on Net-1. b Simulation results on Net-2 Furthermore, Fig. 7 shows the simulation results on total mapping costs under various reliability requirements. For a specific reliability requirement, our proposed algorithms have lower total mapping costs than the existing approach since our approaches can efficiently deploy the arrived virtual network requests and thus consume less physical network resources. Moreover, the mapping costs of the compared algorithms increase with the increasing reliability requirements since it is necessary to allocate greater and more expensive resources for a VN request with higher reliability demand to guarantee the reliability. Mapping costs vs. different reliability requirements. a Simulation results on Net-1. b Simulation results on Net-2 Figure 8 depicts the backup nodes' mapping costs in the GG-SVNM, GG-SMVNM, and EVPF algorithms for multiple VNs. Figure 9 shows the physical resource costs for mapping the backup links of VNs using the EVPF, GG-SVNM, and GG-SMVNM algorithms, respectively. We can see from Figs. 8 and 9 that the GG-SMVNM algorithm achieves the lowest costs as the number of VNs increases. Furthermore, compared with the advantage in backup node mapping costs, the advantage in backup link mapping costs is more obvious. By analyzing the reason for this simulation result, as we said before, the resource sharing is in each VN, and the node and link resource sharing occurs across VNs while using the GG-SMVNM algorithm. Since the geographic location constraints and virtual nodes from different VNs may be abstracted to a new virtual node, the backup node resource sharing in VNs can be realized. In comparison, the backup link mapping is much more complicated and the probability of sharing resources among backup links of different VNs is much higher. Therefore, more resource sharing opportunities exist in the GG-SMVNM algorithm than in the GG-SVNM algorithm, which leads to lower mapping costs. Mapping costs for backup nodes. a Simulation results on Net-1. b Simulation results on Net-2 Mapping costs for backup links. a Simulation results on Net-1. b Simulation results on Net-2 Figure 10 shows the simulation results on the average deployment time of the GG-SVNM, GG-SMVNM, and EVPF algorithms for multiple VN requests. 
In this set of simulations, we evaluate the performance of the compared algorithms under different numbers of VN requests and calculate the average value to eliminate the randomness and output it as the result. From the figure, we can see that our proposed GG-SMVNM algorithm has the lowest time complexity, whereas the EVPF has the worst time efficiency. Since an efficient routing strategy is used in our proposed algorithm, it can be used to quickly map a VN link onto a feasible physical path. Simulation results for deployment time. a Simulation results on Net-1. b Simulation results on Net-2 The simulation results shown in Fig. 11 evaluate the blocking ratios of the compared algorithms in different scenarios. The blocking ratio is defined as the number of blocked/rejected virtual network requests to the number of total arrived VN requests. The blocking ratios increase with the growth of the number of arrived VNs, since more VN requests means more resource consumption. However, the blocking ratios remain stable when the number of arrived VN requests is more than 4000. The blocking ratio of our proposed algorithm is lower than that of the existing approach EVPF, since our approaches consume less physical network resources while guaranteeing the same level of reliability of VN requests, thus lowering the blocking ratio. Simulation results on blocking ratio. a Simulation results on Net-1. b Simulation results on Net-2 In this paper, we propose a survivable VN mapping algorithm (GG-SMVNM) that considers the geographic location constraints of virtual nodes for efficiently mapping multiple VNs in edge-of-things computing. We introduce the resource sharing strategy in VNs and also across multiple VNs to save the physical resources of edge-of-things computing and reduce the mapping costs. Furthermore, the geographic location constraints are considered in both original virtual nodes mapping and backup node mapping, which makes significant sense in real edge-of-things computing applications. We conduct extensive simulations on different networks to evaluate our proposed algorithms. The simulation results show that our proposed algorithm has better performance than existing approaches. In this research, we mainly focus on the problem of provisioning VN requests with location constraints in an autonomous domain network in edge-of-things computing. However, in practical applications, there are some VNs need to be deployed onto multiple autonomous domain networks. Therefore, in our future research, we are going to study and solve the problem of reliable VN mapping in multiple domains while considering the quality of service (QoS) requirements. W Shi, J Cao, Q Zhang, et al., Edge computing: vision and challenges. IEEE Internet Things J 3(5), 637–646 (2016) TX Tran, A Hajisami, P Pandey, et al., Collaborative mobile edge computing in 5G networks: new paradigms, scenarios, and challenges. IEEE Commun. Mag. 55(4), 54–61 (2017) S Sardellitti, G Scutari, S Barbarossa, Joint optimization of radio and computational resources for multicell mobile-edge computing. IEEE Trans Signal Inf Proc Over Netw 1(2), 89–103 (2015) G Sun, V Chang, M Ramachandran, et al., Efficient location privacy algorithm for internet of things (IoT) services and applications. J. Netw. Comput. Appl. 89, 3–13 (2017) X Sun, N Ansari, EdgeIoT: mobile edge computing for the Internet of Things. IEEE Commun. Mag. 54(12), 22–29 (2016) J Li, X Huang, J Li, et al., Securely outsourcing attribute-based encryption with checkability. 
IEEE Trans Parallel Syst 25(8), 2201–2210 (2014) X Chen, X Huang, J Li, et al., New algorithms for secure outsourcing of large-scale systems of linear equations. IEEE Trans Inf Forensics Secur 10(1), 69–78 (2015) J Li, J Li, X Chen, et al., Identity-based encryption with outsourced revocation in cloud computing. IEEE Trans. Comput. 64(2), 425–437 (2015) G Sun, D Liao, D Zhao, et al., Live migration for multiple correlated virtual Machines in Cloud-based Data Centers. IEEE Trans. Serv. Comput., 1–14 (2016) J Li, J Li, D Xie, et al., Secure auditing and deduplicating data in cloud. IEEE Trans. Comput. 65(8), 2386–2396 (2016) P Li, J Li, Z Huang, et al., Privacy-preserving outsourced classification in cloud computing. Clust. Comput., 1–10 (2017) J Li, Y Zhang, X Chen, et al., Secure attribute-based data sharing for resource-limited users in cloud computing. Comput Secur 72, 1–12 (2018) G Sun, D Liao, D Zhao, et al., Towards provisioning hybrid virtual networks in federated cloud data centers. Futur. Gener. Comput. Syst. (2017) Available online 18 October Y Zhang, X Chen, J Li, et al., Ensuring attribute privacy protection and fast decryption for outsourced data security in mobile cloud computing. Inf. Sci. 379, 42–61 (2017) G Sun, V Anand, D Liao, et al., Power-efficient provisioning for online virtual network requests in cloud-based data centers. IEEE Syst. J. 9(2), 427–441 (2015) P Li, J Li, Z Huang, et al., Multi-key privacy-preserving deep learning in cloud computing. Futur. Gener. Comput. Syst. 74, 76–85 (2017) S Su, Z Zhang, A Liu, et al., Energy-aware virtual network embedding. IEEE/ACM Trans. Networking 22(5), 1607–1620 (2014) R Mijumbi, JL Gorricho, J Serrat, et al., A neuro-fuzzy approach to self-management of virtual network resources. Expert Syst. Appl. 42(3), 1376–1390 (2015) J Li, X Chen, X Huang, et al., Secure distributed deduplication systems with improved reliability. IEEE Trans. Comput. 64(12), 3569–3579 (2015) G Sun, V Chang, G Yang, et al., The cost-efficient deployment of replica servers in virtual content distribution networks for data fusion. Inf. Sci. 432, 495-515 (2017) Available online 10 August J Li, Y Li, X Chen, et al., A hybrid cloud approach for secure authorized deduplication. IEEE Trans Parallel Distrib Syst 26(5), 1206–1216 (2015) G Sun, D Liao, V Anand, et al., A new technique for efficient live migration of multiple virtual machines. Futur. Gener. Comput. Syst. 55, 74–86 (2016) H Yu, T Wen, H Di, et al., Cost efficient virtual network mapping across multiple domains with joint intra-domain and inter-domain mapping. Opt. Switch. Netw. 14, 233–240 (2014) J Li, Z Liu, X Chen, et al., L-EncDB: a lightweight framework for privacy-preserving data queries in cloud computing. Knowl.-Based Syst. 79, 18–26 (2015) G Sun, H Yu, V Anand, et al., A cost efficient framework and algorithm for embedding dynamic virtual network requests. Futur. Gener. Comput. Syst. 29(5), 1265–1277 (2013) NMK Chowdhury, MR Rahman, R Boutaba, Virtual network embedding with coordinated node and link mapping. IEEE Infocom, 783–791 (2009) WL Yeow, C Westphal, UC Kozat, Designing and embedding reliable virtual infrastructures. ACM Sigcomm Comput Commun Rev 41(2), 57–64 (2011) Z Cai, F Liu, N Xiao, et al., Virtual network embedding for evolving networks. IEEE Globecom, 1–5 (2010) H Yu, C Qiao, V Anand, et al., Survivable virtual infrastructure mapping in a federated computing and networking system under single regional failures. 
IEEE Globecom, 1–6 (2010) G Sun, H Yu, L Li, et al., The framework and algorithms for the survivable mapping of virtual network onto a substrate network. IETE Tech. Rev. 28(5), 381–391 (2011) H Yu, V Anand, C Qiao, et al., Cost efficient design of survivable virtual infrastructure to recover from facility node failures. IEEE ICC, 1–6 (2011) G Sun, D Liao, S Bu, et al., The efficient framework and algorithm for provisioning evolving VDC in federated data centers. Futur. Gener. Comput. Syst. 73, 79–89 (2017) This research was partially supported by the Fundamental Research Funds of China West Normal University (No.17D075). China West Normal University, Nanchong, China San-mei Zhang School of Computing Science and Engineering, Vellore Institute of Technology (VIT), Vellore, Tamil Nadu, 632014, India Arun Kumar Sangaiah Search for San-mei Zhang in: Search for Arun Kumar Sangaiah in: SZ is in charge of the major theoretical analysis, algorithm design, and numerical simulations. AKS is in charge of part of the theoretical analysis and algorithm design. Both authors read and approved the final manuscript. Correspondence to San-mei Zhang. Both authors declare that they have no competing interests. Edge-of-things computing Emerging Intelligent Algorithms for Edge-of-Things Computing
How much does a yard of sand weigh

A "yard" of sand is a cubic yard, the standard volume in which bulk landscaping materials are sold, so it is worth estimating how much material you need before starting a project. A cubic yard of sand weighs approximately 2,600 to 3,000 pounds (1,179 to 1,360 kg), or roughly 1.5 tons. Dry sand weighs about 100 pounds per cubic foot (45 kg), and since a cubic yard contains 27 cubic feet, 27 × 100 ≈ 2,700 pounds per cubic yard (about 1.35 tons) is a decent approximation; an online converter for beach sand gives 2,577.55 pounds per cubic yard. Wet sand is naturally heavier, at about 120 to 130 pounds per cubic foot (54 to 58 kg), so a cubic yard of wet sand runs roughly 3,200 to 3,500 pounds, and wet, packed sand can reach about 3,510 pounds per cubic yard. Damp sand is merely moist to the touch; if water drips from a handful, it is wet packed. For smaller quantities, a gallon of sand weighs about 12.5 pounds (5.6 kg) and a 5-gallon bucket about 68 pounds (31 kg), while a sandbox one square yard in area and 1 foot (30.48 cm) deep holds about 900 pounds (410 kg) of sand. Since a U.S. ton is 2,000 pounds, a cubic yard of sand is roughly 1.25 to 1.5 tons, and one ton of sand is equivalent to about 0.8 cubic yard.

To estimate the weight you need, multiply the area to be covered by the depth to get the volume, then multiply the volume by the material density. For 100 square feet covered 3 inches deep with sand at 100 lb/ft³, a calculator would perform:

$$Volume = Area \times Depth = 100\,ft^2 \times 3\,in = 25\,ft^3$$

$$Weight = Volume \times Density = 25\,ft^3 \times 100\,lb/ft^3 = 2,500\,lb$$

For metric figures, convert first: 1 m = 3.28084 ft, so 1 m³ = (3.28084)³ ≈ 35.3147 ft³. Dry sand has a density of roughly 1,602 to 1,631 kg/m³ (about 101.8 lb per cubic foot); silica sand is about 1.54 g/cm³, or 1,538 kg/m³, which is 96.01 lb/ft³ or 0.89 oz/in³ in imperial units. The exact weight of a load depends on how wet the sand is, the minerals it contains, and the grain size: sand is split into 5 grain-size categories, and individual grains weigh roughly 0.011 to 0.017 grams.

Coverage and handling: one cubic yard of sand covers approximately 120 to 150 square feet at a depth of 1 to 2 inches, or about 100 square feet at a 2-inch depth, while one ton covers about 80 to 100 square feet at a 2-inch depth. When material is sold by the scoop (half a cubic yard), sand and gravel weigh about 1,500 pounds per scoop, and soils run about 1,000 to 1,200 pounds per scoop. A cubic yard is about 14 loads of a 2-cubic-foot wheelbarrow or nine loads of a 3-cubic-foot one.

Cost: masonry (mason) sand averages $15 to $40 per ton, or $25 to $60 per cubic yard, and landscaping companies often deliver it for a flat fee of about $60 for distances up to 10 miles; regular mason sand has a density of about 2,410 lb/yd³ (1.21 t/yd³, or 0.8 yd³ per ton). Other sands include bunker sand at around $80 per cubic yard and washed sand or Poteet red sand at about $28 per ton. Bulk delivery fees can reach $150 depending on where you live, so bagged sand may be cheaper for small jobs, and the supplier can tell you the exact weight when you buy.

Other materials: a cubic yard of gravel weighs 2,400 to 2,900 pounds (1,088 to 1,315 kg), or about 1.4 to 1.7 tons; loose, dry gravel with 1/4" to 2" stones is about 105 pounds per cubic foot (roughly 1,522 kg/m³ according to Glover), pea gravel runs 2,500 to 2,600 pounds per cubic yard (1,133 to 1,179 kg), and a square yard of gravel 2 inches (5 cm) deep weighs about 157 pounds (74 kg). A cubic yard of gravel covers a 100-square-foot area 3 inches deep, and gravel does not pack as tightly as sand. Concrete weighs about 3,700 pounds per cubic yard (1.85 tons), although contractors commonly use a yield of 3,000 pounds (1.5 tons) per cubic yard for estimating; the delivery ticket breaks the load down into the weights of cement, sand, aggregate, and water, and 6.5 cubic yards corresponds to about 526.5 bags of mix (3 × 27 × 6.5). Dry fill dirt weighs around 2,000 pounds per cubic yard, or 2,000 to 2,700 pounds depending on moisture and composition, rising above 3,000 pounds if it contains gravel, stone, and sand; dry sandy soil is about 2,600 pounds per cubic yard and dry clay soil about 1,700 pounds. Topsoil weighs approximately 1,080 pounds per cubic yard (what is in it depends on where it was scraped from), garden soil about 2,200 pounds per cubic yard depending on moisture, and mulch roughly 600 to 1,000 pounds per cubic yard while covering the same 100 square feet. Typical weights per cubic yard for other materials (gathered from the EPA and NTEA): asphalt 2,700 lb; concrete (gravel or stone mix) 4,050 lb; compost 800 lb (0.40 tons); lawn dressing 1,800 lb (0.90 tons); landscape gravels 2,400 to 2,700 lb (1.20 to 1.35 tons); sand 2,200 to 2,500 lb (1.10 to 1.25 tons); Western planting mix 2,000 lb (1 ton); wrought iron 13,100 lb; chipped brush/branches (3" screen) 600 lb; loose brush/branches 250 lb; loose dry leaves 150 lb; vacuumed dry leaves 400 lb; wet or compacted leaves 550 lb. For winter sand/salt mixes, sand runs about 3,000 pounds per yard and salt closer to 2,000, so a 90/10 sand/salt blend is roughly 2,900 pounds per yard.

Sand is one of the most abundant materials on earth, used in building materials and landscaping all over the world, not only found on the beach. Research carried out by the University of Hawaii to estimate how many grains of sand there are on the Earth's beaches and deserts put the figure at about seven quintillion, five hundred quadrillion (7,500,000,000,000,000,000) grains.

Resources:
https://sciencing.com/calculate-weight-sand-8149924.html
https://www.aqua-calc.com/calculate/volume-to-weight
https://www.sandatlas.org/brain-games-with-sand-grains/
https://hypertextbook.com/facts/2003/MarinaTheodoris.shtml
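The volume-times-density arithmetic used throughout this article is easy to script. Below is a minimal sketch; the material names and densities are illustrative values taken from the figures quoted above (the wet-sand entry is a midpoint of the 120 to 130 lb range), not an authoritative table:

```python
# Estimate material weight from area, depth and an assumed density.
# Densities (lb per cubic foot) are approximate values quoted in this article.
DENSITY_LB_PER_FT3 = {
    "dry sand": 100,   # about 2,700 lb per cubic yard
    "wet sand": 125,   # roughly 3,200 to 3,500 lb per cubic yard
    "gravel": 105,     # loose, dry, 1/4" to 2" stones
}

def estimate_weight_lb(area_ft2, depth_in, material="dry sand"):
    """Return (volume_ft3, weight_lb, weight_tons) for the given coverage."""
    volume_ft3 = area_ft2 * (depth_in / 12.0)          # depth converted to feet
    weight_lb = volume_ft3 * DENSITY_LB_PER_FT3[material]
    return volume_ft3, weight_lb, weight_lb / 2000.0   # US ton = 2,000 lb

# The example from the article: 100 sq ft covered 3 inches deep with dry sand.
vol, lb, tons = estimate_weight_lb(100, 3, "dry sand")
print(f"{vol:.0f} cubic feet, {lb:,.0f} lb, {tons:.2f} tons")   # 25, 2,500, 1.25
```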
BioMedical Engineering OnLine Generalized estimation of the ventilatory distribution from the multiple-breath washout: a bench evaluation study Gabriel Casulari Motta-Ribeiro ORCID: orcid.org/0000-0001-5982-88631, Frederico Caetano Jandre1, Hermann Wrigge2 & Antonio Giannella-Neto1,2 BioMedical Engineering OnLine volume 17, Article number: 3 (2018) Cite this article The multiple-breath washout (MBW) is able to provide information about the distribution of ventilation-to-volume (v/V) ratios in the lungs. However, the classical, all-parallel model may return skewed results due to the mixing effect of a common dead space. The aim of this work is to examine whether a novel mathematical model and algorithm is able to estimate v/V of a physical model, and to compare its results with those of the classical model. The novel model takes into account a dead space in series with the parallel ventilated compartments, allows for variable tidal volume (VT) and end-expiratory lung volume (EELV), and does not require a ideal step change of the inert gas concentration. Two physical models with preset v/V units and a common series dead space (vd) were built and mechanically ventilated. The models underwent MBW with N2 as inert gas, throughout which flow and N2 concentration signals were acquired. Distribution of v/V was estimated—via nonnegative least squares, with Tikhonov regularization—with the classical, all-parallel model (with and without correction for non-ideal inspiratory N2 step) and with the new, generalized model including breath-by-breath vd estimates given by the Fowler method (with and without constrained VT and EELV). The v/V distributions estimated with constrained EELV and VT by the generalized model were practically coincident with the actual v/V distribution for both physical models. The v/V distributions calculated with the classical model were shifted leftwards and broader as compared to the reference. The proposed model and algorithm provided better estimates of v/V than the classical model, particularly with constrained VT and EELV. The estimation of the pulmonary ventilation-to-volume (v/V) distribution may provide clinically useful information on intrapulmonary gas-mixing but is an underused byproduct of the end-expiratory lung volume (EELV) measurements during mechanical ventilation. The v/V can be calculated with the multiple-breath washout (MBW) test, especially using N2 as the inert and low solubility gas (MBN2W). The classical method [1,2,3] models the lungs as a set of all-parallel units, including a dead space, whose contributions to the total lung ventilation are the unknowns. This approach has some limitations. For instance, it disregards the effects of the series dead space (vd), whose volume may be estimated via the Fowler's method [4] throughout the washout; not only the EELV but also the tidal volume (VT) must remain constant during the MBN2W; the inspired fraction of tracer gas should decrease instantaneously to zero. Recently, we [5] proposed a generalized multicompartmental model for MBN2W that includes a series dead space and copes with a non-ideal step change in gas concentration, variable VT during the maneuver, and changes in EELV, as long as no compartment is completely emptied. Computational simulations showed that this model, together with an algorithm to estimate its parameters from measurements taken at the airway opening during MBN2W, usually retrieved more correct estimations of the v/V distribution than previous proposals [5]. 
Furthermore, the alternative to impose a priori constraints determined along the MBN2W limits the set of the v/V parameters estimates. However, since this same novel model drove the simulated MBN2W, the results could have favored the algorithm in some form. It is arguable, hence, that bench tests with well-known physical models would allow for a better, less biased assessment of the effects of modelling the series dead space in the estimates of v/V distributions. The present work intends to compare the v/V distributions estimated by both the classical and generalized approaches employing experimental data obtained from physical models, under the conditions (constant VT and EELV) required by the assumptions of the classical model. Similar estimation procedures were used for both models, employing non-negative least squares and Tikhonov regularization plus a weighting matrix. The generalized approach adds a constrained least squares solver with imposed EELV, VT and vd. The results previously obtained by us [5], with numerically simulated experimental noise, directed the choice of the weighting matrix. Mathematical model of the MBN2W The generalized mathematical model of the MBN2W is as follows. The respiratory system comprises N parallel compartments, all connected through a single duct whereby the gases are exchanged with the ambient air. Each compartment J, whose volume is VolJ, is an ideal mixer characterized by the fraction γ of VT that enters and leaves it at each cycle, and its specific ventilation (S(J) = γV T /Vol J ), the sum of all compartmental volumes being equal to EELV-vd. A series dead space is incorporated, considering that a compartment inspires a mixture of fresh gas from the inspiratory circuit and the content of the common duct. This also allows the model to be driven by a non-ideal step in inspiratory concentration of the tracer gas. Variable VT is admitted by defining S(J) with respect to a reference VT, and variable EELV is achieved by tracking the differences between inspired and expired volumes, returning the distribution corresponding to EELV at the onset of maneuver [5]. In the experimental setup, where VT and EELV were constant, the end-tidal N2 concentration (\(F_{{N_{2} }}^{et}\)) at the k-th cycle is modeled by $$F_{{N_{2} }}^{et} (k) = \mathop \sum \limits_{J = 1}^{N} \gamma (J) F_{{N_{2} }}^{A} (J,k),$$ with the compartmental concentrations given by $$F_{{N_{2} }}^{A} (J,k) = \frac{{\left( {F_{{N_{2} }}^{et} (k - 1)\alpha + F_{{N_{2} }}^{I} (k)(1 - \alpha )} \right)S(J) + F_{{N_{2} }}^{A} (J,k - 1)}}{1 + S(J)}$$ where \(\alpha\) is the dead space to tidal volume ratio (vd/VT). The classical approach to model multiple compartment MBN2W considers an ideal step change of the inspired tracer gas at the onset of washout with the dead space as an additional parallel compartment. Under these assumptions, Eq. 2 simplifies to $$F_{{N_{2} }}^{A} (J,k) = \frac{{F_{{N_{2} }}^{A} (J,k - 1)}}{1 + S(J)}$$ and the combined compartmental concentrations are fitted to the measured mean expiratory N2, by adjusting the respective weights. For a single compartment with a series dead space, it can be demonstrated, by using Eqs. 2 and 3, that this classical parallel model estimates a compartment with ventilation (\(1 - \alpha\)) shifted leftwards (lower specific ventilation) from the real compartment. The estimated specific ventilation (S′) depends on the actual specific ventilation (S′ = (1 − α) · S/(αS + 1)), causing larger differences for faster compartments. 
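For a concrete sense of the size of this shift, the expression above can be evaluated directly. The sketch below is illustrative only; the specific ventilations and the dead-space fraction are arbitrary example values, not data from the experiments described later:

```python
# Predicted compartment location recovered by the classical all-parallel model
# when the true system is a single compartment behind a series dead space.
def classical_shift(S, alpha):
    """S' = (1 - alpha) * S / (alpha * S + 1), with alpha = vd / VT."""
    return (1 - alpha) * S / (alpha * S + 1)

# Illustrative values: slow to fast compartments, dead space of 30% of VT.
for S in (0.05, 0.25, 1.0, 5.0):
    print(f"S = {S:<5} -> classical estimate S' = {classical_shift(S, 0.3):.3f}")
# The relative error grows with S, i.e. faster compartments are shifted more.
```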
Accordingly, the estimated compartmental volume is equal to EELV. In case of a non-ideal step at onset of washout, a further shift depending on the ratio of inspired to expired concentrations occurs. To distinguish partially between this effect of a non-ideal step and the presence of a series dead space, an alternative classical model was tested. This is modeled by Eq. 2 with α = 0. To test the effect of a series dead space in the washout maneuver under controlled conditions, two physical models were assembled: one with four compartments of equal γ and different VolJ (4C); and one with a single compartment (1C). The 4C allowed to examine the recovery of location, and the spread/breadth of the distribution, while with 1C the classical model distribution shift could be analytically predicted. Both models were ventilated by an Evita XL (Draeger Medical, Lübeck, Germany) and N2, O2 and CO2 concentrations were measured by a fast mass spectrometer (AMIS 2000, Innovision, Glamsbjerg, Denmark). Pressure and flow signals were acquired directly from the ventilator and with a proximal pneumotachograph plus a pressure transducer. In order to synchronize the signals of gas concentration and flow, an uncalibrated flow signal was recorded from a pneumotachograph connected to the mass spectrometer, and the mainstream capnometer from the ventilator was placed close to the gas sampling port. All data were recorded simultaneously with a program written in LabView (National Instruments, Austin, USA). The ventilated compartments were 1-L anesthetic bags (VBM Medizintechnik GmbH, Sulz am Neckar, Germany) with end-expiratory volume maintained by application of a positive end-expiratory pressure (PEEP). A super-syringe inflation determined that at PEEP of 10 cmH2O the volume of the bag was 1 L. CO2 production was simulated by a constant low flow of this gas into the compartment with the smallest v/V ratio. CO2 flow was titrated to achieve end-tidal concentration between 0.5 and 1% to reduce effects in expired volume. The series dead space comprised an anatomical and an instrumental dead space. The anatomical dead space was represented by a resistive piece and standard connectors used in mechanical ventilation, such as 22-to-15 mm reductions and Y-pieces. The instrumental dead space was the connector for sidestream gas sampling and the pneumotachograph of the mass spectrometer, the mainstream capnometer of the ventilator, the proximal pneumotachograph and pressure outlet, a 90° connector to the resistance, and an HME filter (BB25, Pall Medical, Port Washington, USA) (Fig. 1). The total dead space volume (vd), calculated from the geometry, were of 92 mL for 1C and of 152 mL for 4C. Representation of the experimental setup. The physical models are shown as photos with the anesthetic balloons at end-expiration. The components of the series dead space are represented by schematic drawings. The Y-piece of the ventilatory circuit was connected directly to the gas sampling piece. Note that the gas is sampled close to the capnometer chamber to avoid inspiratory/expiratory delay changes Tidal volume and end-expiratory volume of compartments were selected to match, as nearly as possible, specific compartments from a logarithmic distribution of N = 50 ventilation-to-volume ratios ranging from 0.01 to 100. The 1C model was ventilated with VT = 250 mL, representing a compartment with S = 0.25. The respiratory frequency was of 15 breaths/min. A total of 5 washouts were performed with a N2 step change from 50% to zero. 
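As a numerical illustration of Eqs. 1 and 2 applied to a configuration like the 1C model just described (one 1-L compartment, VT = 250 mL, geometric vd = 92 mL, ideal 50% to zero N2 step), the sketch below steps the model breath by breath. It is only a sketch of the generalized model, not the acquisition or analysis code used in the study:

```python
import numpy as np

def simulate_washout(gamma, S, alpha, f_insp, f0=0.5, n_breaths=40):
    """Breath-by-breath end-tidal N2 of the generalized (series dead space) model,
    following Eqs. 1-2: each compartment inspires a mixture of dead-space gas and
    fresh gas, then mixes it with its previous content."""
    gamma, S = np.asarray(gamma, float), np.asarray(S, float)
    f_alv = np.full_like(S, f0)          # compartmental N2 fractions
    f_et = f0                            # end-tidal N2 of the previous breath
    out = []
    for k in range(n_breaths):
        inspired = f_et * alpha + f_insp[k] * (1 - alpha)   # rebreathed + fresh gas
        f_alv = (inspired * S + f_alv) / (1 + S)            # Eq. 2
        f_et = float(gamma @ f_alv)                         # Eq. 1
        out.append(f_et)
    return np.array(out)

# 1C-like configuration: one compartment, S = 0.25, vd/VT = 92/250, step 50% -> 0.
f_et = simulate_washout(gamma=[1.0], S=[0.25], alpha=92 / 250,
                        f_insp=np.zeros(40), f0=0.5)
print("breaths to reach 1/40th of the start:", int(np.argmax(f_et <= 0.5 / 40)) + 1)
```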
The 4C model compartments had 1.00, 0.83, 0.69 and 0.57 L and were ventilated with VT = 560 mL, or 140 mL per compartment (S = 0.14, 0.17, 0.20 and 0.25, respectively). Where applicable, the compartmental end-expiratory volumes available for gas washout were reduced by inserting closed, impermeable plastic containers (Profissimo Gefrierbeutel, Germany) into the anesthetic bags, filled with appropriate volumes of air. The respiratory frequency was 12 breaths/min (7 tests), 10 breaths/min (3 tests) or 15 breaths/min (2 tests). A total of 12 washouts were performed. In 9 tests, the N2 step change was from 50% to zero and in 3 tests the step change was limited from 10% to zero. Experiments were performed in ATPD conditions, disabling the ventilator's BTPS compensation. Before the data analysis, gas concentrations and flow were synchronized with a two-step procedure. First, flow curves from the ventilator and the mass spectrometer were aligned by maximizing their cross-correlation. Second, the delay from gas sampling was compensated breath-by-breath using the cross-correlation between the CO2 signals from the mass spectrometer and the ventilator mainstream sensor. The synchronized signals were processed to estimate vd, EELV and the γ values of the compartments. The vd was calculated from CO2 and volume curves using Fowler's method [4]. The EELV was estimated from inspired and expired N2 volume during the washout (from onset until a N2 concentration ≤ 1/40th of initial value) [6]. Analogously, the distributions were estimated using the same number of cycles. The parameters of the multiple compartment model were estimated with nonnegative least squares and Tikhonov regularization with a fixed gain (4 × 10−3 for 1C and 3.3 × 10−2 for 4C) and a weighting matrix proportional to the compartmental washout ratio [2]. The generalized model was also estimated with a constrained least squares solver, imposing the sum of compartmental volumes equal to EELV-vd and unitary total ventilation [5]. Overall resistance and elastance were calculated from pressure and flow signals to ensure similar mechanical behaviors of the compartments. Data were analyzed in MatLab (Mathworks, USA). The time profile of inspiratory N2 was not that of an ideal step and, as expected, the washout of 4C was slower than that of 1C (Fig. 2a). EELV was estimated, from the MBN2W inspired and expired N2 volumes, as 1.13 ± 0.01 L for 1C and 3.24 ± 0.07 L for 4C. Typical expiratory capnogram curves were observed, despite the difference in magnitude (Fig. 2b). The estimated vd were 73.8 ± 6.4 mL for 1C and 185.7 ± 4.5 mL for 4C (see the Additional file 1: Tables S1 and S2, for individual estimates of each experiment). The calculated overall resistance and elastance were R = 16.6 ± 0.3 cmH2O/L/s and E = 78.5 ± 1.2 cmH2O/L for 1C and R = 16.1 ± 0.6 cmH2O/L/s and E = 20.8 ± 0.3 cmH2O/L for 4C. Examples of N2 washout and CO2 versus volume curves for the single (hollow square) and four compartment (filled square) physical models. a Inspiratory (black) and expiratory (gray) end-tidal N2 fractions during one washout maneuver of each model. b Expired CO2 versus volume; the dashed line represents the dead space volume as calculated by Fowler's technique. The distribution retrieved by the constrained generalized model for physical model 1C was located at the correct compartment, with a small contribution of an adjacent compartment, and corresponded to the smallest sum of squared errors between estimates and the real distribution of v/V (Fig.
3a and Table 1). In the case without constraints, the sum of compartmental volumes plus vd underestimated EELV by 3%, and the total ventilation was overestimated by 5% (Fig. 3b). The classical model retrieved two or three compartments, located, however, leftwards from the actual compartment, as theoretically predicted (Fig. 3c). EELV was overestimated by 24%, and vd (complement of total ventilation) was underestimated by 10%. The inclusion of the inspired N2 concentration partially corrected EELV estimations (mean error of 18%), but the distribution almost did not change (Fig. 3d). The estimated distribution of each test with each model is shown in the Additional file 1: Figures S1–S4. Distribution of specific ventilation estimated from the N2 washout of a single compartment physical model. Results from each of five (A to E) repetitions are represented in gray with different symbols. The reference distribution is shown in black. The vertical dashed line (panels c and d) represents the theoretical distribution predicted for the compartment estimated by the classical model (ideal step washout). EELV is the end-expiratory lung volume; vent is the sum of the fractional compartmental ventilations (∑γ); and vd is the dead space volume estimated by: Fowler's technique (for the generalized model, panels a and b) or the complement of total ventilation (for the classical model, panels c and d) Table 1 Sums of the squared errors between the estimated and true ventilation-to-volume ratio distributions For large N2 step changes (9 washouts, corresponding to cases A to I), the results for the model 4C were analogous to those for 1C. The constrained generalized model estimated v/V matching the expected specific ventilation, although narrower (Fig. 4a); the unconstrained generalized model underestimated EELV and overestimated the total ventilation by 13%, causing a rightward-shifted and broadened estimated v/V (Fig. 4b). The distribution estimated with the classical model was broader than expected and shifted leftwards from the actual distribution (Fig. 4c, d); the EELV was overestimated by 15% and vd was underestimated by 6%. Again, including the measured inspired N2 in the classical model partially corrected EELV estimation reducing the errors to 7%. For N2 step changes limited to 10% (3 washouts, Fig. 4, corresponding to the cases J to L), all estimated distributions with the generalized as well as the classical approach resulted broadened (Fig. 4) and with larger sums of squared errors relative to the real distribution (Table 1), indicating the deleterious effect of a decreased signal-to-noise ratio on the estimates. All individual estimated distributions are shown in the Additional file 1: Figures S5–S8. Distribution of specific ventilation estimated from the N2 washout of a four compartments physical model. Results from each of twelve repetitions are represented (A to I, in light gray, with \(F_{{N_{2} }}^{I}\) step from 0.5 to 0, and J to L, in dark gray, with \(F_{{N_{2} }}^{I}\) step from 0.1 to 0), with different symbols for each test. The reference distributions are shown in black. 
EELV is the end-expiratory lung volume; vent is the sum of the fractional compartmental ventilations (∑γ); and vd is the dead space volume estimated by: Fowler's technique (for the generalized model, panels a and b) or the complement of total ventilation (for the classical model, panels c and d) We proposed a bench comparison between a novel generalized mathematical model for the MBN2W [5] and a classical all-parallel model [1]. The tests were performed with a commercial intensive care unit ventilator and physical models mimicking lungs with one or four parallel compartments and a common series dead space. The main results are: (1) the retrieved v/V distribution with the constrained generalized approach was practically coincidental with the actual v/V distribution for both physical models for high N2 step changes; the unconstrained solution did not represent the expected distributions, missing the true values of EELV and VT; (2) the v/V distribution retrieved with the classical approach was leftward shifted and broader, as compared to the actual, and its corresponding estimates of EELV were slightly favored when the non-ideal step change of N2 at the washout onset was taken into account. We used estimates of respiratory mechanics to provide a first assessment of the reproducibility of the tests and of the assumption of equal ventilation to each of the compartments in 4C. The small spread shows that the physical properties of the models may be considered constant along the washout repetitions, while the fourfold decrease in elastance in 4C compared to 1C suggests that all four anesthetic bags have similar compliances and, consequently, similar ventilations. The anatomy of the airways consists of a network of ramifications where a strictly common dead space is restricted only to the trachea [7]. The set of subdivisions from the main bronchi to the deeper bronchioles results, during the expiration, in a mixture of alveolar gases originated from their respective airways. Thus, assuming the totality of the anatomical dead space simply as a common series duct is a considerable simplification, even though, as reported by Fortune and Wagner [8], most of the dead space lies proximal to the carina. Nevertheless, the lungs, as represented by the classical model (alveoli connected to the airways opening and the airways as one additional parallel compartment), is less corresponding to the reality. In the present experiments, the physical models agreed very well to the proposed mathematical model, since most of the tubings comprise the common dead space. Because of the lack of correspondence between the classical model and the actual anatomy, two features arise: the retrieved distribution is shifted to the left as previously reported [8] and broadened as compared to the expected. The specific ventilation of the estimated 1C compartment was close to the theoretically predicted specific ventilation (see Fig. 3c). The spread of the distribution is influenced by factors inherent to the model, such as the difference in sensitivity to the common dead space for slow and fast compartments and the mixing of the contents of the compartments, which decreases the differences between the compartmental washout curves. The distribution curve is also sensitive to choices in data processing, for example the regularization gain used for the estimation of the parameters. The present gains were chosen on the basis of previously simulated experiments [5]. 
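As an illustration of how such a regularized, nonnegative fit can be assembled (this is a generic sketch, not the MatLab implementation used in the study; A stands for the matrix of modeled compartmental washout curves, y for the measured end-tidal fractions and W for the weighting matrix, all assumed to be available), the Tikhonov term can simply be stacked under the weighted system before calling a nonnegative solver:

```python
import numpy as np
from scipy.optimize import nnls

def fit_distribution(A, y, W, gain):
    """Nonnegative, Tikhonov-regularized estimate of compartmental weights.

    A    : (breaths x compartments) modeled washout curves, one per candidate v/V
    y    : measured end-tidal N2 fractions over the selected breaths
    W    : (breaths x breaths) weighting matrix
    gain : regularization gain (e.g. 4e-3 or 3.3e-2, as quoted above)
    """
    n_comp = A.shape[1]
    # Augment the weighted data equations with gain * I * x ~ 0, i.e. a
    # zeroth-order Tikhonov penalty on the solution vector.
    A_aug = np.vstack([W @ A, gain * np.eye(n_comp)])
    y_aug = np.concatenate([W @ y, np.zeros(n_comp)])
    weights, _ = nnls(A_aug, y_aug)
    return weights
```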
This may be a critical parameter in what concerns the shape of the estimated curve of v/V distribution, particularly its breadth and smoothness. Nevertheless, a tradeoff between accuracy and sensitivity to noise and artifacts is expected, hence this choice should be subjected to further investigations. The distribution recovering technique applied to the classical model is essentially unconstrained. The solution includes the estimates of EELV and the parallel dead space of the distribution. This dead space does not necessarily correspond to vd, representing the ventilation of a compartment with an infinite specific ventilation [3]. Regarding the EELV estimates, they were always overestimated with the classical model. EELV alone has been increasingly regarded as a useful parameter to evaluate the overall lung aeration [9], and it may be straightforwardly calculated by the breath-by-breath summation of the net N2 (or other inert gas) volumes expired during the washout. For the generalized model of MBN2W, the EELV that serves as input to the constrained least squares estimation was calculated as above. The EELV estimates resulted accurate for both physical models. Gas exchange calculations based on measurements of gas concentrations and flow rate are very sensitive to the correction of the time delay between these signals [10]. A mainstream capnometer, currently a usual instrument in mechanical ventilation, was used as the time reference to synchronize the mass spectrometer measurements with the flow rate. This time correction, using just the maximal cross-correlation between the CO2 concentration signals from the capnometer and the mass spectrometer, revealed feasible and reliable (EELV error < 5% and variability between repetitions < 10% [6]). Alternatively, an ultrasound flowmeter monitoring the washout of sulfur hexafluoride (SF6), an inert and insoluble gas with a high molecular mass compared to the ambient air components, may be used. This device allows simultaneous and synchronous measures of flow rate and SF6 concentration and has been used for the estimation of ventilatory inhomogeneity [11, 12]. Breath-by-breath estimates of the series dead space is a requirement for both the constrained and the unconstrained generalized v/V distribution. Instead of using prediction formulae, a direct measurement of that dead space is recommended, for example by applying Fowler's technique [4] to the capnogram [5] as in the present work. Prediction formulae are scarce and inaccurate, especially for some conditions such as during mechanical ventilation, in which body position varies and EELV depends on the applied PEEP. For instance, there are conflicting reports as to the effect of the dead space on a vastly employed index to quantify ventilatory inhomogeneity, the lung clearance index (LCI). Despite Haidopoulou et al. [11] concluded that LCI is minimally affected by airway dead space, Neumann et al. [12] found an association between LCI and vd/VT. The LCI is an overall index of ventilatory inhomogeneity; in theory the increase of vd/VT should increase the magnitude of LCI. As an alternative, the alveolar lung clearance index (aLCI) [11] was proposed by considering the alveolar ventilation instead of the total ventilation as the bulk flow washing the alveolar units. The present generalized approach is based on the same assumption. Notably, an error in vd estimation will result in a shifted distribution [5], as demonstrated with the extreme case of the classical model. 
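For context, the LCI referred to here takes only a few lines once breath-by-breath expired volumes and end-tidal tracer fractions are available. The sketch below is generic, and the optional dead-space subtraction is only a schematic stand-in for the alveolar variant, not the exact aLCI definition of ref. [11]:

```python
import numpy as np

def lung_clearance_index(vt_exp, f_et, frc, vd=0.0, threshold=1 / 40):
    """Cumulative expired volume, in turnovers of FRC, needed for the end-tidal
    tracer fraction to fall to 1/40th of its starting value.

    vt_exp : expired tidal volume of each breath (L)
    f_et   : end-tidal tracer fraction of each breath
    frc    : functional residual capacity / EELV (L)
    vd     : volume subtracted from each breath (schematic 'alveolar' variant)
    """
    vt_exp, f_et = np.asarray(vt_exp, float), np.asarray(f_et, float)
    target = threshold * f_et[0]
    end = int(np.argmax(f_et <= target))          # first breath at or below 1/40th
    cumulative = float(np.sum(vt_exp[:end + 1] - vd))
    return cumulative / frc
```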
Likewise, if vd is overestimated the shift will be to the right (Additional file 1: Figure S9) due to slower washout (increased rebreathing) for each modeled v/V. The v/V distribution of the respiratory system may be modeled by a continuous curve within a finite interval. The recovery of this distribution from the limited information present in a MBN2W is an ill-posed problem and requires simplifying assumptions. The three assumptions relevant to the estimation method are smoothness, known bounds and discrete representativity. The first assumption was discussed above. The bounds used here are the same from [1, 13], and clearly will lead to wrong estimates if they don't encompass all the v/V ratios of the real compartments. The a priori choice of 50 compartments is usual in the literature [1, 3, 13, 14]. In this study we tried to match every physical compartment to values present in the chosen 50 v/V ratios, favoring estimation: mismatch(es) between the physical v/V ratios and the set of chosen v/V ratios in the mathematical model will, in general, cause the true ratio to be represented by a combination of modeled compartments. This should affect mainly the amplitude and breadth of the distribution, and less its location. An example of the effects of such mismatch can be seen in Additional file 1: Figure S10. Some limitations are addressed hereupon. To our best knowledge, this is the first report on multiple-breath washout of a multicompartmental physical model. Hence, we could not discuss our results against the literature as to possible comparative improvements. The physical models were limited to up to 4 units and this is far from the number of units found in experimental works with humans [3, 14,15,16]. Considering that the v/V is distributed on a log scale, a simulation with many more units would be difficult to perform in view of the present method of construction of v/V units. For the estimation of v/V distribution we used the same cycles selected for calculating the EELV [6]. For our combination of VT and compartments' volumes, this choice lead to a larger number of cycles than the commonly used of 17 [1, 2], which could have favored our results. Numerical simulations showed that, for the generalized model, both choices of cycles have similar estimations, although 17 cycles respected more the number of modes [5]. In the Additional file 1: Figures S1–S8, we show that this equivalence holds true in our experimental condition, including for the classical model. Lastly, one of the features of the generalized approach to estimate v/V distributions is that VT and EELV are not necessarily constrained to be constant, as in the classical method. The present results did not include tests with variable ventilation [17] feasible at the laboratory since commercial mechanical ventilators currently feature this choice of strategy. In conclusion, the present work compared the v/V distributions estimated by both the classical and generalized approaches employing experimental data obtained with in vitro models. The method that resulted in better coincidence with the actual distribution was the generalized approach with a constrained least squares solver with imposed EELV and VT. 
MBN2W: multiple-breath nitrogen washout
EELV: end-expiratory lung volume
vd: series dead space
v/V: ventilation-to-volume
VT: tidal volume
N: number of modeled alveolar units
VolJ: alveolar unit end-expiratory volume
γ: alveolar unit fraction of tidal volume
J: index of alveolar unit
S: specific ventilation
k: index of breath cycle
\(F_{N_2}^{A}\): alveolar unit concentration of N2
\(F_{N_2}^{et}\): end-tidal N2 concentration
\(F_{N_2}^{I,A}\): alveolar unit inspired N2 concentration
\(F_{N_2}^{I}\): ventilator delivered N2 concentration
1. Wagner PD. Information content of the multibreath nitrogen washout. J Appl Physiol Respir Environ Exerc Physiol. 1979;46:579–87.
2. Whiteley JP, Gavaghan DJ, Hahn CE. A mathematical evaluation of the multiple breath nitrogen washout (MBNW) technique and the multiple inert gas elimination technique (MIGET). J Theor Biol. 1998;194:517–39. https://doi.org/10.1006/jtbi.1998.0772.
3. Lewis SM, Evans JW, Jalowayski AA. Continuous distributions of specific ventilation recovered from inert gas washout. J Appl Physiol Respir Environ Exerc Physiol. 1978;44:416–23.
4. Fowler WS. Lung function studies. II. The respiratory dead space. Am J Physiol. 1948;154:405–16.
5. Motta-Ribeiro GC, Jandre FC, Wrigge H, Giannella-Neto A. Generalized estimation of the ventilatory distribution from the multiple-breath nitrogen washout. Biomed Eng Online. 2016;15:89.
6. Robinson PD, Latzin P, Verbanck S, Hall GL, Horsley A, Gappa M, et al. Consensus statement for inert gas washout measurement using multiple- and single-breath tests. Eur Respir J. 2013;41:507–22. https://doi.org/10.1183/09031936.00069712.
7. Weibel ER. What makes a good lung? Swiss Med Wkly. 2009;139:375–86. https://doi.org/10.4414/smw.2009.12270.
8. Fortune JB, Wagner PD. Effects of common dead space on inert gas exchange in mathematical models of the lung. J Appl Physiol. 1979;47:896–906.
9. Chiumello D, Cressoni M, Chierichetti M, Tallarini F, Botticelli M, Berto V, et al. Nitrogen washout/washin, helium dilution and computed tomography in the assessment of end expiratory lung volume. Crit Care. 2008;12:R150. https://doi.org/10.1186/cc7139.
10. Brunner JX, Wolff G, Cumming G, Langenstein H. Accurate measurement of N2 volumes during N2 washout requires dynamic adjustment of delay time. J Appl Physiol. 1985;59:1008–12.
11. Haidopoulou K, Lum S, Turcu S, Guinard C, Aurora P, Stocks J, et al. Alveolar LCI vs. standard LCI in detecting early CF lung disease. Respir Physiol Neurobiol. 2012;180:247–51.
12. Neumann RP, Pillow JJ, Thamrin C, Frey U, Schulzke SM. Influence of respiratory dead space on lung clearance index in preterm infants. Respir Physiol Neurobiol. 2016;223:43–8.
13. Kapitan KS. Information content of the multibreath nitrogen washout: effects of experimental error. J Appl Physiol. 1990;68(4):1621–7.
14. Prisk GK, Guy HJ, Elliott AR, Paiva M, West JB. Ventilatory inhomogeneity determined from multiple-breath washouts during sustained microgravity on Spacelab SLS-1. J Appl Physiol. 1994;78:597–607.
15. Mitchell RR, Wilson RM, Sierra D. ICU monitoring of ventilation distribution. Int J Clin Monit Comput. 1986;2:199–206.
16. Lewis SM. Emptying patterns of the lung studied by multiple-breath N2 washout. J Appl Physiol. 1978;44:424–30.
17. Huhle R, Pelosi P, Abreu MG. Variable ventilation from bench to bedside. Crit Care. 2016;20:62.
GCMR and AGN conceived the work, developed the physical models and performed the tests. GCMR, FCJ, HW and AGN drafted and revised the manuscript. All authors read and approved the final manuscript.
Acknowledgements

The authors would like to thank Alessandro Beda for developing the time delay correction software.

Availability of data and materials

The results from the experiments of the current study are available from the corresponding author on request.

Funding

AGN was funded by CAPES, Ministério da Educação do Brasil (Fellowship BEX10876/13-8); AGN, FCJ and GCMR were funded by FAPERJ (Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro) and CNPq (Brazilian Research Council). To cover publishing costs, we acknowledge support from the German Research Foundation (DFG) and Leipzig University within the program of Open Access Publishing. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author details

Pulmonary Engineering Laboratory, Biomedical Engineering Programme, COPPE, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil: Gabriel Casulari Motta-Ribeiro, Frederico Caetano Jandre & Antonio Giannella-Neto. Department of Anesthesiology and Intensive Care Medicine, University of Leipzig, Leipzig, Germany: Hermann Wrigge & Antonio Giannella-Neto. Correspondence to Antonio Giannella-Neto.

Additional file

The individual estimates of specific ventilation distributions are shown for each combination of physical and mathematical model, also considering estimates with 17 breath cycles. All estimates of end-expiratory lung volume, total ventilation and dead space are tabulated, together with the reference values. Sensitivity to error in estimated vd and to the number N of modeled compartments.

Citation

Motta-Ribeiro GC, Jandre FC, Wrigge H, et al. Generalized estimation of the ventilatory distribution from the multiple-breath washout: a bench evaluation study. BioMed Eng OnLine. 2018;17:3. https://doi.org/10.1186/s12938-018-0442-3.

Keywords: pulmonary function tests; ventilatory distributions; multiple-breath washout; functional residual capacity; ventilation to volume; Tikhonov regularization; common dead space
Lifetimes of the molecules in excited states are often measured by using a pulsed radiation source of duration nearly in the nanosecond range. If the radiation source has a duration of and the number of photons emitted during the pulse is , calculate the energy of the source. Frequency of radiation $(\nu)$: $\nu=\frac{1}{2.0 \times 10^{-9}\, s}$, so $\nu=5.0 \times 10^{8}\, s^{-1}$. Energy $(E)$ of the source $= Nh\nu$, where $N$ is the number of photons emitted and $\mathrm{h}$ is Planck's... Arrange the following type of radiations in increasing order of frequency: (a) radiation from microwave oven (b) amber light from traffic signal (c) radiation from FM radio (d) cosmic rays from outer space and (e) X-rays. In increasing order of frequency: radiation from FM radio < radiation from microwave oven < amber light < X-rays < cosmic rays. The following is the increasing... Explain, giving reasons, which of the following sets of quantum numbers are not possible. (a) This is not possible; the number n cannot be zero. (b) Possible. (c) This is not possible; the value of l cannot be equal to the value of n. (d) This is not possible, because ml cannot be 1 when... What is the lowest value of n that allows g orbitals to exist? For g orbitals, l = 4. The possible values of l range from 0 to (n − 1); hence, for l = 4 (g orbital) the least value of n is 5. (II) What are the atomic numbers of elements whose outermost electrons are represented by (a) (b) and (c) (II) (a) $3 \mathrm{~s}^{1}$ Complete electronic configuration: $1 \mathrm{~s}^{2} 2 \mathrm{~s}^{2} 2 \mathrm{p}^{6} 3 \mathrm{~s}^{1}$. Total number of electrons in the atom $=2+2+6+1=11$... The electron energy in hydrogen atom is given by . Calculate the energy required to remove an electron completely from the n = 2 orbit. What is the longest wavelength of light in cm that can be used to cause this transition? The required energy for ionization from n = 2 is: ΔE = E∞ − E2 = [(−(2.18 ×... What is the energy in joules, required to shift the electron of the hydrogen atom from the first Bohr orbit to the fifth Bohr orbit and what is the wavelength of the light emitted when the electron returns to the ground state? The ground-state electron energy is ergs. $E_{5}=\frac{-\left(2.18 \times 10^{-18}\right) Z^{2}}{(n)^{2}}$, where $Z$ denotes the atom's atomic number. Ground state energy $=-2.18 \times 10^{-11}$ ergs $=-2.18 \times 10^{-11} \times$... Calculate the wavenumber for the longest wavelength transition in the Balmer series of atomic hydrogen. For the Balmer series of the hydrogen emission spectrum, ni = 2. Hence, the wavenumber expression is: ν̄ = [1/(2)² − 1/nf²](1.097 × 10^7... How can the production of dihydrogen, obtained from 'coal gasification', be increased? Solution: In coal gasification, dihydrogen is produced as $C_{(s)}+H_{2} O_{(g)} \rightarrow C O_{(g)}+H_{2(g)}$ [C = coal]. The reaction of carbon monoxide with steam in the presence of a... The equilibrium constant expression for a gas reaction is, Write the balanced chemical equation corresponding to this expression.
Answer: The equilibrium constant, Kc, is defined as the product of the equilibrium concentrations of the products over the equilibrium concentrations of the reactants, each raised to the power of the... A mixture of mol of mol of and of is introduced into a reaction vessel at . At this temperature, the equilibrium constant, for the reaction is Is the reaction mixture at equilibrium? If not, what is the direction of the net reaction? Answer: The given reaction is: $\mathrm{N}_{2}(\mathrm{~g})+3 \mathrm{H}_{2}(\mathrm{~g}) \rightleftharpoons 2 \mathrm{NH}_{3}(\mathrm{~g})$. The given concentration of various species is... A sample of is placed in a flask at a pressure of . At equilibrium, the partial pressure of is . What is for the given equilibrium? Answer: The initial pressure of $HI$ is $0.2\ atm$. At equilibrium, its partial pressure is $0.04\ atm$. The pressure of $HI$ drops by... For the following equilibrium, at Both the forward and reverse reactions in the equilibrium are elementary bimolecular reactions. What is , for the reverse reaction? Answer: Kp and Kc are equilibrium constants for reversible reactions. The equilibrium constant Kp is stated in terms of partial pressures (in atmospheres), whereas Kc is expressed in terms of concentrations... Write the expression for the equilibrium constant, for each of the following reactions: (i) (ii) Explain the formation of a chemical bond. Answer: "A chemical bond is an attractive force that binds atoms together." Many theories exist for the formation of chemical bonds, including valence shell electron pair repulsion, electronic,... The quantum numbers of six electrons are given below. Arrange them in order of increasing energies. If any of these combinations have the same energy, list them: n = 4, l = 2, ml = –2, ms = –1/2; n = 3, l = 2, ml = 1, ms = +1/2; n = 4, l = 1, ml = 0, ms = +1/2; n = 3, l = 2, ml = –2, ms = –1/2; n = 3, l = 1, ml = –1, ms = +1/2; n = 4, l = 1, ml = 0, ms = +1/2. Electrons 1, 2, 3, 4, 5 and 6 occupy the 4d, 3d, 4p, 3d, 3p and 4p orbitals, respectively. Ranking these orbitals in increasing order of energy: (3p) < (3d) < (4p) < (4d). The unpaired electrons in Al and Si are present in the 3p orbital. Which electrons will experience more effective nuclear charge from the nucleus? The net positive charge acting on an electron in an orbital of an atom with more than one electron is known as the effective nuclear charge. The nuclear charge increases as the atomic number increases. Silicon... List gases which are responsible for greenhouse effect. The major gases that cause the greenhouse effect are: 1) Chlorofluorocarbons (CFCs) 2) Methane (CH4) 3) Carbon dioxide (CO2) 4) Nitrous oxide (N2O) 5) Water vapour (H2O) 6) Ozone (O3). Carbon monoxide gas is more dangerous than carbon dioxide gas. Why? Carbon dioxide (CO2) and carbon monoxide (CO) are both produced when various fuels are burned. Carbon monoxide is toxic in nature, whereas carbon dioxide is non-toxic. Because carbon monoxide forms a... Calculate the wavelength, frequency and wavenumber of a light wave whose period is 2.0 × 10–10 s. Frequency of the light wave: $\nu = \frac{1}{\text{Period}} = \frac{1}{2.0\times 10^{-10}\, s} = 5.0\times 10^{9}\, s^{-1}$. Wavelength of the light wave: $\lambda=\frac{c}{\nu}$, where c denotes...
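Completing that arithmetic, with the speed of light taken as $c = 3.00 \times 10^{8}\, \mathrm{m\,s^{-1}}$ (an assumed rounded value):

$\lambda = \frac{c}{\nu} = \frac{3.00 \times 10^{8}\, \mathrm{m\,s^{-1}}}{5.0 \times 10^{9}\, \mathrm{s^{-1}}} = 6.0 \times 10^{-2}\, \mathrm{m}$

$\bar{\nu} = \frac{1}{\lambda} = \frac{1}{6.0 \times 10^{-2}\, \mathrm{m}} \approx 16.66\, \mathrm{m^{-1}}$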
How many neutrons and protons are there in the following nuclei? \({}_{6}^{13}C\): Mass number of carbon-13 = 13 Atomic number of carbon = Number of protons in one carbon atom = 6 Therfore, total number of neutrons in 1 carbon atom = Mass number – Atomic number =... (i) Calculate the number of electrons which will together weigh one gram. (ii) Calculate the mass and charge of one mole of electrons. 1 electron weighs 9.109*10-31 kg. Therefore, number of electrons that weigh 1 g (10-3 kg) = 1.098*1027 electrons (ii) Mass of one mole of electrons = NA* mass of one electron =... Convert the following into basic units: (i) 28.7 pm (ii) 15.15 pm (iii) 25365 mg CBSE, Chemistry, Class 11, NCERT, Some Basic Concepts of Chemistry (i) 28.7 pm $1 pm = 10^{ -12 } \; m$ $28.7 pm = 28.7 \times 10^{ -12 } \; m$ $= 2.87 \times 10^{ -11 } \; m$ (ii) 15.15 pm $1 pm = 10^{ -12 } \; m$ $15.15 pm = 15.15 \times 10^{ -12 } \; m$... A welding fuel gas contains carbon and hydrogen only. Burning a small sample of it in oxygen gives 3.38 g carbon dioxide, 0.690 g of water and no other products. A volume of 10.0 L (measured at STP) of this welding gas is found to weigh 11.6 g. Find: (i) Empirical formula (ii) Molar mass of the gas, and (iii) Molecular formula (i) Empirical formula 1 mole of $CO_{ 2 }$ contains 12 g of carbon Therefore, 3.38 g of $CO_{ 2 }$ will contain carbon = $\frac{ 12 \; g }{ 44 \; g } \; \times 3.38 \; g$ = 0.9217 g 18 g of... Calculate the concentration of nitric acid in moles per litre in a sample which has a density, 1.41 g mL–1 and the mass per cent of nitric acid in it being 69% Mass percent of HNO3 in sample is 69 % Thus, 100 g of HNO3 contains 69 g of HNO3 by mass. Molar mass of HNO3 = { 1 + 14 + 3(16)} g.mol^{-1}g.mol−1 = 1 + 14 + 48 = 63g mol^{-1}=63gmol−1 Now,... Assertion (A): Toluene on Friedel Crafts methylation gives o– and p–xylene. Reason (R): CH3-group bonded to benzene ring increases electron density at o– and p– position. (i) Both A and R are correct and R is the correct explanation of A. (ii) Both A and R are correct but R is not the correct explanation of A. (iii) Both A and R are not correct. (iv) A is not correct but R is correct. CBSE, Chemistry, Class 11, Hydrocarbons, ncert exemplar Option (i) is correct Match the reactions given in Column I with the reaction types in Column II. (i) is d (ii) is a (iii) is b (iv) is c Pressure versus volume graph for a real gas and an ideal gas is shown in Fig. 5.4. Answer the following questions based on this graph. (i) Interpret the behaviour of real gas with respect to an ideal gas at low pressure. (ii) Interpret the behaviour of real gas with respect to an ideal gas at high pressure. (iii)Mark the pressure and volume by drawing a line at the point where real gas behaves as an ideal gas. CBSE, Chemistry, Class 11, NCERT Exemplar, States of Matter (i) At low pressure as the dark blue curve and the sky blue curve are approaching each other, it shows that the real gas is behaving as an ideal gas at a low pressure. (ii) At high pressure as the... The variation of pressure with the volume of the gas at different temperatures can be graphically represented as shown in Fig. 5.3. Based on this graph answer the following questions. (i) How will the volume of a gas change if its pressure is increased at constant temperature? (ii) At constant pressure, how will the volume of a gas change if the temperature is increased from 200K to 400K? 
(i) As the temperature is constant, increasing the pressure decreases the volume; pressure and volume are inversely related (Boyle's law). (ii) At constant pressure, by increasing the temperature there is... Explain the effect of increasing the temperature of a liquid on the intermolecular forces operating between its particles. What will happen to the viscosity of a liquid if its temperature is increased? As the temperature increases, the intermolecular forces operating between the particles weaken and the kinetic energy of the particles increases. Hence, as the temperature... The relation between the pressure exerted by an ideal gas (Pideal) and the observed pressure (Preal) is given by the equation: Pideal = Preal + an²/V². If the pressure is taken in N m⁻², the number of moles in mol and the volume in m³, calculate the unit of 'a'. What will be the unit of 'a' when pressure is in atmosphere and volume in dm³? We know that: Pideal = Preal + an²/V², so Pideal – Preal = an²/V². N m⁻² = a × mol²/m⁶, hence a has the unit N m⁴ mol⁻². The unit of 'a' when the pressure is taken in N m⁻², the number of moles in mol and the volume in m³ is N m⁴ mol⁻²... For real gases the relation between p, V and T is given by the van der Waals equation: (P + an²/V²)(V – nb) = nRT, where 'a' and 'b' are van der Waals constants, 'nb' is approximately equal to the total volume of the molecules of a gas, and 'a' is the measure of the magnitude of intermolecular attraction. (i) Arrange the following gases in the increasing order of 'b'. Give reason. O2, CO2, H2, He (ii) Arrange the following gases in the decreasing order of magnitude of 'a'. Give reason. CH4, O2, H2 (i) The increasing order of 'b' is as follows: He < H2 < O2 < CO2, as the van der Waals constant 'b' is approximately equal to the total volume of the molecules of a gas. (ii) The decreasing... The addition of HBr to 1-butene gives a mixture of products A, B and C; the mixture consists of (i) A and B as major and C as minor products (ii) B as major, A and C as minor products (iii) B as minor, A and C as major products (iv) A and B as minor and C as major products Option (i) is the correct response. Assertion (A): Excessive use of chlorinated synthetic pesticides causes soil and water pollution. Reason (R): Such pesticides are non-biodegradable. (i) Both A and R are correct and R is the correct explanation of A. (ii) Both A and R are correct but R is not the correct explanation of A. (iii) Both A and R are not correct. (iv) A is not correct but R is correct. Option (i) is the answer. Based on chemical reactions involved, explain how chlorofluorocarbons cause thinning of the ozone layer in the stratosphere. The action of ultraviolet radiation causes chlorofluorocarbons to dissociate, releasing free chlorine radicals: CF2Cl2 + UV radiation → •CF2Cl + Cl•. Now that this chlorine radical has formed, it is... Biochemical Oxygen Demand (BOD) is a measure of the organic material present in water. A BOD value of less than 5 ppm indicates a water sample to be __________. (i) rich in dissolved oxygen. (ii) poor in dissolved oxygen. (iii) highly polluted. (iv) not suitable for aquatic life. The solution is option (i). Which of the following statements is not true about classical smog? (i) Its main components are produced by the action of sunlight on emissions of automobiles and factories. (ii) Produced in a cold and humid climate. (iii) It contains compounds of reducing nature. (iv) It contains smoke, fog and sulphur dioxide.
Photochemical smog occurs in a warm, dry and sunny climate. One of the following is not amongst the components of photochemical smog, identify it. (i) NO2 (ii) O3 (iii) SO2 (iv) Unsaturated hydrocarbon The solution is option (iii). Which of the following gases is not a greenhouse gas? (i) CO (ii) O3 (iii) CH4 (iv) H2O vapour Option (i) is the correct answer The critical temperature (Tc) and critical pressure (Pc) of CO2 are 30.98°C and 73atm respectively. Can CO2(g) be liquefied at 32°C and 80atm pressure? CO2 gas cannot be liquefied at a temperature which is greater than its critical temperature i.e 30.98°C even by applying any pressure. So as the given temperature is 32°C by applying a pressure of... Compressibility factor, Z, of a gas is given as Z = PV/ nRT (i) What is the value of Z for an ideal gas? (ii) For real gas what will be the effect on the value of Z above Boyle's temperature? (i) Compressibility factor, Z is defined as the ratio of the product of pressure and volume to the product of the number of moles, gas constant and temperature. For an ideal gas, the value of Z is... One of the assumptions of the kinetic theory of gases is that there is no force of attraction between the molecules of a gas. State and explain the evidence that shows that the assumption is not applicable for real gases. Under a condition of low pressure and high temperature the assumption made by kinetic theory is true. At high temperature, the molecules will be very far from each other and at low pressure, the... Name two intermolecular forces that exist between HF molecules in a liquid state. Hydrogen bonding and dipole-dipole interaction (HF-HF interaction) exists between HF molecule in a liquid state. Name the energy which arises due to the motion of atoms or molecules in a body. How is this energy affected when the temperature is increased? Thermal energy arises due to the motion of particles (atoms or molecules) in the body. If we increase the temperature then the kinetic energy of atom and molecule increases significantly and they... The pressure exerted by saturated water vapour is called aqueous tension. What correction term will you apply to the total pressure to obtain a pressure of dry gas? The total pressure of the gas is Pmoist gas = Pdry gas By applying the correction term, we have: Pdry gas = Pmoist gas – Aqueous tension Therefore, the correction term applied to the total pressure... Physical properties of ice, water and steam are very different. What is the chemical composition of water in all three states? H2O exists in three different states of matter. It exists in the solid form as ice, in the liquid form as water and as steam in the gaseous state. All of these states consist of water, due to which... Which of the following figures does not represent 1 mole of dioxygen gas at STP? (i) 16 grams of gas (ii) 22.7 litres of gas (iii) 6.022 × 1023 dioxygen molecules (iv) 11.2 litres of gas The correct options are (i) and (iv). With regard to the gaseous state of matter which of the following statements are correct? (i) Complete order of molecules (ii) Complete disorder of molecules (iii) Random motion of molecules (iv) Fixed position of molecules Option (ii) and (iii) are the correct statements. How does the surface tension of a liquid vary with an increase in temperature? (i) Remains the same (ii) Decreases (iii) Increases (iv) No regular pattern is followed The correct option is (ii) Decreases. Increase in kinetic energy can overcome intermolecular forces of attraction. 
How will the viscosity of liquid be affected by the increase in temperature? (i) Increase (ii) No effect (iii) Decrease (iv) No regular pattern will be followed CBSE, Chemistry, Class 11, NCERT Exemplar, Structure of Atom The correct option is (iii) Decrease. Which curve in Fig. 5.2 represents the curve of an ideal gas? (i) B only (ii) C and D only (iii) E and F only (iv) A and B only The correct option is (i) B only. Atmospheric pressures recorded in different cities are as follows: Cities Shimla Bangalore Delhi Mumbai p in N/m2 1.01×105 1.2×105 1.02×105 1.21×105. Consider the above data and mark the place at which liquid will boil first. (i) Shimla (ii) Bangalore (iii) Delhi (iv) Mumbai The correct option is (i) Shimla What is the SI unit of viscosity coefficient (η)? (i) Pascal (ii) Nsm–2 (iii) km–2 s (iv) N m–2 The correct option is (ii) Nsm–2 Gases possess characteristic critical temperature which depends upon the magnitude of intermolecular forces between the particles. Following are the critical temperatures of some gases. Gases H2 He O2 N2 Critical temperature in Kelvin 33.2 5.3 154.3 126 From the above data what would be the order of liquefaction of these gases? Start writing the order from the gas liquefying first (i) H2, He, O2, N2 (ii) He, O2, H2, N2 (iii) N2, O2, He, H2 (iv) O2, N2, H2, He The correct option is (iv) O2, N2, H2, He. As the temperature increases, the average kinetic energy of molecules increases. What would be the effect of the increase of temperature on pressure provided the volume is constant? (i) increases (ii) decreases (iii) remains the same (iv) becomes half The correct option is (i) increases. The pressure of a 1:4 mixture of dihydrogen and dioxygen enclosed in a vessel is one atmosphere. What would be the partial pressure of dioxygen? (i) 0.8×105 atm (ii) 0.008 Nm–2 (iii) 8×104 Nm–2 (iv) 0.25 atm Chemistry, Class 11, Exercise 10.7, NCERT Exemplar, States of Matter The correct option is (iii) 8×104 Nm–2 Dipole-dipole forces act between the molecules possessing permanent dipole. Ends of dipoles possess 'partial charges'. The partial charge is (i) more than unit electronic charge (ii) equal to unit electronic charge (iii) less than unit electronic charge (iv) double the unit electronic charge The correct option is (iii) less than unit electronic charge. The interaction energy of the London force is inversely proportional to the sixth power of the distance between two interacting particles but their magnitude depends upon (i) charge of interacting particles (ii) mass of interacting particles (iii) polarisability of interacting particles (iv) strength of permanent dipoles in the particles. The correct option is (iii) polarisability of interacting particles. A plot of volume (V) versus temperature (T) for a gas at constant pressure is a straight line passing through the origin. The plots at different values of pressure are shown in Fig. 5.1. Which of the following order of pressure is correct for this gas? (i) p1 > p2 > p3 > p4 (ii) p1 = p2 = p3 = p4 (iii) p1 < p2 < p3 < p4 (iv) p1 < p2 = p3 < p4 CBSE, Class 11, NCERT Exemplar, States of Matter The correct option is (iii) p1 < p2 < p3 < p4. A person living in Shimla observed that cooking food without using pressure cooker takes more time. The reason for this observation is that at high altitude: (i) pressure increases (ii) temperature decreases (iii) pressure decreases (iv) temperature increases The correct option is (iii) pressure decreases. 
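Reading the 1:4 ratio in the dihydrogen–dioxygen item above as hydrogen to oxygen, as the keyed answer implies, the working behind option (iii) follows from Dalton's law of partial pressures, with 1 atm taken as $1.01325 \times 10^{5}\, \mathrm{N\,m^{-2}}$:

$x_{O_2} = \frac{4}{1 + 4} = 0.8$

$p_{O_2} = x_{O_2} \times p_{\mathrm{total}} = 0.8 \times 1.01325 \times 10^{5}\, \mathrm{N\,m^{-2}} \approx 8 \times 10^{4}\, \mathrm{N\,m^{-2}}$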
Match the intermediates given in Column I with their probable structure in CBSE, Class 11, Exercise 1, ncert exemplar, Organic Chemistry: Some Basic Principles and Techniques (i) is a (ii) is a (iii) is b If a liquid compound decomposes at its boiling point, which method(s) can you choose for its purification. It is known that the compound is stable at low pressure, steam volatile and insoluble in water. CBSE, Chemistry, Class 11, Exercise 1, ncert exemplar, Organic Chemistry: Some Basic Principles and Techniques Because the liquid component decomposes near its boiling point, indicating that it is heat-sensitive, we purify it using "Steam distillation." For temperature-sensitive materials, this is done. Which of the following compounds contain all the carbon atoms in the same hybridisation state? (i) H—C ≡ C—C ≡ C—H (ii) CH3—C ≡ C—CH3 (iii) CH2 = C = CH2 (iv) CH2 = CH—CH = CH2 The correct answers are I and (iv). Ionic species are stabilised by the dispersal of charge. Which of the following carboxylate ion is the most stable? Option IV is the correct answer. During the hearing of a court case, the judge suspected that some changes in the documents had been carried out. He asked the forensic department to check the ink used at two different places. According to you which technique can give the best results? (i) Column chromatography (ii) Solvent extraction (iii) Distillation (iv) Thin-layer chromatography Electronegativity of carbon atoms depends upon their state of hybridisation. In which of the following compounds, the carbon marked with an asterisk is most electronegative? (i) CH3 – CH2 – *CH2 –CH3 (ii) CH3 – *CH = CH – CH3 (iii) CH3 – CH2 – C ≡ *CH (iv) CH3 – CH2 – CH = *CH2 Option (iii) is the correct answer. Which of the following is the correct IUPAC name? (i) 3-Ethyl-4, 4-dimethylheptane (ii) 4,4-Dimethyl-3-ethylheptane (iii) 5-Ethyl-4, 4-dimethylheptane (iv) 4,4-Bis(methyl)-3-ethylheptane Option I is the correct answer. Assertion (A): Silicons are water-repelling in nature. Reason (R): Silicons are organosilicon polymers, which have (–R2SiO–) as repeating unit. (i) A and R both are correct and R is the correct explanation of A. (ii) Both A and R are correct but R is not the correct explanation of A. (iii) A and R both are not true. (iv) A is not true but R is true. CBSE, Chemistry, Class 11, ncert exemplar, The p-Block Elements Correct Option is (ii) Assertion (A): If aluminium atoms replace a few silicon atoms in three the dimensional network of silicon dioxide, the overall structure acquires a negative charge. Reason (R): Aluminium is trivalent while silicon is tetravalent. (i) Both A and R are correct and R is the correct explanation of A. (ii) Both A and R are correct but R is not the correct explanation of A. (iii) Both A and R are not correct (iv) A is not correct but R is correct. Correct Option is (i) Match the species given in Column I with the properties mentioned in Column II. (i) is e (ii) is c (iii) is d (iv) is a,b Identify the compounds A, X and Z in the following reactions : (i) A + 2HCL + 5H2O → 2NaCI + X X → HBO2 → Z A is Borax, which produces Orthoboric acid when it combines with HCl in the presence of water (X). When Orthoboric acid is heated, Metaboric is formed, and when heated further, the chemical Z, i.e.... Explain the following : (ix) BF3 does not hydrolyse. (x) Why does the element silicon, not form graphite-like structure whereas carbon does. (ix) BF3 does not entirely hydrolyze. Instead, it forms boric acid and fluoroboric acid after partial hydrolysis. 
Because the HF is produced first, it interacts with H3BO3. As a result, BF3 does not... Explain the following : (iii) Aluminium forms [AlF6]3- ion but boron does not form [BF6]3- ion. (iv) PbX2 is more stable than PbX4. (iii) While aluminium has an empty d-orbital to accommodate the electrons from the fluorine atom, boron does not have an empty d-orbital. (iv); Pb is a part of the periodic table's group 14. (carbon... When BCl3 is treated with water, it hydrolyses and forms [B[OH]4]- only whereas AlCl3 in acidified aqueous solution forms [Al (H2O)6]3+ ion. Explain what is the hybridisation of boron and aluminium in these species? AlCl3 + 6H2O → [Al(H2O)6]3+ + 3Cl– The 6 H2O molecules bind to Al, donating 6 electron pairs to the Al3+ ion's 3s, 3p, and 3d orbitals. As a result, the Al atom hybridization in [Al(H2O)6]3+ species... If a trivalent atom replaces a few silicon atoms in a three-dimensional network of silicon dioxide, what would be the type of charge on an overall structure? One valence electron of each Si atom will become free if a few tetrahedral Si atoms in a three-dimensional network structure of SiO2 are replaced with an equal number of trivalent atoms. As a... Explain the following : (i) CO2 is a gas whereas SiO2 is solid. (ii) Silicon forms SiF62- ion whereas the corresponding fluoro compound of carbon is not known. I In comparison to carbon, silicon has a huge size. It does not provide a decent overlapping pattern. With oxygen atoms, it forms four single covalent bonds. Two Si atoms are connected to each... Explain why the following compounds behave as Lewis acids? (ii) AlCl3 Because aluminium contains three electrons in its valence shell, AlCl3 forms a covalent connection with chlorine by creating three single chlorine bonds, making it an electron-deficient molecule and... Identify the correct resonance structures of carbon dioxide from the ones given below : (i) O – C ≡ O (ii) O = C = O (iii) –O ≡ C – O+ (iv) –O – C ≡ O+ The solutions are options (ii) and (iv). Ionisation enthalpy (∆i H1 kJ mol–1) for the elements of Group 13 follows the order. (i) B > Al > Ga > In > Tl (ii) B < Al < Ga < In < Tl (iii) B Ga Tl (iv) B > Al In < Tl Option IV is the correct response Catenation i.e., linking of similar atoms dependson size and electronic configuration of atoms. The tendency of catenation in Group 14 elements follows the order: (i) C > Si > Ge > Sn (ii) C >> Si > Ge ≈ Sn (iii) Si > C > Sn > Ge (iv) Ge > Sn > Si > C Option II is the correct response. Boric acid is an acid because its molecule (i) contains replaceable H+ ion (ii) gives up a proton (iii) accepts OH– from water releasing a proton (iv) combines with a proton from the water molecule Option III is the correct response. The exhibition of the highest co-ordination number depends on the availability of vacant orbitals in the central atom. Which of the following elements is not likely to act as a central atom in MF6 3–? (i) B (ii) Al (iii) Ga (iv) In W hich of the following oxides is acidic? (i) B2O3 (ii) Al2O3 (iii) Ga2O3 (iv) In2O3 Which of the following is a Lewis acid? (i) AlCl3 (ii) MgCl2 (iii) CaCl2 (iv) BaCl2 CBSE, Chemistry, Chemistry, Class 11, Exercise 1, Exercise 1, NCERT, Structure of Atom, Structure of Atom Ans: 24 is the mass number. The number of protons in an atom equals the number of atoms in the atom. Mass number – Atomic number = 24 – 12 = 12 neutrons Numerical value of mass nubmer = 56 Number... Assertion (A): The black body is an ideal body that emits and absorbs radiations of all frequencies. 
Reason (R): The frequency of radiation emitted by a body goes from a lower frequency to a higher frequency with an increase in temperature. (i) Both A and R are true and R is the correct explanation of A. (ii) Both A and R are true but R is not the explanation of A. (iii) A is true and R is false. (iv) Both A and R are false. ... The effect of the uncertainty principle is significant only for the motion of microscopic particles and is negligible for macroscopic particles. Justify the statement with the help of a suitable example. The uncertainty principle is applicable only for microscopic particles, as can be concluded from the uncertainty calculation. Example: take a particle of mass 1 milligram: ∆x·∆v = 6.626 × 10^-34/... Table-tennis ball has a mass 10 g and a speed of 90 m/s. If speed can be measured within an accuracy of 4%, what will be the uncertainty in speed and position? According to Heisenberg's uncertainty principle: ∆x·∆p ≥ h/4π. Mass of the ball = 10 g = 10 × 10^-3 kg; speed = 90 m/s. Uncertainty of speed, ∆v = 4/100 × 90 = 3.6 m/s. ∆x = h/(4πm∆v) = 6.626 × 10^-34 / (4 × 3.14... What is the difference between the terms orbit and orbital? An orbit represents a well-defined circular path for an electron around the nucleus, describing a planar movement of the electron; an orbital is not that well defined, because it... Chlorophyll present in green leaves of plants absorbs light at 4.620 × 10^14 Hz. Calculate the wavelength of the radiation in nanometres. Which part of the electromagnetic spectrum does it belong to? The relationship between the wavelength and the frequency: λ = c/ν, where c is the velocity of light and ν is the frequency of the radiation. λ = 3 × 10^8 m s^-1 / 4.620 × 10^14 Hz. Hence, λ = 0.6494... Out of electron and proton, which one will have a higher velocity to produce matter waves of the same wavelength? Explain it. The electron, which is the lighter particle, will have the higher velocity while producing matter waves of the same wavelength. This is because, for a given wavelength, a smaller mass requires a higher velocity. What is the experimental evidence in support of the idea that electronic energies in an atom are quantized? The bright line spectrum shows that the atomic energy levels are quantized. These lines are found to be the result of electronic transitions between energy levels, and the atomic spectrum would have shown... According to de Broglie, matter should exhibit dual behaviour, that is, both particle and wave-like properties. However, a cricket ball of mass 100 g does not move like a wave when it is thrown by a bowler at a speed of 100 km/h. Calculate the wavelength of the ball and explain why it does not show wave nature. Calculation: Given, mass m = 100 g = 0.1 kg; velocity = 100 km/h = 100 × 1000 / (60 × 60) = 1000/36 m/s. λ = h/mv, so λ = 2.387 × 10^-34 m. The Balmer series in the hydrogen spectrum corresponds to the transition from n1 = 2 to n2 = 3, 4, ... This series lies in the visible region. Calculate the wavenumber of the line associated with the transition in the Balmer series when the electron moves to the n = 4 orbit. (RH = 109677 cm^-1) Calculation: According to Bohr's model for the hydrogen atom: ν̄ = RH(1/n1² − 1/n2²) cm^-1. Given n1 = 2, n2 = 4 and RH (Rydberg constant) = 109677 cm^-1, wave number = 109677 × (1/4 − 1/16). Hence, wave number =... The electronic configuration of the valence shell of Cu is 3d10 4s1 and not 3d9 4s2. How is this configuration explained?
Great stability is established to the orbitals which are half or completely filled. In the given electronic configuration 3d104s1 of Copper (Cu), the stability is assured (d orbitals - filled, s... Wavelengths of different radiations are given below : λ(A) = 300 nm λ(B) = 300 μm λ(c) = 3 nm λ (D) 30 A° Arrange these radiations in the increasing order of their energies. Given, λ(A) = 300 nm λ(A) = 300 x 10-9 m λ(A) = 3 x 10 -7 m λ(B)... An atom having atomic mass number 13 has 7 neutrons. What is the atomic number of the atom? Calculation: Atomic mass number = number of protons + number of neutrons Number of protons = atomic mass number – number of neutrons. Hence, atomic number of an atom = 13 – 7 = 6. Which of the following will not show deflection from the path on passing through an electric field? Proton, cathode rays, electron, neutron Neutron shows no deflection from the path passing through the electric field. This is due to the neutrality of neutron particles. Therefore, it has no charge and is not affected by any electrical... The arrangement of orbitals based on energy is based upon their (n+l ) value. Lower the value of (n+l ), lower is the energy. For orbitals having the same values of (n+l), the orbital with a lower value of n will have lower energy. Based upon the above information, arrange the following orbitals in the increasing order of energy (a) 1s, 2s, 3s, 2p (b) 4s, 3s, 3p, 4d (c) 5p, 4d, 5d, 4f, 6s (d) 5f, 6d, 7s, 7p Based upon the... Calculate the total number of angular nodes and radial nodes present in 3p orbital. The region where the probability of finding the electrons is zero, it is considered as the nodes and is it present among the orbitals. Example: In the np orbitals, Nodes = n – l – 1 Nodes = 3 –1 – 1... Which of the following orbitals are degenerate? 3dxy, 4dxy, 3dz2 , 3dyx, 4dyx, 4dzz The electron energy in a multielectron atom, in contrast to the hydrogen atom, depends not only on its quantum number, but also on its azimuthal quantum number. The same electron shells and the same... Nickel atom can lose two electrons to form Ni2+ ion. The atomic number of nickel is 28. From which orbital will nickel lose two electrons. 1 Ni atom = 28 electrons and its electronic configuration is 4s2 3d8 It turns to Ni2+ by losing 2 electrons and its electronic configuration becomes 4s0 3d8 According to the Aufbau principle, Ni... Show the distribution of electrons in oxygen atom (atomic number 8) using orbital diagram. Distribution of electrons in oxygen atom: 1s22s22p4 Arrange s, p and d sub-shells of a shell in the increasing order of effective nuclear charge (Zeff) experienced by the electron present in them. Arrangement of the subshells: d<p<s The s-orbitals shield the electrons a lot more when compared to the p-orbitals from the nucleus. Which of the following statements concerning the quantum numbers are correct? (i) The angular quantum number determines the three-dimensional shape of the orbital. (ii) The principal quantum number determines the orientation and energy of the orbital. (iii) The magnetic... In which of the following pairs, the ions are iso-electronic? (i) Na+, Mg2+ (ii) Al3+, O– (iii) Na+, O2- (iv) N3-, Cl– Correct Answers: (i) Na+, Mg2+ (iii) Na+, O2- Explanation: Isoelectronic species are the atoms / ions that has the same number... Which of the following sets of quantum numbers is correct? 
n l m n l m (i) 1 1 +2 (ii) 2 1 +1 (iii) 3 2 –2 (iv) 3 4 –2 Correct Answers: (ii) 2 1 +1 (iii) 3 2 –2 Explanation: The correct sets of quantum numbers are, n = 2, l = 1, m = +1 n = 3, l = 2, m =... Out of the following pairs of electrons, identify the pairs of electrons present in degenerate orbitals : (i) (a) n = 3, l = 2, ml = –2, ms= − ½ (b) n = 3, l = 2, ml = –1, ms= − 1/2 (ii) (a) n = 3, l = 1, ml = 1, ms = + ½ (b) n = 3, l = 2, ml = 1, ms = +1/2 (iii) (a) n = 4, l = 1, ml = 1, ms = +... Identify the pairs which are not of isotopes? (i) 6X12, 6Y13 (ii) 17X35, 6Y37 (iii) 6X14, 7Y14 (iv) 4X8, 5Y8 Correct Answers: (iii) 6X14, 7Y14 (iv) 4X8, 5Y8 Explanation: Isotopes are the atoms having same atomic number but... If travelling at the same speeds, which of the following matter waves have the shortest wavelength? (i) Electron (ii) An alpha particle (He2+) (iii) Neutron (iv) Proton Correct Answer: (ii) An alpha particle (He2+) Explanation: According to de Broglie's equation, the alpha particles... For the electrons of an oxygen atom, which of the following statements is correct? (i) Zeff for an electron in a 2s orbital is the same as Zeff for an electron in a 2p orbital. (ii) An electron in the 2s orbital has the same energy as an electron in the 2p orbital. (iii) Zeff for... The pair of ions having same electronic configuration is __________. (i) Cr3+, Fe3+ (ii) Fe3+, Mn2+ (iii) Fe3+, Co3+ (iv) Sc3+, Cr3+ Correct Answer: (ii) Fe3+, Mn2+ Explanation: Fe - Z=26 : 3d64s2 Fe3+- 3d5 Mn - Z=25 : 3d54s2 Mn2+ : 3d5 Hence,... Number of angular nodes for 4d orbital is __________. (i) 4 (ii) 3 (iii) 2 (iv) 1 Correct Answer: (iii) 2 Explanation: The Number of angular nodes = l (azimuthal quantum number) Hence, the number of angular nodes for 4d orbital is... Define the law of multiple proportions. Explain it with two examples. How does this law point to the existence of atoms? Chemistry, Class 11, Exercise 1, NCERT, Some Basic Concepts of Chemistry When two elements combine to form two or more chemical compounds, then the mass of one of the compounds in a fixed mass of the other holds a simple measure of each other is the law of equality.... Calcium carbonate reacts with aqueous HCl to give CaCl2 and CO2 according to the reaction given below: CaCO3 (s) + 2HCl (aq) → CaCl2(aq) + CO2(g) + H2O(l) What mass of CaCl2 will be formed when 250 mL of 0.76 M HCl reacts with 1000 g of CaCO3? Name the limiting reagent. Calculate the number of moles... A vessel contains 1.6 g of dioxygen at STP (273.15K, 1 atm pressure). The gas is now transferred to another vessel at a constant temperature, where the pressure becomes half of the original pressure. Calculate: (i) the volume of the new vessel. (ii) a number of molecules of dioxygen. (i) Calculation: Moles of oxygen = 1.6/32 Moles of oxygen = 0.05mol 1 mol of oxygen= 22.4L (at STP) Volume of Oxygen (V1) = 22.4 × 0.05 Volume of Oxygen (V1) = 1.12L V2 =? P1 = 1atm P2 = ½ P2 =... Assertion (A): One atomic mass unit is defined as one-twelfth of the mass of one carbon-12 atom. Reason (R): Carbon-12 isotope is the most abundant isotope of carbon and has been chosen as the standard. (i) Both A and R are true and R is the correct explanation of A. (ii) Both A and R are true but R is not the correct explanation of A. (iii) A is true but R is false. (iv) Both A and R are false. Correct Answer: (ii) Both A and R are true but R is not the correct explanation of A Explanation: The carbon 12 isotope defines the mass of atoms and molecules. 
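Carrying the limiting-reagent calculation above through, with molar masses taken as CaCO3 = 100 g mol^-1 and CaCl2 = 111 g mol^-1 (rounded values assumed here):

$n_{HCl} = 0.250\ \mathrm{L} \times 0.76\ \mathrm{mol\ L^{-1}} = 0.19\ \mathrm{mol}$

$n_{CaCO_3} = \frac{1000\ \mathrm{g}}{100\ \mathrm{g\ mol^{-1}}} = 10\ \mathrm{mol}$

Since 10 mol of CaCO3 would need 20 mol of HCl, HCl is the limiting reagent. The equation gives 1 mol of CaCl2 per 2 mol of HCl, so $n_{CaCl_2} = \frac{0.19}{2} = 0.095\ \mathrm{mol}$ and the mass of CaCl2 formed $= 0.095 \times 111 \approx 10.5\ \mathrm{g}$.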
Assertion (A): The empirical mass of ethene is half of its molecular mass. Reason (R): The empirical formula represents the simplest whole-number the ratio of various atoms present in a compound. (i) Both A and R are true and R is the correct explanation of A. (ii) A is true but R is false. (iii) A is false but R is true. (iv) Both A and R are false. Correct Answer: (i) Both A and R are true and R is the correct explanation of A Explanation: The empirical formula represents the simplest whole-number the ratio of various atoms present in a... Match the following Physical quantity Unit (i) Molarity (a) g mL–1 (ii) Mole fraction (b) mol (iii) Mole (c) Pascal (iv) Molality (d) Unitless (v) Pressure (e) mol L–1 (vi) Luminous intensity (e) mol L–1 (vii) Density... (i) 88 g of CO2 (a) 0.25 mol (ii) 6.022 ×1023 molecules of H2O (b) 2 mol (iii) 5.6 litres of O2 at STP (c) 1 mol (iv) 96 g of O2 (d) 6.022 × 1023 molecules (v) 1 mol of any gas (e) 3 mol ... If 4 g of NaOH dissolves in 36 g of H2O, calculate the mole fraction of each component in the solution. Also, determine the molarity of the solution (specific gravity of solution is 1g mL–1). Calculation: Mole fraction of H2O = Number of moles of H2O / Total number of moles (H2O +NaOH) Number of moles of H2O = 36/18 Number of moles of H2O =2 moles Number of moles of NaOH = 4/40 Number of... The volume of a solution changes with change in temperature, then, will the molality of the solution be affected by temperature? Give a reason for your answer. The Mass do not change when the temperature changes and so the molality of a solution do not change as well. Molality of a substance is defined as the number of mass of solute per mass of the... Hydrogen gas is prepared in the laboratory by reacting dilute HCl with granulated zinc. Following reaction takes place. Zn + 2HCl → ZnCl2 + H2 Calculate the volume of hydrogen gas liberated at STP when 32.65 g of zinc reacts with HCl. 1 mol of a gas occupies 22.7 L volume at STP; atomic mass of Zn = 65.3 u. ... If two elements can combine to form more than one compound, the masses of one element that combine with a fixed mass of the other element, are in whole-number ratio. (a) Is this statement true? (b) If yes, according to which law? (c) Give one example related to this law. (a) If two elements can combine to form more than one compound, the masses of one element that combine with a fixed mass of the other element, are in whole-number ratio and this statement is true.... Calculate the mass percent of calcium, phosphorus and oxygen in calcium phosphate Ca3(PO4) Calculation: Molecular mass of Ca3(PO4) = (3 X 40) + (2 X 31) + (8 X 16) Molecular mass of Ca3(PO4) = 310 Mass percentage of Ca = \[\frac{3\times 40}{310\times 100}\] Mass percentage of Ca = 38.71%... What is the difference between molality and molarity? Molarity Molality Number of moles per volume of the solution in litres Number of mass of solute per mass of the solvent in liters Unit - M Unit - m What is the symbol for the SI unit of a mole? How is the mole defined? Mole is the amount of substance containing more entities because there are atoms in 12 g of carbon. SI unit symbol – mol How many significant figures should be present in the answer of the following calculations? 2.5×1.25×3.5/ 2.01 Number Significant figures 2.5 2 1.25 3 3.5 2 2.01 3 In the given calculation, involving both the multiplication and division, the significant figures present is 2. Hence, the result cannot have... What will be the mass of one atom of C-12 in grams? 
1 mole of carbon-12 atoms (6.022 × 10^23 atoms) weighs 12 g. Therefore, the mass of one atom of C-12 in grams = 12 / (6.022 × 10^23) = 1.99 × 10^-23 g. One of the statements of Dalton's atomic theory is given below: "Compounds are formed when atoms of different elements combine in a fixed ratio." Which of the following laws is not related to this statement? (i) Law of conservation of mass (ii) Law of definite proportions (iii) Law of multiple proportions (iv) Avogadro's law Correct Answers: (i) Law of conservation of mass; (iv) Avogadro's law. Explanation: According to Dalton's atomic theory, chemical compounds are formed when atoms of different elements join in a... Which of the following solutions have the same concentration? (i) 20 g of NaOH in 200 mL of solution (ii) 0.5 mol of KCl in 200 mL of solution (iii) 40 g of NaOH in 100 mL of solution (iv) 20 g of KOH in 200 mL of solution Correct Answers: (i)... Which of the following pairs have the same number of atoms? (i) 16 g of O2(g) and 4 g of H2(g) (ii) 16 g of O2 and 44 g of CO2 (iii) 28 g of N2 and 32 g of O2 (iv) 12 g of C(s) and 23 g of Na(s) Correct Answers: (iii) 28 g of N2 and 32 g of... Sulphuric acid reacts with sodium hydroxide as follows: H2SO4 + 2NaOH → Na2SO4 + 2H2O. When 1 L of 0.1 M sulphuric acid solution is allowed to react with 1 L of 0.1 M sodium hydroxide solution, the amount of sodium sulphate formed and its molarity in the... One mole of oxygen gas at STP is equal to _______. (i) 6.022 × 10^23 molecules of oxygen (ii) 6.022 × 10^23 atoms of oxygen (iii) 16 g of oxygen (iv) 32 g of oxygen Correct Answers: (i) 6.022 × 10^23 molecules of oxygen; (iv) 32 g of... Which of the following statements indicates that the law of multiple proportions is being followed? (i) A sample of carbon dioxide taken from any source will always have carbon and oxygen in the ratio 1:2. (ii) Carbon forms two oxides, namely CO2 and CO, where the masses of oxygen which combine with a... Which of the following reactions is not correct according to the law of conservation of mass? (i) 2Mg(s) + O2(g) → 2MgO(s) (ii) C3H8(g) + O2(g) → CO2(g) + H2O(g) (iii) P4(s) + 5O2(g) → P4O10(s) (iv) CH4(g) + 2O2(g) → CO2(g) + 2H2O(g) Correct Answer: (ii) C3H8(g) + O2(g) →... Which of the following statements is correct about the reaction given below: 4Fe(s) + 3O2(g) → 2Fe2O3(s)? (i) The total mass of iron and oxygen in the reactants = total mass of iron and oxygen in the product; therefore it follows the law of conservation of mass. (ii) The total mass of reactants = total mass of product; therefore, the law of multiple proportions is followed. (iii) The amount of Fe2O3 can be increased by taking any one of the reactants (iron or oxygen) in excess. (iv) The amount of Fe2O3 produced will decrease if the amount of any one of the reactants (iron or oxygen) is taken in excess. Correct Answer: (i) The total mass of iron and oxygen in the reactants = total mass of iron and oxygen in the product; therefore it follows the law of conservation of mass. Explanation: From the reaction,... Which of the following statements about a compound is incorrect? (i) A molecule of a compound has atoms of different elements. (ii) A compound cannot be separated into its constituent elements by physical methods of separation. (iii) A compound retains the physical properties of its constituent elements. (iv) The ratio of atoms of different elements in a compound is fixed.
Correct Answer: (iii) A compound retains the physical properties of its constituent elements Explanation: Molecule of a compound is made up of atoms of various elements which cannot be separated... If the density of a solution is 3.12 g mL-1, the mass of 1.5 mL solution in significant figures is _______. (i) 4.7g (ii) 4680 × 10 -3g (iii) 4.680g (iv) 46.80g Correct Answer: (i) 4.7g Explanation: Given, Density of solution = 3.12 g mL-1 Volume of solution = 1.5 mL Formula for Mass = Volume × Density Hence, Mass = 4.7 g The empirical formula and molecular mass of a compound are CH2O and 180 g respectively. What will be the molecular formula of the compound? (i) C9H18O9 (ii) CH2O (iii) C6H12O6 (iv) C2H4O2 Correct Answer: (iii) C6H12O6 Explanation: Given, Molar mass of Carbon=12 Molar mass of Hydrogen=1 Molar mass of Oxygen=16 So, The molecular weight of compound is 6 and so the molecular formula of... What is the mass per cent of carbon in carbon dioxide? (i) 0.034% (ii) 27.27% (iii) 3.4% (iv) 28.7% Correct Answer: (ii) 27.27% Explanation: Carbon dioxide is a gas with a density of about 53% higher than that of dry air. Carbon dioxide molecules consist of a double carbon atom combined with two... One mole of any substance contains 6.022 × 1023 atoms/molecules. Number of molecules of H2SO4 present in 100 mL of 0.02M H2SO4 solution is ______. (i) 12.044 × 1020 molecules (ii) 6.022 × 1023 molecules (iii) 1 × 1023 molecules (iv) 12.044 × 1023molecules Correct Answer: (i) 12.044 × 1020 molecules Explanation: Moles of H2SO4​= Molarity of H2SO4​×Volume of solution (L) Hence, the number of molecules of H2SO4 present in 100 mL of 0.02M H2SO4 solution... Hydrogen bonds are formed in many compounds e.g., H2O, HF, NH3. The boiling point of such compounds depends to a large extent on the strength of hydrogen bond and the number of hydrogen bonds. The correct decreasing order of the boiling points of the above compounds is : (i) HF > H2O > NH3 (ii) H2O > HF > NH3 (iii) NH3 > HF > H2O (iv) NH3 > H2O > HF CBSE, Chemical Bonding and Molecular Structure, Chemistry, Chemistry, Chemistry, Chemistry, Class 11, Exercise 1, NCERT Exemplar, NCERT Exemplar Solution: Option (ii) is the answer. The types of hybrid orbitals of nitrogen in NO2+, NO3- and NH4+respectively are expected to be (i) sp, sp3 and sp2 (ii) sp, sp2 and sp3 (iii) sp2, sp and sp3 (iv) sp2, sp3 and sp CBSE, Chemical Bonding and Molecular Structure, Chemical Bonding and Molecular Structure, Chemistry, Class 12, NCERT Exemplar Solution: Option (ii) is the answer. The hybridisation of each molecule gives us an idea about the hybrid orbitals. Assertion (A): Electron gain enthalpy becomes less negative as we go down a group. Reason (R): Size of the atom increases on going down the group and the added electron would be farther from the nucleus. (a) Assertion and reason both are correct statements but reason is not correct explanation of assertion. (b) Assertion and reason both are correct statements and reason is correct explanation of assertion. (c) Assertion and reason both are wrong statement. (d) Assertion is wrong statement but reason is correct statement. CBSE, Chemistry, Class 11, Classification of Elements and Periodicity in Properties, NCERT Exemplar (b) As one moves down the group, the electron gain enthalpy decreases because the atomic size grows and the new electron is further away from the nucleus. Assertion (A): Boron has a smaller first ionization enthalpy than beryllium. 
Reason (R): The penetration of a 2s electron to the nucleus is more than the 2p electron, hence, 2p electron is more shielded by the inner core of electrons that the 2s electrons. (b) Assertion is correct statement but reason is wrong statement. (c) Assertion and reason both are correct statements and reason is correct explanation of assertion. (d) Assertion and reason both are wrong statement. (c) Because beryllium (1s2 2s2) has a fully filled, boron (1s2 2s2 2p1) has a lower initial ionisation enthalpy than beryllium (1s2 2s2). s-subshell. When compared to 2s-electrons, 2s-electrons are... Assertion and Reason Type Questions In the following questions a statement of Assertion (A) followed by a statement of Reason (R) is given. Choose the correct option out of the choices given below each question. Assertion (A): Generally, ionization enthalpy increases from left to right in a period. Reason (R): When successive electrons are added to the orbitals in the same principal quantum level, the shielding effect of inner core of electrons does not increase very much to compensate for the increased attraction of the electron to the nucleus. (a) Assertion is correct statement and reason is wrong statement. (c) Assertion and reason both are wrong statements. (d) Assertion is wrong statement and reason is correct statement. (b) As atomic size decreases, the ionization enthalpy increases from left to right over time. The effective nuclear charge of the electrons in the subshell is about the same. Define ionisation enthalpy. Discuss the factors affecting ionisation enthalpy of the elements and its trends in the periodic table. The energy required by an isolated, gaseous atom in its ground state to remove an electron is known as ionization enthalpy. The valence electrons are shielded by the inner electrons, which reduces... Discuss the factors affecting electron gain enthalpy and the trend in its variation in the periodic table. The factors affecting electron gain enthalpy and the trend in its variation in the periodic table are: Atomic size - As the distance between the nucleus and the outermost shell rises, the tendency... Among alkali metals which element do you expect to be least electronegative and why? Because electronegativity decreases as we move down the group, caesium is the least electronegative alkali metal. Cesium is a group 1 element with the biggest size due to a drop in the effective... The radius of Na+ cation is less than that of Na atom. Give reason. The sodium atom loses one electron to form a sodium cation, and the effective nuclear charge on the ion increases on the left electrons after the cation is formed, resulting in a decrease in radius. How does the metallic and non-metallic character vary on moving from left to right in a period? Metallic character reduces as we move from left to right across the period, whereas non-metallic character increases as ionization enthalpy and electron gain enthalpy increase across the period Explain the following: (a) Electronegativity of elements increases on moving from left to right in the periodic table. (b) Ionisation enthalpy decrease in a group from top to bottom? (a) The size of the atoms reduces as we move from left to right in a period due to an increase in the effective nuclear charges on the outermost electron. As a result, as you move from left to right... 
Explain the deviation in ionisation enthalpy of some elements from the general trend by using the given figure: Solution: Ionization enthalpy of elements varies through period and group. As we move from left to right in a period, the ionization enthalpy increases and lowers when we move down a group.... Arrange the elements N, P, O and S in the order of- (i) increasing first ionisation enthalpy. (ii) increasing non-metallic character. Give the reason for the arrangement assigned. (i) The ascending order of the initial ionization enthalpy is S< P< O< N. The ionization enthalpy drops as we move down the group and increases as we move along the period, but in the case... What do you understand by exothermic reaction and endothermic reaction? Give one example of each type. CBSE, Chemistry, Classification of Elements and Periodicity in Properties, NCERT Exemplar Exothermic reaction: An exothermic reaction is a reaction in which heat is released during the reaction. For instance, Cao + CO2→ CaCO3 ΔH=-178kJmol-1 Endothermic reaction: An endothermic reaction... How would you explain the fact that first ionisation enthalpy of sodium is lower than that of magnesium but its second ionisation enthalpy is higher than that of magnesium? When sodium loses an electron from its outermost shell, it achieves a stable state. As a result, its initial ionization enthalpy is lower than that of magnesium. However, in the case of second... p-Block elements form acidic, basic and amphoteric oxides. Explain each property by giving two examples and also write the reactions of these oxides with water. ACIDIC OXIDES Acidic oxides are oxides that react with water to produce acids. SO2, B2O3 are acidic oxides and p block elements. The chemical equation for the reaction of B2O3 with water:- B2O3 +3... The first member of each group of representative elements (i.e., s and p-block elements) shows anomalous behaviour. Illustrate with two examples. Examples include lithium and beryllium. The initial group element is Li. It has variety of characteristics and forms, including covalent compounds and nitrides. The second group's initial element is... Nitrogen has positive electron gain enthalpy whereas oxygen has negative. However, oxygen has lower ionisation enthalpy than nitrogen. Explain. The ionization enthalpy of oxygen is lower than that of nitrogen because when one electron is removed from oxygen, it easily donates it to achieve half-filled stability, whereas removing one... Illustrate by taking examples of transition elements and non-transition elements that oxidation states of elements are largely based on electronic configuration. Ti has an atomic number of 22 and an electronic configuration of [Ar]3d24s2. It may be found in various compounds with three different oxidation states of +2,+3, and +4 such as TiO2(+4), Ti2O3(+3),... Choose the correct order of atomic radii of fluorine and neon (in pm) out of the options given below and justify your answer. (i) 72, 160 (ii) 160, 160 (iii) 72, 72 (iv) 160, 72 CBSE, Class 11, Classification of Elements and Periodicity in Properties, NCERT Exemplar (i) 72, 160 Neon has van der Waal's radii and fluorine has covalent radii. The covalent radius is always less than van der Waal's radius, Fluorine has a radius of 72pm while Neon has a radius of... Write four characteristic properties of p-block elements. They have a wide range of oxidation states. The reducing character increases as we move down the group, while the oxidizing character increases across the period. 
The ionization enthalpy of these...
Among the elements B, Al, C and Si, (i) which element has the highest first ionisation enthalpy? (ii) which element has the most metallic character? Justify your answer in each case. (i) The ionization enthalpy of carbon is the highest. It rises from left to right along the period and then decreases as we move down the group. (ii) The most metallic element is aluminium. The...
Polarity in a molecule and hence the dipole moment depends primarily on electronegativity of the constituent atoms and shape of a molecule. Which of the following has the highest dipole moment? (i) CO2 (ii) HI (iii) H2O (iv) SO2 Answer: (iii) H2O has the highest dipole moment because oxygen is highly electronegative. It attracts the electrons from hydrogen towards itself, giving a greater charge separation. Therefore, H2O...
The statement that is not correct for periodic classification of elements is: (i) The properties of elements are periodic function of their atomic numbers. (ii) Non-metallic elements are less in number than metallic elements. (iii) For transition elements, the 3d-orbitals are filled with electrons after 3p-orbitals and before 4s-orbitals. (iv) The first ionisation enthalpies of elements generally increase with increase in atomic number as we go along a period. Option (iii) is the answer. The Aufbau principle describes how electrons first fill low-energy orbitals (close to the nucleus) before moving on to higher-energy orbitals. They fill the orbitals...
Ionisation enthalpies of elements of the second period are given below: Ionisation enthalpy/kJ mol–1: 520, 899, 801, 1086, 1402, 1314, 1681, 2080. Match the correct enthalpy with the elements and complete the graph given in Fig. 3.1. Also, write symbols of elements with their atomic number. Solution: N has a higher first ionisation enthalpy than O, despite the fact that O has a higher nuclear charge. This is because the electron in N must be removed from a more stable, exactly...
Identify the group and valency of the element having atomic number 119. Also, predict the outermost electronic configuration and write the general formula of its oxide. The modern periodic table has 118 elements divided into seven periods. As a result, the element with atomic number 119 will be in the 8th period of the first group and will have the electronic...
All transition elements are d-block elements, but all d-block elements are not transition elements. Explain. d-Block elements are those in which the outermost d orbitals are being filled with electrons. Because incompletely filled d orbitals are crucial for transition elements, d-block elements such as zinc, whose d orbitals are completely filled, are...
Explain why the electron gain enthalpy of fluorine is less negative than that of chlorine. Fluorine has a smaller size than chlorine, so the added electron enters the compact 2p subshell, where inter-electronic repulsions are strong, resulting in a...
An element belongs to the 3rd period and group-13 of the periodic table. Which of the following properties will be shown by the element? (i) Good conductor of electricity (ii) Liquid, metallic (iii) Solid, metallic (iv) Solid, non-metallic Options (i) and (iii) are the answers. The element belonging to the 3rd period and 13th group is aluminium, which is a metal. Hence, it is solid, metallic and a good conductor of electricity.
Ionic radii vary in (i) inverse proportion to the effective nuclear charge.
(ii) inverse proportion to the square of effective nuclear charge. (iii) direct proportion to the screening effect. (iv) direct proportion to the square of screening effect. Options (i) and (iii) are the answers. Ionic radii decrease as the effective nuclear charge increases, owing to the inverse proportional relation. Also, ionic radii increase as the screening effect...
Which of the following have no unit? (i) Electronegativity (ii) Electron gain enthalpy (iii) Ionisation enthalpy (iv) Metallic character Options (i) and (iv) are the answers. Electron gain enthalpy and ionization enthalpy have units of enthalpy.
In which of the following options does the order of arrangement not agree with the variation of property indicated against it? (i) Al3+ < Mg2+ < Na+ < F– (increasing ionic size) (ii) B < C < N < O (increasing first ionisation enthalpy) (iii) I < Br < Cl < F (increasing electron gain enthalpy) (iv) Li < Na < K < Rb (increasing metallic radius) Options (ii) and (iii) are the answers. For increasing first ionization enthalpy, the order should be: B < C < O < N. For increasing electron gain enthalpy, the order should be: I < Br...
Which of the following sets contain only isoelectronic ions? Options (ii) and (iii) are the answers. (i) Zn2+ (30 – 2 = 28), Ca2+ (20 – 2 = 18), Ga3+ (31 – 3 = 28), Al3+ (13 – 3 = 10) are not isoelectronic. (ii) K+ (19 – 1 = 18), Ca2+ (20 – 2 = 18), Sc3+ (21 – 3...
Which of the following statements are correct? (i) Helium has the highest first ionisation enthalpy in the periodic table. (ii) Chlorine has less negative electron gain enthalpy than fluorine. (iii) Mercury and bromine are liquids at room temperature. (iv) In any period, the atomic radius of alkali metal is the highest. Options (i), (iii) and (iv) are the answers. Because of its larger size and lower inter-electronic repulsion, chlorine has a more negative electron gain enthalpy than fluorine.
Which of the following elements will gain one electron more readily in comparison to other elements of their group? (i) S (g) (ii) Na (g) (iii) O (g) (iv) Cl (g) Options (i) and (iv) are the answers. Chlorine has the strongest tendency to gain an electron, as well as a highly negative electron gain enthalpy. Group 16 includes O and S; however, S has a higher...
Which of the following sequences contain atomic numbers of only representative elements? (i) 3, 33, 53, 87 (ii) 2, 10, 22, 36 (iii) 7, 17, 25, 37, 48 (iv) 9, 35, 51, 88 Options (i) and (iv) are the answers. Representative elements are elements from the s- and p-blocks. Transition elements belong to the d-block (Z = 21–30; 39–48; 57 and 72–80; 89 and 104–112), while...
Elements whose atoms require low energy for ionisation (i.e., absorb energy in the visible region of the spectrum) impart colour to the flame on heating in it. The elements of which of the following groups will impart colour to the flame? (i) 2 (ii) 13 (iii) 1 (iv) 17 Options (i) and (iii) are the answers. Ionization enthalpies are low in group 1 (alkali metals) and group 2 (alkaline earth metals). As a result, they give flame colour.
Which of the following elements can show covalency greater than 4? (i) Be (ii) P (iii) S (iv) B Options (ii) and (iii) are the answers. Because P and S have d-orbitals in their valence shells, they can hold more than eight electrons in their valence shells. As a result, they have a covalency of...
Electronic configurations of four elements A, B, C and D are given below: Which of the following is the correct order of increasing tendency to gain electron: (i) A < C < B < D (ii) A < B < C < D (iii) D < B < C < A (iv) D < A < B < C Option (i) is the answer. A – 1s2 2s2 2p6 – noble gas configuration; B – 1s2 2s2 2p4 – 2 electrons short of a stable configuration; C – 1s2 2s2 2p6 3s1 – requires one electron to complete the s-orbital...
Comprehension given below is followed by some multiple-choice questions. Each question has one correct option. Choose the correct option. In the modern periodic table, elements are arranged in order of increasing atomic numbers, which are related to the electronic configuration. Depending upon the type of orbitals receiving the last electron, the elements in the periodic table have been divided into four blocks, viz. s, p, d and f. The modern periodic table consists of 7 periods and 18 groups. Each period begins with the filling of a new energy shell. In accordance with the Aufbau principle, the seven periods (1 to 7) have 2, 8, 8, 18, 18, 32 and 32 elements respectively. The seventh period is still incomplete. To avoid the periodic table being too long, the two series of f-block elements, called lanthanoids and actinoids, are placed at the bottom of the main body of the periodic table.
(i) The element with atomic number 57 belongs to (a) s-block (b) p-block (c) d-block (d) f-block
(ii) The last element of the p-block in the 6th period is represented by the outermost electronic configuration (a) 7s2 7p6 (b) 5f14 6d10 7s2 7p0 (c) 4f14 5d10 6s2 6p6 (d) 4f14 5d10 6s2 6p4
(iii) Which of the elements whose atomic numbers are given below cannot be accommodated in the present set-up of the long form of the periodic table?
(iv) The electronic configuration of the element which is just above the element with atomic number 43 in the same group is ________. (a) 1s2 2s2 2p6 3s2 3p6 3d5 4s2 (b) 1s2 2s2 2p6 3s2 3p6 3d5 4s3 4p6 (c) 1s2 2s2 2p6 3s2 3p6 3d6 4s2 (d) 1s2 2s2 2p6 3s2 3p6 3d7 4s2
(v) The elements with atomic numbers 35, 53 and 85 are all ________. (a) noble gases (b) halogens (c) heavy metals (d) light metals
The formation of the oxide ion, O2– (g), from an oxygen atom requires first an exothermic and then an endothermic step as shown below: O (g) + e– → O– (g) (exothermic); O– (g) + e– → O2– (g) (endothermic). Thus the process of formation of O2– in the gas phase is unfavourable even though O2– is isoelectronic with neon. It is due to the fact that (i) oxygen is more electronegative. (ii) addition of electron in oxygen results in larger size of the ion. (iii) electron repulsion outweighs the stability gained by achieving a noble gas configuration. (iv) O– ion has a comparatively smaller size than an oxygen atom. Option (iii) is the answer. This is due to the fact that when an electron is introduced to a negatively charged ion, it is repelled rather than attracted. As a result, the addition of the second...
Isostructural species are those which have the same shape and hybridisation. Among the given species identify the isostructural pairs. (i) [NF3 and BF3] (ii) [BF4– and NH4+] (iii) [BCl3 and BrCl3] (iv) [NH3 and NO3–] From a structural standpoint, we can see that NF3 is pyramidal whereas BF3 is planar triangular. BF4– and NH4+ ions are tetrahedral in structure. BCl3 is triangular planar and BrCl3 is...
Which of the following is the correct order of the size of the given species? Option (iv) is the answer. The size of the anion, cation and neutral species for a given element is in this order: anion > element > cation. Because of its higher effective nuclear charge, the cation has the...
The elements in which electrons are progressively filled in 4f-orbital are called (i) actinoids (ii) transition elements (iii) lanthanoids (iv) halogens Option (iii) is the answer. In lanthanoids, the 4f orbital is gradually filled with electrons. Lanthanoids have the general electronic configuration [Xe]4f1–14 5d0–1 6s2.
The period number in the long form of the periodic table is equal to (i) magnetic quantum number of any element of the period. (ii) atomic number of any element of the period. (iii) maximum principal quantum number of any element of the period. (iv) maximum azimuthal quantum number of any element of the period. Option (iii) is the answer. Period number = maximum n of any element, where 'n' stands for the principal quantum number. It determines the element's period number. Mg, for example, has a maximum main...
The first ionisation enthalpies of Na, Mg, Al and Si are in the order: (i) Na < Mg > Al < Si (ii) Na > Mg > Al > Si (iii) Na < Mg < Al < Si (iv) Na > Mg > Al < Si Option (i) is the answer. Ionization enthalpy is the enthalpy change associated with the loss of the first electron from an isolated gaseous atom in its ground state. As we move across the period,...
The order of screening effect of electrons of s, p, d and f orbitals of a given shell of an atom on its outer shell electrons is: (i) s > p > d > f (ii) f > d > p > s (iii) p < d < s > f (iv) f > p > s > d Option (i) is the answer. In every atom with more than one electron shell, this effect, known as the screening effect, describes the decrease in attraction between an electron and the nucleus. The...
Which of the following is not an actinoid? (i) Curium (Z = 96) (ii) Californium (Z = 98) (iii) Uranium (Z = 92) (iv) Terbium (Z = 65) Option (iv) is the answer. The actinoids are the 15 elements in the periodic table ranging from actinium to lawrencium (atomic numbers 89–103). As evident from the...
Consider the isoelectronic species Na+, Mg2+, F– and O2–. The correct order of increasing length of their radii is _________. Option (ii) is the answer. These are isoelectronic species having the same number of electrons; for such species the radius decreases as the nuclear charge increases. As we move from left to right, from...
Give the disproportionation reaction of H3PO3. When we heat orthophosphorous acid (H3PO3), it undergoes a disproportionation reaction to yield orthophosphoric acid (H3PO4) and phosphine (PH3). The oxidation states of the phosphorus atom in...
Write the main differences between the properties of white phosphorus and red phosphorus.
Nitrogen exists as diatomic molecule and phosphorus as P4. Why? Nitrogen, because of its small size, has a capacity to form pπ−pπ multiple bonds with itself. Thus, nitrogen forms a very stable diatomic molecule, N2. On moving down a group, the tendency...
Explain why NH3 is basic while BiH3 is only feebly basic.
NH3 is distinctly basic while BiH3 is feebly basic because nitrogen has a smaller size, due to which the lone pair of electrons is concentrated in a small area. This means that the charge density per...
Why does R3P=O exist but R3N=O does not (R = alkyl group)? Nitrogen, N (unlike P), does not have d-orbitals. This restricts nitrogen from expanding its coordination number beyond four. Hence, R3N=O does not exist, and it cannot accommodate more electrons due...
The HNH angle value is higher than HPH, HAsH and HSbH angles. Why? [Hint: Can be explained on the basis of sp3 hybridisation in NH3 and only s−p bonding between hydrogen and other elements of the group.] The H−M−H bond angles for the hydrides of group-15 elements are as follows: NH3 = 107°, PH3 = 92°, AsH3 = 91°, SbH3 = 90°. The above trend in the H−M−H bond angle can be explained on the basis of the...
Give the resonating structures of NO2 and N2O5. The resonating structures of the given compounds are as follows: NO2: N2O5:
Illustrate how copper metal can give different products on reaction with HNO3. Concentrated nitric acid acts as a strong oxidizing agent. It is used for the oxidation of most metals. The products of oxidation depend on certain parameters such as temperature, concentration of...
Define the following as related to proteins: (i) Primary structure (ii) Peptide linkage (iii) Denaturation. (i) Primary structure: When we discuss the primary structure of a protein, we refer to the exact sequence in which the amino acids are present. For example, the sequence of amino acid linkages in a...
International Journal of Engineering Science
Internat. J. Engrg. Sci., 2014, Volume 80, Pages 53–61 (Mi ijes1)
Stability of an inflated hyperelastic membrane tube with localized wall thinning
A. T. Il'ichev (a,b), Y. B. Fu (c,d)
a Steklov Mathematical Institute, Gubkina str. 8, 119991 Moscow, Russia
b Bauman Moscow Technical University, Baumanskaya str. 5, 105110 Moscow, Russia
c Department of Mechanics, Tianjin University, Tianjin 300072, China
d Department of Mathematics, Keele University, ST5 5BG, UK
Abstract: It is now well-known that when an infinitely long hyperelastic membrane tube free from any imperfections is inflated, a transcritical-type bifurcation may take place that corresponds to the sudden formation of a localized bulge. When the membrane tube is subjected to localized wall-thinning, the bifurcation curve would "unfold" into the turning-point type with the lower branch corresponding to uniform inflation in the absence of imperfections, and the upper branch to bifurcated states with larger amplitude. In this paper stability of bulged configurations corresponding to both branches is investigated with the use of the spectral method. It is shown that under pressure control and with respect to axi-symmetric perturbations, configurations corresponding to the lower branch are stable but those corresponding to the upper branch are unstable. Stability or instability is established by demonstrating the non-existence or existence of an unstable eigenvalue (an eigenvalue with a positive real part). This is achieved by constructing the Evans function that depends only on the spectral parameter. This function is analytic in the right half of the complex plane where its zeroes correspond to the unstable eigenvalues of the generalized spectral problem governing spectral instability. We show that due to the fact that the skew-symmetric operator $\mathcal{J}$ involved in the Hamiltonian formulation of the basic equations is onto, the zeroes of the Evans function can only be located on the real axis of the complex plane. We also comment on the connection between spectral (linear) stability and nonlinear (Lyapunov) stability.
Funding: Russian Foundation for Basic Research (grant 11-01-00034-a); National Natural Science Foundation of China (grant 11372212). This work is supported by a Joint Project grant awarded by the Royal Society and the Russian Foundation for Basic Research. The research of the first author (AI) is also supported by the Russian Foundation for Basic Research (Project No. 11-01-00034-a), and the research of the second author (YF) is also supported by the National Natural Science Foundation of China (Grant No. 11372212).
DOI: https://doi.org/10.1016/j.ijengsci.2014.02.031
Received: 17.02.2014; Accepted: 18.02.2014
Linking options: http://mi.mathnet.ru/eng/ijes1
Article | Open | Published: 12 March 2019 Dental integration and modularity in pinnipeds Mieczyslaw Wolsan ORCID: orcid.org/0000-0002-2083-77431, Satoshi Suzuki2, Masakazu Asahara3 & Masaharu Motokawa4 Evolutionary developmental biology Morphological integration and modularity are important for understanding phenotypic evolution because they constrain variation subjected to selection and enable independent evolution of functional and developmental units. We report dental integration and modularity in representative otariid (Eumetopias jubatus, Callorhinus ursinus) and phocid (Phoca largha, Histriophoca fasciata) species of Pinnipedia. This is the first study of integration and modularity in a secondarily simplified dentition with simple occlusion. Integration was stronger in both otariid species than in either phocid species and related positively to dental occlusion and negatively to both modularity and tooth-size variability across all the species. The canines and third upper incisor were most strongly integrated, comprising a module that likely serves as occlusal guides for the postcanines. There was no or weak modularity among tooth classes. The reported integration is stronger than or similar to that in mammals with complex dentition and refined occlusion. We hypothesise that this strong integration is driven by dental occlusion, and that it is enabled by reduction of modularity that constrains overall integration in complex dentitions. We propose that modularity was reduced in pinnipeds during the transition to aquatic life in association with the origin of pierce-feeding and loss of mastication caused by underwater feeding. Organisms are organised into multiple identifiable parts on multiple levels. These parts are distinct from each other because of structure, function or developmental origins. The fact that parts of an organism are distinguishable reflects their individuality and a degree of independence from each other. Nevertheless, these different parts must be coordinated in their size and shape and integrated throughout the entire organism to make up a functional whole. Tension between the relative independence and the coordination of organismal parts is expressed in concepts of morphological integration1 and modularity2,3. Both concepts are closely related and concern the degree of covariation or correlation between different parts of an organism or other biological entity. Integration deals with the overall pattern of intercorrelation, and modularity involves the partitioning of integration into quasi-independent partitions. Integration exists if parts vary jointly, in a coordinated fashion, throughout a biological entity. Modularity exists if integration is concentrated within certain parts that are tightly integrated internally but is weaker between those parts. Parts that are integrated within themselves and relatively independent of other such internally integrated parts are called modules4,5,6. Integration and modularity are seen at various levels of biological organisation, from genes to colonies, not only in a morphological context but also in other contexts (e.g. molecular7, metabolic8, ecological9), and are viewed as a general property of many different webs of interactions beyond biology4. Morphological integration and modularity have received increased attention among modern evolutionary biologists because the integrated and modular organisation of biological entities has important implications for understanding phenotypic evolution. 
Integration constrains the variability of individual traits, and modularity enables modules to vary and evolve independently of each other whilst still maintaining the integrity of the functional or developmental unit4,10,11. An integrated and modular organisation has therefore potential to affect evolutionary paths in multiple ways that include circumventing the effects of genetic pleiotropy and developmental canalisation as well as facilitating and channelling evolutionary transformations of functional and developmental units5,12,13. Studies of mammalian evolution often rely on information from the dentition. Teeth are highly informative of a mammal's taxonomic identity, phylogenetic relationships and ecological adaptation; and still constitute the most common and best-preserved mammal remains in the fossil record, adding a historical perspective to the study14,15. The dentition as a whole appears to be a module of the dermal exoskeleton16. Potential different modules within the mammalian dentition include tooth generations (milk vs permanent teeth) and tooth classes (incisors vs canines vs premolars vs molars) and can also include other groups of teeth (e.g. carnivore carnassials vs other premolars and molars)16. At lower levels of dental organisation, individual teeth17 or tooth cusps16 can be separate modules. Many studies of integration and/or modularity have been conducted on complex mammalian dentitions where tooth classes are distinguishable, and teeth differ in form depending on their location in the dental arcade. These studies chiefly involved dentitions of primates1,17,18,19,20,21,22,23,24, carnivores25,26,27,28,29,30,31,32,33,34,35,36, rodents22,37,38,39,40,41,42 and lagomorphs43,44. Much less attention has been directed to simple or simplified dentitions where tooth classes are absent or not distinguishable, and teeth are similar to each other regardless of their location in the dental arcade. Notably, there has been, to our knowledge, only one study of integration and no study of modularity on a secondarily simplified dentition. This study45 investigated morphological integration among mandibular premolars and molars of harp seals (Pagophilus groenlandicus). Pinnipeds (earless seals, Phocidae; sea lions and fur seals, Otariidae; walruses, Odobenidae) are a clade of secondarily aquatic carnivores that evolved from terrestrial ancestors with complex dentition46,47,48,49. Unlike their ancestors, pinnipeds forage under water where they capture, handle and swallow their prey. Prey are swallowed whole or, if too large, first torn (usually extraorally) into swallowable chunks. Pinnipeds do not masticate food but instead employ their dentition in most cases solely to catch and hold prey using a foraging style called pierce feeding50,51. As a likely consequence, ancestral differentiation between premolars and molars has been lost in pinnipeds. Both tooth classes are similar in size and shape (both within and between the arcades) and therefore often collectively called postcanines. Pinniped postcanines are simple or relatively simple in form, effectively two-dimensional because of the lack of a lingual cusp, and lack the refined occlusion characteristic of morphologically complex and differentiated premolars and molars in most non-pinniped (fissiped) carnivores and most mammals in general15,52. The demands of functional occlusion and the process of natural selection constrain phenotypic variation and impose morphological integration in complex dentitions53. 
The simplified pinniped dentition with simple occlusion is expected to be more variable and less integrated because of relaxed functional and selective constraint. In accordance with this expectation, large intraspecific variations in tooth number have been reported from multiple pinniped species54,55,56,57,58,59,60,61,62,63,64. Furthermore, large variations in tooth size have been recorded, as expected, in ribbon seals (Histriophoca fasciata)65 and ringed seals (Pusa hispida)45,65 but, unexpectedly, not in spotted seals (Phoca largha), northern fur seals (Callorhinus ursinus) or Steller sea lions (Eumetopias jubatus), in all of which variations in tooth size were found to be smaller and similar to those seen in fissipeds with complex dentition and exact dental occlusion65. Moreover, size correlations among mandibular postcanines of Pagophilus groenlandicus were reported as similar to or stronger than those in fissipeds and other mammals with precisely occluding teeth45, suggesting an unexpectedly strong dental integration in this pinniped species. Limited size variability and strong integration are surprising in the pinniped dentition and merit further study. In a previous paper65, we presented results on dental size variability in two otariid (Eumetopias jubatus, Callorhinus ursinus) and two phocid (Phoca largha, Histriophoca fasciata) species. Here, we report results on dental integration and modularity in the same species. All of these species are pierce feeders50,66 that feed mainly on fish (Phoca largha), fish and squid (Eumetopias jubatus, Callorhinus ursinus) or fish and benthic invertebrates (Histriophoca fasciata)67. Whilst these species are broadly representative of both their families and pinnipeds as a whole, which contributes to the generality of our findings, general similarities in their diets and foraging style do not lead us to expect large differences in dental integration and modularity. We first measured teeth of the four species using serially homologous measurements, next calculated correlation matrices based on the collected measurement data, and then analysed correlation data in these matrices to assess the strength and structure of integration and modularity in the dentition of each species. We investigated integration at three hierarchical levels: whole dentition, among teeth and within teeth. The level of among-tooth integration included testing two classic hypotheses related to integration, the rule of neighbourhood68,69 and the rule of proximal parts70. The former states that adjacent parts of an organ are more strongly intercorrelated with respect to size than more distant parts; the latter states that proximal parts of an organ are more strongly correlated with respect to size than distal parts. We also comparatively evaluated the degree of dental occlusion among the four species to examine how integration and modularity relate to occlusion, and referred to our earlier assessment of tooth-size variability in these species65 to test the hypothesis that integration is negatively related to variability. Measurement data collection Length (L; maximum linear mesiodistal distance) and width (W; maximum linear vestibulolingual distance perpendicular to the length) were measured on permanent tooth crowns in skeletonised specimens of Eumetopias jubatus (31 males, 30 females), Callorhinus ursinus (43 males, 59 females), Phoca largha (80 males, 60 females, 52 of undetermined gender) and Histriophoca fasciata (62 males, 86 females, 39 of undetermined gender).
These specimens derived from wild animals on and around the Japanese Islands according to institutional collection records (Supplementary Tables S1–S4). All measurements were taken with digital calipers to the nearest 0.01 mm on one body side (left or right, depending on the state of preservation) of each specimen. Specimens with an incomplete dentition or a supernumerary tooth on both sides of the upper or lower arcade were not measured. The dental formulae of these species were I1–3/I2,3 C1/C1 P1–4/P1–4 M1/M1 for Eumetopias jubatus, Phoca largha and Histriophoca fasciata and I1–3/I2,3 C1/C1 P1–4/P1–4 M1,2/M1 for Callorhinus ursinus, where I, C, P and M denote permanent incisors, canines, premolars and molars in either half of upper and lower arcades, respectively, and superscript and subscript numbers indicate positions of upper and lower teeth, respectively (Fig. 1). Because of a difference in the number of upper molars, a total of 34 measurements were applied to Eumetopias jubatus, Phoca largha and Histriophoca fasciata and a total of 36 to Callorhinus ursinus. Vestibular profiles of pinniped permanent dentitions at occlusion. (a) Eumetopias jubatus, KUZ (Kyoto University Museum) M9290. (b) Callorhinus ursinus, KUZ M10142. (c) Phoca largha, KUZ M9465, reversed mirror image. (d) Histriophoca fasciata, KUZ M9575. Scale bars equal 1 cm. Correlation matrix calculation Correlations were calculated using Pearson's product-moment correlation coefficient (r). Measurement data were first pairwise correlated for males and females separately. Because no significant differences were observed between r values for males and females of each species (P < 0.05, Student's t-tests with Holm–Bonferroni correction), specimens of both genders and those of undetermined gender were combined, and all pairwise correlations were recalculated. The r values resulting from these calculations were assembled into matrices, one for each species. These and all other statistical analyses were performed in R version 3.2.4 Revised71. Integration assessment Integration was assessed using r entries in the correlation matrix as well as other indices directly or indirectly based on these entries and designed for a particular level of integration. High r values were interpreted as indicating strong integration; lower r values were interpreted as indicating weaker integration. Whole-dentition integration The relative standard deviation of the correlation-matrix eigenvalues, SDrel(λ)72, and the average of the absolute pairwise r values, Ir73, were used to estimate the strength of overall integration. These indices were calculated with equations (1) and (2), respectively: $$SD_{\mathrm{rel}}(\lambda)=\sqrt{\frac{\sum_{i=1}^{p}(\lambda_i-1)^2}{p(p-1)}},$$ where λi denotes an eigenvalue of the correlation matrix, and p denotes the number of intercorrelated measurements; $$I_r=\frac{\sum_{i=1}^{k}|r_i|}{k},$$ where |ri| denotes an absolute off-diagonal r value in the correlation matrix, and k denotes the number of these values. Both indices are independent of the sample size or the number of intercorrelated measurements and vary between zero (no integration) and one (perfect integration), with the Ir index tending to yield lower values than those of the SDrel(λ) index72,74. Among-tooth integration Correlation matrix r values were used to test the rules of neighbourhood and proximal parts and to assess the strength of integration between teeth.
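As a minimal illustration of the whole-dentition indices just defined (a sketch in R, the environment the authors mention, rather than their actual code; the data frame teeth_df, with one column per crown measurement and one row per specimen, is hypothetical), equations (1) and (2) can be computed directly from the correlation matrix:

```r
# Hypothetical input: 'teeth_df' has one column per crown measurement (LC1, WC1, ...)
# and one row per specimen.
R_mat  <- cor(teeth_df, use = "pairwise.complete.obs")        # Pearson correlation matrix
p      <- ncol(R_mat)
lambda <- eigen(R_mat, symmetric = TRUE, only.values = TRUE)$values
SD_rel <- sqrt(sum((lambda - 1)^2) / (p * (p - 1)))           # equation (1): SDrel(lambda)
I_r    <- mean(abs(R_mat[lower.tri(R_mat)]))                  # equation (2): mean absolute off-diagonal r
# Sex differences could be screened by computing the matrix separately for males and
# females and adjusting the comparison-wise P values with p.adjust(..., method = "holm").
```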
The relative strength and the structure of integration among teeth were analysed with hierarchical unweighted pair-group average (UPGMA) clustering using the average of absolute pairwise r values between measurements of two different teeth (rM) subtracted from one as a dissimilarity measure. The rM metric was calculated, using a pair of upper and lower canines as an example, as the sum of r values between LC1 and LC1, between LC1 and WC1, between WC1 and LC1, and between WC1 and WC1 divided by four. Clustered teeth were interpreted as more strongly integrated than non-clustered ones. Within-tooth integration The strength of integration within teeth was estimated using the absolute r value between measurements of the same tooth. Species-specific patterns of within-tooth integration were identified by plotting these r values along the arcade. Modularity assessment The potential modular structure of the dentition was analysed by hierarchical UPGMA clustering of teeth using a dissimilarity measure of 1 − rM. Potential modules were expected to be identified by clusters. We additionally assumed that tooth classes could be modules as expected for a mammal's dentition75,76. All hypothesised modules (whether identified or assumed) were next tested using the covariance ratio (CR)77 and Escoufier's78 RV coefficient79. Statistical significance of these coefficients was assessed using 9999 iterations of the permutation procedure as described in ref.77 (CR) and ref.79 (RV). Both coefficients were also used to estimate the strength of modularity. The RV coefficient ranges from zero (perfect modularity) to one (no modularity)79. The CR coefficient ranges from zero to positive values: the CR values between zero and one imply a modular structure, with low values corresponding to relatively more modularity, and higher values corresponding to relatively less modularity; the CR values higher than one imply no modularity77. The CR coefficient is unaffected by the sample size or the number of intercorrelated measurements77, whereas the RV coefficient has been shown to be sensitive to both77,80,81. Despite this bias, we used the RV coefficient because it has commonly been applied to quantify morphological modularity, and to check whether both coefficients converge on similar results. Assessment of modularity was supplemented by observations of the shape of adjacent teeth within and between hypothesised modules, assuming that teeth are similar in form within a module and different between modules. Occlusion evaluation The relative degree of dental occlusion among species was qualitatively evaluated using four criteria: the number of teeth lacking occlusal contact with opposing teeth, the number of wear facets on the crowns, the size of these facets relative to the size of the crown, and the size of spaces between adjacent teeth of the same arcade. These criteria were interpreted such that fewer non-contacting teeth, more and larger wear facets and smaller interdental spaces indicated relatively more occlusion, whereas more non-contacting teeth, fewer and smaller wear facets and larger interdental spaces indicated relatively less occlusion. All pairwise r values among measurements were positive and statistically significant except 14 (2.2%) values that were insignificant in Callorhinus ursinus (Figs 2–5). Correlation matrix for measurements in Eumetopias jubatus. The matrix is visualised in three parts that contain correlations among upper (a) or lower (b) teeth and between upper and lower teeth (c). 
Symbols and abbreviations: n, number of individuals; r, Pearson's product-moment correlation coefficient; L, mesiodistal length of the tooth crown; W, vestibulolingual width of the tooth crown; I, incisor; C, canine; P, premolar; M, molar; superscript and subscript numbers denote positions of upper and lower teeth, respectively. Asterisks indicate r values that are statistically significant (*P ≤ 0.05, **P ≤ 0.001; Student's t-test with Holm–Bonferroni correction). Descriptive statistics for the measurements are in Supplementary Table S5. Correlation matrix for measurements in Callorhinus ursinus. The matrix is visualised in three parts that contain correlations among upper (a) or lower (b) teeth and between upper and lower teeth (c). Symbols and abbreviations: n, number of individuals; r, Pearson's product-moment correlation coefficient; L, mesiodistal length of the tooth crown; W, vestibulolingual width of the tooth crown; I, incisor; C, canine; P, premolar; M, molar; superscript and subscript numbers denote positions of upper and lower teeth, respectively. Asterisks indicate r values that are statistically significant (*P ≤ 0.05, **P ≤ 0.001; Student's t-test with Holm–Bonferroni correction). Descriptive statistics for the measurements are in Supplementary Table S6. Correlation matrix for measurements in Phoca largha. The matrix is visualised in three parts that contain correlations among upper (a) or lower (b) teeth and between upper and lower teeth (c). Symbols and abbreviations: n, number of individuals; r, Pearson's product-moment correlation coefficient; L, mesiodistal length of the tooth crown; W, vestibulolingual width of the tooth crown; I, incisor; C, canine; P, premolar; M, molar; superscript and subscript numbers denote positions of upper and lower teeth, respectively. Asterisks indicate r values that are statistically significant (*P ≤ 0.05, **P ≤ 0.001; Student's t-test with Holm–Bonferroni correction). Descriptive statistics for the measurements are in Supplementary Table S7. Correlation matrix for measurements in Histriophoca fasciata. The matrix is visualised in three parts that contain correlations among upper (a) or lower (b) teeth and between upper and lower teeth (c). Symbols and abbreviations: n, number of individuals; r, Pearson's product-moment correlation coefficient; L, mesiodistal length of the tooth crown; W, vestibulolingual width of the tooth crown; I, incisor; C, canine; P, premolar; M, molar; superscript and subscript numbers denote positions of upper and lower teeth, respectively. Asterisks indicate r values that are statistically significant (*P ≤ 0.05, **P ≤ 0.001; Student's t-test with Holm–Bonferroni correction). Descriptive statistics for the measurements are in Supplementary Table S8. Pairwise r values among measurements were in most cases higher in both otariid species than in either phocid species, with Eumetopias jubatus generally showing the highest values and Histriophoca fasciata the lowest (Figs 2–5). Consistent with this observation, as expected, were values of integration indices, SDrel(λ) and Ir, which were, respectively, 0.776 and 0.767 for Eumetopias jubatus, 0.660 and 0.643 for Callorhinus ursinus, 0.549 and 0.535 for Phoca largha, and 0.510 and 0.500 for Histriophoca fasciata. These results indicated the strongest overall integration in Eumetopias jubatus, followed in descending order by those in Callorhinus ursinus, Phoca largha and Histriophoca fasciata. 
Measurements of teeth that occluded with each other tended to be more strongly intercorrelated than those of upper vs lower teeth that did not occlude in each of the four species evaluated (Figs 2c, 3c, 4c and 5c; P < 0.018, Mann–Whitney U-tests), indicating stronger integration between occluding teeth compared to that between non-occluding ones. Furthermore, as predicted by the rule of neighbourhood, measurements of adjacent teeth of an arcade tended to be more strongly intercorrelated than those of more distant teeth of that arcade in each of the four species (Figs 2a,b, 3a,b, 4a,b and 5a,b; P < 0.033, Mann–Whitney U-tests), which indicated a tendency for stronger integration between adjacent teeth of the same arcade compared to that between non-adjacent ones. However, contrary to the rule of proximal parts, measurements of more mesial teeth of both arcades tended not to be more strongly intercorrelated than those of more distal teeth of both arcades in each of the four species (Figs 2–5; P = 0.13–0.71, tests for the significance of correlation between the r coefficient and the position of the tooth pair using Student's t-distribution), which indicated that integration did not tend to be stronger between more mesial teeth compared to that between more distal teeth. Measurements of C1 and C1 were more strongly intercorrelated than those of any other teeth in all four species evaluated and especially in both otariid species (Figs 2–6), which indicated the strongest integration between the canines. Canine measurements were most strongly correlated with those of I3 in all of the four species and especially in both otariid species (Figs 2–6), indicating strong integration among C1, C1 and I3. Measurements of postcanines that corresponded in position to the carnassials in fissipeds (P4 and M1) were relatively weakly intercorrelated in all the four species (Figs 2c, 3c, 4c, 5c and 6), indicating a relatively weak integration between these teeth. The most distal upper postcanines of both otariid species (M1 of Eumetopias jubatus and M2 of Callorhinus ursinus) were positioned separately from all other teeth in the respective dendrograms resulted from cluster analysis (Fig. 6a,b), and their measurements tended to be most weakly correlated with those of other teeth (Figs 2a,c and 3a,c; P < 0.0001, Mann–Whitney U-tests), indicating the weakest integration with other teeth of the dentition. In contrast, the most distal upper postcanine of either phocid species (M1) was not positioned separately from all other teeth in the respective dendrograms resulted from cluster analysis (Fig. 6c,d), and its measurements were relatively strongly correlated with those of other teeth (Figs 4a,c and 5a,c; Mann–Whitney U-tests did not reject the null hypothesis of M1 measurements being not most weakly correlated with those of other teeth, with P = 0.89 for Phoca largha and P = 0.93 for Histriophoca fasciata), indicating a relatively strong integration of M1 with other teeth of the dentition. Hierarchical UPGMA clustering of teeth based on the average correlation between their measurements in Eumetopias jubatus (a), Callorhinus ursinus (b), Phoca largha (c) and Histriophoca fasciata (d). Symbols and abbreviations: n, number of individuals; rM, arithmetic mean of pairwise Pearson's product-moment correlation coefficients between measurements of two different teeth; I, incisor; C, canine; P, premolar; M, molar; superscript and subscript numbers indicate positions of upper and lower teeth, respectively. 
A comparison of r values between measurements of the same tooth along the upper and lower arcades of each evaluated species revealed patterns of within-tooth integration. These patterns were more similar between both otariid species than between both phocid species and differed between the otariid and phocid species (Fig. 7). The canines were the most strongly internally integrated teeth of their arcades in all of the four species evaluated except for the Histriophoca fasciata lower arcade where P1 was more strongly integrated internally than C1 (Fig. 7). The internal integration of C1 was stronger than that of C1 in all of the four species, and both were very strong in both otariid species and weaker in both phocid species (Fig. 7). The P4 and M1 of all the four species as well as M1 of Eumetopias jubatus and M2 of Callorhinus ursinus were relatively weakly integrated internally, whereas M1 in both phocid species was relatively strongly integrated internally (Fig. 7). Correlation between measurements of the same tooth along the arcades in four pinniped species. Symbols and abbreviations: n, number of individuals; r, Pearson's product-moment correlation coefficient; L, mesiodistal length of the tooth crown; W, vestibulolingual width of the tooth crown; I, incisor; C, canine; P, premolar; M, molar; superscript and subscript numbers indicate positions of upper and lower teeth, respectively. Cluster analyses identified a potential module composed of C1, C1 and I3 in all four species evaluated but did not reveal a distinct modular structure in the whole dentition of any of these species (Fig. 6). In turn, analyses of the CR and RV coefficients (both coefficients mostly provided congruent results) supported a modular nature of the canine–I3 complex in both phocid species and, to a lesser extent, in Callorhinus ursinus but not in Eumetopias jubatus (Table 1). In addition, contrary to the cluster analyses, results of the CR and RV analyses generally implied a modular structure with tooth classes as modules in both phocid species and, to a lesser extent, in both otariid species, although all CR and most RV values were high (closer to one than to zero), which indicated that the modular structure was weak (Table 1). All CR and most RV values for comparisons of the molars with either the premolars only or the premolars combined with the canines and the incisors were higher for both phocid species than for either otariid species, indicating the lesser distinctiveness of molars from the rest of the dentition in these phocid species (Table 1). The CR and RV values for other comparisons between groups of teeth were in most cases lowest in Histriophoca fasciata, followed in ascending order by those in Phoca largha, Callorhinus ursinus and Eumetopias jubatus (Table 1). This order of species was exactly opposite to that according to increasing SDrel(λ) and Ir values for whole-dentition integration, indicating a negative relationship between the degrees of modularity and integration. Table 1 Degree of modularity between groups of teeth in four pinniped species. These results were congruent with and extended by observations that I3 closely resembled C1 in form in all four species evaluated, and that teeth were serially similar except relative discontinuities between C1 and P1 in all of the species, between C1 and P1 in both phocid species, and between P4 and M1 in both otariid species (Fig. 1). 
These observations indicated that the molars are more distinctive from the premolars in the upper arcade than in the lower one in both otariid species. A comparison of the degree of dental occlusion showed that overall occlusion was more extensive in both otariid species than in either phocid species, and that it was least pronounced in Histriophoca fasciata in which spaces between adjacent postcanines of the same arcade were largest relative to postcanine size, and the opposing upper and lower postcanines often did not come into occlusal contact with each other (Fig. 1). Wear facets on postcanine crowns were larger relative to the size of the crown and occurred more often in Eumetopias jubatus than in Callorhinus ursinus, indicating a more extensive occlusion in the former species. These observations indicated the highest degree of dental occlusion in Eumetopias jubatus, followed by those in Callorhinus ursinus, Phoca largha and Histriophoca fasciata, in this descending order, thus matching the order of these species according to weakening whole-dentition integration and increasing modularity. Regarding the most distal upper postcanines, M1 of Eumetopias jubatus and M2 of Callorhinus ursinus lacked occlusal contact with teeth of the lower arcade (Fig. 1a,b), M1 of Phoca largha occluded with M1 (Fig. 1c), and M1 of Histriophoca fasciata was variable. It occluded with M1 in some specimens but was deprived of any contact in others (Fig. 1d). This study found that dental integration was positively related to dental occlusion across four representative pinniped species, and that integration was stronger between occluding teeth than between non-occluding ones in each of these species. A comparison with our previous findings on tooth-size variation in the same species65 shows that dental integration and occlusion are roughly negatively related to dental size variability, with the most integrated and occluding dentition being the least variable (Eumetopias jubatus) and the least integrated and occluding dentition the most variable (Histriophoca fasciata). This concurs with the expectation that the degree of integration is related positively to the degree of occlusion and negatively to the degree of variability, providing a functional rationale for many differences in dental integration and dental size variability among the four species. This also indicates that functional requirements of occlusion significantly contribute to integration in the pinniped dentition despite the fact that both the postcanines and occlusion are considerably simplified in this dentition compared to those in the complex dentition of most other mammals. This conclusion is further supported by our observations from the canines, I3, P4, M1 and the most distal upper postcanines. The primary role of the canines in mammals is to serve as occlusal guides for the postcanines82, a function that is a plausible candidate to account for the strong integration observed between and within the canines in the four pinniped species. The strong integration among the canines and I3 and the likely modular nature of the canine–I3 complex found in this study suggest that I3 may also be involved in this function in all of the four species. A positive relationship between the degrees of canine–I3 integration and dental occlusion (both were highest in Eumetopias jubatus and decreased, in descending order, in Callorhinus ursinus, Phoca largha and Histriophoca fasciata) supports this functional interpretation. 
The strong internal integration of P1 relative to that of C1 observed in Histriophoca fasciata and Phoca largha suggests that P1 might be an additional element of this functional complex in these phocid species, but the outcomes of cluster analysis contradicted this hypothesis by showing that the measurements of P1 were most strongly correlated with those of P1 and that the P1–P1 cluster was far from the canine–I3 cluster in both phocid species. Another potential influence on integration of the canines derives from the fact that males of many pinniped species use their canines in combat over territory and females. However, whilst this behaviour holds true for Eumetopias jubatus and Callorhinus ursinus, which mate on land, it does not hold for Phoca largha and Histriophoca fasciata, which mate in the water where there is no need for the male to defend territory or compete for females by trying to dominate other males83. Moreover, we observed no significant differences between canine r values of males and females for each of the four species, which suggests that male-to-male combat behaviour does not importantly affect the canine integration. Interestingly, the canines were considerably sexually dimorphic in both otariid species and larger relative to other teeth than those in both phocid species65, which is apparently because of the difference in mating systems84. Unlike fissiped carnassials, which are rather strongly integrated relative to other teeth of the dentition25,27,28,29,30,31, their positional counterparts in the four pinniped species (P4 and M1) were relatively weakly integrated both with each other and within themselves, which is expected from a functional standpoint because these teeth lost their carnassial function early in pinniped evolution51,85. Furthermore, the most distal upper postcanines of both otariid species exhibited the weakest integration with other teeth of the dentition and a relatively weak internal integration as well as a considerable size variation65; which was in contrast to the most distal upper postcanines of both phocid species, which exhibited a relatively strong integration with other teeth of the dentition and a strong internal integration as well as a size variation comparable to that of other teeth of the dentition65. This is also expected from a functional standpoint because the most distal upper postcanines of both otariid species lacked occlusal contact with teeth of the lower arcade, whereas the most distal upper postcanines of both phocid species invariably or variably occluded with a tooth of the lower arcade. The situation in these otariids is comparable to that in fissipeds where the most distal teeth that show no or little occlusion are less integrated and more variable than other teeth25,26,27,29,31,86,87. Our study also revealed evidence showing that developmental factors play an important role in shaping integration in the pinniped dentition. Specifically, a modular structure with tooth classes as modules, albeit weak, was identified. Moreover, our results generally concurred with previous findings regarding the validity of the rules of neighbourhood and proximal parts in the case of both a whole dentition and a series of teeth representing more than one tooth class19,25,26,27,29,31,34,45, indicating that not only complex mammalian dentitions but also secondarily simplified pinniped dentitions generally hold to the rule of neighbourhood but not to the rule of proximal parts. 
Adherence to a modular structure among tooth classes and to the rule of neighbourhood are expected in a mammal's dentition from a developmental point of view given that developmental histories can be common within but different between tooth classes (e.g. premolars vs molars, the former having two generations and the latter only having one), and that teeth are considered developmentally interrelated metameric members of a serially homologous meristic series75,88,89,90, and adjacent tooth buds or teeth physically contact each other along the dental lamina or arcade during ontogeny and can also otherwise influence each other (e.g. the first molar to develop can determine the size of the successive ones91,92). A comparison of our results from four pinniped species (Ir = 0.500–0.767) with values of this index calculated from previously reported dental correlation matrices for mammal species with complex dentition20,21,23,25,27,28,29,31,34,35,37,43,44 (Ir = 0.291–0.683) shows that dental integration in these pinniped species with simple dental occlusion is stronger than or similar to that in mammal species with refined occlusion. This is surprising when viewed from a solely functional perspective. We propose that both functional factors related to dental occlusion and developmental factors related to modularity have contributed to the strong integration in the pinnipeds in this study. Specifically, modularity was found in this study to be weak and negatively related to integration. Both are not surprising given reduced heterodonty in the pinnipeds examined, and the fact that modules require no or weak intermodular integration to exist, which constrains overall integration of a structure composed of modules. We hypothesise that high levels of modularity in complex mammalian dentitions17,22,23,24,36,40,42 effectively constrain overall integration to moderate levels, whilst the lower levels of modularity revealed in the simplified pinniped dentitions in this study enable the higher levels of overall integration. We further hypothesise that the potential high levels of integration enabled by reduced modularity have effectively been achieved in these pinnipeds in response to selective pressure driven by functional requirements of dental occlusion, which, albeit weak in these pinnipeds, positively influences dental integration. It has been suggested that evolutionarily conserved developmental programmes for the mammalian dentition underlie integration in the pinniped dentition45. Whilst the weak tooth-class modules identified in our study are apparently the remnant from a conserved ancestral mammalian pattern, we propose that the decisive developmental programme is an evolutionary novelty that arose in pinnipeds during the transition from terrestrial to aquatic life in association with the origin of pierce feeding and loss of mastication driven by functional requirements of underwater feeding. The simplification of tooth form and increased mutual similarity of teeth representing different classes are apparently associated with reduced dental modularity, and together with increased tooth spacing that is associated with decreased postcanine size51,66, they are likely manifestations of adaptation to underwater feeding. Developmental processes that lie behind these changes in early pinnipeds likely converge to some extent with those hypothesised for cetaceans93. 
The greater disparity in patterns of within-tooth integration between phocid species than between otariid species found in our study suggests a greater diversification of integration patterns in Phocidae than in Otariidae. A comparison of our results from four representative pierce feeding species with correlation data from mandibular postcanines of another pierce feeding species, Pagophilus groenlandicus45 (Ir = 0.587), suggests that high levels of dental integration are common among pierce feeders, and we expect other pinnipeds (both suction feeders and filter feeders50) to show similarly high levels provided that there is a functional factor that drives integration in their dentition. If there is no functional factor, we expect a rather weak integration. Our findings indicate that this factor is dental occlusion in pierce feeders. Exploration of suction and filter feeding pinnipeds is needed to determine whether their dental integration is weak or strong and, in the latter case, to identify the functional factor that drives the integration. Measurement data analysed in this study are available in Supplementary Tables S1–S4. Olson, E. C. & Miller, R. L. Morphological Integration (Univ. of Chicago Press, 1958). Wagner, G. P. Homologues, natural kinds and the evolution of modularity. Am. Zool. 36, 36–43 (1996). Wagner, G. P. & Altenberg, L. Perspective: complex adaptations and the evolution of evolvability. Evolution 50, 967–976 (1996). Klingenberg, C. P. Morphological integration and developmental modularity. Annu. Rev. Ecol. Evol. Syst. 39, 115–132 (2008). Goswami, A. & Polly, P. D. Methods for studying morphological integration and modularity. Paleontol. Soc. Pap. 16, 213–243 (2010). Klingenberg, C. P. Studying morphological integration and modularity at multiple levels: concepts and analysis. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20130249, https://doi.org/10.1098/rstb.2013.0249 (2014). von Dassow, G. & Meir, E. Exploring modularity with dynamical models of gene networks. In Modularity in Development and Evolution (eds Schlosser, G. & Wagner, G. P.) 244–287 (Univ. of Chicago Press, 2004). Ravasz, E., Somera, A. L., Mongru, D. A., Oltvai, Z. N. & Barabási, A.-L. Hierarchical organization of modularity in metabolic networks. Science 297, 1551–1555 (2002). Olesen, J. M., Bascompte, J., Dupont, Y. L. & Jordano, P. The modularity of pollination networks. Proc. Natl. Acad. Sci. USA 104, 19891–19896 (2007). Randau, M. & Goswami, A. Morphological modularity in the vertebral column of Felidae (Mammalia, Carnivora). BMC Evol. Biol. 17, 133, https://doi.org/10.1186/s12862-017-0975-2 (2017). Jones, K. E., Benitez, L., Angielczyk, K. D. & Pierce, S. E. Adaptation and constraint in the evolution of the mammalian backbone. BMC Evol. Biol. 18, 172, https://doi.org/10.1186/s12862-018-1282-2 (2018). Armbruster, W. S., Pélabon, C., Bolstad, G. H. & Hansen, T. F. Integrated phenotypes: understanding trait covariation in plants and animals. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20130245, https://doi.org/10.1098/rstb.2013.0245 (2014). Goswami, A., Binder, W. J., Meachen, J. & O'Keefe, F. R. The fossil record of phenotypic integration and modularity: a deep-time perspective on developmental and evolutionary dynamics. Proc. Natl. Acad. Sci. USA 112, 4891–4896 (2015). Gingerich, P. D. Patterns of evolution in the mammalian fossil record. In Patterns of Evolution, as Illustrated by the Fossil Record (ed. Hallam, A.) 469–500 (Elsevier, Amsterdam, 1977). Hillson, S. Teeth 2nd edn (Cambridge Univ. 
Press, 2005). Stock, D. W. The genetic basis of modularity in the development and evolution of the vertebrate dentition. Philos. Trans. R. Soc. Lond. B Biol. Sci. 356, 1633–1653 (2001). Gómez-Robles, A. & Polly, P. D. Morphological integration in the hominin dentition: evolutionary, developmental, and functional factors. Evolution 66, 1024–1043 (2012). Garn, S. M., Lewis, A. B. & Kerewsky, R. S. Size interrelationships of the mesial and distal teeth. J. Dent. Res. 44, 350–354 (1965). Suarez, B. K. & Bernor, R. Growth fields in the dentition of the gorilla. Folia Primatol. 18, 356–367 (1972). Cochard, L. R. Pattern of size variation and correlation in the dentition of the red colobus monkey (Colobus badius). Am. J. Phys. Anthropol. 54, 139–146 (1981). Hlusko, L. J. & Mahaney, M. C. Quantitative genetics, pleiotropy, and morphological integration in the dentition of Papio hamadryas. Evol. Biol. 36, 5–18 (2009). Hlusko, L. J., Sage, R. D. & Mahaney, M. C. Modularity in the mammalian dentition: mice and monkeys share a common dental genetic architecture. J. Exp. Zool. B Mol. Dev. Evol. 316, 21–49 (2011). Grieco, T. M., Rizk, O. T. & Hlusko, L. J. A modular framework characterizes micro- and macroevolution of Old World monkey dentitions. Evolution 67, 241–259 (2013). Delezene, L. K. Modularity of the anthropoid dentition: implications for the evolution of the hominin canine honing complex. J. Hum. Evol. 86, 1–12 (2015). Kurtén, B. On the variation and population dynamics of fossil and recent mammal populations. Acta Zool. Fenn. 76, 1–122 (1953). Kurtén, B. Some quantitative approaches to dental microevolution. J. Dent. Res. 46, 817–828 (1967). Gingerich, P. D. & Winkler, D. A. Patterns of variation and correlation in the dentition of the red fox, Vulpes vulpes. J. Mammal. 60, 691–704 (1979). Pengilly, D. Developmental versus functional explanations for patterns of variability and correlation in the dentitions of foxes. J. Mammal. 65, 34–43 (1984). Szuma, E. Variation and correlation patterns in the dentition of the red fox from Poland. Ann. Zool. Fenn. 37, 113–127 (2000). Meiri, S., Dayan, T. & Simberloff, D. Variability and correlations in carnivore crania and dentition. Funct. Ecol. 19, 337–343 (2005). Prevosti, F. J. & Lamas, L. Variation of cranial and dental measurements and dental correlations in the pampean fox (Dusicyon gymnocercus). J. Zool. (Lond.) 270, 636–649 (2006). Pavlinov, I. Y. & Nanova, O. G. Geometric morphometry of the upper tooth row in the Eurasian polar fox (Alopex lagopus, Canidae). Zool. Zhurnal 87, 344–347 (2008). Pavlinov, I. Y., Nanova, O. G. & Lisovskii, A. A. Correlation structure of cheek teeth in the polar fox (Alopex lagopus, Canidae). Zool. Zhurnal 87, 862–875 (2008). Miller, E. H., Mahoney, S. P., Kennedy, M. L. & Kennedy, P. K. Variation, sexual dimorphism, and allometry in molar size of the black bear. J. Mammal. 90, 491–503 (2009). Nanova, O. G. Correlation structure of cheek teeth in the bat-eared fox (Otocyon megalotis, Canidae). Zool. Zhurnal 89, 741–748 (2010). Nanova, O. G. Morphological variation and integration of dentition in the Arctic fox (Vulpes lagopus): effects of island isolation. Russ. J. Theriol. 14, 153–162 (2015). Van Valen, L. Growth fields in the dentition of Peromyscus. Evolution 16, 272–277 (1962). Gould, S. J. & Garwood, R. A. Levels of integration in mammalian dentitions: an analysis of correlations in Nesophontes micrus (Insectivora) and Oryzomys couesi (Rodentia). Evolution 23, 276–300 (1969). Workman, M. S., Leamy, L. 
J., Routman, E. J. & Cheverud, J. M. Analysis of quantitative trait locus effects on the size and shape of mandibular molars in mice. Genetics 160, 1573–1586 (2002). Laffont, R., Renvoisé, E., Navarro, N., Alibert, P. & Montuire, S. Morphological modularity and assessment of developmental processes within the vole dental row (Microtus arvalis, Arvicolinae, Rodentia). Evol. Dev. 11, 302–311 (2009). Renaud, S., Pantalacci, S., Quéré, J.-P., Laudet, V. & Auffray, J.-C. Developmental constraints revealed by co-variation within and among molar rows in two murine rodents. Evol. Dev. 11, 590–602 (2009). Labonne, G., Navarro, N., Laffont, R., Chateau-Smith, C. & Montuire, S. Developmental integration in a functional unit: deciphering processes from adult dental morphology. Evol. Dev. 16, 224–232 (2014). Sych, L. Fossil Leporidae from the Pliocene and Pleistocene of Poland. Acta Zool. Crac. 10, 1–88 (1965). Sych, L. Correlation of tooth measurements in leporids. On the significance of the coefficient of correlation in the studies of microevolution. Acta Theriol. 11, 41–54 (1966). Miller, E. H. et al. Variation and integration of the simple mandibular postcanine dentition in two species of phocid seal. J. Mammal. 88, 1325–1334 (2007). Yu, L., Li, Q., Ryder, O. A. & Zhang, Y. Phylogenetic relationships within mammalian order Carnivora indicated by sequences of two nuclear DNA genes. Mol. Phylogenet. Evol. 33, 694–705 (2004). Flynn, J. J., Finarelli, J. A., Zehr, S., Hsu, J. & Nedbal, M. A. Molecular phylogeny of the Carnivora (Mammalia): assessing the impact of increased sampling on resolving enigmatic relationships. Syst. Biol. 54, 317–337 (2005). Sato, J. J. et al. Evidence from nuclear DNA sequences sheds light on the phylogenetic relationships of Pinnipedia: single origin with affinity to Musteloidea. Zool. Sci. 23, 125–146 (2006). Fulton, T. L. & Strobeck, C. Molecular phylogeny of the Arctoidea (Carnivora): effect of missing data on supertree and supermatrix analyses of multiple gene data sets. Mol. Phylogenet. Evol. 41, 165–181 (2006). Adam, P. J. & Berta, A. Evolution of prey capture strategies and diet in the Pinnipedimorpha (Mammalia, Carnivora). Oryctos 4, 83–107 (2002). Churchill, M. & Clementz, M. T. The evolution of aquatic feeding in seals: insights from Enaliarctos (Carnivora: Pinnipedimorpha), the oldest known seal. J. Evol. Biol. 29(2016), 319–334 (2015). Thenius, E. Zähne und Gebiß der Säugetiere (Walter de Gruyter, Berlin, 1989). Polly, P. D. Movement adds bite to the evolutionary morphology of mammalian teeth. BMC Biol. 10, 69, https://doi.org/10.1186/1741-7007-10-69 (2012). Kubota, K. & Togawa, S. Numerical variations in the dentition of some pinnipeds. Anat. Rec. 150, 487–501 (1964). Burns, J. J. & Fay, F. H. Comparative morphology of the skull of the Ribbon seal, Histriophoca fasciata, with remarks on systematics of Phocidae. J. Zool. (Lond.) 161, 363–394 (1970). Briggs, K. T. Dentition of the northern elephant seal. J. Mammal. 55, 158–171 (1974). Suzuki, M., Ohtaishi, N. & Nakane, F. Supernumerary postcanine teeth in the kuril seal (Phoca vitulina stejnegeri), the larga seal (Phoca largha) and the ribbon seal (Phoca fasciata). Jpn. J. Oral Biol. 32, 323–329 (1990). Könemann, S. & van Bree, P. J. H. Gebißanomalien bei nordatlantischen Phociden (Mammalia, Phocidae). Z. Säugetierkd. 62, 71–85 (1997). Drehmer, C. J., Fabián, M. E. & Menegheti, J. O. 
Dental anomalies in the Atlantic population of South American sea lion, Otaria byronia (Pinnipedia, Otariidae): evolutionary implications and ecological approach. Lat. Am. J. Aquat. Mamm. 3, 7–18 (2004). Cruwys, L. & Friday, A. Visible supernumerary teeth in pinnipeds. Polar Rec. 42, 83–85 (2006). Drehmer, C. J., Dornelles, J. E. F. & Loch, C. Variações na fórmula dentária de Otaria byronia Blainville (Pinnipedia, Otariidae) no Pacífico: registro de um novo tipo de anomalia. Neotrop. Biol. Conserv. 4, 28–35 (2009). Loch, C., Simões-Lopes, P. C. & Drehmer, C. J. Numerical anomalies in the dentition of southern fur seals and sea lions (Pinnipedia: Otariidae). Zoologia 27, 477–482 (2010). Drehmer, C. J., Sanfelice, D. & Loch, C. Dental anomalies in pinnipeds (Carnivora: Otariidae and Phocidae): occurrence and evolutionary implications. Zoomorphology 134, 325–338 (2015). Kahle, P., Ludolphy, C., Kierdorf, H. & Kierdorf, U. Dental anomalies and lesions in Eastern Atlantic harbor seals, Phoca vitulina vitulina (Carnivora, Phocidae), from the German North Sea. PLoS One 13, e0204079, https://doi.org/10.1371/journal.pone.0204079 (2018). Wolsan, M., Suzuki, S., Asahara, M. & Motokawa, M. Tooth size variation in pinniped dentitions. PLoS One 10, e0137100, https://doi.org/10.1371/journal.pone.0137100 (2015). Churchill, M. & Clementz, M. T. Functional implications of variation in tooth spacing and crown size in Pinnipedimorpha (Mammalia: Carnivora). Anat. Rec. 298, 878–902 (2015). Pauly, D., Trites, A. W., Capuli, E. & Christensen, V. Diet composition and trophic levels of marine mammals. ICES J. Mar. Sci. 55, 467–481 (1998). Whiteley, M. A. & Pearson, K. Data for the problem of evolution in man. I. A first study of the variability and correlation of the hand. Proc. R. Soc. Lond. 65, 126–151 (1899). Lewenz, M. A. & Whiteley, M. A. Data for the problem of evolution in man. A second study of the variability and correlation of the hand. Biometrika 1, 345–360 (1902). Alpatov, W. W. & Boschko-Stepanenko, A. M. Variation and correlation in serially situated organs in insects, fishes and birds. Am. Nat. 62, 409–424 (1928). R Core Team. R: a Language and Environment for Statistical Computing, https://www.R-project.org (R Foundation for Statistical Computing, Vienna, 2016). Pavlicev, M., Cheverud, J. M. & Wagner, G. P. Measuring morphological integration using eigenvalue variance. Evol. Biol. 36, 157–170 (2009). Cane, W. P. The ontogeny of postcranial integration in the common tern, Sterna hirundo. Evolution 47, 1138–1151 (1993). Haber, A. A comparative analysis of integration indices. Evol. Biol. 38, 476–488 (2011). Butler, P. M. Studies of the mammalian dentition. Differentiation of the postcanine dentition. Proc. Zool. Soc. Lond. B 109, 1–36 (1939). Osborn, J. W. Morphogenetic gradients: fields versus clones. In Development, Function and Evolution of Teeth (eds Butler, P. M. & Joysey, K. A.) 171–201 (Academic Press, London, 1978). Adams, D. C. Evaluating modularity in morphometric data: challenges with the RV coefficient and a new test measure. Methods Ecol. Evol. 7, 565–572 (2016). Escoufier, Y. Le traitement des variables vectorielles. Biometrics 29, 751–760 (1973). Klingenberg, C. P. Morphometric integration and modularity in configurations of landmarks: tools for evaluating a priori hypotheses. Evol. Dev. 11, 405–421 (2009). Smilde, A. K., Kiers, H. A. L., Bijlsma, S., Rubingh, C. M. & van Erk, M. J. Matrix correlations for high-dimensional data: the modified RV-coefficient. 
Bioinformatics 25(2009), 401–405 (2008). Fruciano, C., Franchini, P. & Meyer, A. Resampling-based approaches to study variation in morphological modularity. PLoS One 8, e69376, https://doi.org/10.1371/journal.pone.0069376 (2013). Mellett, J. S. Autocclusal mechanisms in the carnivore dentition. Aust. Mammal. 8, 233–238 (1984). Berta, A., Sumich, J. L. & Kovacs, K. M. Marine Mammals: Evolutionary Biology 2nd edn (Elsevier, Amsterdam, 2006). Mesnick, S. & Ralls, K. Sexual dimorphism. In Encyclopedia of Marine Mammals 3rd edn (eds Würsig, B., Thewissen, J. G. M. & Kovacs, K. M.) 848–853 (Elsevier, London, 2018). Berta, A., Churchill, M. & Boessenecker, R. W. The origin and evolutionary biology of pinnipeds: seals, sea lions, and walruses. Annu. Rev. Earth Planet. Sci. 46, 203–228 (2018). Wolsan, M. Ancestral characters in the dentition of the weasel Mustela nivalis L. (Carnivora, Mustelidae). Ann. Zool. Fenn. 20, 47–51 (1983). Wolsan, M., Ruprecht, A. L. & Buchalczyk, T. Variation and asymmetry in the dentition of the pine and stone martens (Martes martes and M. foina) from Poland. Acta Theriol. 30, 79–114 (1985). Bateson, W. On numerical variation in teeth, with a discussion of the conception of homology. Proc. Zool. Soc. Lond. 1892, 102–115 (1892). Bateson, W. Materials for the Study of Variation Treated with Especial Regard to Discontinuity in the Origin of Species (Macmillan, London, 1894). Butler, P. M. Studies of the mammalian dentition. I. The teeth of Centetes ecaudatus and its allies. Proc. Zool. Soc. Lond. B 107, 103–132 (1937). Kavanagh, K. D., Evans, A. R. & Jernvall, J. Predicting evolutionary patterns of mammalian teeth from development. Nature 449, 427–432 (2007). Asahara, M., Saito, K., Kishida, T., Takahashi, K. & Bessho, K. Unique pattern of dietary adaptation in the dentition of Carnivora: its advantage and developmental origin. Proc. R. Soc. B 283, 20160375, https://doi.org/10.1098/rspb.2016.0375 (2016). Armfield, B. A., Zheng, Z., Bajpai, S., Vinyard, C. J. & Thewissen, J. G. M. Development and evolution of the unique cetacean dentition. PeerJ 1, e24, https://doi.org/10.7717/peerj.24 (2013). We are grateful to two anonymous reviewers for comments that enabled a considerable improvement to the final version of the manuscript. We also thank the following individuals and their respective institutions for enabling access to pinniped specimens in their care: Tetsuya Amano and Noriyuki Ohtaishi (Hokkaido University Museum), Masaru Kato and Fumihito Takaya (Hokkaido University Botanic Garden), Mari Kobayashi (Tokyo University of Agriculture), Naoki Kohno and Tadasu K. Yamada (National Museum of Nature and Science, Tokyo). Part of this study was conducted during M.W.'s visiting professorship and S.S.'s doctoral programme at the Kyoto University Museum and M.A.'s fellowship (grant-in-aid No. 11J01149 from the Japan Society for the Promotion of Science) at the Graduate School of Science, Kyoto University. Other support for this work came from the National Science Centre (Poland) through grant 2012/05/B/NZ8/02687 to M.W. 
Mieczyslaw Wolsan: Museum and Institute of Zoology, Polish Academy of Sciences, Wilcza 64, 00-679, Warszawa, Poland.
Satoshi Suzuki: Kanagawa Prefectural Museum of Natural History, 499 Iryuda, Odawara, Kanagawa, 250-0031, Japan.
Masakazu Asahara: Division of Liberal Arts and Sciences, Aichi Gakuin University, Iwasaki-cho-Araike 12, Nisshin, Aichi, 470-0195, Japan.
Masaharu Motokawa: The Kyoto University Museum, Kyoto University, Kyoto, 606-8501, Japan.
M.W. conceived and designed the study, carried out the measurements, and wrote the manuscript. S.S. conducted the statistical analyses. M.A. and M.M. helped in getting access to specimens and participated in discussions about the manuscript. Correspondence to Mieczyslaw Wolsan.
Mat. Sb., 2004, Volume 195, Number 8, Pages 3–46 (Mi msb838)

This article is cited in 14 scientific papers.

On Jackson's inequality for a generalized modulus of continuity in $L_2$

A. I. Kozko, A. V. Rozhdestvenskii

M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics

Abstract: The value of the sharp constant $\varkappa$ in the Jackson-type inequality in the space $L_2(\mathbb T^d)$
\begin{equation}
E_{n-1}(f)\leqslant\varkappa\,\overline\omega_\psi(f,T)
\end{equation}
is studied for the generalized modulus of continuity
$$
\overline\omega_\psi(f,T)=\max_{t\in T}\Bigl(\sum_{s}\psi(st)|\widehat f_s|^2\Bigr)^{1/2}.
$$
The value $\overset{*}{\varkappa}$ of the minimum sharp constant in inequality (1) is found. A class of generalized moduli of continuity is introduced which contains the moduli $\widetilde\omega_{a,r}(f,\delta):=\sup_{0\leqslant t\leqslant\delta}\|\Delta_{a^{r-1}t}\dotsb \Delta_{at}\Delta_{t}f\|_2$ with even $a$. The relation $\varkappa=\overset{*}{\varkappa}$ is proved in this class for all $\delta\geqslant\pi/n$.

DOI: https://doi.org/10.4213/sm838

English translation: Sbornik: Mathematics, 2004, 195:8, 1073–1115

MSC: 41A17

Received: 14.06.2002 and 10.11.2003

Citation: A. I. Kozko, A. V. Rozhdestvenskii, "On Jackson's inequality for a generalized modulus of continuity in $L_2$", Mat. Sb., 195:8 (2004), 3–46; Sb. Math., 195:8 (2004), 1073–1115
Zabutnaya, "Inequalities between Best Polynomial Approximations and Some Smoothness Characteristics in the Space $L_2$ and Widths of Classes of Functions", Math. Notes, 99:2 (2016), 222–242 K. V. Runovskii, "Approximation by Fourier Means and Generalized Moduli of Smoothness", Math. Notes, 99:4 (2016), 564–575 M. Sh. Shabozov, A. D. Farozova, "Tochnoe neravenstvo Dzheksona–Stechkina s neklassicheskim modulem nepreryvnosti", Tr. IMM UrO RAN, 22, no. 4, 2016, 311–319 Ivanov V., Ivanov A., "Generalized Logan'S Problem For Entire Functions of Exponential Type and Optimal Argument in Jackson'S Inequality in l-2((3))", Acta. Math. Sin.-English Ser., 34:10 (2018), 1563–1577 Babenko V.F., Konareva S.V., "Jackson-Stechkin-Type Inequalities For the Approximation of Elements of Hilbert Spaces", Ukr. Math. J., 70:9 (2019), 1331–1344 S. B. Vakarchuk, "On Estimates in $L_2(\mathbb{R})$ of Mean $\nu$-Widths of Classes of Functions Defined via the Generalized Modulus of Continuity of $\omega_{\mathcal{M}}$", Math. Notes, 106:2 (2019), 191–202 First page: 1
Example of non-projective variety with non-semisimple Frobenius action on etale cohomology?

This question was motivated by a more general question raised by Jan Weidner here. In general one starts with a variety $X$ (say smooth) over an algebraic closure of a finite field $\mathbb{F}_q$ of characteristic $p$. Here there is a natural action of a Frobenius morphism $F$ relative to $q$. Given a prime $\ell$ distinct from $p$, there is an induced operation of $F$ on etale cohomology groups (with compact support) $H^i_c(X, \overline{\mathbb{Q}_\ell})$. When $X$ is projective, this action is conjectured to be semisimple on each of the finite dimensional vector spaces involved. But it seems that semisimplicity can fail when $X$ isn't projective. My basic question is:

Is there an elementary example where the Frobenius action fails to be semisimple? (References?)

Of course, etale cohomology developed in response to the Weil conjectures and related matters in number theory. Here there is a lot of deep literature which I'm unfamiliar with, but I'd like to get some insight into the narrow question of what does or doesn't force semisimplicity for non-projective varieties.

My interest lies mainly in Deligne-Lusztig varieties and their role in studying characters of finite groups of Lie type. Such varieties $X_w$ are indexed by Weyl group elements and are locally closed smooth subvarieties of the flag variety for a reductive group $G$, with all irreducible components of equal dimension. Here the finite subgroup $G^F$ acts on the etale cohomology, commuting with $F$, and the resulting virtual characters (alternating sums of characters on cohomology spaces) are the D-L characters. Characters of finite tori also come into play here, but I'm thinking first about the trivial characters of tori which lead to "unipotent" characters. These include essential but mysterious "cuspidal" unipotent characters which can't be extracted from the usual induced characters obtained by parabolic induction.

For example, the Chevalley group $G_2(\mathbb{F}_q)$ typically has 10 unipotent characters (at the extremes the trivial and the Steinberg characters), with four being cuspidal. Those four appear in etale cohomology groups of a variety $X_w$ with $w$ a Coxeter element: the variety has dimension 2, with four characters (three cuspidal, the other Steinberg) in degree 2, one (cuspidal) in degree 3, and one (the trivial character) in degree 4. Miraculously, it always happens in the Coxeter case that $F$ acts semisimply (here with 6 distinct eigenvalues: the Coxeter number) and its eigenspaces afford distinct irreducible characters. In the year after he and Deligne finished their fundamental paper (Annals, 1976), Lusztig worked out the Coxeter case in a deep technical paper here. This was followed by a more complete determination of cuspidal unipotent characters, and then much more. The Coxeter case seems to be unusually well-behaved in this program.

P.S. As I suspected, there's more going on under the surface of my basic question about semisimplicity than meets the eye. As an outsider to much of the algebraic geometry framework I can appreciate the outline of Dustin's answer though not yet the details. My question came from wondering whether there are ways to shortcut some of the older steps taken by Lusztig, but the wider questions here are obviously important.
I'll have to see how far my motivation (in a manner of speaking) takes me. And Wilberd: thanks for the proofreading, which is not one of my favorite things to do. (Though I somehow got "just bonce" into a book that was supposedly proofread.)

etale-cohomology ag.algebraic-geometry algebraic-groups rt.representation-theory reference-request

Jim Humphreys

Dustin Clausen's argument can be supplemented rather easily for open smooth varieties, which seem to be your main interest. If $U$ has a compactification $j:U↪X$ with a normal crossing complement $D$, then breaking down the Leray spectral sequence for $j$ (see Deligne's Hodge I, section 6) will show that the pure weight subquotients of $H^n(U)$ are themselves subquotients of direct sums of the cohomology of smooth projective varieties (the intersections of components of $D$). The general case can be reduced to this using de Jong's alteration. (But maybe you don't need it in your case.) – Minhyong Kim Aug 14 '12 at 1:04

Actually, the semisimplicity should hold with no hypotheses on X, so no example should exist. In fact it is generally expected that, with char. 0 coefficients and over a finite field (both hypotheses being necessary), every mixed motive is a direct sum of pure motives -- so the question for arbitrary varieties reduces to that for smooth projective ones. The reason is as follows: the different weight-pieces have no Frobenius eigenvalues in common (by the Weil conjectures), so the weight filtration can be split by a simple matter of linear algebra. (And the splitting will even be motivic since Frobenius is a map of varieties.)

Edit: In response to Jim's comment, let me try to provide a clearer argument (2nd edit: no longer using the Tate conjecture). I claim that if we assume the existence of a motivic t-structure over F_q w.r.t. the l-adic realization in the sense of Beilinson's article http://arxiv.org/pdf/1006.1116v2.pdf, then provided that H^i_c(X-bar) is Frobenius-semisimple for smooth projective X, it is in fact so for arbitrary X.

Indeed, given a motivic t-structure, its heart is an artinian abelian category where every irreducible object is a summand of a Tate twist of an H^i(X) for X smooth and projective, and furthermore there are no extensions between such irreducibles of the same weight (this is all in Beilinson's article). That's all true over a general field. But now let's argue that, in the case of a finite field, there also can't be extensions between such irreducibles of different weights; then in the motivic category all of our H^i_c(X-bar) of interest will be direct sums of summands of H^i(X)'s, and we'll have successfully made the reduction to the smooth projective case.

So suppose M and N are irreducible motives of distinct weights over F_q, and say E is an extension of M by N. Consider the characteristic polynomials p_M and p_N of Frobenius acting on the l-adic cohomologies of M and N. By Deligne, they have rational coefficients and distinct eigenvalues, so we can solve q * p_N == 1 (mod p_M) for a rational-coefficient polynomial q. But then (q*p_N)(Frobenius) acting on E splits the extension (recall from Beilinson's article that the l-adic realization is faithful under our hypothesis), and we're done.

Later commentary: apparently, when I wrote this I was a little too excited about the perspectives offered by motives.
I should emphasize the point essentially made by Minhyong Kim, that the reduction from the general case to the proper smooth case likely doesn't require any motivic technology, and should even be independent of any conjectures. One just needs to know that there's a weight filtration on l-adic cohomology of the standard type where the pure pieces are direct sums of direct summands of appropriate cohomology of smooth projective varieties. As Minhyong says, this probably follows from Deligne's original pure --> mixed argument, via use of compactifications and de Jong alterations. Or at least that's what it seems to me without having gone into the details. I'm sure someone else knows better.

Dustin Clausen

@Dustin: This seems to be an intriguing line of reasoning, which runs counter to the earlier intuition people seemed to have. I'd need to study the original source material more carefully to follow the details. Are there any accessible references for the reduction to the earlier conjecture for projective varieties? – Jim Humphreys Aug 13 '12 at 23:20

Hi Jim, sorry the argument I gave was so sketchy and without references. I tried to fill it out more -- hopefully it's more helpful now. – Dustin Clausen Aug 14 '12 at 1:48

@Dustin This probably reveals my ignorance but, if every mixed motive is expected to be a direct sum of pure motives, doesn't this show that the categories of mixed motives and pure motives are (conjecturally) equivalent? In which case, why do we study mixed motives? – David E Speyer Aug 14 '12 at 1:59

Hey David -- it's only true over a finite field, and only with rational coefficients! Compare e.g. with the fact that the F_q-points of the Jacobian of a curve are finite, or with the fact that the algebraic K-theory of F_q is finite in positive degrees... – Dustin Clausen Aug 14 '12 at 2:31

From the view of Galois representations, the point is that everything over a finite field is determined by a single endomorphism, as compared to the action of a complicated group like $Gal(\bar{\mathbb{Q}}/\mathbb{Q})$. Of course the minimal polynomial will be reduced if and only if each 'pure factor' is reduced. – Minhyong Kim Aug 14 '12 at 4:17
Optimal imaging time points considering accuracy and precision of Patlak linearization for 89Zr-immuno-PET: a simulation study Jessica E. Wijngaarden ORCID: orcid.org/0000-0003-1486-62461,4, Marc C. Huisman1,4, Johanna E. E. Pouw2,4, C. Willemien Menke-van der Houven van Oordt2,4, Yvonne W. S. Jauw1,3,4 & Ronald Boellaard1,4 EJNMMI Research volume 12, Article number: 54 (2022) Cite this article Zirconium-89-immuno-positron emission tomography (89Zr-immuno-PET) has enabled visualization of zirconium-89 labelled monoclonal antibody (89Zr-mAb) uptake in organs and tumors in vivo. Patlak linearization of 89Zr-immuno-PET quantification data allows for separation of reversible and irreversible uptake, by combining multiple blood samples and PET images at different days. As one can obtain only a limited number of blood samples and scans per patient, choosing the optimal time points is important. Tissue activity concentration curves were simulated to evaluate the effect of imaging time points on Patlak results, considering different time points, input functions, noise levels and levels of reversible and irreversible uptake. Based on 89Zr-mAb input functions and reference values for reversible (VT) and irreversible (Ki) uptake from literature, multiple tissue activity curves were simulated. Three different 89Zr-mAb input functions, five time points between 24 and 192 h p.i., noise levels of 5, 10 and 15%, and three reference Ki and VT values were considered. Simulated Ki and VT were calculated (Patlak linearization) for a thousand repetitions. Accuracy and precision of Patlak linearization were evaluated by comparing simulated Ki and VT with reference values. Simulations showed that Ki is always underestimated. Inclusion of time point 24 h p.i. reduced bias and variability in VT, and slightly reduced bias and variability in Ki, as compared to combinations of three later time points. After inclusion of 24 h p.i., minimal differences were found in bias and variability between different combinations of later imaging time points, despite different input functions, noise levels and reference values. Inclusion of a blood sample and PET scan at 24 h p.i. improves accuracy and precision of Patlak results for 89Zr-immuno-PET; the exact timing of the two later time points is not critical. Therapeutic monoclonal antibodies (mAbs) are used in cancer treatment both in targeted therapy and in immunotherapy [1]. mAbs directly elicit their effect on their target or indirectly through mediation by the immune system. The effectiveness of this therapy is, however, patient specific and the therapy can cause serious side effects. Gaining more insight into the mechanisms of mAbs by tracking them inside the body may improve cancer treatment with mAbs. Zirconium-89-immuno-positron emission tomography (89Zr-immuno-PET) allows visualization and quantification of the uptake of zirconium-89 labelled mAbs (89Zr-mAbs) in tumors and organs in vivo. The relatively long half-life of 89Zr is sufficient for imaging mAbs during the time they need to reach tissues [2]. Quantification of 89Zr-mAb uptake is commonly done using the standardized uptake value (SUV). SUV is defined as the activity concentration in a volume of interest, divided by the injected activity per unit of body weight [3]. 
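Written out, this standard definition (not specific to this study) reads

$$\mathrm{SUV} = \frac{\text{activity concentration in the volume of interest } [\mathrm{kBq/mL}]}{\text{injected activity } [\mathrm{MBq}]/\text{body weight } [\mathrm{kg}]}$$

so that, under the usual assumption of a tissue density of approximately 1 g/mL, an SUV of 1 corresponds to the injected activity being distributed evenly over the body.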
Since SUV is a single value obtained from a single PET scan, SUV is not able to distinguish between non-specific 89Zr-mAb uptake in the blood or interstitial space volume fraction of the tissue, and specific uptake due to target engagement, unless either specific or non-specific uptake can be assumed to be negligible. In general, both non-specific and specific uptake contribute to the total uptake signal. Additionally, SUV considers only the injected activity and not the 89Zr-mAb plasma clearance over time [4]. An approach that does consider plasma activity concentrations for analyzing PET images is the use of compartment models [5]. Using a two-tissue compartment model assuming irreversible uptake of tracer, Patlak linearization can be applied [6]. A two-tissue irreversible compartment model is applicable to 89Zr-mAb uptake, because 89Zr residualizes in the tissue after mAb catabolism or target engagement [2]. The uptake of 89Zr-mAbs in tissue is quantified relative to the concentration of 89Zr-mAbs in blood plasma over time and therefore requires multiple blood samples and PET images. Since 89Zr-mAbs circulate in the body for several days [7], capturing the pharmacokinetics of 89Zr-mAbs requires multiple sampling days. However, minimizing the number of scans and samples is important in terms of patient safety and comfort. Selecting the optimal time points for blood sampling and PET imaging of 89Zr-mAbs is therefore crucial. Patlak linearization provides several advantages over SUV. From Patlak linearization, reversible and irreversible 89Zr-mAb uptake can be quantified per volume of interest. Additionally, Patlak can potentially also distinguish between non-specific and specific 89Zr-mAb uptake, by comparing Patlak results to baseline Patlak values for tissues without target expression [8]. Moreover, Patlak linearization uses the measured plasma kinetics and thus takes variations in plasma clearance between subjects or at various mass doses into account. Yet, like SUV, Patlak linearization assumes that receptor availability or occupancy remains constant during the course of the PET studies and does not consider redistribution of cells or targets, as will be discussed later. Previous research has applied Patlak linearization for quantifying 89Zr-mAbs uptake in patients [8, 9]. In these studies, PET scans were obtained two to four times between 2 and 192 h p.i. Blood was sampled up to five times on the day of injection and with every PET scan [8, 9]. This resulted in a maximum of three time points for Patlak linearization. The unavoidable sparse data sampling introduces uncertainties in the data which may affect Patlak results. Evaluating the magnitude of the effects of sparse data sampling will provide more information on the accuracy and precision of Patlak results. In this study, the effect of imaging time points on the accuracy and precision of Patlak results was evaluated by means of simulations, including the following variables: different input functions (IFs), different noise levels for tissue activity curves (TACs) and tissues with different levels of reversible and irreversible uptake. To study the effects of different time points on Patlak results, TACs were simulated using Patlak linearization, three time points were included, noise was added and Patlak values were calculated. These steps were repeated as a function of different variables. 
Patlak linearization Patlak linearization can be used to estimate the irreversible and reversible uptake of 89Zr-mAb in tissue based on graphical analysis of multiple-time tissue uptake data [6]. The analysis is based on a compartment model consisting of a reversible and an irreversible tissue compartment. The reversible tissue compartment represents 89Zr-mAb in the plasma and interstitial space of the tissue or reversible target binding, and reaches an equilibrium state after some time. The irreversible tissue compartment represents irreversible binding of 89Zr-mAb (e.g., non-specific catabolism or irreversible target binding). After equilibrium is reached, the activity concentration in tissue (ACt) is the sum of both parts. The reversible part is then proportional to the activity concentration in plasma (ACp) and the irreversible part is proportional to the area under the curve (AUC) of the ACp (AUCp), which is the integral of ACp (Eq. 1). Dividing both sides of Eq. 1 by ACp results in a linear relation known as the Patlak equation (Eq. 2) [6, 9]. The slope of this equation is Ki, which represents the nett rate of irreversible uptake [h −1]. Ki is a measure for the catabolic rate of tissue without target expression and a measure for both catabolic rate and target engagement of tissue with target expression [8]. The offset is the VT, the ratio between tissue and plasma concentration at equilibrium, which is related to the reversible part. (Eq. 2). $${\text{AC}}_{t} = K_{i} \cdot {\text{AUC}}p + V_{T} \cdot {\text{AC}}_{p}$$ $$\frac{{{\text{AC}}_{t} }}{{{\text{AC}}_{p} }} = K_{i} \cdot \frac{{\mathop \smallint \nolimits_{0}^{t} {\text{AC}}_{P} \left( x \right){\text{d}}x}}{{{\text{AC}}_{P} }} + V_{T}$$ Multiple population IFs were obtained from literature as input for the ACp. A literature search for papers containing plasma/serum sampling data of 89Zr-mAb concentration in humans resulted in five papers as listed in Table 1. From these papers, the concentration 89Zr labelled mAbs in plasma/serum over time was obtained using PlotDigitizer (version 2.6.8, http://plotdigitizer.sourceforge.net/). The purpose of using IFs from literature was to use IFs that could be obtained in practice. Therefore, instead of using the raw data points, a bi-exponent (Eq. 3) was fitted through the data, see Fig. 1. The concentrations of the three IFs 89Zr-trastuzumab, 89Zr-pertuzumab and 89Zr-huJ591 were chosen as input for ACp in the simulations, as they presented three different clearance rates. Table 1 Five papers provided 89Zr-mAb plasma/serum activity concentration data Plasma or serum activity concentrations in percentage injected activity per liter as a function of time in hours post-injection for 89Zr-huJ591 (191.3 ± 9 MBq, 25 mg) [10], 89Zr-trastuzumab (185 MBq, 50 mg) [11], 89Zr-pertuzumab (74 MBq, 20 or 50 mg) [12], 89Zr-DFO-MSTP2109A (184 MBq, 10 mg) [13] and 89Zr-AlbudAb (14 MBq, 1 mg) [14]. The bold lines represent the input functions used for the simulations, and the dashed lines represent the input functions not included in the simulations. %IA/L = percentage injected activity per liter, h p.i. = hours post-injection The Patlak equation is used to simulate ACt as function of ACp, Ki and VT, i.e., to generate TACs. The given Ki and VT for generating the TAC are called 'reference Ki (rKi)' and 'reference VT (rVT)'. The mathematical derivation for the TAC is as follows. ACp is described by a bi-exponential function (Eq. 3). AUCp can be obtained by integration of Eq. 
3 between moment of injection and moment of PET scan, resulting in Eq. 4. Substitution of Eqs. 3 and 4 into Eq. 1 gives the equation for the TAC (ACt) as a function of rKi, rVT and coefficients of the bi-exponential equation of the IF (Eq. 5): $${\text{AC}}_{p} = A \cdot {\text{e}}^{ax} + B \cdot {\text{e}}^{bx}$$ $${\text{AUC}}p = \mathop \smallint \limits_{0}^{t} \left( {A \cdot {\text{e}}^{ax} + B \cdot {\text{e}}^{bx} } \right){\text{d}}x = \frac{{A \cdot \left( {{\text{e}}^{ax} - 1} \right)}}{a} + \frac{{B \cdot \left( {{\text{e}}^{bx} - 1} \right)}}{b}$$ $${\text{AC}}_{t} = rK_{i} \cdot \left( {\frac{{A \cdot \left( {{\text{e}}^{ax} - 1} \right)}}{a} + \frac{{B \cdot \left( {{\text{e}}^{bx} - 1} \right)}}{b}} \right) + rV_{t} \cdot \left( {A\cdot{\text{e}}^{ax} + B \cdot {\text{e}}^{bx} } \right)$$ Sparse sampling and noise For a given IF, rKi and rVT, values for ACp and ACt were determined with the equations above on three given time points, mimicking the sparse sampling in practice. AUCp was determined, but now by numerical integration of the IF, considering only four time points of ACp (see red line first panel Fig. 2). Additionally, noise was added to values for ACt at the given three time points. Standard deviations (SDs) of ACt were approximated based on counting statistics, which behaves as a Poisson distribution with SD≈√N and N is number of counts [15]. The SD at any given time point was approximated with Eq. 6, where the SD at t = 0 is predefined. Assuming equal scanning durations within a study, the ratio N(0):N(t) is assumed to be equal to the ratio between non-decay corrected activity concentrations ncACt(0):ncACt(t) (Eq. 7). To incorporate variability in the standard deviation, noise was added using the MATLAB function randn [16]. Subsequently, the percentage SD was calculated and applied on the decay corrected ACt for adding noise to ACt (Eq. 8). $${\text{SD}}\left( t \right) = \frac{{{\text{SD}}\left( 0 \right)}}{{\sqrt {N\left( 0 \right)/N\left( t \right)} }}$$ $${\text{SD}}\left( t \right) = \frac{{{\text{SD}}\left( 0 \right)}}{{\sqrt {{\text{ncAC}}_{t} \left( 0 \right)/{\text{ncAC}}_{t} \left( t \right)} }}$$ $$AC_{t,noise} \left( t \right) = AC_{t} \left( t \right) + AC_{t} \left( t \right)*\% SD\left( t \right)*randn$$ Patlak linearization for 89Zr-pertuzumab input function and time activity curves with 5% noise, rKi = 1·10−3 [h−1], rVT = 0.2 and time points 24, 48 and 96 h post-injection. A: Activity concentrations in plasma (red) and tissue (blue), full curve based on reference values (rACp and rACt) and calculated based on simulations (sACp and sACt) in percentage injected activity per liter as a function of time p.i. B: Patlak plot; activity concentration in tissue (ACt) divided by activity concentration in plasma (ACp) as a function of area under the plasma curve (AUCp) divided by activity concentration in plasma (ACp). Based on reference values (black), simulated values (green) and linear regression of the simulated values (pink). %IA/L = percentage injected activity per liter Variability in ACp as a result of counting statistics ranged from SD = 0.2–0.4%, based on previously in house counted blood samples. The noise in ACp was assumed to be negligible compared to the noise in the TAC and was not included in the simulations. Patlak analysis of simulated TACs Subsequently, Patlak linearization (Eq. 2) was applied on the generated ACp, AUCp and ACt with noise on the given time points, from which the slope (Ki) and offset (VT) could be determined, see Fig. 2. 
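The workflow just described (generate a noise-free TAC from Eq. 5, estimate AUCp from the sparse plasma samples by trapezoidal integration, add noise via Eqs. 6–8, and recover Ki and VT by linear regression of Eq. 2) can be illustrated with a minimal MATLAB sketch. This is not the study's own analysis code; the bi-exponential coefficients, time points, reference values and noise level below are hypothetical, chosen only to make the steps concrete.

```matlab
% Minimal sketch of the simulation workflow (illustrative values, not the study's code)
A = 8; a = -0.004; B = 4; b = -0.05;       % hypothetical bi-exponential IF coefficients
rKi = 5e-3;  rVT = 0.2;                    % reference irreversible and reversible uptake
tp  = [0 24 48 96];                        % sampling times [h p.i.]; 0 h used for plasma only
noise0 = 0.05;                             % 5% noise level at t = 0
nrep = 1000;                               % number of noise realisations

ACp     = A*exp(a*tp) + B*exp(b*tp);                        % Eq. 3
AUCp_an = A*(exp(a*tp)-1)/a + B*(exp(b*tp)-1)/b;            % Eq. 4 (analytic, for generating the TAC)
ACt     = rKi*AUCp_an + rVT*ACp;                            % Eq. 5, noise-free TAC
AUCp    = cumtrapz(tp, ACp);                                % AUCp as used in the analysis: numerical
                                                            % integration over the sparse samples only
lambda  = log(2)/78.4;                                      % 89Zr decay constant [1/h]
ncACt   = ACt .* exp(-lambda*tp);                           % non-decay-corrected tissue activity
SD      = noise0 ./ sqrt(ncACt(1)./ncACt);                  % Eqs. 6-7: counting-statistics scaling

Ki = zeros(nrep,1);  VT = zeros(nrep,1);
for r = 1:nrep
    ACt_n = ACt + ACt .* SD .* randn(size(ACt));            % Eq. 8: add noise to the TAC
    x = AUCp(2:end) ./ ACp(2:end);                          % Patlak x-coordinates (24-96 h p.i.)
    y = ACt_n(2:end) ./ ACp(2:end);                         % Patlak y-coordinates
    p = polyfit(x, y, 1);                                   % linear regression of Eq. 2
    Ki(r) = p(1);  VT(r) = p(2);
end
biasKi   = 100*(mean(Ki) - rKi)/rKi;                        % percentage bias in Ki
spreadKi = 100*std(Ki)/rKi;                                 % spread of the Ki estimates
```

Because the analysis step replaces the analytic AUCp by trapezoidal integration over only four plasma samples, the recovered slope falls slightly below rKi, which corresponds to the negative bias in Ki reported below.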
Simulations were repeated 1000 times to incorporate the effect of noise. The mean and standard error (SE) of the simulated Ki and VT were obtained to compare with rKi and rVT for evaluating bias and variability. Performance of Patlak analysis Accuracy and precision of Patlak results were evaluated as a function of the following variables: time points of evaluation, rKi and rVT, and noise level of ACt. Each simulation included a time point at 0 h p.i. for ACp. Additionally, three of the following time points in hours post-injection were considered: 24, 48, 96, 144 and 192, which resulted in 10 time point combinations. The chosen values for rKi were 1, 5 and 20 ∙10–3 h−1, representing real values of Ki for tissue without target expression [8], and two levels of target expression, respectively. The chosen rVT were 0.1, 0.2 and 0.5. These values were comparable to baseline values for VT as found by Jauw et al. [8], which agreed with predicted values for VT as sum of antibody biodistribution coefficient [17] and the plasma volume fraction. The noise levels of the TAC at time 0 were varied from 5%, 10% to 15%, equal to noise levels for the TAC previously used in a Patlak simulation study [18]. Simulations were performed in MATLAB (v9.3.0.713579) [16] using in-house written code (see Additional file 1). Simulations showed that bias in Ki was negative in all situations, see Figs. 3, 4 and 5 and Table 2. Inclusion of a time point at 24 h p.i. improved accuracy and precision of Patlak results in almost all simulations. Simulations with 89Zr-huJ591, 89Zr-trastuzumab and 89Zr-pertuzumab IF, noise level of 5%, rKi of 5·10–3 h−1 and rVT of 0.2 are shown in Fig. 3, and results are listed in Table 2. Including a time point at 24 h p.i. reduced bias and variability in VT for all three IF. Bias in Ki was reduced for 89Zr-huJ591 and remained similar for 89Zr-trastuzumab and 89Zr-pertuzumab. Variability in Ki remained similar for 89Zr-huJ591 and reduced slightly for 89Zr-trastuzumab and 89Zr-pertuzumab. Therefore, time point 24 h p.i. was included in all subsequent simulations. Percentage bias and variability of Ki (A) and VT (B) per time point combination, for 89Zr-huJ591, 89Zr-trastuzumab and 89Zr-pertuzumab input functions and time activity curves with 5% noise, rKi = 5·10−3 [h−1] and rVT = 0.2. Combinations including 24 h post-injection showed smaller bias and variability than combinations without 24 h p.i. rKi = reference Ki, rVT = reference VT Absolute Ki (A and B) and VT (C and D) values per time point combination for rKi = 1 and 20·10–3 [h−1], and for rVT = 0.1 and 0.5, for 89Zr-pertuzumab input function and time activity curves with 5% noise. All time point combinations on the x-axis also included 24 h post-injection. rKi = reference Ki, rVT = reference VT Absolute Ki (A and B) and VT (C and D) values per time point combination for 89Zr-huJ591, 89Zr-trastuzumab and 89Zr-pertuzumab input functions, with rKi = 1 and 20·10−3 [h−1], rVT = 0.1 and time activity curves with 5% noise. All time point combinations on the x-axis also included 24 h post-injection. rKi = reference Ki, rVT = reference VT Table 2 Smallest and largest percentage bias and variability of simulations with and without time point 24 h p.i. 
for both Ki and VT Simulations with 89Zr-pertuzumab as IF and 5% noise level showed that bias in Ki ranged from − 0.5% (absolute bias of − 5·10–6 for Ki = 1·10−3 and Vt = 0.1) to − 6% (absolute bias of − 1.1·10−3 for Ki = 20·10−3 and Vt = 0.5) and bias in VT ranged from 2% (absolute bias of 0.01 for Vt = 0.5 and Ki = 1·10−3) to − 16% (absolute bias of − 0.016 for Vt = 0.1 and Ki = 1·10−3). Increasing the values for rKi and rVT resulted in increased variability in Ki and VT. Higher values for rKi also increased bias in Ki. However, bias in Ki resulting from increased rVT and bias in VT resulting from increased rKi and rVT remained similar, see Fig. 4. Simulations with 89Zr-huJ591, 89Zr-trastuzumab and 89Zr-pertuzumab IF, rKi of 1·10−3 h−1 and rVT of 0.2 showed a threefold increase in variability in Ki and VT with higher noise levels, bias remained similar. For 89Zr-huJ591, increasing the noise level from 5 to 15% increased variability in Ki (SE from 23.0 to 68.0% and from 30.0 to 90.6%, respectively) and variability in VT (SE from 10.1 to 29.6% and 29.2 to 86.1%, respectively), while biases remained similar for Ki (from − 4.9 to − 5.1 and − 16 to − 16%, respectively) and VT (from − 1.6 to − 2.3% and 2.3 to 1.8%, respectively). Results of the other two IFs showed the same pattern. The noise level dependency was similar for higher rKi and rVT, however with higher bias and variability because of increased rKi and rVT. A decrease in AUCp of the IF (in the order 89Zr-pertuzumab, 89Zr-trastuzumab, 89Zr-huJ591) resulted in increased bias in Ki and increased variability in VT with increased rKi, see Fig. 5. For rKi values of 20·10−3 h−1, bias in Ki also depended on the included time points, where the combinations 24, 48 and 192 h p.i. and 24, 144 and 192 h p.i. showed a larger underestimation of Ki of − 16% (absolute bias of − 3.2·10−3 for Ki = 20·10−3 and Vt = 0.1) for 89Zr-huJ591 IF as compared to − 10% for 89Zr-trastuzumab (absolute bias of − 2.0·10−3 for Ki = 20·10−3 and Vt = 0.1) and − 5.4% for 89Zr-pertuzumab IF (absolute bias of − 1.1·10−3 for Ki = 20·10−3 and Vt = 0.1). Decreased AUCp of the IF also showed increased variability in Ki and VT for increased rVT; however, bias remained similar. Overall, when including time point 24 h p.i., there were only small differences found in bias and variability between different time point combinations. Only for high Ki values and the 89Zr-huJ591 IF (with faster clearance of the 89Zr-mAb from blood), bias in Ki and VT showed a larger dependence on included time points, see Fig. 5. For all IFs, rKi, rVT, and time point combinations with noise level of 5%, percentage bias in Ki ranged from − 0.5 to − 16%. This study evaluated the effect of the choice of imaging time points on the accuracy and precision of Patlak linearization for 89Zr-immuno-PET, considering different conditions. Simulations showed that inclusion of a PET scan and blood sample at 24 h p.i. improves accuracy and precision of Patlak results. Different combinations of later time points did not change the accuracy and precision in most cases. Moreover, increase in rKi, rVT and noise level decreased accuracy and precision of Patlak results. Additionally, IFs with smaller AUCp showed decreased accuracy and precision of Patlak results as compared to IFs with larger AUCp. Underestimation of K i Bias in Ki was negative in all simulations. This can be explained by the shape of the IF in combination with the calculation of AUCp in the Patlak equation [6]. 
In case the IF is fully described, for instance with a bi-exponential equation, determining the AUCp by integration will result in the true value for AUCp. However, when only a finite set of points is known from the IF, determining the AUCp will be based on trapezoidal numerical integration. For the simulations in this study, the latter applies, because data sampling is always finite. Since the activity concentration in plasma decreases over time in an exponential manner, the shape of the IF is curved downwards, leading to an overestimation of the AUCp with trapezoidal numerical integration. The overestimated AUCp increases the x-coordinates of the Patlak plot, which is AUCp/ACp, while the y-coordinates remain the same, because the ratio ACt/ACp does not change. This results in a decreased positive slope of the Patlak plot, e.g., negative bias of Ki. 24 h time point Inclusion of time point 24 h p.i. showed to improve accuracy and precision of Patlak linearization. This is also due to the better assessment of the shape of the IF and the calculation of AUCp as detailed before. The better the curve of the IF is described, by adding a time point in the most curved part of the IF, the more accurate the determination of AUCp and Patlak parameters. One assumption for Patlak linearization is that equilibrium is reached between the 89Zr-mAb concentration in plasma and in the reversible tissue compartment, meaning that all fluxes are constant with respect to time [6]. In this study, activity concentrations in tissue were simulated by means of Patlak linearization and therefore were directly in equilibrium with activity concentrations in plasma. However, mAbs are relatively large proteins, therefore distribution inside the body takes relatively long, so tissue is not in rapid equilibrium with plasma [7]. Therapeutic antibodies cetuximab and trastuzumab showed approximately homogeneous distributions after 24 h p.i. in tumor-bearing mice [19]. For this reason, a period of 24 h was estimated to reach equilibrium between tissue and plasma. Additionally, from a practical point of view, it would not be possible to include time points after approximately 12 h, because PET scans should then be obtained outside working hours. Hence, time points before 24 h p.i. were not included in the simulations. This moment of equilibrium may differ between 89Zr-mAbs, and inclusion of a slightly earlier or later time point may be better depending on the mAb pharmacokinetics. Time point combinations After inclusion of the 24 h p.i. time point, different time point combinations barely influenced Patlak results, which is advantageous from a practical perspective. Postponing a late imaging time point to a different day would not influence Patlak results. This is in contrast with obtaining the SUV, for which differences in the uptake time between injection and PET scan does influence the result, because SUV changes as a function of time [20]. In case the assumption of equal clearance between patients is true, comparisons of SUVs between patients would only be possible for PET scans that are obtained at the same uptake time post-injection [4]. Therefore, postponing a PET scan, resulting in different scan days for patients accompanied by different plasma activity concentrations, will influence SUV results. 
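The magnitude of the AUCp overestimation discussed under "Underestimation of Ki", and the gain from including the 24 h p.i. sample, can be checked numerically with the same kind of hypothetical bi-exponential input function as in the earlier sketch (illustrative coefficients only):

```matlab
% Illustrative check: trapezoidal AUCp over sparse samples overestimates the true
% integral of a convex, decaying input function; adding the 24 h p.i. sample helps.
A = 8; a = -0.004; B = 4; b = -0.05;                         % hypothetical coefficients
ifun     = @(t) A*exp(a*t) + B*exp(b*t);
AUC_true = A*(exp(a*192)-1)/a + B*(exp(b*192)-1)/b;          % analytic integral, 0-192 h

t_late  = [0 48 96 192];                                     % without 24 h p.i.
t_early = [0 24 96 192];                                     % with 24 h p.i.
err_late  = 100*(trapz(t_late,  ifun(t_late))  - AUC_true)/AUC_true;
err_early = 100*(trapz(t_early, ifun(t_early)) - AUC_true)/AUC_true;
% With these coefficients both errors are positive (overestimation), and err_early
% is smaller than err_late: the early sample captures more of the curvature of the
% input function, so AUCp, and hence Ki, is biased less.
```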
Apart from the ability to distinguish between reversible and irreversible, and potentially between non-specific and specific uptake of 89Zr-mAbs [8], the option to postpone a PET scan is another advantage of using Patlak linearization over using SUV in the quantification of 89Zr-immuno-PET. Reference K i and V T Simulations showed that increasing rKi and rVT resulted in similar or increased bias and variability in both Ki and VT. As Patlak linearization is only applied when the assumption of irreversible uptake is met, Ki is never zero. Additionally, Jauw et al. [8] showed that organs without target expression have Ki values higher than zero, representing the catabolic rate of 89Zr-mAbs in healthy tissue. Values for Ki in this study are therefore all above zero. Noise levels In this study, noise was approximated based on counting statistics, which resulted in noise increasing over time. This was similar to results from a study about noise-induced variability in PET imaging for 89Zr-immuno-PET, where recovery coefficients (RC) also increased over time from day 0 to day 6 [21]. RC was defined as 1.96*SD(%). RCs found for Kidney, lung, spleen and liver combined ranged from 2 to 11 [21], resulting in a maximum SD of approximately 5%. Similarly, SD derived from the RCs of tumor SUVpeak results in 15%. Simulations including TACs with a 5% noise level may therefore represent biodistribution and TACs with a 15% noise level may represent tumor uptake. Increasing the noise level from 5 to 15% only increased the variability, biases remained the same. Additionally, results of simulations with a noise level of 15% showed the same pattern as simulations with a 5% noise level and were chosen not to be presented. Input functions The literature search provided five different 89Zr-mAb plasma IFs in patients, of which three were used for the simulations, while there are currently 119 therapeutic antibodies approved by the FDA [22]. However, these three 89Zr-mAb plasma IFs used in this study provide a wide range of clearances, covering substantial variability in IFs. Simulations showed a dependency of Patlak results on the IF. For high rKi, accuracy and precision in Patlak results decreased with AUC of the IF (i.e., faster clearance), in the following order: 89Zr-pertuzumab, 89Zr-trastuzumab and 89Zr-huJ591. A decrease in AUCp will result in lower x-coordinates of the Patlak plot, thereby bringing the datapoints closer together resulting in higher contribution of noise. The AUCp is the integral of the activity concentration in plasma, which is the total 89Zr-mAbs present in the plasma cumulated over time from injection to moment of PET scan. For the simulations, the IF and rKi were regarded as two independent variables; however, they are physiologically related. For IFs with lower AUCp, so faster clearance, higher irreversible uptake in tissue (rKi) is expected. However, simulations showed that a higher rKi for the 89Zr-huJ591 IF resulted in decreased accuracy of Ki (− 16%) and precision of VT as compared to the other IFs. This indicates that accuracy and precision of Patlak results are worse for 89Zr-mAbs with faster clearance combined with higher irreversible uptake. However, for volumes of interest showing high irreversible uptake, a bias in Ki of − 16% would not change the (clinical) decision-making based on the data, because the observed irreversible uptake would still be high. 
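The bias and variability behaviour summarised above can also be mimicked with a small Monte Carlo sketch. The sketch below is an illustration under assumed values (a toy bi-exponential input function, rKi = 5·10−3 h−1, rVT = 0.3, and three samples at 24, 96 and 192 h p.i.), not a reproduction of the published simulations: the Patlak x-coordinate is computed by trapezoidal integration over the sparse samples only, which overestimates AUCp for a convex, decaying input function and therefore pushes the fitted Ki down, while increasing the noise level mainly widens the spread of the estimates.

import numpy as np

rng = np.random.default_rng(0)

def cp(t):
    # toy bi-exponential plasma input function (assumed, arbitrary units), t in hours
    return 0.7 * np.exp(-0.02 * t) + 0.3 * np.exp(-0.2 * t)

def auc_dense(t, n=200_000):
    # near-exact AUC of the input function, used only to generate the "true" tissue signal
    g = np.linspace(0.0, t, n)
    y = cp(g)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(g)) / 2.0)

times = np.array([24.0, 96.0, 192.0])          # assumed sampling scheme (h p.i.)
ki_true, vt_true = 5e-3, 0.3                   # assumed rKi (1/h) and rVT

act_clean = ki_true * np.array([auc_dense(t) for t in times]) + vt_true * cp(times)

# AUCp as an analyst would compute it: trapezoid over the sparse samples only (from t = 0),
# which overestimates the area under a convex, decaying curve.
ts = np.concatenate(([0.0], times))
auc_sparse = np.cumsum((cp(ts)[1:] + cp(ts)[:-1]) * np.diff(ts) / 2.0)
x = auc_sparse / cp(times)                     # Patlak x-coordinates

def fitted_ki(noise_frac):
    act = act_clean * (1.0 + noise_frac * rng.standard_normal(times.size))  # multiplicative noise
    slope, _ = np.polyfit(x, act / cp(times), 1)
    return slope

for noise in (0.05, 0.15):
    fits = np.array([fitted_ki(noise) for _ in range(2000)])
    bias_pct = 100.0 * (fits.mean() - ki_true) / ki_true
    spread_pct = 100.0 * fits.std() / ki_true
    print(f"noise {noise:.0%}: mean bias {bias_pct:+.1f}%, spread {spread_pct:.1f}% of true Ki")

In this toy setup the bias is negative and stays roughly the same at both noise levels, while the spread roughly triples from 5% to 15% noise, which is the qualitative pattern reported above.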
This study considers input functions with binding of targets on cells that do not redistribute during the course of the PET studies (HER2 for trastuzumab and pertuzumab, and PSMA for huJ591). However, the usefulness of Patlak linearization may be limited in case of 89Zr-mAbs that bind to mobile immune cells, such as the PD-1 receptors on T-cells. In order to apply Patlak linearization, an equilibrium between reversible processes is assumed as well as a constant density of specific targets or receptors. Changes in receptor availability during the course of the study may introduce inaccuracies in Patlak linearization. Yet, Patlak analysis also has several advantages over SUV. Patlak linearization can also be applied with higher mass dose. However, there are two phenomena that need to be considered. First of all, higher mass doses will result in slower plasma clearance. Patlak linearization takes into account the mAb concentration in plasma (or input function) and no assumptions are required with regard to (changes in) plasma clearance as the measured plasma kinetics are used. Secondly, a higher administered mass dose will result in lower uptake in tissue of interest. Patlak linearization is still valid with higher mass doses; however, lower Ki values are expected because of the reduced receptor availability/higher receptor or target occupancy. Also, Menke-van der Houven van Oordt et al. [9] showed in their study that Patlak linearization applied to PET imaging data with different administered mass doses allows evaluation of the optimal therapeutic dose. By plotting the Patlak Ki values against increasing mass doses a S-curve can be obtained. Ki values decrease because of target binding competition between labeled and unlabeled mAbs. This curve allows evaluation of the 50% inhibitory mass dose (ID50). The ID50, the dose at which 50% of the targets are occupied, can be used in establishing the optimal therapeutic dose [9]. This study evaluated the effect of imaging time points on the accuracy and precision of Patlak results, for different IFs, imaging time points, noise levels, and tissues with different levels of reversible and irreversible uptake. Quantification of 89Zr-immuno-PET using Patlak linearization can generate accurate results within − 0.5% and − 16% bias for Ki (at a 5% noise level), provided that a 24 h p.i. time point and two later time points are included. The exact timing of the two other scans and samples is, however, not critical as opposed to SUV-based quantification. All data and scripts generated during the current study are available from the corresponding author on reasonable request. Kimiz-Gebologlu I, Gulce-Iz S, Biray-Avci C. Monoclonal antibodies in cancer immunotherapy. Mol Biol Rep. 2018;45:2935–40. van Dongen G, Beaino W, Windhorst AD, et al. The role of (89)Zr-immuno-PET in navigating and derisking the development of biopharmaceuticals. J Nucl Med. 2021;62:438–45. Boellaard R. Standards for PET image acquisition and quantitative data analysis. J Nucl Med. 2009;50(Suppl 1):11S-20S. Lammertsma AA, Hoekstra CJ, Giaccone G, Hoekstra OS. How should we analyse FDG PET studies for monitoring tumour response? Eur J Nucl Med Mol Imaging. 2006;33(Suppl 1):16–21. Vriens D, Visser EP, de Geus-Oei LF, Oyen WJ. Methodological considerations in quantification of oncological FDG PET studies. Eur J Nucl Med Mol Imaging. 2010;37:1408–25. Patlak CS, Blasberg RG, Fenstermacher JD. Graphical evaluation of blood-to-brain transfer constants from multiple-time uptake data. 
J Cereb Blood Flow Metab. 1983;3:1–7. Lobo ED, Hansen RJ, Balthasar JP. Antibody pharmacokinetics and pharmacodynamics. J Pharm Sci. 2004;93:2645–68. Jauw YWS, O'Donoghue JA, Zijlstra JM, et al. (89)Zr-Immuno-PET: toward a noninvasive clinical tool to measure target engagement of therapeutic antibodies in vivo. J Nucl Med. 2019;60:1825–32. der Houven M, van Oordt CW, McGeoch A, Bergstrom M, et al. Immuno-PET imaging to assess target engagement: experience from (89)Zr-Anti-HER3 mAb (GSK2849330) in patients with solid tumors. J Nucl Med. 2019;60:902–9. Pandit-Taskar N, O'Donoghue JA, Beylergil V, et al. (8)(9)Zr-huJ591 immuno-PET imaging in patients with advanced metastatic prostate cancer. Eur J Nucl Med Mol Imaging. 2014;41:2093–105. O'Donoghue JA, Lewis JS, Pandit-Taskar N, et al. Pharmacokinetics, biodistribution, and radiation dosimetry for (89)Zr-trastuzumab in patients with esophagogastric cancer. J Nucl Med. 2018;59:161–6. Ulaner GA, Lyashchenko SK, Riedl C, et al. First-in-human human epidermal growth factor receptor 2-targeted imaging using (89)Zr-Pertuzumab PET/CT: dosimetry and clinical application in patients with breast cancer. J Nucl Med. 2018;59:900–6. O'Donoghue JA, Danila DC, Pandit-Taskar N, et al. Pharmacokinetics and biodistribution of a [(89)Zr]Zr-DFO-MSTP2109A anti-STEAP1 antibody in metastatic castration-resistant prostate cancer patients. Mol Pharm. 2019;16:3083–90. Thorneloe KS, Sepp A, Zhang S, et al. The biodistribution and clearance of AlbudAb, a novel biopharmaceutical medicine platform, assessed via PET imaging in humans. EJNMMI Res. 2019;9:45. Cherry SR, Sorenson J, Phelps ME, Methé BM. Physics in nuclear medicine. Med Phys. 2004;31:2370–1. MATLAB. version 9.3.0.713579 (R2017b). Natick, Massachusetts: The MathWorks Inc.; 2017. Shah DK, Betts AM. Antibody biodistribution coefficients: inferring tissue concentrations of monoclonal antibodies based on the plasma concentrations in several preclinical species and human. mAbs. 2013;5:297–305. van Sluis J, Yaqub M, Brouwers AH, Dierckx R, Noordzij W, Boellaard R. Use of population input functions for reduced scan duration whole-body Patlak (18)F-FDG PET imaging. EJNMMI Phys. 2021;8:11. Lee CM, Tannock IF. The distribution of the therapeutic monoclonal antibodies cetuximab and trastuzumab within solid tumors. BMC Cancer. 2010;10:255. Keyes JW Jr. SUV: standard uptake or silly useless value? J Nucl Med. 1995;36:1836–9. Jauw YWS, Heijtel DF, Zijlstra JM, et al. Noise-induced variability of immuno-PET with zirconium-89-labeled antibodies: an analysis based on count-reduced clinical images. Mol Imaging Biol. 2018;20:1025–34. Cai HH. Therapeutic Monoclonal Antibodies Approved by FDA in 2020. Clin Res Immunol. 2021;4(1):1–2. This work has received funding from the Innovative Medicines Initiative 2 Joint Undertaking (JU) under grant agreement No. 831514 (Immune-Image). The JU receives support from the European Union's Horizon 2020 research and innovation programme and EFPIA. Department of Radiology and Nuclear Medicine, Amsterdam UMC location Vrije Universiteit Amsterdam, Boelelaan 1117, Amsterdam, The Netherlands Jessica E. Wijngaarden, Marc C. Huisman, Yvonne W. S. Jauw & Ronald Boellaard Department of Medical Oncology, Amsterdam UMC location Vrije Universiteit Amsterdam, Boelelaan 1117, Amsterdam, The Netherlands Johanna E. E. Pouw & C. Willemien Menke-van der Houven van Oordt Department of Hematology, Amsterdam UMC location Vrije Universiteit Amsterdam, Boelelaan 1117, Amsterdam, The Netherlands Yvonne W. S. 
Jauw Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands Jessica E. Wijngaarden, Marc C. Huisman, Johanna E. E. Pouw, C. Willemien Menke-van der Houven van Oordt, Yvonne W. S. Jauw & Ronald Boellaard Jessica E. Wijngaarden Marc C. Huisman Johanna E. E. Pouw C. Willemien Menke-van der Houven van Oordt Ronald Boellaard All authors contributed to the study conception and design. Data collection and analysis were performed by JW, MH and RB. The first draft of the manuscript was written by JW and all authors (MH, JP, WM, YJ and RB) commented on previous versions of the manuscript. All authors (JW, MH, JP, WM, YJ and RB) read and approved the final manuscript. Correspondence to Jessica E. Wijngaarden. Additional file 1. In-house written MATLAB code for Patlak linearization. The in-house written MATLAB function provided in Supplemental 1 was used for Patlak linearization calculations. Wijngaarden, J.E., Huisman, M.C., Pouw, J.E.E. et al. Optimal imaging time points considering accuracy and precision of Patlak linearization for 89Zr-immuno-PET: a simulation study. EJNMMI Res 12, 54 (2022). https://doi.org/10.1186/s13550-022-00927-6 89Zr-immuno-PET
Mathematical theory of computation The last twenty years have witnessed most vigorous growth in areas of mathematical study connected with computers and computer science. The enormous development of computers and the resulting profound changes in scientific methodology have opened new horizons for the science of mathematics at a speed without parallel during the long history of mathematics. The following two observations should be kept in mind. First, various developments in mathematics have directly initiated the "beginning" of computers and computer science. Secondly, advances in computer science have induced very vigorous developments in certain branches of mathematics. More specifically, the second of these observations refers to the growing importance of discrete mathematics — and only the very beginning of the influence of discrete mathematics is witnessed now (1990). Because of reasons outlined above, mathematics plays a central role in the foundations of computer science. A number of significant research areas can be listed in this connection. It is interesting to note that these areas also reflect the historical development of computer science. 1) The classical computability theory initiated by the work of K. Gödel, A. Tarski, A. Church, E. Post, A. Turing, and S.C. Kleene occupies a central position. This area is rooted in mathematical logic (cf. Computable function). 2) In classical formal language and automata theory, the central notions are those of an automaton, a grammar (cf., e.g., Grammar, generative), and a language (cf., e.g., Alphabet). Apart from developments in area 1), the work of N. Chomsky on the foundations of natural languages, as well as the work of Post concerning rewriting systems, should be mentioned here. It is, however, fascinating to observe that the modern theory of formal languages and rewriting systems was initiated by the work of the Norwegian mathematician A. Thue at the beginning of the 20th century! (Cf. Formal languages and automata; $L$-systems.) 3) An area initiated in the 1960s is complexity theory. In it the performance of an algorithm is investigated. The central notions are those of a tractable and an intractable problem. This area is gaining in importance because of several reasons, one of them being the advances in area 4). (Cf. Complexity theory; Algorithm, computational complexity of an.) 4) Quite recent developments concerning the security of computer systems have increased the importance of cryptography to a great extent. Moreover, the idea of public-key cryptography is of specific theoretical interest and has drastically changed the ideas concerning what is doable in communication systems. (Cf. Cryptography.) Areas 1) through 4) constitute the core of the mathematical theory of computation. Many other important areas dealing with the mathematical foundations of computer science (e.g., semantics and the theory of correctness of programming languages, the theory of data structures, and the theory of data bases) have been developed. All the areas listed above comprise a fascinating part of contemporary mathematics that is very dynamic in character, full of challenging problems requiring most interesting and ingenious mathematical techniques. The basic question in the theory of computing can be formulated in any of the following ways: What is computable? For which problem can one construct effective mechanical procedures that solve every instance of the problem? Which problems possess algorithms for their solution? 
Fundamental developments in mathematical logic during the 1930-s showed the existence of unsolvable problems: No algorithm can possibly exist for the solution of the problem. Thus, the existence of such an algorithm is a logical impossibility — its non-existence has nothing to do with ignorance. This state of affairs led to the present formulation of the basic question in the theory of computing. Previously, people always tried to construct an algorithm for every precisely formulated problem until (if ever) the correct algorithm was found. The basic question is of definite practical significance: One should not try to construct algorithms for an unsolvable problem. (There are some notorious examples of such attempts in the past.) A model of computation is necessary for establishing unsolvability. If one wants to show that no algorithm for a specific problem exists, one must have a precise definition of an algorithm. The situation is different in establishing solvability: it suffices to exhibit some particular procedure that is effective in the intuitive sense. (The terms "algorithm" and "effective procedure" are used synonymously.) One is now confronted with the necessity of formalizing a notion of a model of computation that is general enough to cover all conceivable computers, as well as the intuitive notion of an algorithm. Some initial observations are in order. Assume that the algorithms that need formalization compute functions mapping the set of non-negative integers into the same set. Although this is not important at this point, one could observe that this assumption is no essential restriction of generality. This is due to the fact that other input and output formats can be encoded into non-negative integers. After having somehow defined a general model of computation, denoted by $MC$, one observes that each specific instance of the model possesses a finitary description; that is, it can be described in terms of a formula or finitely many words. By enumerating these descriptions, one obtains an enumeration $MC_1,MC_2,\dots,$ of all specific instances of the general model of computation. In this enumeration, each $MC_i$ represents some particular algorithm for computing a function from non-negative integers into non-negative integers. Denote by $MC_i(j)$ the value of the function computed by $MC_i$ for the argument value $j$. Define a function $f(x)$ by \begin{equation}f(x)=MC_x(x)+1.\label{a1}\end{equation} Clearly, the following is an algorithm (in the intuitive sense) to compute the function $f(x)$. Given an input $x$, start the algorithm $MC_x$ with the input $x$ and add one to the output. However, is there any specific algorithm among the formalized $MC$-models that would compute the function $f(x)$? The answer is no, and the argument is an indirect one. Assume that $MC_t$ would give rise to such an algorithm, where $t$ is some natural number. Hence, for all $x$, \begin{equation}f(x)=MC_t(x).\label{a2}\end{equation} A contradiction now arises by substituting the value $t$ for the variable $x$ in both \eqref{a1} and \eqref{a2}. This contradiction, referred to as the dilemma of diagonalization, shows that independently of the model of computation — indeed, no $MC$-model was specified — there will be algorithms not formalized by the model. There is a simple and natural way to avoid the dilemma of diagonalization. So far the $MC_i$-algorithms are assumed to be defined everywhere: For all inputs $j$, the algorithm $MC_i$ produces an output. 
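A small, purely illustrative computation may clarify the diagonal argument. The list MC below is a hypothetical, finite stand-in for an enumeration of everywhere-defined algorithms (written here in Python, though any language would do); the point is only that the diagonal function disagrees with the $i$-th entry at the argument $i$.

# Toy stand-in for an enumeration MC_0, MC_1, ... of total (everywhere-defined) algorithms.
MC = [
    lambda j: 0,        # the constant zero function
    lambda j: j,        # the identity function
    lambda j: j * j,    # squaring
    lambda j: j + 7,    # adding seven
]

def f(x):
    # the diagonal function: run the x-th algorithm on input x and add one
    return MC[x](x) + 1

for t in range(len(MC)):
    # f cannot coincide with MC_t, because the two functions differ at the argument t
    assert f(t) != MC[t](t)
    print(f"f({t}) = {f(t)}, while MC_{t}({t}) = {MC[t](t)}")

The same disagreement at the argument $t$ arises for any list whatsoever, which is exactly the contradiction obtained from \eqref{a1} and \eqref{a2}; note that the argument presumes that each $MC_i$ is defined on every input.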
This assumption is unreasonable from many points of view, one of which is computer programming; one cannot be sure that every program produces an output for every input. Therefore, one should allow also the possibility that some of the $MC_i$-algorithms enter an infinite loop for some inputs $j$ and, consequently, do not produce any output for such $j$. Moreover, the set of such values $j$ is not known a priori. Thus, some algorithms in the list \begin{equation}MC_1,MC_2,\dots,\label{a3}\end{equation} produce an output only for some of the possible inputs; that is, the corresponding functions are not defined for all non-negative integers. The dilemma of diagonalization does not arise after the inclusion of such partial functions among the functions computed by the algorithms of \eqref{a3}. Indeed, the argument presented above does not lead to a contradiction because $MC_t$ is not necessarily defined. The general model of computation, now referred to as a Turing machine, was introduced quite a long time before the advent of electronic computers. Turing machines constitute by far the most widely-used general model of computation. Other general models are Markov algorithms (cf. Normal algorithm), Post systems (cf. Post canonical system), grammars, and $L$-systems. Each of these models leads to a list such as \eqref{a3}, where partial functions are also included. All models are also equivalent in the sense that they define the same set of solvable problems or computable functions. Only the general question of characterizing the class of solvable problems has been considered here. This question was referred to as basic in the theory of computing. It led to a discussion of general models of computation. More specific questions in the theory of computing deal with the complexity of solvable problems. Is a problem $P_1$ more difficult than $P_2$ in the sense that every algorithm for $P_1$ is more complex (for instance, in terms of time or memory space needed) than a reasonable algorithm for $P_2$? What is a reasonable classification of problems in terms of complexity? Which problems are so complex that they can be classified as intractable in the sense that all conceivable computers require an unmanageable amount of time for solving the problem? Undoubtedly, such questions are of crucial importance from the point of view of practical computing. A problem is not yet settled if it is known to be solvable or computable and remains intractable at the same time. As a typical example, many recent results in cryptography are based on the assumption that the factorization of the product of two large primes is impossible in practice. More specifically, if one knows a large number $n$ consisting of, for example, 200 digits and if one also knows that $n$ is the product of two large primes, it is still practically impossible, in general, to find the two primes. This assumption is reasonable because the problem described is intractable, at least in view of the factoring algorithms know at present. Of course, from a merely theoretical point of view where complexity is not considered, such a factoring algorithm can be trivially constructed. Such specific questions lead to more specific models of computing. The latter are obtained either by imposing certain restrictions on Turing machines or else by some direct construction. Of particular importance is the finite automaton (cf. also Automaton, finite). 
It is a model of a strictly finitary computing device: The automaton is not capable of increasing any of its resources during the computation. It is clear that no model of computation is suitable for all situations; modifications and even entirely new models are needed to match new developments. Theoretical computer science by now has a history long enough to justify a discussion about good and bad models. The theory is mature enough to produce a great variety of different models of computation and prove some interesting properties concerning them. Good models should be general enough so that they are not too closely linked with any particular situation or problem in computing — they should be able to lead the way. On the other hand, they should not be too abstract. Restrictions on a good model should converge, step by step, to some area of real practical significance. Typical examples are certain restrictions of abstract grammars especially suitable for considerations concerning parsing. The resulting aspects of parsing are essential in compiler construction. To summarize: A good model represents a well-balanced abstraction of a real practical situation — not too far from and not too close to the real thing. Formal languages constitute a descriptive tool for models of computation, both in regard to input-output format and the mode of operation. Formal language theory is by its very essence an interdisciplinary area of science; the need for a formal grammatical description arises in various scientific disciplines, ranging from linguistics to biology. [a1] A. Salomaa, "Formal languages" , Acad. Press (1973) [a2] G. Rozenberg, A. Salomaa, "The mathematical theory of $L$ systems" , Acad. Press (1980) [a3] W. Kuich, A. Salomaa, "Semirings, automata, languages" , Springer (1986) [a4] A. Salomaa, "Computation and automata" , Cambridge Univ. Press (1985) Mathematical theory of computation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Mathematical_theory_of_computation&oldid=43629 This article was adapted from an original article by G. RozenbergA. Salomaa (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://encyclopediaofmath.org/index.php?title=Mathematical_theory_of_computation&oldid=43629" TeX done
Independent Samples T-Test – Quick Introduction By Ruben Geert van den Berg under Statistics A-Z & T-Tests Independent Samples T-Test - What Is It? Test Statistic Statistical Significance An independent samples t-test evaluates if 2 populations have equal means on some variable. If the population means are really equal, then the sample means will probably differ a little bit but not too much. Very different sample means are highly unlikely if the population means are equal. This sample outcome thus suggest that the population means weren't equal after all. The samples are independent because they don't overlap; none of the observations belongs to both samples simultaneously. A textbook example is male versus female respondents. Some island has 1,000 male and 1,000 female inhabitants. An investigator wants to know if males spend more or fewer minutes on the phone each month. Ideally, he'd ask all 2,000 inhabitants but this takes too much time. So he samples 10 males and 10 females and asks them. Part of the data are shown below. Next, he computes the means and standard deviations of monthly phone minutes for male and female respondents separately. The results are shown below. These sample means differ by some (99 - 106 =) -7 minutes: on average, females spend some 7 minutes less on the phone than males. But that's just our tiny samples. What can we say about the entire populations? We'll find out by starting off with the null hypothesis. The null hypothesis for an independent samples t-test is (usually) that the 2 population means are equal. If this is really true, then we may easily find slightly different means in our samples. So precisely what difference can we expect? An intuitive way for finding out is a simple simulation. I created a fake dataset containing the entire populations of 1,000 males and 1,000 females. On average, both groups spend 103 minutes on the phone with a standard-deviation of 14.5. Note that the null hypothesis of equal means is clearly true for these populations. I then sampled 10 males and 10 females and computed the mean difference. And then I repeated that process 999 times, resulting in the 1,000 sample mean differences shown below. First off, the mean differences are roughly normally distributed. Most of the differences are close to zero -not surprising because the population difference is zero. But what's really interesting is that mean differences between, say, -12.5 and 12.5 are pretty common and make up 95% of my 1,000 outcomes. This suggests that an absolute difference of 12.5 minutes is needed for statistical significance at α = 0.05. Last, the standard deviation of our 1,000 mean differences -the standard error- is 6.4. Note that some 95% of all outcomes lie between -2 and +2 standard errors of our (zero) mean. This is one of the best known rules of thumb regarding the normal distribution. Now, an easier -though less visual- way to draw these conclusions is using a couple of simple formulas. Again: what is a "normal" sample mean difference if the population difference is zero? First off, this depends on the population standard deviation of our outcome variable. We don't usually know it but we can estimate it with $$Sw = \sqrt{\frac{(n_1 - 1)\;S^2_1 + (n_2 - 1)\;S^2_2}{n_1 + n_2 - 2}}$$ in which \(Sw\) denotes our estimated population standard deviation. 
For our data, this boils down to $$Sw = \sqrt{\frac{(10 - 1)\;224 + (10 - 1)\;191}{10 + 10 - 2}} ≈ 14.4$$ Second, our mean difference should fluctuate less -that is, have a smaller standard error- insofar as our sample sizes are larger. The standard error is calculated as $$Se = Sw\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$ and this gives us $$Se = 14.4\; \sqrt{\frac{1}{10} + \frac{1}{10}} ≈ 6.4$$ If the population mean difference is zero, then -on average- the sample mean difference will be zero as well. However, it will have a standard deviation of 6.4. We can now just compute a z-score for the sample mean difference but -for some reason- it's called T instead of Z: $$T = \frac{\overline{X}_1 - \overline{X}_2}{Se}$$ which, for our data, results in $$T = \frac{99.4 - 106.6}{6.4} ≈ -1.11$$ Right, now this is our test statistic: a number that summarizes our sample outcome with regard to the null hypothesis. T is basically the standardized sample mean difference; T = -1.11 means that our difference of -7 minutes is roughly 1 standard deviation below the average of zero. Our t-value follows a t distribution but only if the following assumptions are met: Independent observations or, precisely, independent and identically distributed variables. Normality: the outcome variable follows a normal distribution in the population. This assumption is not needed for reasonable sample sizes (say, N > 25). Homogeneity: the outcome variable has equal standard deviations in our 2 (sub)populations. This is not needed if the sample sizes are roughly equal. Levene's test is sometimes used for testing this assumption. If our data meet these assumptions, then T follows a t-distribution with (n1 + n2 -2) degrees of freedom (df). In our example, df = (10 + 10 - 2) = 18. The figure below shows the exact distribution. Note that we need an absolute t-value of 2.1 for 2-tailed significance at α = 0.05. Minor note: as df becomes larger, the t-distribution approximates a standard normal distribution. The difference is hardly noticeable if df > 15 or so. Last but not least, our mean difference of -7 minutes is not statistically significant: t(18) = -1.11, p ≈ 0.28. This means we've a 28% chance of finding our sample mean difference -or a more extreme one- if our population means are really equal; it's a normal outcome that doesn't contradict our null hypothesis. Our final figure shows these results as obtained from SPSS. Finally, the effect size measure that's usually preferred is Cohen's D, defined as $$D = \frac{\overline{X}_1 - \overline{X}_2}{Sw}$$ in which \(Sw\) is the estimated population standard deviation we encountered earlier. That is, Cohen's D is the number of standard deviations between the 2 sample means. So what is a small or large effect? The following rules of thumb have been proposed: D = 0.20 indicates a small effect; D = 0.50 indicates a medium effect; D = 0.80 indicates a large effect. Cohen's D is painfully absent from SPSS except for SPSS 27. However, you can easily obtain it from Cohens-d.xlsx. Just fill in 2 sample sizes, means and standard deviations and its formulas will compute everything you need to know. By Mishal on September 7th, 2020 How we Develop the hypothesis of given information prove and disprove by statistical analysis with the help of SPSS By William Peck on July 28th, 2021 my earlier question on using ANOVA for just 2 groups seems to point me to the t-test, right? Like I did the same survey 5 years apart to the student body, but the student body completely turned over. 
But can you really do analysis if the scale is simply 1-5? There's not much room for variance, like in your examples of IQ (on the ANOVA page) and the phone minutes (this page). By Ruben Geert van den Berg on July 29th, 2021 Hi William! Yes, ANOVA on 2 groups yields the exact same p-values as (the more usual) t-test. A reason for preferring t-tests, though, is that they yield confidence intervals for the population mean differences you're after. Strictly, computing means is not allowed for Likert scale and all other ordinal variables but most analysts (including us) and standard textbooks do so anyway. The scales (1-5 or 1-10 or 0-1000) don't really matter: you could multiply 1-5 by 100 and dramatically increase your variances. But this won't affect your p-values (try it if you don't believe me). What matters is the ratio of variance between/within groups. Great! Thank you ...
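As a compact recap of the worked example above, and of the ANOVA point raised in the comments, here is a small Python sketch. It assumes SciPy is available, simply re-computes the tutorial's numbers from the reported summary statistics, and then uses two made-up samples only to check that a two-group ANOVA reproduces the (equal-variance) t-test p-value, with F = t².

import numpy as np
from scipy import stats

# Summary statistics from the example: 10 males and 10 females, monthly phone minutes.
m1, var1, n1 = 106.6, 224.0, 10
m2, var2, n2 = 99.4, 191.0, 10

sw = np.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))  # pooled (estimated population) SD
se = sw * np.sqrt(1.0 / n1 + 1.0 / n2)                             # standard error of the mean difference
t = (m2 - m1) / se
df = n1 + n2 - 2
p = 2.0 * stats.t.sf(abs(t), df)                                   # two-tailed p-value
d = (m2 - m1) / sw                                                 # Cohen's D

print(f"Sw = {sw:.1f}, SE = {se:.1f}, t({df}) = {t:.2f}, p = {p:.2f}, Cohen's D = {d:.2f}")
# Expected, up to rounding: Sw ≈ 14.4, SE ≈ 6.4, t(18) ≈ -1.1, p ≈ 0.28, D ≈ -0.5.

# Two-group ANOVA gives the same p-value as the pooled-variance t-test: F equals t squared.
rng = np.random.default_rng(1)
a, b = rng.normal(106, 15, n1), rng.normal(99, 15, n2)             # made-up raw samples
t_raw, p_t = stats.ttest_ind(a, b)                                 # equal-variance t-test
f_raw, p_f = stats.f_oneway(a, b)                                  # one-way ANOVA with 2 groups
print(f"t^2 = {t_raw**2:.3f}, F = {f_raw:.3f}, p from t-test = {p_t:.4f}, p from ANOVA = {p_f:.4f}")

The last two p-values coincide, which is the equivalence mentioned in the reply above; the t-test is still usually preferred because it also yields a confidence interval for the population mean difference.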
On GIT quotients of Hilbert and Chow schemes of curves. Gilberto Bini 1, Margarida Melo 2 and Filippo Viviani 3. 1 Dipartimento di Matematica, Università degli Studi di Milano, Via C. Saldini 50, 20133 Milano, Italy; 2 Departamento de Matemática, Universidade de Coimbra, Largo D. Dinis, Apartado 3008, 3001 Coimbra, Portugal; 3 Dipartimento di Matematica, Università Roma Tre, Largo S. Leonardo Murialdo 1, 00146 Roma, Italy. Received September 2011; Revised January 2012; Published February 2012. The aim of this note is to announce some results on the GIT problem for the Hilbert and Chow scheme of curves of degree $d$ and genus $g$ in the projective space of dimension $d-g$, whose full details will appear in [6]. In particular, we extend the previous results of L. Caporaso up to $d>4(2g-2)$ and we observe that this is sharp. In the range $2(2g-2) < d < \frac{7}{2} (2g-2)$, we get a complete new description of the GIT quotient. As a corollary, we get a new compactification of the universal Jacobian over the moduli space of pseudo-stable curves. Keywords: pseudo-stable curves, Hilbert scheme, Chow scheme, compactified universal Jacobian, stable curves, GIT. Mathematics Subject Classification: Primary: 14L24; Secondary: 14C0. Citation: Gilberto Bini, Margarida Melo, Filippo Viviani. On GIT quotients of Hilbert and Chow schemes of curves. Electronic Research Announcements, 2012, 19: 33-40. doi: 10.3934/era.2012.19.33. References: J. Alper, Adequate moduli spaces and geometrically reductive group schemes, preprint. J. Alper and D. Hyeon, GIT construction of log canonical models of $\bar{M}_g$, preprint. J. Alper, D. Smyth and M. Fedorchuck, Finite Hilbert stability of (bi)canonical curves, preprint. J. Alper, D. Smyth and M. Fedorchuck, Finite Hilbert stability of canonical curves, II. The even-genus case, preprint. J. Alper, D. Smyth and F. van der Wick, Weakly proper moduli stacks of curves, preprint. G. Bini, M. Melo and F. Viviani, GIT for polarized curves, preprint. L. Caporaso, A compactification of the universal Picard variety over the moduli space of stable curves, J. Amer. Math. Soc., 7 (1994), 589-660. doi: 10.1090/S0894-0347-1994-1254134-8. M. Fedorchuk and D. Jensen, Stability of 2nd Hilbert points of canonical curves, preprint. M. Fedorchuk and D. I. Smyth, Alternate compactifications of moduli space of curves, to appear. F. Felici, GIT for curves of low degree, in progress. D. Gieseker, "Lectures on Moduli of Curves," Tata Institute of Fundamental Research Lectures on Mathematics and Physics, 69, Tata Institute of Fundamental Research, Bombay, 1982. J. Harris and I. Morrison, "Moduli of Curves," Graduate Texts in Mathematics, 187, Springer-Verlag, New York, 1998. B. Hassett and D. Hyeon, Log canonical models for the moduli space of curves: First divisorial contraction, Trans. Amer. Math. Soc., 361 (2009), 4471-4489. doi: 10.1090/S0002-9947-09-04819-3. B. Hassett and D. Hyeon, Log canonical models for the moduli space of curves: The first flip, preprint, arXiv:0806.3444. D. Hyeon and Y. Lee, Stability of tri-canonical curves of genus two, Math. Ann., 337 (2007), 479-488.
doi: 10.1007/s00208-006-0046-2. D. Hyeon and I. Morrison, Stability of tails and 4-canonical models, Math. Res. Lett., 17 (2010), 721-729. J. Li and X. Wang, Hilbert-Mumford criterion for nodal curves, preprint. I. Morrison, GIT constructions of moduli spaces of stable curves and maps, in "Geometry of Riemann surfaces and their Moduli Spaces" (eds. L. Ji, et al.), Surveys in Differential Geometry 14, International Press, Somerville, MA, (2010), 315-369. D. Mumford, "Lectures on Curves on an Algebraic Surface," Annals of Mathematics Studies, 59, Princeton University Press, Princeton, N.J., 1966. D. Mumford, Stability of projective varieties, Enseignement Math. (2), 23 (1977), 39-110. D. Schubert, A new compactification of the moduli space of curves, Compositio Math., 78 (1991), 297-313.
Basic question regarding variance and stdev of a sample Suppose there is a very big (infinite?) population of normally distributed values with unknown mean and variance. Suppose also that we have a sample, S, of n values from the entire population. We can calculate mean and standard deviation for this sample (we use n-1 for stdev calculation). The first and most important question is how is stdev(S) related to the standard deviation of the entire population? An illustration for this issue is the second question: Suppose we have an additional number, x, and we would like to test whether it is an vis-a-vis the general population. My intuitive approach is to calculate Z as follows: $Z = \frac{x - mean(S)}{stdev(S)}$ and then test it against standard distribution if n>30 or against t-distribution if n<30. However, this approach doesn't account for n, the size of the sample. What is the right way to solve this question provided there is only single sample S? standard-deviation variance normality-assumption sample unbiased-estimator Ferdi Jonathan JamesJonathan James $\begingroup$ What do you mean by: "we would like to test whether it is an vis-a-vis the general population"? and "single sample S"? $\endgroup$ – user28 Jul 28 '10 at 12:06 The second question seems to ask for a prediction interval for one future observation. Such an interval is readily calculated under the assumptions that (a) the future observation is from the same distribution and (b) is independent of the previous sample. When the underlying distribution is Normal, we just have to erect an interval around the difference of two Gaussian random variables. Note that the interval will be wider than suggested by a naive application of a t-test or z-test, because it has to accommodate the variance of the future value, too. This rules out all the answers I have seen posted so far, so I guess I had better quote one explicitly. Hahn & Meeker's formula for the endpoints of this prediction interval is $$m \pm t \times \sqrt{1 + \frac{1}{n}} \times s$$ where $m$ is the sample mean, $t$ is an appropriate two-sided critical value of Student's $t$ (for $n-1$ df), $s$ is the sample standard deviation, and $n$ is the sample size. Note in particular the factor of $\sqrt{1+1/n}$ instead of $\sqrt{1/n}$. That's a big difference! This interval is used like any other interval: the requested test simply examines whether the new value lies within the prediction interval. If so, the new value is consistent with the sample; if not, we reject the hypothesis that it was independently drawn from the same distribution as the sample. Generalizations from one future value to $k$ future values or to the mean (or max or min) of $k$ future values, etc., exist. There is a extensive literature on prediction intervals especially in a regression context. Any decent regression textbook will have formulas. You could begin with the Wikipedia entry ;-). Hahn & Meeker's Statistical Intervals is still in print and is an accessible read. The first question has an an answer that is so routine nobody seems yet to have given it here (although some of the links provide details). For completeness, then, I will close by remarking that when the population has approximately a Normal distribution, the sample standard deviation is distributed as the square root of a scaled chi-square variate of $n-1$ df whose expectation is the population variance. 
That means (roughly) we expect the sample sd to be close to the population sd and the ratio of the two will usually be $1 + O(1/\sqrt{n-1})$. Unlike parallel statements for the sample mean (which invoke the CLT), this statement relies fairly strongly on the assumption of a Normal population. whuber♦whuber I'm finding it rather tricky to see what you are asking: If you want to know whether the Var(S) is different from the population variance, then see this previous answer. If you want to determine whether the mean(S) and the mean(X) are the same, then look at Independent two-sample t-tests. If you want to test whether mean(S) is equal to the population mean, then see @Srikant answer above, i.e. a one-sample t-test. csgillespiecsgillespie My first answer was full of errors. Here is a corrected version: The correct way to test is as follows: z = (mean(S) - mu) / (stdev(S) / sqrt(n) ) See: Student's t-test Note the following: The sample size is accounted for when you divide the standard deviation by the square root of the sample size. You should also note that the z-test is for testing whether the true mean of the population is some particular value. It does not make sense to substitute x instead of mu in the above statistic. I think you need to nail down the question you are asking, before you can compute an answer. I think this question is way too vague to answer: "test whether it is an vis-a-vis the general population". The only question I think you can answer is this one: If the new value came from the same population as the others, what is the chance that it will be so far (or further) from the sample mean? That is the question that your equation will begin to answer, although it is not quite right. Here is a corrected equation that includes n. t = (x - mean(S))/(stdev(S)/sqrt(n)) Compute the corresponding P value (with n-1 degrees of freedom) and you've answered the question. Harvey MotulskyHarvey Motulsky $\begingroup$ I am not sure I understand what you are saying. I am not sure there is any relationship between a new draw x and the current sample mean mean(S) as x ~ N(mu,sigma^2). Clearly, the draw of x can be anywhere in the support of the distribution. It is more likely to be around mu and less likely to be in the tails but it does not have anything to do with mean(S). $\endgroup$ – user28 Jul 29 '10 at 4:20 $\begingroup$ But you don't and can't know mu or sigma. What you know are the mean and SD of the sample, mean(S) and stdev(S). $\endgroup$ – Harvey Motulsky Jul 30 '10 at 3:51 1) The standard deviation of the sample (stdev(S)) is an unbiased estimate of the standard deviation of the population. 2) Given we have estimated both the population mean and variance we need to take this into account when we evaluate whether a new observation x is a member of this population. We don't use Z = (x - mean(S))/stdev(S), but rather: t = (x - mean(S))/(stdev(S)*sqrt(1 + 1/n)), where n is the sample size of the first sample. We the compare t with a t-distribution with n-1 degrees of freedom to give a p-value. See here: http://en.wikipedia.org/wiki/Prediction_interval#Unknown_mean.2C_unknown_variance This accounts for the sample size both in the divisor (sqrt(1 + 1/n)) and in the degrees of freedom of the t-distribution. ThylacoleoThylacoleo $\begingroup$ It is the variance which is an unbiased estimator. S is not an unbiased estimate of the population standard deviation. $\endgroup$ – Rob Hyndman Aug 18 '10 at 12:13 $\begingroup$ Thanks Rob, I stand corrected! 
This is a subtlety that I was previously unaware of. Others may wish to see: en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation – Thylacoleo Aug 18 '10 at 12:41 "how is stdev(S) related to the standard deviation of the entire population?" I don't know if the "Confidence Interval" concept might be what you are looking for? Stdev(S) is an estimate of the standard deviation of the entire population. To see how good an estimate, confidence intervals could be computed, and these would be dependent on the sample size. See, for example, Simulation and the Monte Carlo Method, Rubinstein & Kroese. Anubala Varikat
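To make the prediction-interval approach from the accepted answer concrete, here is a minimal Python sketch. SciPy is assumed to be available, the sample below is made up, and the interval is the m ± t·√(1 + 1/n)·s form quoted from Hahn & Meeker, not the narrower m ± t·s/√n confidence interval for the mean.

import numpy as np
from scipy import stats

def prediction_interval(sample, alpha=0.05):
    # two-sided (1 - alpha) prediction interval for a single future observation
    x = np.asarray(sample, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)                   # sample mean and SD (n - 1 denominator)
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
    half = t_crit * s * np.sqrt(1.0 + 1.0 / n)
    return m - half, m + half

rng = np.random.default_rng(42)
sample = rng.normal(100.0, 15.0, size=12)            # hypothetical sample S
lo, hi = prediction_interval(sample)
for x_new in (105.0, 170.0):
    verdict = "consistent with S" if lo <= x_new <= hi else "not consistent (rejected at the 5% level)"
    print(f"x = {x_new:5.1f}, interval = ({lo:.1f}, {hi:.1f}): {verdict}")

The factor √(1 + 1/n) makes the interval wider than a confidence interval for the mean, because it must also absorb the sampling variability of the future observation itself.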
Unit sphere bounds Free UK Delivery on Eligible Order New upper bounds are given for the maximum number, τ n, of nonoverlapping unit spheres that can touch a unit sphere in n-dimensional Euclidean space, for n≤24. In particular it is shown that τ 8 = 240 and τ 24 = 196560 DIAMETER BOUNDS FOR EQUAL AREA PARTITIONS OF THE UNIT SPHERE PAUL LEOPARDI ∗ Abstract. The recursive zonal equal area (EQ) sphere partitioning algorithm is a practical algorithm for par-titioning higher dimensional spheres into regions of equal area and small diameter. Another such construction is due to Feige and Schechtman The unit sphere, centered at the origin in Rn, has a dense set of points with rational coordinates. We give an elementary proof of this fact that includes explicit bounds on the complexity of the coordinates: for every point v on the unit sphere in Rn; and every > 0; there is a point r = (r 1;r 2;:::;r n) such that: jjr vjj 1< : r is also a. However, [KL] does not study sphere packing di-rectly, but rather passes through the intermediate problem of spherical codes. In this paper, we develop linear programming bounds that apply directly to sphere packing, and study these bounds numerically to prove the best bounds known1 for sphere packing in dimensions 4 through 36. In dimensions 8. In mathematics, a unit sphere is simply a sphere of radius one around a given center.More generally, it is the set of points of distance 1 from a fixed central point, where different norms can be used as general notions of distance. A unit ball is the closed set of points of distance less than or equal to 1 from a fixed central point. Usually the center is at the origin of the space, so one. A unit sphere is a set of points that are distanced 1 unit away from a starting point (an origin ). The collection of points can exist in d -dimensions from one-dimensional to infinite-dimensional spaces The question proposes that the bounds of the integral over the interior of the unit sphere can be written as follows: ∫ − 1 1 ∫ − 1 − z 2 1 − z 2 ∫ − 1 − y 2 − z 2 1 − y 2 − z 2 x 2 + y 2 + z 2 d x d y d z. This is correct, although it is not to be recommended. (Integration over spherical coordinates, as shown in another. A globe showing the radial distance, polar angle and azimuthal angle of a point P with respect to a unit sphere, in the mathematics convention. In this image, r equals 4/6, θ equals 90°, and φ equals 30°. In mathematics, a spherical coordinate system is a coordinate system for three-dimensional space where the position of a point is. The other way to get this range is from the cone by itself. By first converting the equation into cylindrical coordinates and then into spherical coordinates we get the following, z = r ρ cos φ = ρ sin φ 1 = tan φ ⇒ φ = π 4 z = r ρ cos ⁡ φ = ρ sin ⁡ φ 1 = tan ⁡ φ ⇒ φ = π 4 Bounds on eBay - Shop Bounds Toda and max points. Bounds is used by Collider.bounds, Mesh.bounds and Renderer.bounds g a renderer for my toy game engine using OpenTK in C#. As it will be used only by me, I am using OpenGL 4.5. After I implemented the basics, I tried to render the Utah Teapot. 
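The density statement quoted above can be made tangible with a short sketch. The Python code below is only an illustration of the fact that rational points are dense on the unit sphere, obtained here by inverse stereographic projection of rational points; it is not the cited paper's construction and says nothing about the complexity bounds claimed there. The target point v is an arbitrary example.

from fractions import Fraction
import math

def rational_point_near(v, eps):
    # return a point with rational coordinates, exactly on the unit sphere, within eps of v
    # (v is assumed not to be the north pole (0, ..., 0, 1))
    t = [vi / (1.0 - v[-1]) for vi in v[:-1]]                   # stereographic image of v
    q = 1
    while True:
        tq = [Fraction(ti).limit_denominator(q) for ti in t]    # rational approximation of t
        s = sum(c * c for c in tq)                              # |t|^2 as an exact rational
        r = [2 * c / (s + 1) for c in tq] + [(s - 1) / (s + 1)] # inverse stereographic projection
        if math.dist([float(c) for c in r], v) < eps:
            return r
        q *= 10

v = [0.3, -0.4, -math.sqrt(1.0 - 0.3**2 - 0.4**2)]   # an example target on the unit sphere
r = rational_point_near(v, 1e-6)
print(r)                                              # rational coordinates
print("exactly on the sphere:", sum(c * c for c in r) == 1)
print("distance to v:", math.dist([float(c) for c in r], v))

Because the inverse stereographic projection of a rational point is evaluated in exact rational arithmetic, the returned point lies on the unit sphere exactly, while its distance to v can be driven below any prescribed tolerance by refining the rational approximation.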
Analysis of self-report and biochemically verified tobacco abstinence outcomes with missing data: a sensitivity analysis using two-stage imputation

Yiwen Zhang1, Xianghua Luo2, 3, Chap T. Le2, 3, Jasjit S. Ahluwalia4 and Janet L. Thomas5

BMC Medical Research Methodology 2018; 18:170

Missing data are common in tobacco studies. It is well known that, from the observed data alone, it is impossible to distinguish between missing mechanisms such as missing at random (MAR) and missing not at random (MNAR). In this paper, we propose a sensitivity analysis method to accommodate different missing mechanisms in cessation outcomes determined by self-report and urine validation results.

We propose a two-stage imputation procedure, allowing survey and urine data to be missing under different mechanisms. The motivating data were from a tobacco cessation trial examining the effects of the extended vs. standard Quit and Win contests and counseling vs. no counseling under a 2-by-2 factorial design. The primary outcome was 6-month biochemically verified tobacco abstinence.

Our proposed method covers a wide spectrum of missing scenarios, including the widely adopted "missing = smoking" imputation, obtained by assuming a perfect smoking-missing correlation (an extreme case of MNAR), the MAR case, obtained by assuming a zero smoking-missing correlation, and many more in between. The analysis of the data example shows that the estimated effects of the studied interventions are sensitive to the different missing assumptions on the survey and urine data.

Sensitivity analysis has played a crucial role in assessing the robustness of findings in clinical trials with missing data. The proposed method provides an effective tool for analyzing missing data introduced at two different stages of outcome assessment, the self-report and the validation stage. Our methods are applicable to trials studying biochemically verified abstinence from alcohol and other substances.

Keywords: Abstinence outcome

Cigarette smoking is a risk factor for morbidity and mortality in the US and around the world [1–4]. Smoking cessation studies usually encourage cessation and provide either behavioral (e.g., counseling) or pharmaceutical (e.g., nicotine gum) interventions, or both. In smoking cessation studies, missing binary abstinence outcomes (i.e., quit or not quit) are very common. These missing outcomes may lead to bias or weakened statistical power in estimating the effect of the studied intervention. Choosing appropriate statistical methods to handle binary missing data has been a continuing source of controversy [5]. The choice of methods for dealing with missing data depends on assumptions about the missing mechanism [6]. Data are referred to as missing at random (MAR) if the missing status (yes or no) is not related to the missing value itself, but may depend on other observed variables. Data are referred to as missing not at random (MNAR), or nonignorable missing, if the probability of missing depends on the missing value. It is well known that, from the observed data alone, it is impossible to distinguish between MAR and MNAR. Therefore, statistical analyses based on one specific missing mechanism, such as the popular MAR assumption, could lead to misleading conclusions if it turns out that the missing is not at random.
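In the standard missing-data notation (a convention that is not part of this paper's own notation), with Y denoting the possibly unobserved abstinence outcome, R the missing indicator (R = 1 when Y is missing), and X the observed covariates, these two mechanisms can be expressed as

$$ \text{MAR:}\;\; P(R=1 \mid Y, X)=P(R=1 \mid X), \qquad \text{MNAR:}\;\; P(R=1 \mid Y, X)\ \text{depends on}\ Y\ \text{even after conditioning on}\ X. $$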
For example, consider a trial to incentivize smokers to quit smoking. Missing data due to non-response in surveys or as a result of dropout could depend on the smoking status of the participants, which renders the missing mechanism not at random. Sensitivity analysis can play a crucial role in assessing the robustness of the findings in clinical trials with missing data [7]. In this paper, we aim to study sensitivity analysis methods for analyzing smoking cessation outcome data under various missing data mechanisms, including MAR and MNAR.

In the literature, the standard procedure used in smoking cessation trials is to assume that all non-respondents are smoking (referred to as the "missing = smoking" method hereinafter), which is a special case of single imputation under the MNAR assumption. Based on Jackson et al. [8], around 80% of reports of smoking cessation trials adopt this assumption. However, this simple imputation approach has been shown to lead to potentially biased results [8–12]. Other common single imputation methods frequently used in smoking cessation trials include the last observation carried forward (LOCF), the baseline observation carried forward (BOCF), and imputations based on predicted values from a regression model or the expectation-maximization (EM) algorithm [13]. In addition, Barnes and colleagues [14] used a multiple imputation procedure with the propensity score matching method to impute missing smoking status. Hedeker and colleagues [12] demonstrated that the simple missing = smoking imputation is essentially based on the assumption that the missing status and the smoking status have a perfect correlation (r = 1) or, equivalently, that the odds ratio (OR) between missing and smoking equals positive infinity (OR = +∞). They developed both simple and multiple imputation approaches based on more relaxed assumptions, which allow different levels of correlation between smoking and missing status. Although their imputation method provides a more flexible and useful alternative to the simple missing = smoking method and has been applied in various trials [8, 13], it cannot be directly applied to data with missing values generated from multiple sources or stages. For example, when the cessation outcome is determined by self-report survey data followed by a urine validation test, both a non-respondent survey and a missing urine sample can lead to a missing cessation outcome. In this case, an imputation procedure designed to account for missing data generated at two different stages, the survey collection stage and the urine collection stage, would be preferable.

A naive imputation approach for dealing with this type of missing data would treat only subjects who confirmed their abstinence by both self-report and a urine sample with a negative result as a treatment success (i.e., biochemically verified self-reported abstinence). All the other subjects, including those who either failed to complete the survey or failed to provide the urine sample for confirmation of self-reported abstinence, would be considered treatment failures (i.e., not achieving abstinence). Note that this naive approach is an extreme case under the MNAR assumption, assuming a perfect correlation between survey missing and self-report failure and a perfect correlation between urine missing and urine-verified failure among people who self-reported abstinence. Hence, it does not have the flexibility to accommodate different levels of correlation between survey missing and self-report failure or between urine missing and urine-verified failure.
In this paper, we extend Hedeker et al.'s method [12] to a two-stage imputation procedure that takes into account missing data in either the self-report or the urine verification stage. The rest of the article is organized as follows. In Section 2, we first introduce a randomized controlled trial of college smokers [15] which motivated this research, and then we introduce a sensitivity analysis method using a two-stage imputation procedure for missing abstinence data at the self-reporting and subsequent biochemical verification stages. In Section 3, we report the sensitivity analysis results of the college smokers study. Some discussions and concluding remarks can be found in Sections 4 and 5, respectively.

Aim, design, and setting

The data motivating this research were collected from 1217 subjects enrolled in a smoking cessation randomized clinical trial entitled "Enhanced quit and win contests to improve smoking cessation among college students" (henceforth referred to as the "Enhanced Quit & Win" study) during the academic years 2010–2013. This study utilized a two-by-two factorial design to examine the marginal effects of two distinct interventions: the impact of multiple vs. single Quit & Win contests and the effect of Motivational and Problem Solving (MAPS) counseling vs. no counseling on smoking cessation among college smokers. Specifically, participants were randomly assigned to one of four groups: (1) single contest (denoted by Tx1, n = 306), (2) single contest plus counseling (Tx2, n = 296), (3) multiple contests (Tx3, n = 309), and (4) multiple contests plus counseling (Tx4, n = 306). The primary cessation outcome was measured at 6 months post-randomization, when all participants were encouraged to complete an online survey to report their smoking status and other tobacco use in the past 30 days. Only people who reported no tobacco use in the past 30 days were invited to provide urine to biochemically (cotinine assay) confirm their self-reported abstinence. Both self-reported abstinence and biochemically verified abstinence were of interest. The study design and the characteristics of participants are described in greater detail in the parent study manuscript [15]. This trial was registered at ClinicalTrials.gov as number NCT01096108.

Sensitivity analysis using two-stage imputation

As we described earlier, the missing data in the Enhanced Quit & Win study occurred at two different stages: the survey collection stage and the urine verification stage. A common and conservative imputation approach for dealing with such two-stage missing data would treat only subjects who self-reported abstinence and provided a urine sample which confirmed the abstinence as a treatment success (i.e., biochemically verified abstinence). All the other subjects, including those who either failed to complete the survey or failed to provide urine, would be considered treatment failures. This is analogous to the missing = smoking method for one-stage missing data. Note that this approach is an extreme case of single imputation under the missing not at random (MNAR) assumption, assuming a perfect correlation between the survey missing and self-report failure, or equivalently an infinite odds ratio between the two (denoted by OR1 = ∞), and at the same time a perfect correlation between the urine missing and urine-verified failure (denoted by OR2 = ∞).
In this paper, we propose a two-stage imputation approach under the MNAR assumption, which takes into account the two-stage missing process and allows (1) different levels of correlation between the survey missing and self-report failure (i.e., varying OR1) and (2) different levels of correlation between the urine missing and urine-verified failure among those who self-reported abstinence (i.e., varying OR2). This can be considered an extension of the imputation method in Hedeker et al. [12] for one-stage missing data to a two-stage missing data situation. In this section, we present a two-stage imputation approach conducted on a summary or aggregated data basis.

The one-stage imputation method by Hedeker et al. [12] for the self-report data

We first introduce some general notation. We code "tobacco use status", the binary dependent variable, as 1 = used tobacco/failure and 0 = did not use tobacco/abstinence, and "missing status", the binary indicator of whether the data are missing or not, as 1 = missing and 0 = observed. Let j = 1, 2, 3, 4 index the four treatment groups, Tx1 to Tx4, respectively. Let subjects be indexed by i = 1, 2, …, nj, where nj denotes the total number of subjects in treatment group Txj. Since we propose to perform imputations within each treatment, in the sequel we omit j from all symbols to simplify notation. Moreover, we use superscripts 11, 12, 21, and 22 to denote the four entries of the two-by-two table between the tobacco use status and missing status, as illustrated in Table 1. Note that, in the second row of Table 1, only the total number of individuals with missing data, n2., can be observed; the abstinence statuses of these people, n21 and n22 (the second row in Table 1), are unknown and need to be estimated. Furthermore, in the summation row of Table 1, the total number of abstinent participants (denoted by n.1) and the total number of failures (denoted by n.2) are also unknown. Note that the 'dot' in the superscripts indicates summation over a row or column.

Table 1 Two-by-two table of tobacco use status by missing status for self-report data

Missing status of self-report data | Self-report abstinence (0) | Self-report tobacco use (1) | Total
Observed (0)                       | n11                        | n12                         | n1.
Missing (1)                        | n21*                       | n22*                        | n2.
Total                              | n.1*                       | n.2*                        | n

*Values that are not observable

Following Hedeker et al. [12], in order to impute the numbers of abstinence and failure among participants with missing survey data (n21 and n22), we will assume an odds ratio between the missing survey status and the self-report tobacco use status (OR1) to reflect the strength of correlation between them (denoted by r1). Note that the widely adopted missing = smoking method corresponds to the situation of r1 = 1 or OR1 = ∞. In that case, n21 is imputed with 0 and n22 is imputed with n2.. More generally, we have

$$ OR_1=\frac{n^{22}/n^{21}}{n^{12}/n^{11}}, \quad \text{or equivalently} \quad \frac{n^{22}}{n^{21}}=OR_1\,\frac{n^{12}}{n^{11}}, $$

and then it can be shown that the unobserved values n21 and n22 can be imputed under the assumed OR1 by

$$ n^{22}=n^{2.}\,\frac{OR_1\cdot Odds}{1+OR_1\cdot Odds}=\pi\, n^{2.} \quad \text{and} \quad n^{21}=n^{2.}-n^{22}, $$

where Odds is the odds of tobacco use among survey respondents, which can be calculated from the observed survey data as n12/n11, and π = OR1·Odds/(1 + OR1·Odds) is a multiplicative factor relating n22 to n2..
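As a concrete illustration of this one-stage calculation, a minimal SAS data-step sketch for a single treatment arm is given below. It is only an illustration with hypothetical counts (n11, n12, n2.) and an arbitrarily chosen OR1; it is not the authors' supplementary program.

```sas
/* One-stage imputation sketch for a single treatment arm (hypothetical counts) */
data one_stage;
  n11 = 150;   /* survey respondents who self-reported abstinence            */
  n12 = 450;   /* survey respondents who self-reported tobacco use (failure) */
  n2_ = 160;   /* survey non-respondents (n2. in the text)                   */
  OR1 = 3;     /* assumed odds ratio between missing and tobacco use         */

  odds = n12 / n11;                   /* odds of tobacco use among respondents            */
  pi   = OR1*odds / (1 + OR1*odds);   /* probability of tobacco use among non-respondents */
  n22  = pi*n2_;                      /* imputed number of tobacco users                  */
  n21  = n2_ - n22;                   /* imputed number of abstainers                     */

  /* imputed self-report abstinence rate for the arm */
  abst_rate = (n11 + n21) / (n11 + n12 + n2_);
  put OR1= n21= n22= abst_rate=;
run;
```

Setting OR1 = 1 makes π equal to the respondents' failure probability, so the imputed abstinence rate reproduces the complete case rate (the MAR scenario), while letting OR1 grow toward +∞ drives π to 1 and recovers the missing = smoking result.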
Participants who do not respond or are lost to follow-up in a smoking cessation study may differ from those who are retained in the study with regard to their smoking status. We often expect that the odds of tobacco use among non-respondents is equal to or higher than that of respondents (i.e., OR1 ≥ 1), especially in studies where people are incentivized to quit, as in the Enhanced Quit & Win study. Note that a larger OR1 implies a stronger relationship between missing and tobacco use.

The two-stage imputation method for the urine-verified data

When estimating biochemically verified abstinence, more complex conditions should be considered, since missing data can be present at both the survey and the urine verification stage. Unless otherwise specified, the notation for the survey data is the same as defined previously (see Table 1 and the top half of Fig. 1). Some additional notation, shown in the lower half of Fig. 1 and specific to the urine data, is defined as follows. Let u(obs) and u(imp) denote the number of urine samples provided by people who self-reported abstinence (n11) and the estimated number of urine samples that could be collected from people who would have reported abstinence had they not failed to respond to the survey (n21), respectively; similarly, v(obs) and v(imp) are used for the numbers of missing urine samples among n11 and n21, respectively. For the urine-verified abstinence outcome, notation similar to that for the self-report abstinence outcome is defined, except that f is used instead of n. The superscripts 11, 12, 21, and 22 have the same meaning as those for n. In addition, we use f11(obs) to denote the number of urine-verified abstinence cases and f12(obs) the number of urine-verified failures obtained from people who actually provided urine samples, and we have u(obs) = f11(obs) + f12(obs). Similarly, we use f11(imp) to denote the number of urine-verified abstinence cases and f12(imp) the number of urine-verified failures obtained from the estimated available urine samples, and we have u(imp) = f11(imp) + f12(imp). We then combine f11(obs) and f11(imp) to obtain the total number of participants with urine-verified abstinence, f11., among the urine samples that were actually provided or could have been provided if there were no missing surveys; similarly, we combine f12(obs) and f12(imp) to obtain the total number of urine-verified failures, f12..

Fig. 1 Data structure and notation for a single treatment group. Note that n is the total sample size, n1. is the number of survey respondents, and n2. is the number of survey non-respondents. n11 denotes the number of observed self-report abstinence cases (among survey respondents) and n21 the number of imputed self-report abstinence cases (among non-respondents); similarly, n12 and n22 represent the numbers of observed failures and imputed failures based on the self-report data, respectively. For the urine samples, u(obs) and u(imp) represent the numbers of observed and estimated (based on the imputed survey data) urine samples being provided; similar notation, v(obs) and v(imp), is used for the numbers of unavailable urine samples. For the urine data, analogous notation is defined as for the survey data, except that f, instead of n, is used to denote the numbers of subjects under the different conditions (with the superscripts 11, 12, 21, and 22 having the same meaning). In addition, f11(imp) denotes the abstinence and f12(imp) the failure obtained from the estimated available urine samples u(imp). Then f11(obs) and f11(imp) are combined to obtain the number of urine-verified abstinence cases f11. among the urine samples that were actually provided or could have been provided if all surveys were completed, whereas f12(obs) and f12(imp) are combined to obtain the number of urine-verified failures f12.. OR1 denotes the assumed odds ratio between missing and smoking for the self-report data and OR2 that for the urine data. Dashed lines indicate where missing data are reallocated based on certain assumptions or estimations. Bolded notation denotes values that are not observed.
Based on the imputation results for the missing data at the survey stage, self-report abstinence (n11, n21) and failure (n12, n22) counts have been generated from the imputed survey data under the assumed OR1 within each treatment. Next, we proceed to estimate urine-verified abstinence or failure under the assumed OR2 for the imputed, "complete" self-report data. Prior to imputing the missing urine sample data, the numbers of subjects who would have provided urine samples (u(imp)) or would not have provided urine samples (v(imp)) among survey non-respondents need to be estimated. One can assume that the urine missing rate among survey non-respondents, compared with respondents, varies by a known factor λ (λ > 0), that is,

$$ \frac{u^{(imp)}}{v^{(imp)}}=\lambda\,\frac{u^{(obs)}}{v^{(obs)}}. \qquad (2) $$

Consequently, the numbers of available (u(imp)) and unavailable (v(imp)) urine samples among imputed self-report abstinence cases can be calculated based on Equation (2) and the fact that u(imp) + v(imp) = n21. Similarly, one can assume that, compared to the actually provided urine samples, the urine-verified abstinence rate among the urine samples that could have been provided if the survey had been completed varies by a known factor η (η > 0):

$$ \frac{f^{11(imp)}}{f^{12(imp)}}=\eta\,\frac{f^{11(obs)}}{f^{12(obs)}}. \qquad (3) $$

Therefore, the numbers of urine-verified abstinence (f11(imp)) and failure (f12(imp)) among imputed self-report abstinence cases can be estimated based on Equation (3) and the fact that f11(imp) + f12(imp) = u(imp). We can then calculate the total number of urine-verified abstinence cases by f11. = f11(obs) + f11(imp) and the total number of urine-verified failures by f12. = f12(obs) + f12(imp) among all the "available" urine samples (actually observed or imputed). Up to this point, the urine-verified abstinence (f21) and urine-verified failure (f22) counts among people whose urine was not actually provided (v(obs)), or would not have been provided even if their survey data were completed (v(imp)), have not yet been imputed. Next, we use the fact that v = f22 + f21 = v(obs) + v(imp) and propose an imputation procedure for the missing urine data similar to that for the missing survey data described in the previous subsection:

$$ \frac{f^{22}}{f^{21}}=OR_2\,\frac{f^{12.}}{f^{11.}}, \quad \text{or equivalently,} \quad f^{22}=v\,\frac{OR_2\cdot Odds^{\prime}}{1+OR_2\cdot Odds^{\prime}}=v\,\pi^{\prime}, $$

where the second equality follows from Equation (3), and Odds′ (= f12./f11.) and π′ are the odds and the probability of tobacco use among the "available" urine samples, respectively. The overall number of participants with urine-verified abstinence can then be obtained by simply adding f11. and f21; similarly, the overall number of urine-verified failures is f12. + f22. After all the above steps are completed for each treatment arm, we can estimate the various treatment effects based on the imputed data.
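Continuing the earlier sketch, the full chain of calculations for one treatment arm — the one-stage survey imputation followed by Equations (2) and (3) and the OR2 step — can be written as a single SAS data step. As before, the counts and the values of OR1, OR2, λ, and η are hypothetical, and this is an illustration rather than the authors' supplementary program.

```sas
/* Two-stage imputation sketch for one treatment arm (hypothetical counts) */
data two_stage;
  /* Stage 1: self-report survey */
  n11 = 150;  n12 = 450;  n2_ = 160;          /* observed abstinence, failure, survey missing */
  OR1 = 3;  OR2 = 3;  lambda = 1;  eta = 1;   /* assumed sensitivity parameters               */
  odds = n12 / n11;
  pi   = OR1*odds / (1 + OR1*odds);
  n22  = pi*n2_;  n21 = n2_ - n22;            /* imputed failure / abstinence, non-respondents */

  /* Stage 2: urine verification among self-reported abstainers */
  u_obs  = 100;  v_obs  = n11 - u_obs;        /* urine provided / not provided, observed abstainers */
  f11obs =  85;  f12obs = u_obs - f11obs;     /* verified abstinence / failure, provided samples    */

  /* Equation (2): split the imputed abstainers n21 into would-provide vs would-not-provide */
  r_u   = lambda * (u_obs / v_obs);
  u_imp = n21 * r_u / (1 + r_u);
  v_imp = n21 - u_imp;

  /* Equation (3): split u_imp into verified abstinence vs verified failure */
  r_f    = eta * (f11obs / f12obs);
  f11imp = u_imp * r_f / (1 + r_f);
  f12imp = u_imp - f11imp;

  f11dot = f11obs + f11imp;                   /* f11. : verified abstinence among available samples */
  f12dot = f12obs + f12imp;                   /* f12. : verified failure among available samples    */

  /* OR2 step: impute the outcome for all missing urine samples, v = v_obs + v_imp */
  v     = v_obs + v_imp;
  odds2 = f12dot / f11dot;
  pi2   = OR2*odds2 / (1 + OR2*odds2);
  f22   = pi2*v;  f21 = v - f22;

  verified_abst = f11dot + f21;               /* urine-verified abstainers in the arm */
  /* arm-level verified abstinence rate; self-reported and imputed failures count as non-abstinent */
  abst_rate = verified_abst / (n11 + n12 + n2_);
  put verified_abst= abst_rate=;
run;
```

Note that the ratio forms of Equations (2) and (3) require v(obs) > 0 and f12(obs) > 0; if either count is zero, the corresponding split is degenerate and the totals can be assigned directly.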
For the Enhanced Quit & Win data, we assumed a series of values ≥ 1 for OR1 and OR2 (1, 2, 3, 4, 5, and positive infinity) and set λ = η = 1 for ease of presentation, but certainly more values can be examined for these parameters in a sensitivity analysis. SAS Version 9.4 (SAS Institute Inc., Cary, NC, USA) was used for all analyses, and the SAS computing code for the proposed two-stage imputation method is provided in the Additional file 1: Supplementary Material.

Summary of missing data

Figure 2 shows the summary of the 6-month abstinence outcomes and missing data. Of the 1217 randomized participants, 981 (81%) completed the 6-month survey and 236 (19%) did not. Among the 981 survey completers, 264 (27%) self-reported tobacco abstinence. Among the 264 participants who self-reported abstinence, 182 (69%) provided urine. Among the 182 participants who provided urine samples, 5 samples were not of adequate amount for testing, and 153 (84%) participants were biochemically confirmed as abstinent.

Fig. 2 Missing data in 6-month abstinence outcomes of the Enhanced Quit & Win study (subjects with missing abstinence data are shaded)

Table 2 presents the differential missing data patterns across treatment arms and intervention conditions, by both survey missing and urine missing. Note that the five missing urine test results due to inadequate urine amount were assumed to have the same distribution as the other 177 urine samples (86% verified abstinence and 14% verified failure) and were added to the corresponding columns in Table 2. We found that the no counseling groups, Tx1 and Tx3, had significantly (p = 0.003) lower survey missing rates (15.4% and 16.5%, respectively, and 15.9% for the combined group) than the two counseling arms, Tx2 and Tx4 (22.6% and 23.2%, respectively, and 22.9% for the combined group), whereas the single- and multiple-contests groups were found to have similar survey missing rates (p = 0.798). The urine missing rate was similar between the single- and multiple-contests groups and between the counseling and no counseling groups (both ps > 0.05).

Table 2 Summary of 6-month self-reported and urine-verified abstinence and missing data by treatment arm and by type of intervention. For each treatment arm (Tx1–Tx4) and for the combined no counseling (Tx1 + Tx3), counseling (Tx2 + Tx4), single contest (Tx1 + Tx2), and multiple contests (Tx3 + Tx4) groups, the table reports the numbers of self-report abstinence, self-report failure, and missing survey responses at 6 months, and the numbers of urine-verified abstinence, urine-verified failure, and missing urine samples, n (%a). Tx1: single contest + no counseling; Tx2: single contest + counseling; Tx3: multiple contests + no counseling; Tx4: multiple contests + counseling. aPercentage out of those who self-reported abstinence. bFive subjects' urine samples were not of adequate amount for testing; these 5 missing urine test results were assumed to have the same distribution as the remaining 177 urine samples (86% verified abstinence and 14% verified failure) and were added to the two columns accordingly.
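The sensitivity analyses reported in the next two subsections repeat the imputation over the grid of assumed OR values and, for each imputed dataset, compare the intervention conditions with a chi-square test and an odds ratio for abstinence. The sketch below illustrates this workflow for the self-report outcome; the counts are hypothetical, the dataset and variable names (imputed, group, outcome, count) are ours, and it is not the program in Additional file 1.

```sas
/* Sensitivity grid sketch for the self-report outcome (hypothetical counts).  */
/* group 1 = counseling (Tx2 + Tx4), group 2 = no counseling (Tx1 + Tx3).      */
data imputed;
  do OR1 = 1, 2, 3, 4, 5;                     /* assumed missing-use odds ratios */
    do group = 1 to 2;
      if group = 1 then do; n11 = 120; n12 = 350; n2_ = 140; end;
      else              do; n11 = 135; n12 = 380; n2_ = 100; end;
      odds = n12 / n11;
      pi   = OR1*odds / (1 + OR1*odds);
      n22  = pi*n2_;
      n21  = n2_ - n22;
      outcome = 0; count = n11 + n21; output;  /* abstinent: observed + imputed   */
      outcome = 1; count = n12 + n22; output;  /* tobacco use: observed + imputed */
    end;
  end;
  keep OR1 group outcome count;
run;

proc freq data=imputed;
  by OR1;                                /* one comparison per assumed OR1                */
  tables group*outcome / chisq relrisk;  /* chi-square test and 2 x 2 odds ratio          */
  weight count;                          /* cell counts (possibly non-integer) as weights */
run;
```

In the PROC FREQ output, the odds ratio for the 2 x 2 table compares the odds of abstinence (outcome = 0) in group 1 with group 2; the missing = smoking scenario (OR1 = +∞) can be handled by setting n22 = n2. and n21 = 0 directly.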
Self-report abstinence outcome

The imputation results for the self-report abstinence outcome are summarized in Table 3. As a comparison, we also present the results from a complete case only analysis, in which only subjects with no missing survey or urine data were included. It can be shown that the abstinence rate decreases as OR1 increases. As expected, the estimated abstinence rates and treatment effects based on the imputed data under the MAR assumption (i.e., OR1 = 1) are the same as those based on the complete case only analysis. However, the statistical significance is stronger (smaller p) in the former, as more data are utilized. Under the MAR assumption, the estimated treatment effect of counseling vs. no counseling is significant (OR for abstinence = 1.31, p = 0.034); however, as OR1 increases, the estimated treatment effect becomes less significant (all ps > 0.05 for OR1 ≥ 2), indicating that this treatment effect is sensitive to the different assumed values of OR1. On the contrary, the estimated treatment effects of multiple vs. single contest are all close to 1.16 (all ps > 0.05), indicating that this treatment effect estimate was robust to the different assumed values of OR1. This phenomenon can be explained by the different survey missing rates between the counseling and no counseling groups, but not between the multiple and single contest groups (see the left panel in Table 2).

Table 3 Summary of imputation results for self-report abstinence assuming different levels of association between the survey missing status and self-report abstinence. For the complete case only analysis and for OR1 = 1, 2, 3, 4, 5, and +∞, the table reports the estimated abstinence rates for counseling (Tx2 + Tx4) vs. no counseling (Tx1 + Tx3) and for multiple contests (Tx3 + Tx4) vs. single contest (Tx1 + Tx2), together with the estimated treatment effects (odds ratios for abstinence) and p-values. OR1: odds ratio between missing and tobacco use status for the self-report data, where OR1 = 1 corresponds to the situation when missing is independent of tobacco use and OR1 = positive infinity (+∞) corresponds to the situation when missing = smoking; Tx1: single contest + no counseling; Tx2: single contest + counseling; Tx3: multiple contests + no counseling; Tx4: multiple contests + counseling. P-values are based on the Chi-square test.

Urine-verified abstinence outcome

The results obtained from the imputed urine-verified abstinence data are summarized in Table 4. Considering all the combinations of OR1 and OR2, each ranging from 1 to 5 and positive infinity, we found that the abstinence rate decreases as the assumed level of dependence between missing and tobacco use (OR1 or OR2) increases, as expected. Notice that the abstinence rates for the two studied conditions were consistently higher than those of their corresponding control groups in all scenarios (i.e., the estimated treatment effects, expressed as odds ratios for abstinence, are all > 1).

Table 4 Summary of imputation results for urine-verified abstinence assuming different levels of association between missing and abstinence. For each combination of OR1 and OR2 (1 to 5 and +∞), the table reports the estimated abstinence rates for the counseling vs. no counseling and multiple vs. single contest comparisons, together with the estimated treatment effects and p-values. OR1: odds ratio between missing and tobacco use status for the self-report data; OR2: odds ratio between urine missing and urine-verified failure among those who self-reported abstinence; Tx1: single contest + no counseling; Tx2: single contest + counseling; Tx3: multiple contests + no counseling; Tx4: multiple contests + counseling. P-values are based on the Chi-square test.

As shown in the upper-left corner of Table 4, significant treatment effects were estimated for the counseling group when both OR1 and OR2 were small. Otherwise, there seemed to be no significant treatment effects for the counseling or the multiple contests groups under the different combinations of OR1 and OR2.
We also found that the estimated treatment effect of counseling vs. no counseling is more sensitive to the assumed level of dependence between the survey missing and self-report abstinence, but less sensitive to the assumed level of dependence between the urine missing and urine-verified abstinence. For the estimated treatment effect of multiple vs. single contests, we observed no obvious pattern, no matter what values were assumed for OR1 or OR2. This can be explained by the comparable survey and urine missing rates between the two contest groups, as shown in Table 2. We performed an additional sensitivity analysis by assuming that survey non-respondents would be less likely to provide urine than survey respondents (λ = 0.5). The results (shown in Additional file 1: Table S1) are consistent with the results reported above, which are based on the equal urine missing rate assumption (λ = 1).

In many smoking cessation studies, researchers are interested in biochemically verified abstinence (e.g., urine cotinine verified abstinence). To conserve resources, it is common to invite only people who self-report abstinence to provide biochemical samples to validate the self-reported abstinence. Hence, missing data can be present at either the survey completion stage or the biochemical sampling stage. The imputation approaches presented in this paper take into account this two-stage missing data challenge, describing a two-step imputation approach that allows the survey missing and biochemical sample missing to have different missing mechanisms. Our proposed imputation approach includes both the missing = smoking imputation (an extreme case of MNAR) and the MAR imputation as special cases, hence providing a more thorough sensitivity analysis result than any simple imputation method alone. The estimated effects of the treatments tested in the Enhanced Quit & Win study were sensitive to the different missing mechanisms, depending on the differential missing data patterns across treatment arms. Although the overall results were not universally impacted, these findings demonstrate that the use of one simple imputation method alone could result in misleading conclusions regarding a treatment effect estimate.

There has been a debate regarding whether treatment should be adjusted for or stratified on in the imputation models. Jackson et al. [16] adjusted for treatment in their imputation model since treatment was found to be associated with the missing status and predicted missing outcomes. Alternatively, in this paper, we performed imputations stratified by treatment rather than adjusting for treatment in the model [17, 18]. Although some researchers may argue that this may overestimate the treatment effect [16], this has not been demonstrated by the preponderance of evidence. Research with more data examples to investigate the difference between these two strategies is certainly warranted.

In this paper, all the imputations were performed on aggregated data. In other words, no individual-level variation has been considered. Currently, we are working on extending the proposed imputation approach for aggregated data to take into account the uncertainty in the individual probability of tobacco use, as in multiple imputation. One advantage of the imputations based on aggregated data is the ease of computing, while the multiple imputation approach is expected to give more conservative results, as individual-level variability is taken into account in the estimation of the treatment effect. Also, in this paper we focus on the analysis of the cessation outcome at a single time point.
However, with repeatedly measured outcomes, longitudinal data analysis methods for dealing with missing data could be considered [12, 19–21]. Note that our proposed methods are applicable to various tobacco or other substance use trials where the treatment goal is biochemically verified self-reported abstinence.

The proposed two-stage imputation method provides an effective sensitivity analysis tool for analyzing missing data introduced at two different stages of outcome assessment, the self-report and validation stages, frequently encountered in tobacco cessation studies. Our methods are also applicable to trials studying biochemically verified abstinence from other substances such as alcohol and recreational drugs.

MNAR: Missing not at random
LOCF: Last observation carried forward
BOCF: Baseline observation carried forward
EM: Expectation-maximization
MAPS: Motivational and Problem Solving counseling

The authors thank the Enhanced Quit and Win study team for collecting the data and the two referees whose comments have helped to improve the manuscript substantially.

This study was supported by the Biostatistics Core of the University of Minnesota Masonic Cancer Center (funded by the National Cancer Institute, 5P30CA077598) to CTL and XL, by the National Heart, Lung, and Blood Institute (5R01HL094183) to JLT, JSA, and XL, and by the Clinical and Translational Science Institute of the University of Minnesota (National Center for Advancing Translational Sciences, UL1TR002494). The funding body played no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.

This is a manuscript demonstrating a novel application of a statistical method to data collected from a previous study [15]. Data requests should be addressed to JLT, the principal investigator of the Enhanced Quit & Win study.

YZ performed all the analyses and SAS programming and co-wrote the manuscript, which was part of her dissertation when she was a master of science (MS) student at the University of Minnesota; XL developed the original idea, supervised YZ's dissertation research, and co-wrote the manuscript; CTL and JSA were YZ's dissertation committee members and participated in discussions; JLT was the principal investigator of the Enhanced Quit & Win study and supervised the conduct of the trial and the interpretation of the analysis results. All authors contributed to the writing and revisions of the manuscript and have read and approved the manuscript.

The Enhanced Quit & Win study was approved by the University of Minnesota's human subjects committee. Written informed consent was obtained from all participants in the "Quit and Win Study".

XL is a member of the editorial board (Associate Editor) of this journal.

Additional file 1: SAS Computing Code for Analyzing Enhanced Quit & Win Data. Table S1. Summary of imputation results for urine-verified abstinence assuming different levels of association between missing and abstinence when λ = 0.5. (DOCX 58 kb)

1. Joseph J. Zilber School of Public Health, University of Wisconsin-Milwaukee, 1240 N 10th St, Milwaukee, WI 53205, USA
2. School of Public Health, Division of Biostatistics, University of Minnesota, 420 Delaware St. SE, MMC 303, Minneapolis, MN 55455, USA
3. University of Minnesota Masonic Cancer Center, Minneapolis, MN 55455, USA
4. Brown University School of Public Health, Box G-S121-5, Providence, RI 02912, USA
SE, Minneapolis, MN 55414, USA
1. Lopez AD, Collishaw NE, Piha T. A descriptive model of the cigarette epidemic in developed countries. Tob Control. 1994;3:242–7.
2. Peto R, Lopez AD, Boreham J, Thun M, Heath C Jr. Mortality from tobacco in developed countries: indirect estimation from national vital statistics. Lancet. 1992;339:1268–78.
3. Peto R, Lopez AD, Boreham J, Thun M, Heath C Jr, Doll R. Mortality from smoking worldwide. Br Med Bull. 1996;52:12–21.
4. Pirie K, Peto R, Reeves GK, Green J, Beral V, Collaborators MWS. The 21st century hazards of smoking and benefits of stopping: a prospective study of one million women in the UK. Lancet. 2013;381:133–41.
5. Delucchi KL. Methods for the analysis of binary outcome results in the presence of missing data. J Consult Clin Psychol. 1994;62:569–75.
6. Little RJA, Rubin DB. Statistical analysis with missing data. 2nd ed. New York, NY: Wiley; 2002.
7. Thabane L, Mbuagbaw L, Zhang S, et al. A tutorial on sensitivity analyses in clinical trials: the what, why, when and how. BMC Med Res Methodol. 2013;13:92.
8. Jackson D, White IR, Mason D, Sutton S. A general method for handling missing binary outcome data in randomized controlled trials. Addiction. 2014;109:1986–93.
9. Borland R, Balmford J, Hunt D. The effectiveness of personally tailored computer-generated letters for tobacco cessation. Addiction. 2004;99:369–77.
10. Nelson DB, Partin MR, Fu SS, Joseph AM, An LC. Why assigning ongoing tobacco use is not necessarily a conservative approach to handling missing tobacco cessation outcomes. Nicotine Tob Res. 2009;11:77–83.
11. Blankers M, Smit ES, van der Pol P, de Vries H, Hoving C, van Laar M. The missing=smoking assumption: a fallacy in internet-based smoking cessation trials? Nicotine Tob Res. 2016;18:25–33.
12. Hedeker D, Mermelstein RJ, Demirtas H. Analysis of binary outcomes with missing data: missing=smoking, last observation carried forward, and a little multiple imputation. Addiction. 2007;102:1564–73.
13. Smolkowski K, Danaher BG, Seeley JR, Kosty DB, Severson HH. Modeling missing binary outcome data in a successful web-based smokeless tobacco cessation program. Addiction. 2010;105:1005–15.
14. Barnes SA, Larsen MD, Schroeder D, Hanson A, Decker PA. Missing data assumption and methods in a smoking cessation study. Addiction. 2010;105:431–7.
15. Thomas JL, Luo X, Bengtson J, et al. Enhancing Quit & Win contests to improve cessation among college smokers: a randomized clinical trial. Addiction. 2016;111:331–9.
16. Jackson D, Mason D, White IR, Sutton S. An exploration of the missing data mechanism in an internet based smoking cessation trial. BMC Med Res Methodol. 2012;12:157.
17. White IR, Royston P, Wood AM. Multiple imputation using chained equations: issues and guidance for practice. Statist Med. 2011;30:377–99.
18. Sullivan TR, White IR, Salter AB, Ryan P, Lee KJ. Should multiple imputation be the method of choice for handling missing data in randomized trials? Stat Methods Med Res. 2018;27:2610–26.
19. Daniels MJ, Hogan JW. Missing data in longitudinal studies. Taylor & Francis Group; 2008.
20. Demirtas H. Multiple imputation under Bayesianly smoothed pattern-mixture models for non-ignorable drop-out. Statist Med. 2005;24:2345–63.
21. Yang X, Shoptaw S. Assessing missing data assumptions in longitudinal studies: an example using a smoking cessation trial. Drug Alcohol Depend. 2005;77:213–25.
An analytical investigation of elastic-plastic deformation of FGM hollow rotors under a high centrifugal effect Shams Torabnia ORCID: orcid.org/0000-0002-0247-06851, Sepideh Aghajani2 & Mohammadreza Hemati2 International Journal of Mechanical and Materials Engineering volume 14, Article number: 16 (2019) Functionally graded material shafts are the main part of many modern rotary machines such as turbines and electric motors. The purpose of this study is to present an analytical solution for the elastic-plastic deformation of a functionally graded material hollow rotor under a high centrifugal effect and, finally, to determine the maximum allowed angular velocity of a hollow functionally graded material rotating shaft. Introducing non-dimensional parameters, the equilibrium equation has been solved analytically. The results for variable material properties are compared with those for the homogeneous rotor and for the case in which Young's modulus is the only variable while density and yield stress are considered constant. It is shown that material variation has a considerable effect on the stress and strain components and on the radial displacement. Considering variable density and yield stress causes the onset of yielding to occur at the inner radius, the outer radius, or simultaneously at both radii of the rotor shaft, in contrast to earlier research in which the modulus of elasticity was the only variable. The effects of density on the failure of an elastic-fully-plastic functionally graded material hollow rotating shaft are investigated for the first time in this study with regard to Tresca's yield criterion. Numerical simulations are used to verify the derived formulations, which are in satisfactory agreement. Functionally graded materials (FGM) are finding vast applications in different rotary systems such as DC motors with a magnetic membrane and chemical resistant hydraulic motors (Mahamood & Akinlabi, 2017), gas turbine rotors (Bahaloo, Papadopolus, & Ghosha, 2016; Klocke, Klink, & Veselovac, 2014; Lal, Jagtap, & Singh, 2013), and modern vehicle drive train systems (Kaviprakash, Kannan, Lawrence, & Regan, 2014; Lee, Kim, Kim, & Kim, 2004; Moorthy, Mitiku, & Sridhar, 2013). Computing the different stresses and the radial displacement of FGM rotors is required to determine the maximum allowed angular velocity (Nino, Hirai, & Watanabe, 1987). Timoshenko (Timoshenko & Goodier, 1970), Mendelson (Mendelson, 1968), Chakrabarty (Chakrabarty, 2006), and Mack (Mack, 1991) analyzed homogeneous rotors. You analyzed a rotating FGM disk (You, You, Zhang, & Li, 2007), and Dai considered the magnetic properties of the FGM disk (Dai & Dai, 2017). Fukui and Yamanaka presented an elastic analysis of thick-walled FGM tubes subjected to internal pressure (Fukui & Yamanaka, 1991). Figueiredo studied FGM pipes (Figueiredo, Borges, & Rochinha, 2008). Tutuncu and Ozturk determined solutions for stresses in FGM pressure vessels (Tutuncu & Ozturk, 2001). Jabbari (Jabbari, Sohrabpour, & Eslami, 2002) and Ansari (AnsariSadrabadi et al., 2017) investigated mechanical and thermal stresses in an FGM hollow cylinder under symmetric loads. You considered an FGM pressurized sphere with a nonlinearly variable modulus of elasticity in the radial direction (You, Zhang, & You, 2005). Dai et al. studied a pressurized magnetoelastic FGM tube (Dai, Fu, & Dong, 2006). Hosseini et al. analyzed the thermo-elastic behavior of an FG rotating disk (HosseiniKordkheili & Naghdabadi, 2006).
Duc (Duc, Lee, Nguyen-Thoi, & Thang, 2017; Duc, Thang, Dao, & Vantac, 2015), Khoa (Khoa, Thiem, Thiem, & Duc, 2019), and El-Haina (El-Haina, Bakora, Bousahla, Tounsi, & Mahmoud, 2017) considered the buckling problem in their research. Thom (Thom, Kien, Duc, Duc, & Tinh, 2017) analyzed a two-dimensional analysis on an FGM plane by plane strain theories. Eraslan (Eraslan & Akis, 2006a) gave an analytical solution for rotating disks and tubes in plane stress and plane strain state, and studied stress solutions of FGM shafts and disks (Eraslan & Akis, 2006b; Eraslan & Akis, 2006c). Kargarnovin et al. (Kargarnovin, Faghidian, & Arghavani, 2007) investigated FGM circular plates with arbitrary rotational symmetric load. Akis (Akis & Eraslan, 2007) studied a rotating FGM shaft problem in the elastic-plastic state of stress with a variable modulus of elasticity. Tsiatas (Tsiatas & Babouskos, 2017) worked on torsional FGM bar. Akis studied the elasticity solution for thick-walled FG spherical pressure vessels with linearly and exponentially varying properties (Akis, 2009). ZamaniNejad and Rahimi studied the elasticity of an FGM rotating cylindrical pressure vessels (ZamaniNejad & Rahimi, 2010). Peng and Li investigated an orthotropic hollow rotating disk with a variable modulus of elasticity and density (Peng & Li, 2012). Some others studied creep for FGM material under thermal condition (Bose & Rattan, 2018; Khanna, Gupta, & Nigam, 2017; Zharfi & EkhteraeiToussi, 2018). Yildirim (Yildirim & Tutuncu, 2018), Seraj (Seraj & Ganesan, 2018), Bahaadini (Bahaadini & Saidi, 2018), Swaminathan (Swaminathan, Naveenkumar, Zenkour, & Carrera, 2015), Duc (Duc, 2013; Duc & Cong, 2018; Duc, Homayoun, Quan, & Khoa, 2019; Duc, Nguyen, & Khoa, 2017; Duc, Tran, & Cong, 2016), and Bouderba (Bouderba, Houari, Tounsi, & Mahmoud, 2016) worked on rotor instabilities and vibrations under different conditions. Burzyński (Burzyński, Chróścielewski, Daszkiewicz, & Witkowski, 2018) worked on a FEM method to understand elasto-plastic behaviors of FGM shells, and Mathew (Mathew, Natarajan, & Pañeda, 2018) considered size effects in his researches. Duc (Duc, 2016a; Duc, 2016b; Duc et al., 2015; Duc, Bich, & Cong, 2016; Duc, Khoa, & Thiem, 2018; Duc, Kim, & Chan, 2018; Duc, Thuy Anh, & Cong, 2014) specifically studied thermal effects such as buckling, thermal instability, and dynamic thermal loads circular sections. The authors (Torabnia, Hemati, & Aghajanib, 2019) considered the elastic behavior of a hollow FGM rotor. Although the previous studies are valuable, none of them considered the plastic effects in an analytical model. All previous jobs used a numerical method such as FEM to solve the plastic model. In the present work, the analysis is based on small deformation theory. The shaft is assumed to be infinitely long (plane strain). The maximum allowed angular velocity has been defined as the angular velocity in which yielding initiates based on Tresca's criterion. Non-dimensional parameters are introduced based on the geometry and material parameters. Stress components are derived using generalized Hook's law. To identify the stress components ordering, non-dimensional stress components are plotted for the special case of equal exponent parameters with the variable radius ratio. The results show when the exponent parameters vary between − 2 and 2, hoop stress and radial stress components are the largest and the smallest stress components. 
The effect of variation of density and yield stress is investigated on the maximum allowed angular velocity and has a considerable effect on the stress distribution and yielding initiation and the maximum allowed angular velocity. For the first time, density variation is considered with variable density and radius ratio of a hollow rotor on elastic and plastic behavior and maximum allowed angular velocity are discussed (Fig. 1). Schematic of the rotor Methods/experimental In this section, the aims and methodology of the study presented by an explanation of the governing equations of a hollow rotor with variable properties through its geometry. Material properties in an FGM may vary in any direction. Here, modulus of elasticity, density, and yield stress are functions of radial dimension: $$ E(r)={E}_0{\left(r/b\right)}^{n_E},\rho (r)={\rho}_0{\left(r/b\right)}^{n_{\rho }},{\sigma}_{\mathrm{Y}}(r)={\sigma}_0{\left(r/b\right)}^{n_{\sigma }} $$ The material properties modeled with the power-law function. Different exponent parameters allow various shapes for material variation. By formulating in the cylindrical coordinate system (r, θ, z) for an infinitely long tube which rotates about longitude axis (Timoshenko & Goodier, 1970): $$ \frac{d}{dr}\left(r{\sigma}_r\right)-{\sigma}_{\theta }=-\rho {r}^2{\omega}^2 $$ The strain and radial displacement relation is: $$ {\varepsilon}_r= du(r)/ dr,{\varepsilon}_{\theta }=u\;(r)/r $$ Plane strain condition is due to a long tube which causes the zero value for the longitude strain. Manipulating stress-strain and radial displacement: $$ {\sigma}_r=\frac{E}{\left(1+v\right)\left(1-2v\right)}\left(\left(1-v\right)\frac{du(r)}{dr}+v\frac{u(r)}{r}\right),{\sigma}_{\theta }=\frac{E}{\left(1+v\right)\left(1-2v\right)}\left(\left(1-v\right)\frac{u(r)}{r}+v\frac{du(r)}{dr}\right),{\sigma}_z=v\left({\sigma}_r+{\sigma}_{\theta}\right) $$ Substituting (4) into (2) in elastic region: $$ {r}^2\frac{d^2}{d{r}^2}u(r)+\left(1+{n}_E\right)r\left(\frac{d}{dr}u(r)\right)-\frac{1-v\left(1+{n}_E\right)}{1-v}u(r)=-\frac{\left(1+v\right)\left(1-2v\right){\rho}_0{\omega}^2}{\left(1-v\right){E}_0}{b}^{\left({n}_E- n\rho \right)}{r}^{\left(3+ n\rho -{n}_E\right)} $$ The general reformed solution of (5) is: $$ \overline{u}\left(\overline{r}\right)={C}_1{\overline{r}}^{\left(\frac{-{n}_E-k}{2}\right)}+{C}_2{\overline{r}}^{\left(\frac{-{n}_E+k}{2}\right)}-{A}_1{\overline{\omega}}^2{\overline{r}}^{\left( n\rho -{n}_E+3\right)} $$ A solution of (6) is simplified and taken into a non-dimensional form to be independent of material properties. Non-dimensional quantities are presented: $$ {\displaystyle \begin{array}{l}\overline{r}=r/b,\overline{h}=a/b,\overline{u}=u/b,{\overline{\omega}}^2={\rho}_0{\omega}^2{b}^2/{E}_0\\ {}k=\frac{\sqrt{{n_E}^2-2{n_E}^2v+{v}^2{n_E}^2+4-8v+4{v}^2-4v{n}_E+4{v}^2{n}_E}}{1-v}\\ {}{A}_1=\frac{2{v}^2+v-1}{\left(4+{n}_{\rho}\right)\left({n}_{\rho }-{n}_E+2\right)v-{n_{\rho}}^2+\left({n}_E-6\right){n}_{\rho }-8+3{n}_E}\end{array}} $$ ω̅ defined as the non-dimensional rotating velocity. 
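As a cross-check of the closed-form solution (6), the boundary-value problem defined by (5) can also be integrated numerically in non-dimensional form. The following is a minimal sketch (not part of the original derivation; it assumes traction-free inner and outer surfaces, σ̄r(h̄) = σ̄r(1) = 0, which is the loading considered below, and uses illustrative parameter values):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative non-dimensional parameters (assumed values)
nu = 0.3              # Poisson's ratio
nE, nrho = 1.0, 1.0   # exponents of E(r) and rho(r)
hbar = 0.5            # radius ratio a/b
wbar2 = 1.0           # non-dimensional angular velocity squared

def rhs(rbar, y):
    # y[0] = u_bar, y[1] = du_bar/dr_bar; equation (5) written as a first-order system
    u, du = y
    c = (1.0 - nu * (1.0 + nE)) / (1.0 - nu)
    load = -((1.0 + nu) * (1.0 - 2.0 * nu) / (1.0 - nu)) * wbar2 * rbar ** (3.0 + nrho - nE)
    d2u = (load - (1.0 + nE) * rbar * du + c * u) / rbar ** 2
    return np.vstack([du, d2u])

def bc(ya, yb):
    # Traction-free surfaces: (1 - nu) du/dr + nu u/r = 0 at r_bar = h_bar and r_bar = 1
    return np.array([(1.0 - nu) * ya[1] + nu * ya[0] / hbar,
                     (1.0 - nu) * yb[1] + nu * yb[0]])

rbar = np.linspace(hbar, 1.0, 200)
sol = solve_bvp(rhs, bc, rbar, np.zeros((2, rbar.size)))

u, du = sol.sol(rbar)
pref = rbar ** nE / ((1.0 + nu) * (1.0 - 2.0 * nu))
sigma_r = pref * ((1.0 - nu) * du + nu * u / rbar)       # radial stress, equation (4)
sigma_t = pref * ((1.0 - nu) * u / rbar + nu * du)       # hoop stress, equation (4)
print("sigma_r at the surfaces (should be ~0):", sigma_r[0], sigma_r[-1])
print("max(sigma_theta - sigma_r):", (sigma_t - sigma_r).max())
```

The resulting hoop-minus-radial stress difference is the quantity that enters Tresca's yield criterion in the following derivation.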
Substituting radial displacement into (5): $$ {\displaystyle \begin{array}{l}{\overline{\sigma}}_r=\frac{{\overline{r}}^{n_E}}{\left(1+v\right)\left(1-2v\right)}\left\{\begin{array}{l}v\left({C}_1{\overline{r}}^{m_1}+{C}_2{\overline{r}}^{m_2}-{A}_1{\overline{\omega}}^2{\overline{r}}^{\left(2+{n}_{\rho }-{n}_E\right)}\right)+\\ {}\left(1-v\right)\left(\begin{array}{l}{C}_1{\overline{r}}^{m_1}\frac{\left(-{n}_E-k\right)}{2}+{C}_2{\overline{r}}^{m_2}\frac{\left(-{n}_E+k\right)}{2}\\ {}-{A}_1{\overline{\omega}}^2\left(3+{n}_{\rho }-{n}_E\right)\overline{r}\left(2+{n}_{\rho }-{n}_E\right)\end{array}\right)\end{array}\right\}\\ {}{\overline{\sigma}}_{\theta }=\frac{{\overline{r}}^{n_E}}{\left(1+v\right)\left(1-2v\right)}\left\{\begin{array}{l}\left(1-v\right)\left({C}_1{\overline{r}}^{m_1}+{C}_2{\overline{r}}^{m_2}-{A}_1{\overline{\omega}}^2{\overline{r}}^{\left(2+{n}_{\rho }-{n}_E\right)}\right)+\\ {}v\left(\begin{array}{l}{C}_1{\overline{r}}^{m_1}\frac{\left(-{n}_E-k\right)}{2}+{C}_2{\overline{r}}^{m_2}\frac{\left(-{n}_E+k\right)}{2}\\ {}-{A}_1{\overline{\omega}}^2\left(3+{n}_{\rho }-{n}_E\right){\overline{r}}^{\left(2+{n}_{\rho }-{n}_E\right)}\end{array}\right)\end{array}\right\}\\ {}{\overline{\sigma}}_z=\frac{v\times {\overline{r}}^{n_E}}{\left(1+v\right)\left(1-2v\right)}\left\{{C}_1{\overline{r}}^{m_1}{m}_3+{C}_2{\overline{r}}^{m_2}{m}_4-{A}_1{\overline{\omega}}^2{\overline{r}}^{\left(2+{n}_{\rho }-{n}_E\right)}\left(4+{n}_{\rho }-{n}_E\right)\right\}\end{array}} $$ σ̅i stands for non-dimensional stress which is defined in the form σ̅i = σi/E0. The constants used in (8) are: $$ {m}_1=\frac{-{n}_E-k-2}{2},{m}_2=\frac{-{n}_E+k-2}{2},{m}_3=\frac{-{n}_E-k+2}{2},{m}_4=\frac{-{n}_E+k+2}{2} $$ To obtain C1 and C2 in radial displacement (7), two boundary conditions are needed. Since no pressure is applied to the inner and outer surfaces of the rotor, the boundary conditions are considered as σ̅Y(r̅=h̅) = 0 & σ̅Y(r̅=1) = 0. Constants C1 and C2 are: $$ {C}_1=-2{A}_1{A}_2{\overline{\omega}}^2\frac{\left[{\overline{h}}^{n_p+4}-{\overline{h}}^{-{m}_1}\right]}{\left({\overline{h}}^{-{m}_1}-{\overline{h}}^{-{m}_2}\right)},{C}_2=-2{A}_1{A}_3{\overline{\omega}}^2\frac{\left[{\overline{h}}^{n_{\rho }+4}-{\overline{h}}^{-{m}_2}\right]}{\left({\overline{h}}^{-{m}_1}-{\overline{h}}^{-{m}_2}\right)} $$ For A2 and A3: $$ {A}_2=\frac{3-v{n}_{\rho }+v{n}_E-2v+{n}_{\rho }-{n}_E}{-{n}_E-k+v{n}_E+ vk+2v},{A}_3=\frac{3-v{n}_{\rho }+v{n}_E-2v+{n}_{\rho }-{n}_E}{-2v-v{n}_E-k+{n}_E+ vk} $$ Tresca's criterion is considered to determine yield condition and allowed the angular velocity of the shaft. As the results show, yielding is a function of exponent parameters of material variables (nE, nρ, nσ). In this paper, the results are discussed on the equality of the exponents of material variables. According to Fig. 3 for the state of equal exponents in the range of − 2 ≤ ni ≤ 2 and 0.5 ≤ h̅ ≤ 1, the stress components have the order of σθ ≥ σz ≥ σr. The yield criterion is in the form of σθ-σr = σY. Rearranging into a non-dimensional form gives: $$ {\overline{\sigma}}_{\theta }-{\overline{\sigma}}_{\overline{r}}=\frac{\sigma_0}{E_0}{\overline{r}}^{n_{\sigma }}={\sigma}_0{\overline{r}}^{n_{\sigma }}\Rightarrow {\overline{\sigma}}_{\mathrm{Tresca}}=\frac{\left({\overline{\sigma}}_{\theta }-{\overline{\sigma}}_{\overline{r}}\right)}{{\overline{\sigma}}_0}{-}_{\overline{r}}{n}_{\sigma } $$ By substituting hoop and radial stresses in the yield's criterion following equation formed. 
$$ {\overline{\sigma}}_{\mathrm{Tresca}}=\frac{{\overline{\omega}}^2{A}_1{\overline{r}}^{n_E}}{\left(1+v\right){\overline{\sigma}}_0}\left\{\frac{2\left[{m}_1{A}_2\left({\overline{h}}^{4+{n}_{\rho }}-{\overline{h}}^{-{m}_1}\right){\overline{r}}^{m_1}+{m}_2{A}_3\left({\overline{h}}^{4+{n}_{\rho }}-{\overline{h}}^{-{m}_2}\right){\overline{r}}^{m_2}\right]}{{\overline{h}}^{-{m}_1}-{\overline{h}}^{-{m}_2}}+\left(3+{n}_{\rho }-{n}_E\right){\overline{r}}^{\left(2-{n}_E+{n}_{\rho}\right)}\right\}-{\overline{r}}^{n_{\sigma }} $$ Yielding occurs when the above equation equals zero for the corresponding load parameters, namely the angular velocity ω̅, the yield stress σ0, and the modulus of elasticity E. These parameters are rearranged and defined together as the non-dimensional loading parameter (NLP): $$ NLP=\frac{\overline{\omega}}{\sqrt{{\overline{\sigma}}_0}} $$ The NLP at which σ̅Tresca = 0 is called the maximum angular velocity of the shaft. As shown in Fig. 4, considering a variable modulus of elasticity, density, and yield stress may cause yielding to start from the inner surface, the outer surface, or simultaneously from both surfaces of the shaft. The plastic region grows in the radial direction as the angular velocity of the shaft increases, raising the ratio of the plastic to the elastic region. Hence, the effect of the radius ratio on the stress ordering, and subsequently on the renewed initiation of yielding in the remaining elastic region, must be determined. The equilibrium equation of the rotating tube (2) is independent of the elastic-plastic behavior of the material. Using Tresca's yield criterion gives the radial stress as follows: $$ {\sigma}_r=\left({\sigma}_0/n\right){\overline{r}}^n-{\rho}_0{\overline{r}}^m\left[{r}^2{\omega}^2/\left(m+2\right)\right]+{C}_3 $$ To cover the different conditions, it is supposed that yielding can initiate from both surfaces of the tube. For yielding initiation from the inner radius of the shaft, the boundary condition used to determine C3 is σ̅r(r̅=h̅) = 0. Substituting C3 into the radial stress and using the stress relations gives: $$ {\displaystyle \begin{array}{l}{\overline{\sigma}}_{\overline{r}}=\frac{{\overline{\sigma}}_0}{n_{\sigma }}\left({\overline{r}}^{n_{\sigma }}-{\overline{h}}^{n_{\sigma }}\right)+\frac{{\overline{\omega}}^2}{n_{\rho }+2}\left({\overline{h}}^{n_{\rho }+2}-{\overline{r}}^{n_{\rho }+2}\right)\\ {}{\overline{\sigma}}_{\theta }=\frac{{\overline{\sigma}}_0}{n_{\sigma }}\left({\overline{r}}^{n_{\sigma }}\left(1+{n}_{\sigma}\right)-{\overline{h}}^{n_{\sigma }}\right)+\frac{{\overline{\omega}}^2}{n_{\rho }+2}\left({\overline{h}}^{n_{\rho }+2}-{\overline{r}}^{n_{\rho }+2}\right)\\ {}{\overline{\sigma}}_z=v\left(\frac{{\overline{\sigma}}_0}{n_{\sigma }}\left(-2{\overline{h}}^{n_{\sigma }}+{\overline{r}}^{n_{\sigma }}\left(2+{n}_{\sigma}\right)\right)+\frac{2{\overline{\omega}}^2}{n_{\rho }+2}\left({\overline{h}}^{n_{\rho }+2}-{\overline{r}}^{n_{\rho }+2}\right)\right)\end{array}} $$ Constants C1 and C2 of the elastic region are obtained using the continuity of stress between the regions (σ̅r(r̅ = r̅ep)elastic = σ̅r(r̅ = r̅ep)plastic, σ̅r(r̅ = 1)elastic = 0, where r̅ep is the elastic-plastic border of the shaft). In the case of yielding initiation from the outer radius of the shaft, the following boundary condition is used to determine C3: σ̅r(r̅ = 1) = 0.
The plastic stresses are: $$ {\displaystyle \begin{array}{l}{\overline{\sigma}}_r=\frac{{\overline{\sigma}}_0}{n_{\overline{\sigma}}}\left[{\overline{r}}^{n_{\sigma }}-1\right]+\frac{{\overline{\omega}}^2}{n_{\rho }+2}\left[1-{\overline{r}}^{n_{\rho +2}}\right]\\ {}\left({n}_i\to 0\right)\Rightarrow {\overline{\sigma}}_r=\left(\frac{{\overline{\omega}}^2}{2}\right)\left(1-{\overline{r}}^2\right)+{\overline{\sigma}}_0\ln \left(r/a\right)\\ {}{\overline{\sigma}}_{\theta }=\frac{{\overline{\sigma}}_0}{n_{\sigma }}\left({\overline{r}}^{n_{\sigma }}\left(1+{n}_{\sigma}\right)-1\right)+\frac{{\overline{\omega}}^2}{n_{\rho }+2}\left(1-{\overline{r}}^{n_{\rho }+2}\right)\\ {}\left({n}_i\to 0\right)\Rightarrow {\overline{\sigma}}_{\theta }=\frac{{\overline{\omega}}^2}{2}\left(1-{\overline{r}}^2\right)+{\overline{\sigma}}_0\ln \left(r/a\right)+{\overline{\sigma}}_0\\ {}{\overline{\sigma}}_z=v\left(\frac{{\overline{\sigma}}_0}{n_{\sigma }}\left({\overline{r}}^{n_{\sigma }}\left(2+{n}_{\sigma}\right)-2\right)+\left[\frac{2{\overline{\omega}}^2}{n_{\rho }+2}\right]\left(1-{\overline{r}}^{n_{\rho +2}}\right)\right)\\ {}\left({n}_i\to 0\right)\Rightarrow {\overline{\sigma}}_z=v\left({\overline{\omega}}^2\left({1}^2-{\overline{r}}^2\right)+2{\overline{\sigma}}_0\ln \left(r/a\right)\right)\end{array}} $$ In the state of yielding initiation simultaneously from inner and outer radii of the shaft, the constants C1 and C2 related to radial elastic displacement and C3 and C4 related to radial plastic displacement and also rep1 and rep2 should be obtained simultaneously. Six equations are needed: $$ {\displaystyle \begin{array}{l}{\overline{\sigma}}_{\overline{r}}{\left|{}^{elastic}\left(\overline{r}={\overline{r}}_{ep1}\right)={\overline{\sigma}}_{\overline{r}}\right|}^{elastic}\left(\overline{r}={\overline{r}}_{ep1}\right)\\ {}\overline{u}{\left|{}^{elastic}\left(\overline{r}={\overline{r}}_{ep1}\right)=\overline{u}\right|}^{plastic}\left(\overline{r}={\overline{r}}_{ep1}\right)\\ {}\overline{u}{\left|{}^{elastic}\left(\overline{r}={\overline{r}}_{ep1}\right)=\overline{u}\right|}^{plastic}\left(\overline{r}={\overline{r}}_{ep1}\right)\\ {}{\overline{\sigma}}_{\overline{r}}{\left|{}^{elastic}\left(\overline{r}={\overline{r}}_{ep2}\right)={\overline{\sigma}}_{\overline{r}}\right|}^{elastic}\left(\overline{r}={\overline{r}}_{ep2}\right)\\ {}\overline{u}{\left|{}^{elastic}\left(\overline{r}={\overline{r}}_{ep2}\right)=\overline{u}\right|}^{plastic}\left(\overline{r}={\overline{r}}_{ep2}\right)\\ {}{\overline{\sigma}}_{\theta }{\left|{}^{elastic}\left(\overline{r}={\overline{r}}_{ep2}\right)-{\overline{\sigma}}_{\overline{r}}\right|}^{elastic}\left(\overline{r}={\overline{r}}_{ep2}\right)={\overline{\sigma}}_Y\end{array}} $$ Associated flow rule for this state of stress order (Akis & Eraslan, 2007) is εθp = -εrp and εzp = 0. Superscripts e and p refer to elastic and plastic states (Fig. 2). $$ {\displaystyle \begin{array}{l}{\varepsilon}^T={\varepsilon}^p+{\varepsilon}^e\\ {}{\varepsilon}^p={\varepsilon}_r^p+{\varepsilon}_{\theta}^p+{\varepsilon}_z^p\\ {}{\varepsilon}^e={\varepsilon}_r^e+{\varepsilon}_{\theta}^p+{\varepsilon}_z^e\end{array}} $$ Plastic deformations initiate from (a) the internal surface, (b) the outer surface of the rotor, and (c) both The associated flow rule expresses that the total plastic strain equals zero (εp = 0). 
Hence, total elastic and plastic strains are as follows: $$ {\varepsilon}^T={\varepsilon}^p+{\varepsilon}^e={\varepsilon}_r^e+{\varepsilon}_{\theta}^e=\frac{d\overline{u}}{d\overline{r}}+\frac{\overline{u}}{\overline{r}} $$ By knowing general stress-strain relations and using Hook's general law and Tresca's yield criterion, the stress-displacement equation becomes: $$ {\displaystyle \begin{array}{l}{\varepsilon}_{ij}=\frac{1}{E}\left({\sigma}_{ij}-v\left({\sigma}_{kk}-{\sigma}_{ij}\right)\right);\sigma ij=\frac{\partial u}{\partial {\varepsilon}_{ij}};i,j,k=x,y,z\\ {}\frac{d\overline{u}}{d\overline{r}}+\frac{\overline{u}}{\overline{r}}=\frac{1}{{\overline{r}}^{n_E}}\left(\left(1-v-{v}^2\right)\left(2{\overline{\sigma}}_r+{\overline{\sigma}}_Y\right)\right)\end{array}} $$ Substituting obtained plastic stresses into the above relation and rearranging it, we have: $$ \frac{d\overline{u}}{d\overline{r}}+\frac{\overline{u}}{\overline{r}}=\frac{2\left(1-v-2{v}^2\right)}{{\overline{r}}^{n_E}}\left\{\begin{array}{l}\frac{{\overline{\sigma}}_0}{n_{\sigma }}\left(\left(\frac{n_{\sigma }}{2}+1\right){\overline{r}}^{n_{\sigma }}-{\overline{h}}^{n_{\sigma }}\right)\\ {}+\frac{{\overline{\omega}}^2}{n_{\rho }+2}\left({\overline{h}}^{n_{\rho }+2}-{\overline{r}}^{n_{\rho }+2}\right)\end{array}\right\} $$ A non-dimensional solution of the above equation gives plastic radial displacement as follows: $$ \frac{u(r)}{r}=\frac{1-v-2{v}^2}{n_{\rho }+2}\times \left\{\begin{array}{l}\frac{-2{\overline{r}}^{-{n}_E}}{n_{\sigma}\left(-2+{n}_E\right)}\left(\begin{array}{l}{\sigma}_0{\overline{h}}^{n_{\sigma }}\left({n}_{\rho }+2\right)\\ {}-{\overline{\omega}}^2{n}_{\sigma }{\overline{h}}^{n_{\rho }+2}\end{array}\right)-\frac{2{\overline{\omega}}^2{\overline{r}}^{-{n}_E+{n}_{\rho }+2}}{-4+{n}_E-{n}_{\rho }}\\ {}+\frac{{\overline{\sigma}}_0\left({n}_{\sigma }{n}_{\rho }+2{n}_{\sigma }+2{n}_{\rho }+4\right){\overline{r}}^{-{n}_E+{n}_{\sigma }}}{n_{\sigma}\left(-2+{n}_E-{n}_{\sigma}\right)}\end{array}\right\}+\frac{C_4}{r^2} $$ To obtain C4, the continuity condition of radial displacement through the elastic and plastic border is considered. $$ u{\left|{}^{elastic}\left(\overline{r}={\overline{r}}_{ep}\right)=u\right|}^{plastic}\left(\overline{r}={\overline{r}}_{ep}\right) $$ Elastic results Verification has been done by comparing results with articles discussed on homogenous materials and prior FGM articles which are shown on subsequent plots and considering modulus of elasticity as the only variable property of the material. nE = nρ = nσ = 0 (the homogenous material condition) for the limit of (21) and ni = 0 creates: $$ \varOmega =\overline{\omega}/\sqrt{{\overline{\sigma}}_0}=2\;\left(\overline{h}\right)\;\sqrt{\left(1-v\right)/\left(1-2v\right)+\left(3-2v\right)\;{\left(\overline{h}\right)}^2} $$ The above equation is the maximum allowed angular velocity in a homogeneous tube (Nino et al., 1987). The results are discussed for − 2 ≤ ni ≤ 2 and h̅ = 0.5 and h̅ = 0.55 for ν = 0.3 (Akis & Eraslan, 2007; Dai et al., 2006). To form Tresca's yield criterion, the stress collocation must be determined. Hoop, radial, and longitude stresses are plotted for different h̅ = a/b ratios and different exponent parameters for both constant and variable density (Fig. 3). Effect of density and radius ratio on stress collocation for ni = 2 (a, b) and ni = − 2 (c, d) When ni = 2, the Hoop and longitude stress rising up for higher r̅ = r/b, but radial stresses have a peak in the midrange of r̅. 
Hoop and axial (longitudinal) stresses take higher dimensionless values for constant density than for variable density. Another observation is that for h̅ > 0.9 all of the studied stresses remain nearly constant over r̅. On the other hand, for ni = − 2, the hoop and axial stresses decrease with r̅, but there is no significant change in the trend of the radial stress. The effects of constant and variable density on the values of the dimensionless components differ, and the higher values belong to the variable-density case. These effects are identified in this study for the first time. Figure 3 reveals that the hoop stress is the maximum and the radial stress is the minimum stress component for − 2 ≤ ni ≤ 2 and 0.5 ≤ h̅ ≤ 1; hence, Tresca's yield criterion takes the form defined before. Also, higher h̅ ratios make the results more linear. In previous articles on the elastic-plastic behavior of FG rotating tubes, the modulus of elasticity was the only variable material property (Akis & Eraslan, 2007; Tsiatas & Babouskos, 2017). The circumferential stress has a smaller value when variable density is considered. This trend is similar throughout a/b = 0.5 to 1 for ni = 2, but the results are reversed for ni = − 2. This phenomenon can be explained by considering the power ni: the sign of the power determines the material distribution through the rotor wall thickness. For instance, when the power-law exponent is positive, the material located at the outer radius of the rotor has the greater density, so a negative value of the power reverses the material distribution, which results in a different stress ordering. Plastic region To validate the model for plastic deformations, a material with a variable modulus of elasticity (nE = 1.3826, nρ = nσ = 0) is considered for comparison with (Akis & Eraslan, 2007). Figure 4a and b show that the results are the same for different NLP and rep/b values. As Fig. 4a and b show, considering a variable modulus of elasticity with constant density and yield stress limit may cause yielding to initiate from the inner and outer radii of the shaft simultaneously. High angular velocities create high centrifugal forces that produce plastic deformations in the rotor. As shown in Fig. 4c, considering a variable modulus of elasticity, density, and yield stress limit with equal exponential rates causes yielding from the inner radius of the shaft (nE = nρ = nσ = ni and − 2 ≤ ni ≤ 2). Also, the homogeneous behavior of the rotating shaft is recovered for nE = nρ = nσ = 0. As depicted in Fig. 4c, greater plastic growth occurs for higher NLPs and higher rotational speeds, as expected. NLP plotted against the non-dimensional elasto-plastic boundary radius for (a) different ni values, (b, c) nE = 1.3826, nρ = nσ, comparing with the results of Akis (Duc, 2016b) for different moduli of elasticity Plastic growth through the radial coordinate with increasing angular velocity of the shaft is shown in Fig. 5 for different exponent parameters. The maximum elastic and fully plastic velocities are also shown. In both cases, increasing ni reduces the non-dimensional loading parameter, which causes the rotor to yield at lower speeds. This happens because the average material properties decrease at higher exponential rates of material change. Neglecting the variation of yield stress and density introduces a considerable error not only in the calculation of the non-dimensional loading parameter but also in the determination of the yielding initiation point. Ωfp and Ωy versus the loading parameter exponent nj (h̅ = 0.5) In Fig. 6, the elastic-plastic stresses are plotted for the state of plastic growth.
To compare the obtained results, the plastic radial displacement is plotted for two conditions: considering a variable modulus of elasticity as the only variable material property, as in Eraslan and Akis (Akis & Eraslan, 2007) (Fig. 6a), and considering a variable modulus of elasticity, density, and yield stress limit, as discussed above (Fig. 6b–d). Radial and hoop strains are plotted to verify and compare the results. The plotted results are similar to the results of reference (Akis & Eraslan, 2007). According to Fig. 6, considering variable density and yield stress changes the elastic-plastic radial displacement significantly. Similar to Fig. 3, the effects of the exponent rate are presented in Fig. 6c and d. Stress distribution for (a) nE = 1.3826, np = no = 0, yielding from inside and outside (h̅ = 0.55, Ω = 1.307); (b) nE = np = no = 1.3826, h̅ = 0.55 and r͞ep = 0.65 (Ω = 1.1999), yielding from inside; (c) nE = np = no = − 2; and (d) nE = np = no = 2 Plastic strains are investigated for the case of constant yield stress and density (Fig. 7a) and for equal exponent rates for density, yield stress, and elastic modulus (Fig. 7b). The results are quite different in the two cases for the equivalent non-dimensional loading parameter. The non-dimensional loading parameter (Ω) is calculated as 1.307 according to Fig. 5 for fully plastic behavior (Fig. 7a) and 1.14 for a yield-initiated case (Fig. 7b). The comparison between the two cases again reveals that yielding initiates from the inside if density, yield stress, and elastic modulus are all variable. For a better comparison, the radial displacements for the two cases are also presented (Fig. 7c and d). Stress distribution for (a) nE = 1.3826, np = no = 0, yielding from inside and outside (h̅ = 0.55, Ω = 1.307); (b) nE = np = no = 1.3826, h̅ = 0.55 and r͞ep = 0.65 (Ω = 1.19999), yielding from inside; (c) nE = np = no = − 2; and (d) nE = np = no = 2 Finally, to compare the different cases, the effect of variable density on the radial displacement for a/b = 0.5 is graphed in Fig. 8. It shows that, for the same elastic modulus and yield stress exponents, a lower density exponent yields a lower rotor displacement, which is expected based on the authors' experience in gas turbine rotor design and maintenance. Effect of variable density on radial displacement for a/b = 0.5 In the present article, the elastic-plastic behavior of a rotating shaft made of FGM under high centrifugal forces is investigated for the first time. The modulus of elasticity, density, and yield stress are assumed to follow power-law functions of the radial coordinate, and all parameters are included in an analytical model, which is an improvement over previous studies. The analytical equations were derived based on different studies, and non-dimensional parameters were defined to create comprehensive and comparable outcomes. The results are compared and validated against homogeneous materials and previously published articles that considered the modulus of elasticity as the only variable material property (Akis & Eraslan, 2007). According to the presented research, the shaft's deformations and strength depend strongly on the definition of the material properties. The results show that neglecting the variation of density and yield stress causes a considerable difference in stress and strain, and the yielding initiation may change from the inner surface of the shaft to its outer surface. It is essential to take great care in determining the material properties of high-speed components such as hollow shafts to prevent design flaws in such sensitive parts of the machine.
Due to the experience of the authors in gas turbine design industries, there is a need to have some robust formulas to check the yield point of hollow shafts during turbine maintenance. Different Rolls Royce gas turbine series such as Trent and AVON, Siemens SGT 800, and many other midsized turbines are using hollow shafts in their compressor and turbine parts. This model will help to control the yield start point for a hollow shaft measured during maintenance. This paper will pave a reliable way to design many high-speed rotary components. Although this research is about FGM materials, it could be outstretched for orthogonal and non-isotropic materials as well. The results will help designers to get a better perception of hollow shafts possible weaknesses and failures to design more efficient rotary machines. Data are available by the request. FGM: Functionally graded materials Akis, T. (2009). Elastoplastic analysis of FG spherical pressure vessels. Computational Materials Science, 46, 545–554. https://doi.org/10.1016/j.commatsci.2009.04.017. Akis, T., & Eraslan, A. (2007). Exact solution of rotating FGM shaft problem in the elastoplastic state of stress. Archive of Applied Mechanics, 77, 745–765. https://doi.org/10.1007/s00419-007-0123-3. Article MATH Google Scholar AnsariSadrabadi, S., Rahimi, G., Citarella, R., ShahbaziKarami, J., Sepe, R., & Esposito, R. (2017). Analytical solutions for yield onset achievement in FGM thick walled cylindrical tubes undergoing thermomechanical loads. Composites Part B: Engineering, 116, 211–223. https://doi.org/10.1016/j.compositesb.2017.02.023. Bahaadini, R., & Saidi, A. (2018). Stability analysis of thin-walled spinning reinforced pipes conveying fluid in thermal environment. European Journal of Mechanics - A/Solids, 72, 298–309. https://doi.org/10.1016/j.euromechsol.2018.05.015. Bahaloo, H., Papadopolus, J., & Ghosha, R. (2016). Transverse vibration and stability of an FG rotating annular disk with a circumferential crack. International Journal of Mechanical Sciences, 113, 26–35. https://doi.org/10.1016/j.ijmecsci.2016.03.004. Bose, T., & Rattan, M. (2018). Effect of thermal gradation on steady state creep of functionally graded rotating disc. European Journal of Mechanics - A/Solids, 67, 169–176. https://doi.org/10.1016/j.euromechsol.2017.09.014. Bouderba, B., Houari, M., Tounsi, A., & Mahmoud, S. (2016). Thermal stability of FG sandwich plates using a simple shear deformation theory. Structural Engineering & Mechanics, 58-3, 397–422. https://doi.org/10.12989/sem.2016.58.3.397. Burzyński, S., Chróścielewski, J., Daszkiewicz, K., & Witkowski, W. (2018). Elastoplastic nonlinear FEM analysis of FGM shells of Cosserat type. Composites Part B: Engineering, 154, 478–491. https://doi.org/10.1016/j.compositesb.2018.07.055. J.Chakrabarty, "Theory of plasticity", 3rd ed. Elsevier Butterworth-Heinemann, 2006. Dai, H., Fu, Y., & Dong, Z. (2006). Exact solutions for functionally graded pressure vessels in a uniform magnetic field. International Journal of Solids and Structures, 43, 5570–5580. https://doi.org/10.1016/j.ijsolstr.2005.08.019. Dai, T., & Dai, H. L. (2017). Analysis of a rotating FGMEE circular disk with variable thickness under thermal environment. Applied Mathematical Modelling, 45, 900–924. https://doi.org/10.1016/j.apm.2017.01.007. Duc, N. D. (2013). Nonlinear dynamic response of imperfect eccentrically stiffened FGM double curved shollow shells on elastic foundation. Journal of Composite Structures, 102, 306–314. 
https://doi.org/10.1016/j.compstruct.2012.11.017. Duc, N. D. (2016a). Nonlinear thermal dynamic analysis of eccentrically stiffened S-FGM circular cylindrical shells surrounded on elastic foundations using the Reddy's third-order shear deformation shell theory. Journal of European Journal of Mechanics - A/Solids, 58, 10–30. https://doi.org/10.1016/j.euromechsol.2016.01.004. Duc, N. D. (2016b). Nonlinear thermo-electro-mechanical dynamic response of shear deformable piezoelectric Sigmoid functionally graded sandwich circular cylindrical shells on elastic foundations. Journal of Sandwich Structures and Materials, 20-3, 351–378. https://doi.org/10.1177/1099636216653266. Duc, N. D., Bich, D. H., & Cong, P. H. (2016). Nonlinear thermal dynamic response of shear deformable FGM plates on elastic foundations. Journal of Thermal Stresses, 39-3, 278–297. https://doi.org/10.1080/01495739.2015.1125194. Duc, N. D., & Cong, P. H. (2018). Nonlinear dynamic response and vibration of sandwich composite plates with negative Poisson's ratio in auxetic honeycombs. Journal of Sandwich Structures and Materials, 20-6, 692–717. https://doi.org/10.1177/1099636216674729. Duc, N. D., Cong, P. H., Anh, V. M., Quang, V. D., Phuong, T., Tuan, N. D., & Thinh, N. H. (2015). Mechanical and thermal stability of eccentrically stiffened functionally graded conical shell panels resting on elastic foundations and in thermal environment. Journal of Composite Structures, 132, 597–609. https://doi.org/10.1016/j.compstruct.2015.05.072. Duc, N. D., Homayoun, H., Quan, T. Q., & Khoa, N. D. (2019). Free vibration and nonlinear dynamic response of imperfect nanocomposite FG-CNTRC double curved shollow shells in thermal environment. European Journal of Mechanics - A/Solids, 75, 355–366. https://doi.org/10.1016/j.euromechsol.2019.01.024. Duc, N. D., Khoa, N. D., & Thiem, H. T. (2018). Nonlinear thermo-mechanical response of eccentrically stiffened Sigmoid FGM circular cylindrical shells subjected to compressive and uniform radial loads using the Reddy's third-order shear deformation shell theory. Journal of Mechanics of Advanced Materials and Structures, 25-13, 1157–1167. https://doi.org/10.1080/15376494.2017.1341581. Duc, N. D., Kim, S. E., & Chan, D. Q. (2018). Thermal buckling analysis of FGM sandwich truncated conical shells reinforced by FGM stiffeners resting on elastic foundations using FSDT. Journal of Thermal Stresses, 41-3, 331–365. https://doi.org/10.1080/01495739.2017.1398623. Duc, N. D., Lee, J., Nguyen-Thoi, T., & Thang, P. T. (2017). Static response and free vibration of functionally graded carbon nanotube-reinforced composite rectangular plates resting on Winkler-Pasternak elastic foundations. Journal of Aerospace Science and Technology, 68, 391–402. https://doi.org/10.1016/j.ast.2017.05.032. Duc, N. D., Nguyen, P. D., & Khoa, N. D. (2017). Nonlinear dynamic analysis and vibration of eccentrically stiffened S-FGM elliptical cylindrical shells surrounded on elastic foundations in thermal environments. Journal of Thin Walled Structures, 117, 178–189. https://doi.org/10.1016/j.tws.2017.04.013. Duc, N. D., Thang, P., Dao, N., & Vantac, N. (2015). Nonlinear buckling of higher deformable S-FGM thick circular cylindrical shells with metal–ceramic–metal layers surrounded on elastic foundations in thermal environment. Composite Structure, 121, 134–141. https://doi.org/10.1016/j.compstruct.2014.11.009. Duc, N. D., Thuy Anh, V. T., & Cong, P. H. (2014). 
Nonlinear axisymmetric response of FGM shollow spherical shells on elastic foundations under uniform external pressure and temperature. Journal of European Journal of Mechanics - A/Solids, 45, 80–89. https://doi.org/10.1016/j.euromechsol.2013.11.008. ND Duc, N.Tuan, P.Tran, P.Cong, P.Nguyen, "Nonlinear stability of eccentrically stiffened S-FGM elliptical cylindrical shells in thermal environment", Thin-Walled Structures, 108(2016) p.p.280-290 https://doi.org/10.1016/j.tws.2016.08.025. El-Haina, F., Bakora, A., Bousahla, A., Tounsi, A., & Mahmoud, S. (2017). A simple analytical approach for thermal buckling of thick functionally graded sandwich plates. Structural Engineering Mechanics, 63-5, 585–595. https://doi.org/10.12989/sem.2017.63.5.585. Eraslan, A., & Akis, T. (2006a). Plane strain analytical solutions for a functionally graded elastic–plastic pressurized tube. International Journal of Pressure Vessels and Piping, 83, 635–644. https://doi.org/10.1016/j.ijpvp.2006.07.003. Eraslan, A., & Akis, T. (2006b). On the plane strain and plane stress solutions of functionally graded rotating solid shaft and solid disk problems. Acta Mechanica, 181, 43–63. https://doi.org/10.1007/s00707-005-0276-5. Eraslan, A., & Akis, T. (2006c). The stress response of partially plastic rotating FGM hollow shafts: Analytical treatment for axially constrained ends. Mechanics Based Design of Structures and Machines, 34-3, 241–260. https://doi.org/10.1080/15397730600779285. F.Figueiredo, L.Borges, F.Rochinha, "Elastoplastic stress analysis of thick-walled FGM pipes" AIP Conference Proceedings(2008) p.p. 147-52. https://doi.org/10.1063/1.2896766 Fukui, Y., & Yamanaka, N. (1991). Elastic analysis for thick-walled tubes of functionally graded material subjected to internal pressure. JSME International Journal, 35-4, 379–385. https://doi.org/10.1299/jsmea1988.35.4_379. HosseiniKordkheili, S., & Naghdabadi, R. (2006). Thermoelastic analysis of a functionally graded rotating disk. Composite Structure, 79-4, 508–516. https://doi.org/10.1016/j.compstruct.2006.02.010. Jabbari, M., Sohrabpour, S., & Eslami, M. (2002). Mechanical and thermal stresses in a functionally graded hollow cylinder due to radially symmetric loads. International Journal of Pressure Vessels & Piping, 79-7, 493–497. https://doi.org/10.1016/S0308-0161(02)00043-1. Kargarnovin, M., Faghidian, S., & Arghavani, J. (2007). Limit analysis of FGM circular plates subjected to arbitrary rotational symmetric loads. World Academy of Science, Engineering and Technology, 36. https://doi.org/10.5281/zenodo.1332230. Kaviprakash, G., Kannan, C., Lawrence, I., & Regan, A. (2014). Design and analysis of composite drive shaft for automotive application. International Journal of Engineering Research & Technology, 3, 429–436. Khanna, K., Gupta, V., & Nigam, S. (2017). Creep analysis in functionally graded rotating disc using Tresca criterion and comparison with von-Mises criterion. Materials Today Proceedings, 4-2-A, 2431–2438. https://doi.org/10.1016/j.matpr.2017.02.094. Khoa, N. D., Thiem, H. T., Thiem, o. T., & Duc, N. D. (2019). Nonlinear buckling and postbuckling of imperfect piezoelectric S-FGM circular cylindrical shells with metal-ceramic-metal layers in thermal environment using Reddy's third-order shear deformation shell theory. Journal Mechanics of Advanced Materials and Structures, 26-3, 248–259. https://doi.org/10.1080/15376494.2017.1341583. Klocke, F., Klink, A., & Veselovac, D. (2014). 
Turbomachinery component manufacture by application of electrochemical, electro-physical and photonic processes. CIRP Annals, 63-2, 703–726 https://doi.org/10.1016/j.cirp.2014.05.004. Lal, A., Jagtap, K., & Singh, B. (2013). Post buckling response of FGM plate subjected to mechanical and thermal loadings with random material properties. Applied Mathematical Modelling, 37-5, 2900–2920 https://doi.org/10.1016/j.apm.2012.06.013. Lee, D., Kim, H., Kim, J., & Kim, J. (2004). Design and manufacture of an automotive hybrid aluminum composite drive shaft. Composite Structures, 63, 87–99. https://doi.org/10.1016/S0263-8223(03)00136-3. Mack, W. (1991). Rotating elastic-plastic tube with free ends. International Journal of Solids and Structures, 27, 1462–1476. https://doi.org/10.1016/0020-7683(91)90042-E. Mahamood, R., & Akinlabi, E. (2017). "Functionally graded materials", Topics in Mining. Springer, Switzerland: Metallurgy & Materials Eng. Mathew, T., Natarajan, S., & Pañeda, E. (2018). Size effects in elastic-plastic functionally graded materials. Composite Structures, 204, 43–51. https://doi.org/10.1016/j.compstruct.2018.07.048. Mendelson, A. (1968). Plasticity, theory and application. NewYork: Macmillman. Moorthy, R., Mitiku, Y., & Sridhar, K. (2013). Design of automobile driveshaft using carbon/epoxy and kevlar/epoxy composites. American Journal of Engineering Research, 2, 173–179. Nino, M., Hirai, T., & Watanabe, R. (1987). The functionally gradient materials. Journal of Japan Society of Composite Material, 13, 257–264. Peng, X., & Li, X. (2012). Elastic analysis of rotating functionally graded polar orthotropic disks. International Journal of Mechanical Sciences, 60, 84–91. https://doi.org/10.1016/j.ijmecsci.2012.04.014. Seraj, S., & Ganesan, R. (2018). Dynamic instability of rotating doubly-tapered laminated composite beams under periodic rotational speeds. Composite Structures, 200, 711–728. https://doi.org/10.1016/j.compstruct.2018.05.133. Swaminathan, K., Naveenkumar, D., Zenkour, A., & Carrera, E. (2015). Stress, vibration and buckling analyses of FGM plates—A state-of-the-art review. Composite Structures, 120, 10–31. https://doi.org/10.1016/j.compstruct.2014.09.070. Thom, D. V., Kien, N. D., Duc, N. D., Duc, D. H., & Tinh, B. Q. (2017). Analysis of bi-directional functionally graded plates by FEM and a new third-order shear deformation plate theory. Journal of Thin Walled Structures, 119, 687–699. https://doi.org/10.1016/j.tws.2017.07.022. S. P. Timoshenko and J. N. Goodier, "Theory of elasticity", 3rd edition, McGraw-Hill, NY, 1970. Torabnia, S., Hemati, M., & Aghajanib, S. (2019). Investigation of a hollow shaft to determine the maximum angular velocity regarding the FGM properties. Materials Science Forum, 969, 669–677 https://doi.org/10.4028/www.scientific.net/MSF.969.669. Tsiatas, G., & Babouskos, N. (2017). Elastic-plastic analysis of functionally graded bars under torsional loading. Composite Structures, 176, 254–267. https://doi.org/10.1016/j.compstruct.2017.05.044. Tutuncu, N., & Ozturk, M. (2001). Exact solutions for stress in functionally graded pressure vessels. Composites Part B: Engineering, 32-8, 683–686. https://doi.org/10.1016/S1359-8368(01)00041-5. Yildirim, S., & Tutuncu, N. (2018). On the inertio-elastic instability of variable-thickness functionally-graded disks. Mechanics Research Communications, 91, 1–6. https://doi.org/10.1016/j.mechrescom.2018.04.011. You, L., You, X., Zhang, J., & Li, J. (2007). On rotating circular disks with varying material properties. 
Zeitschrift für angewandte Mathematik und Physik, 58, 1068–1084. https://doi.org/10.1007/s00033-007-5094-2. You, L., Zhang, J., & You, X. (2005). Elastic analysis of internally pressurized thick-walled spherical pressure vessels of functionally graded materials. International Journal of Pressure Vessels and Piping, 82, 347–354. https://doi.org/10.1016/j.ijpvp.2004.11.001. ZamaniNejad, M., & Rahimi, G. (2010). Elastic analysis of FGM rotating cylindrical pressure vessels. Journal of the Chinese Institute of Engineers, 33-4, 525–530. https://doi.org/10.1080/02533839.2010.9671640. Zharfi, H., & EkhteraeiToussi, H. (2018). Time dependent creep analysis in thick FGM rotating disk with two-dimensional pattern of heterogeneity. International Journal of Mechanical Sciences, 140, 351–360. https://doi.org/10.1016/j.ijmecsci.2018.03.010. The paper is theoretical work of the authors. Sharif University of Technology, Azadi St., Tehran, Iran Shams Torabnia Isfahan University of Technology, Daneshgah-e Sanati Hwy, Isfahan, Iran Sepideh Aghajani & Mohammadreza Hemati Sepideh Aghajani Mohammadreza Hemati ST coordinated, validated, and formulated the data of the study. SA and MH contributed to literature review, calculations, and numerical analysis. All authors read and approved the final manuscript. Correspondence to Shams Torabnia. Torabnia, S., Aghajani, S. & Hemati, M. An analytical investigation of elastic-plastic deformation of FGM hollow rotors under a high centrifugal effect. Int J Mech Mater Eng 14, 16 (2019). https://doi.org/10.1186/s40712-019-0112-7 Functionally graded material Elastic-plastic analysis Plane strain
Estimate random effects for a new individual with a linear mixed effects model Consider repeated observations $\mathcal{Y} = (y_{i,j})_{i,j}$ obtained for $p$ individuals ($1 \leq i \leq p$), at different time points $t_{i,j}$ ($1 \leq j \leq n_i$). The "random slope and intercept" model is written as: $$ y_{i,j} = \left( \beta_0 + b_{i,0} \right) + \left( \beta_1 + b_{i,1} \right) t_{i,j} + \varepsilon_{i,j}, $$ where $\beta = \begin{bmatrix} \beta_0 & \beta_1 \end{bmatrix}^{\top}$ denotes the fixed effects of the model and $$b_i = \begin{bmatrix} b_{i,0} & b_{i,1} \end{bmatrix}^{\top} \sim \mathcal{N}\left( 0, \mathbf{D} \right), \quad b_i \perp\kern-5pt\perp \varepsilon_i,$$ denotes the random effects, with $\varepsilon_{i,j} \sim \mathcal{N}\left( 0, \sigma^2 \right)$. Let $\theta = \left( \beta, \mathrm{vech}\left( \mathbf{D} \right), \sigma^2 \right)$ denote the model parameters. Given $\mathcal{Y}$, one can obtain an estimator $\hat{\theta}$ of $\theta$ by maximizing the model likelihood (or restricted likelihood). Now, say that we have some data $\mathbf{y}_{\mathrm{new}}^{\ast} = \left( y_{\mathrm{new},1}, \ldots, y_{\mathrm{new}, n^{\ast}}\right)$ for a new individual. We want to estimate the trajectory (i.e., the straight line) of this new individual. To do that, we only need to estimate his random effects $\mathbf{b}_{\mathrm{new}}$. How do we do that? One could get $\mathbf{b}_{\mathrm{new}}$ from the posterior $p\left( \mathbf{b}_{\mathrm{new}} \mid \mathbf{y}_{\mathrm{new}}, \hat{\theta} \right)$. Unless I am mistaken, this is what D. Rizopoulos proposed in his answer to a similar question. Using Bayes' rule, we get: $$ p\left( \mathbf{b} \mid \mathbf{y}_{\mathrm{new}}, \hat{\theta} \right) \propto p\left( \mathbf{y}_{\mathrm{new}} \mid \mathbf{b}, \hat{\theta} \right) p\left( \mathbf{b} \mid \hat{\theta} \right), $$ and we could take: $$ \mathbf{b}_{\mathrm{new}} \in \mathop{\mathrm{argmax}} \limits_{\mathbf{b}} p\left( \mathbf{y}_{\mathrm{new}} \mid \mathbf{b}, \hat{\theta} \right) p\left( \mathbf{b} \mid \hat{\theta} \right), $$ which would yield, unless I am mistaken, the BLUP (Best Linear Unbiased Predictor) of this new individual's random effects. Would it make sense to estimate $\mathbf{b}_{\mathrm{new}}$ by maximizing the following instead? $$ \int p\left( \mathbf{b} \mid \mathbf{y}_{\mathrm{new}}, \theta \right) p\left( \theta \mid \mathcal{Y} \right) \, d\theta, $$ which would be $\mathbb{E}_{p\left( \theta \mid \mathcal{Y} \right)}\left[ p\left( \mathbf{b} \mid \mathbf{y}_{\mathrm{new}}, \theta \right) \right]$. I am not sure this makes sense, but I was thinking of something similar to the posterior predictive distribution. mixed-model conditional-probability prediction posterior Pouteri Hi, the posterior predictive distribution makes sense to me, but then I like Bayesian methods. I would prefer an interval to a point estimate. But I think, as stated in your question, the BLUP is a point estimate of the random effect which could be used in prediction. – Paul Hewson The difference between the two approaches is actually a difference between an empirical Bayes and a fully Bayesian approach to estimating the same thing. If you fit the mixed model using maximum likelihood, then you typically follow the first option, whereas under the fully Bayesian approach, in which you take posterior samples also for $\theta$, you would go for the second one.
I would not expect to see big differences with regard to point estimates between the two. But if you are also interested in the variance of these estimates, then the second approach accounts for the uncertainty in estimating $\theta$, whereas the first one does not. You could still use the second approach to get $\mathbf{b}_{\mathrm{new}}$ even if you fit the model with maximum likelihood, by approximating the posterior distribution of the parameters $[\theta \mid \mathcal{Y}]$ with a multivariate normal distribution whose mean is the maximum likelihood estimates (MLEs) and whose covariance matrix is the variance-covariance matrix of the MLEs. Dimitris Rizopoulos
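To make the first (empirical Bayes) computation concrete, here is a minimal sketch in Python (illustrative only; in practice one would typically use ranef()/predict() in R's lme4 or nlme, and all numbers below are made up). With $\hat{\theta} = (\hat{\beta}, \hat{\mathbf{D}}, \hat{\sigma}^2)$ plugged in, the posterior of $\mathbf{b}_{\mathrm{new}}$ given $\mathbf{y}_{\mathrm{new}}$ is Gaussian, so its mode and mean coincide and equal the BLUP $\hat{\mathbf{D}} Z^{\top} (Z \hat{\mathbf{D}} Z^{\top} + \hat{\sigma}^2 I)^{-1} (\mathbf{y}_{\mathrm{new}} - X \hat{\beta})$.

```python
import numpy as np

def blup_new_subject(t, y, beta_hat, D_hat, sigma2_hat):
    """Empirical Bayes estimate of the random effects for a new subject.

    Random slope-and-intercept model: y_j = (beta0 + b0) + (beta1 + b1) t_j + eps_j,
    with b = (b0, b1) ~ N(0, D) and eps_j ~ N(0, sigma2), evaluated at the
    plugged-in estimates (beta_hat, D_hat, sigma2_hat).
    Returns the posterior mean (= mode = BLUP) and posterior covariance of b.
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    Z = np.column_stack([np.ones_like(t), t])   # random-effects design matrix
    X = Z                                       # same design for the fixed effects
    V = Z @ D_hat @ Z.T + sigma2_hat * np.eye(len(t))   # marginal covariance of y
    resid = y - X @ beta_hat
    b_mean = D_hat @ Z.T @ np.linalg.solve(V, resid)
    b_cov = D_hat - D_hat @ Z.T @ np.linalg.solve(V, Z @ D_hat)
    return b_mean, b_cov

# Hypothetical estimates and new-subject data
beta_hat = np.array([1.0, 0.5])
D_hat = np.array([[0.4, 0.05], [0.05, 0.1]])
sigma2_hat = 0.25
t_new = [0.0, 1.0, 2.0, 3.0]
y_new = [1.2, 2.0, 2.3, 3.1]

b_hat, b_var = blup_new_subject(t_new, y_new, beta_hat, D_hat, sigma2_hat)
print("BLUP of (b0, b1):", b_hat)
# Predicted subject-specific line: (beta0 + b0) + (beta1 + b1) * t
```

The second approach would instead average this conditional distribution over draws of $\theta$ from its (approximate) posterior, for example the normal approximation centered at the MLEs described above.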
PHYSIOLOGY AND METABOLISM Systems-Level Metabolic Flux Profiling Elucidates a Complete, Bifurcated Tricarboxylic Acid Cycle in Clostridium acetobutylicum Daniel Amador-Noguez, Xiao-Jiang Feng, Jing Fan, Nathaniel Roquet, Herschel Rabitz, Joshua D. Rabinowitz Daniel Amador-Noguez Lewis Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey Xiao-Jiang Feng Department of Chemistry, Princeton University, Princeton, New Jersey Jing Fan Lewis Sigler Institute for Integrative Genomics, Princeton University, Princeton, New JerseyDepartment of Chemistry, Princeton University, Princeton, New Jersey Nathaniel Roquet Herschel Rabitz Joshua D. Rabinowitz For correspondence: [email protected] DOI: 10.1128/JB.00490-10 Obligatory anaerobic bacteria are major contributors to the overall metabolism of soil and the human gut. The metabolic pathways of these bacteria remain, however, poorly understood. Using isotope tracers, mass spectrometry, and quantitative flux modeling, here we directly map the metabolic pathways of Clostridium acetobutylicum, a soil bacterium whose major fermentation products include the biofuels butanol and hydrogen. While genome annotation suggests the absence of most tricarboxylic acid (TCA) cycle enzymes, our results demonstrate that this bacterium has a complete, albeit bifurcated, TCA cycle; oxaloacetate flows to succinate both through citrate/α-ketoglutarate and via malate/fumarate. Our investigations also yielded insights into the pathways utilized for glucose catabolism and amino acid biosynthesis and revealed that the organism's one-carbon metabolism is distinct from that of model microbes, involving reversible pyruvate decarboxylation and the use of pyruvate as the one-carbon donor for biosynthetic reactions. This study represents the first in vivo characterization of the TCA cycle and central metabolism of C. acetobutylicum. Our results establish a role for the full TCA cycle in an obligatory anaerobic organism and demonstrate the importance of complementing genome annotation with isotope tracer studies for determining the metabolic pathways of diverse microbes. In soil ecology, obligatory anaerobic bacteria are key contributors to the putrefaction of dead organic matter (18). In the human intestine, they are the dominant flora, playing a central role in metabolism, immunity, and disease (16, 24, 30). Obligatory anaerobes also encompass some of the most promising bioenergy organisms. The soil bacterium Clostridium acetobutylicum is capable of fermenting carbohydrates into hydrogen gas and solvents (acetone, butanol, and ethanol). During World War I, it was used to develop an industrial starch-based process for the production of acetone and butanol that remained the major production route for these solvents during the first half of the last century (5). Since then, and particularly during the last few decades, an active research area has developed to understand and manipulate the metabolism of this organism with the goal of improving hydrogen and solvent production (5, 15). Despite this long history, there are still key pathways of primary metabolism in C. acetobutylicum that remain unresolved. In particular, as is common for most anaerobic bacteria, the tricarboxylic acid (TCA) cycle remains ill-defined (14, 23, 28). C. 
acetobutylicum is capable of growing on minimal medium (i.e., using glucose as the sole carbon source) (20), and it therefore must be able to synthesize α-ketoglutarate, the carbon skeleton of the glutamate family of amino acids. Its genome sequence, however, lacks obvious homologues of many of the enzymes of the TCA cycle, including citrate synthase, α-ketoglutarate dehydrogenase, succinyl-coenzyme A (CoA) synthetase, and fumarate reductase/succinate dehydrogenase (23). The apparent lack of these genes precludes the production of α-ketoglutarate by running the TCA cycle in either the oxidative or reductive direction. Two recent attempts at reconstructing a genome-scale model of C. acetobutylicum metabolism have encountered this problem. In one case, it was proposed that the TCA cycle functions in the reductive (counterclockwise) direction to produce α-ketoglutarate (14). In the second, it was hypothesized that glutamate is synthesized from ornithine by running the arginine biosynthesis pathway in reverse, bypassing the need for the TCA cycle (28). With the exception of the TCA cycle, the other core metabolic pathways, e.g., of glucose catabolism and amino acid and nucleotide biosyntheses, appear based on sequence homology to be complete and analogous to those in more well-studied bacteria, such as Escherichia coli (23). Here, we use 13C-labeled nutrients as isotopic tracers to follow the operation of C. acetobutylicum's TCA cycle and other primary metabolic pathways directly in live cells. In contrast to the previously proposed hypotheses, we find a complete, albeit bifurcated, TCA cycle. α-Ketoglutarate is produced in the oxidative direction from oxaloacetate and acetyl-CoA via citrate. Succinate can be produced in both the reductive direction via malate and fumarate and the oxidative direction via α-ketoglutarate. We also observe that C. acetobutylicum's one-carbon metabolism is distinct from that of more well-studied bacteria; the carboxyl group of pyruvate undergoes reversible exchange with free carbon dioxide, and the one-carbon units required for methionine, purine, and pyrimidine biosyntheses are derived primarily from the carboxyl group of pyruvate with minimal contribution from serine or glycine. To obtain a quantitative understanding of the newly proposed metabolic network, we formulated an ordinary differential equation (ODE) model that allowed us to calculate the metabolic fluxes through glycolysis, the Entner-Doudoroff pathway (which was inactive), the nonoxidative pentose phosphate pathway (there is no oxidative pentose phosphate pathway), the TCA cycle, and adjacent amino acid biosynthesis pathways. Beyond providing a quantitative description of metabolic flux, this model was useful for unraveling ambiguities in the network structure that were not readily distinguished based on qualitative labeling patterns alone. This study represents the first in vivo experimental characterization of the TCA cycle and central metabolic pathways in a Clostridium species and demonstrates the importance of complementing genome annotation with isotope tracer studies in the construction of genome-scale metabolic networks. Media, culture conditions, and metabolite extraction.C. acetobutylicum ATCC 824 was grown anaerobically at 37°C inside an environmental chamber (Bactron IV Shel Lab anaerobic chamber) with an atmosphere of 90% nitrogen, 5% hydrogen, and 5% carbon dioxide. 
The minimal medium formulation used in both liquid and filter cultures was 2 g/liter KH2PO4, 2 g/liter K2HPO4, 0.2 g/liter MgSO4·7H2O, 1.5 g/liter NH4Cl, 0.13 mg/liter biotin, 32 mg/liter FeSO4·7H2O, 0.16 mg/liter 4-aminobenzoic acid, and 10 g/liter glucose (20). In the pertinent experiments, acetate, glutamate, aspartate, or ornithine was added at a concentration of 2 g/liter. In addition to the appropriate minimal medium, the plates used in filter cultures contained 1.5% ultrapure agarose. Detailed protocols for preparing filter cultures and extracting metabolites in Escherichia coli have been published (2, 31), and these methods were adapted for use in C. acetobutylicum. Briefly, for the preparation of filter cultures, single colonies were picked from agar-solidified reinforced clostridial medium (RCM; Difco), resuspended in liquid RCM, heat treated at 80°C for 20 min, and grown to saturation overnight. This overnight culture was then used to inoculate a liquid minimal medium culture to an initial optical density at 600 nm (OD600) of 0.03. When this liquid culture reached an OD600 of ∼0.1, 1.6-ml aliquots were taken and passed through 47-mm-diameter round hydrophilic nylon filters (HNWP04700; Millipore), which were then placed on top of agarose plates with the appropriate minimal medium. Cellular metabolism was quenched, and metabolites were extracted by submerging the filters into 0.8 ml of acetonitrile-methanol-water (40:40:20) at −20°C (25). The filters were then washed with the extraction solvent, the cellular extractions were transferred and centrifuged in Eppendorf tubes, and the supernatant was collected and stored at −20°C until analysis. To measure growth, filters from parallel cultures were washed thoroughly with 1.6 ml of fresh medium and absorbance at 600 nm was determined. Metabolite and flux measurement.Cell extracts were analyzed by reversed-phase ion-pairing liquid chromatography (LC) coupled with electrospray ionization (ESI) (negative mode) to a high-resolution, high-accuracy mass spectrometer (Exactive; Thermo Fisher) operated in full scan mode for the detection of targeted compounds based on their accurate masses. This analysis was complemented with liquid chromatography coupled with ESI (positive and negative modes) to Thermo TSQ Quantum triple quadrupole mass spectrometers operating in selected reaction monitoring mode (1, 19). Hydrophilic interaction chromatography was used for positive-mode ESI, and ion-pairing reversed-phase chromatography was used for negative-mode ESI. Amino acids were derivatized with benzyl chloroformate before their quantitation by negative-mode LC-ESI-tandem mass spectrometry (MS/MS) (13). Absolute intracellular metabolite concentrations were determined using an isotope ratio-based approach previously described (2). Briefly, C. acetobutylicum was grown in [U-13C]glucose medium to near-complete isotopic enrichment and then extracted with quenching solvent containing known concentrations of unlabeled internal standards. The concentrations of metabolites in the cells can then be calculated using the ratio of labeled endogenous metabolite to nonlabeled internal standard. We used kinetic flux profiling (KFP) for measuring metabolite fluxes and elucidating the metabolic network structure (31). Filter cultures were grown on minimal medium plates to an OD600 of 0.35 and then transferred to minimal medium plates containing uniformly 13C-labeled glucose as the sole carbon source. 
At defined time points after the transfer (e.g., 1, 2, 4, 7, 10, 15, 30, and 60 min), metabolism was quenched and cell extracts were prepared and analyzed. The multiple isotopomers produced by the 13C labeling were monitored simultaneously using LC-MS. Metabolic fluxes were calculated based on the kinetics of the replacement of the unlabeled metabolites with the labeled ones. Similarly, the KFP experiments with uniformly 13C-labeled acetate were performed by transferring the cells to glucose minimal medium plates with added [U-13C]acetate. For the long-term labeling experiments using [3-13C]glucose, [4-13C]glucose, [1,2-13C]glucose, [13C]glutamate, [13C]ornithine, or [13C]aspartate, the cells were extracted 2 h after they were transferred to the plates containing each labeled substrate. In all the experiments with 13C-labeled amino acids, the usual concentration of nonlabeled glucose was maintained. The 13CO2 labeling experiments were performed by adding increasing concentrations of NaH13CO3 into exponentially growing liquid cultures (OD600 = 0.35). After 1 h, the cells were quickly filtered and extracted using acetonitrile-methanol-water (40:40:20) at −20°C. The labeling of the C1 unit pool was determined from the labeling patterns of various intermediates in nucleotide biosynthetic pathways that incorporate C1 units. We used 5′-phosphoribosyl-N-formylglycinamide and IMP, which incorporate C1 units from 10-formyl-tetrahydrofolate, as well as dTMP, which incorporates a C1 unit from 5,10-methylene-tetrahydrofolate. In addition, methionine, which incorporates a C1 unit from 5-methyl-tetrahydrofolate, was used for corroborating data in some instances. We used metabolic flux profiling to complement the information obtained from KFP and to determine flux ratios in various pathways. We followed the general approach described in reference 9, with the difference that instead of inferring labeling patterns from proteinogenic amino acids, we quantified them directly for most metabolites. In all experiments, the labeling data were corrected for natural abundance of 13C in nonlabeled substrates and for the 12C impurity present in 13C-labeled substrates in a fashion similar to that reported previously (2, 31). ODE modeling and parameter identification. We constructed an ODE model for the metabolic network shown in Fig. 4 as well as Fig. S7 in the supplemental material and then identified model parameters (fluxes and unmeasured pool sizes) that reproduce the laboratory data. The procedure was based on methods previously developed (8, 22). The ODEs describe the rates of loss of unlabeled forms of metabolites (and the creation of particular labeled forms) after feeding of [U-13C]glucose. The equations are based on flux balance of metabolites and take the form
\[ \frac{dB}{dt} = \sum_{i=1}^{N} F_i \frac{A_i}{A_{\mathrm{tot}}} - F_{\mathrm{tot}} \frac{B}{B_{\mathrm{tot}}}, \]
where metabolite \(B\), which can be in labeled or unlabeled form, is downstream of another metabolite, \(A_i\). The outflux, \(F_{\mathrm{tot}}\), balances the sum of the \(N\) influxes, \(F_i\), from the metabolites \(A_i\). \(A_{\mathrm{tot}}\) and \(B_{\mathrm{tot}}\) are the total pool sizes of the corresponding metabolites (sum of labeled and unlabeled forms). The unknown model parameters were identified by a genetic algorithm that minimizes a cost function (7, 8). The cost function quantifies the difference between the computational results and the laboratory measurements for the labeling dynamics, together with the additional constraints indicated in Table S1 in the supplemental material.
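As a concrete illustration of this flux-balance form (the authors' own implementation was written in C/C++ and is available from them on request; the sketch below is not that code), here is a minimal Python example for a hypothetical two-step pathway. Dividing the equation above by the constant total pool sizes gives the same balance in terms of unlabeled fractions; the flux and pool values are placeholders chosen only to show how the labeling kinetics depend on the flux-to-pool ratio.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear pathway at metabolic steady state:
#   fully labeled feed -> A -> B, single flux F through both steps.
F = 0.5                  # flux (umol per culture per min), placeholder value
A_tot, B_tot = 2.0, 5.0  # total pool sizes (umol), placeholder values

def unlabeled_fractions(t, u):
    uA, uB = u  # unlabeled fractions of the A and B pools
    duA = F * (0.0 - uA) / A_tot  # influx from the feed is fully labeled
    duB = F * (uA - uB) / B_tot
    return [duA, duB]

sol = solve_ivp(unlabeled_fractions, (0.0, 60.0), y0=[1.0, 1.0],
                t_eval=[1, 2, 4, 7, 10, 15, 30, 60])
for t, uA, uB in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:4.0f} min   unlabeled A = {uA:.3f}   unlabeled B = {uB:.3f}")

# The rate at which the unlabeled fraction decays (roughly F / pool size) is what
# links the measured labeling kinetics back to the absolute flux F.
```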
One thousand sets of model parameters that can reproduce the laboratory data were identified, forming a distribution for each parameter (see Fig. S9 in the supplemental material). The median value and the breadth of the distribution then provide a representation of the fluxes consistent with the laboratory data. The C/C++ programs used for modeling and parameter identification are available upon request. Glucose catabolism to pyruvate.To probe metabolic flux in growing C. acetobutylicum, we monitored the dynamic (time-dependent) or long-term (steady-state) incorporation of 13C-labeled nutrients (glucose, acetate, CO2, and selected amino acids) into downstream metabolites in glycolysis, the pentose phosphate pathway, the TCA cycle, and amino acid and nucleotide biosynthetic pathways. C. acetobutylicum can potentially metabolize glucose to trioses via three different pathways: glycolysis (the Embden-Meyerhof pathway), the Entner-Doudoroff pathway, and the pentose phosphate pathway. Homologues of enzymes of each of the above-mentioned pathways, with the exception of the oxidative pentose phosphate pathway, are present in the C. acetobutylicum genome (23). The contribution of glycolysis to pyruvate synthesis relative to that of the Entner-Doudoroff pathway can be determined from cells grown in [1,2-13C]glucose (carbons 1 and 2 are 13C labeled) or [3-13C]glucose (carbon 3 is labeled) since each pathway yields distinct positional labeled forms of pyruvate, which can be distinguished by tandem mass spectrometry (MS/MS). For C. acetobutylicum growing exponentially on glucose as the carbon source, all pyruvate appeared to be derived from glycolysis, with no detectable Entner-Doudoroff pathway flux (see Fig. S1 in the supplemental material). The pentose phosphate pathway provides essential precursors (ribose-5P and erythrose-4P) for nucleotide and amino acid biosyntheses. Ribose-5P molecules can be produced by the oxidative pentose pathway from glucose-6P, by the nonoxidative pentose phosphate pathway via the transketolase reaction, or by the combined activity of transaldolase and transketolase. Consistent with the lack of oxidative pentose phosphate pathway enzyme homologues in the C. acetobutylicum genome, feeding of [1,2-13C]glucose resulted in no detectable production of ribose-5P containing a single 13C atom, the hallmark of oxidative pentose phosphate production. Pentoses were instead produced via transketolase (∼80%) and via transaldolase-transketolase (∼20%) (see Fig. S2 in the supplemental material). The above-described experiments suggest normal catabolism of glucose into pyruvate via glycolysis, and consistent with this, feeding of [U-13C]glucose as the sole carbon source resulted in rapid and complete labeling of glycolysis intermediates through phosphoenolpyruvate. Pyruvate, however, appeared in roughly equimolar amounts in its fully labeled form and in an unexpected form with two 13C carbons (Fig. 1 B). Glycolysis splits glucose down the middle, converting carbon positions 1, 2, and 3 (and 6, 5, and 4) into the methyl, carbonyl, and carboxyl carbons of pyruvate, respectively. As shown in Fig. 1C, growth of C. acetobutylicum in [1,2-13C]glucose (100%) resulted, as expected, in ∼50% of phosphoenolpyruvate and pyruvate each containing two 13C carbons. In contrast, feeding of [3-13C]glucose (100%) or [4-13C]glucose (100%) resulted in 50% labeling of phosphoenolpyruvate (and upstream trioses) but only ∼25% labeling of pyruvate. 
This suggested that the 13C label was being lost specifically from the carboxyl carbon of pyruvate, presumably in an exchange reaction with environmental carbon dioxide (CO2), which comprises 5% of the anaerobic gaseous environment and is ∼99% nonlabeled. Consistent with exchange of the carboxyl carbon of pyruvate with carbon dioxide, growth of cells in the presence of NaH13CO2 resulted in the formation of [1-13C]pyruvate, with the fraction of labeling increasing with increasing concentrations of NaH13CO2 (Fig. 1D). There was minimal or no labeling of upstream metabolites. This confirms the rapid interchange between the carboxylic acid group in pyruvate and environmental CO2. Glycolysis and the rapid interchange between the carboxyl group of pyruvate and CO2. (A) Overview of active and inactive pathways. Glycolysis operates normally through phosphoenolpyruvate with the Entner-Doudoroff pathway inactive (see Fig. S1 in the supplemental material). The reaction catalyzed by pyruvate ferredoxin oxidoreductase (PFOR) is partially, but not fully, reversible. GAP, glyceraldehyde-3-phosphate; DHAP, dihydroxyacetone phosphate. (B) Dynamic incorporation of uniformly 13C-labeled glucose (100%) into glycolysis intermediates. Glycolysis intermediates through phosphoenolpyruvate were labeled rapidly and completely. Pyruvate, however, appeared in roughly equimolar amounts in its fully labeled form and in an unexpected form with two 13C carbons. Environmental CO2 was ∼99% nonlabeled. The x axis represents minutes after the switch from unlabeled to [U-13C]glucose medium, and the y axis represents the fraction of the observed compound of the indicated isotopic form. (C) Steady-state labeling patterns of phosphoenolpyruvate and pyruvate obtained from cells grown in [3-13C]glucose (100%), [4-13C]glucose (100%), or [1,2-13C]glucose (100%). In [3-13C]glucose or [4-13C]glucose, about half of the phosphoenolpyruvate was labeled but only about a quarter of pyruvate was labeled. In contrast, growth in [1,2-13C]glucose resulted in identical labeling patterns for phosphoenolpyruvate and pyruvate. Environmental CO2 was ∼99% nonlabeled. These results indicate that the 13C label in pyruvate is specifically lost from the carboxyl carbon. (D) The fraction of [1-13C]pyruvate increased with increasing amounts of NaH13CO3 added to the medium. Cells were fed unlabeled glucose throughout, and labeling of upstream glycolysis intermediates was minimal or nonexistent (not shown). This experiment was performed in liquid closed-vessel cultures. The data, in conjunction with those in panels B and C, indicate exchange of the carboxyl carbon of pyruvate with carbon dioxide. (E) [U-13C]acetate was assimilated and incorporated into acetyl phosphate and acetyl-CoA. Pyruvate, however, remained unlabeled. The data suggest that the reaction catalyzed by PFOR is not fully reversible. The error bars in panels B through E show standard deviations (SD) (n = 2 to 4 independent experiments). In anaerobic organisms, the oxidative decarboxylation of pyruvate to produce acetyl-CoA and CO2 is catalyzed by pyruvate-ferredoxin oxidoreductase (PFOR, also known as pyruvate synthase). The use of ferredoxin, whose redox potential is close to that of pyruvate, as the oxidant has the potential to make the overall reaction reversible (10, 26). When 13C-labeled acetate was added to the medium, however, there was no detectable labeling of pyruvate, even when a significant fraction of the acetyl-CoA pool was labeled (Fig. 1E). 
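The isotopomer bookkeeping behind this interpretation can be illustrated with a small, purely hypothetical calculation: if a fraction p_exch of the pyruvate pool has exchanged its carboxyl carbon with the (essentially unlabeled) environmental CO2, the expected pyruvate labeling follows directly from the phosphoenolpyruvate labeling. The fractions below are illustrative placeholders, not measured values.

```python
def pyruvate_labeling(pep_m3, pep_carboxyl_m1, p_exch):
    """Expected labeled pyruvate fractions given PEP labeling and a carboxyl-exchange fraction.

    pep_m3          -- fraction of PEP that is fully (M+3) labeled, e.g. from [U-13C]glucose
    pep_carboxyl_m1 -- fraction of PEP labeled only at the future carboxyl carbon,
                       e.g. from [3-13C]- or [4-13C]glucose
    p_exch          -- fraction of the pyruvate pool whose carboxyl carbon has exchanged
                       with unlabeled CO2
    """
    m3 = pep_m3 * (1.0 - p_exch)           # exchange converts M+3 pyruvate to M+2
    m2 = pep_m3 * p_exch
    m1 = pep_carboxyl_m1 * (1.0 - p_exch)  # exchange removes the only labeled carbon
    return m3, m2, m1

# Experiment 1: [U-13C]glucose -> PEP essentially 100% M+3
print(pyruvate_labeling(pep_m3=1.0, pep_carboxyl_m1=0.0, p_exch=0.5))  # ~ (0.5, 0.5, 0.0)
# Experiment 2: [3-13C]- or [4-13C]glucose -> ~50% of PEP labeled at the carboxyl carbon
print(pyruvate_labeling(pep_m3=0.0, pep_carboxyl_m1=0.5, p_exch=0.5))  # ~ (0.0, 0.0, 0.25)
```

With roughly 50% exchange, the first call reproduces the near-equimolar M+3/M+2 pyruvate seen with [U-13C]glucose, and the second reproduces ~25% labeled pyruvate from ~50% labeled PEP, qualitatively matching the patterns described above.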
In the proposed mechanism of acetyl-CoA synthesis by PFOR, pyruvate is first decarboxylated to form the intermediate hydroxyethyl-thiamine pyrophosphate (TPP, the prosthetic group in PFOR). This intermediate then reacts with CoA (coenzyme A) to produce acetyl-CoA (26). Our data indicate that the carboxylic group in pyruvate interchanges rapidly with CO2 but that the overall PFOR reaction is essentially irreversible. We accordingly propose that the interchange results from the reversibility of the decarboxylation step of the PFOR reaction. The interchange between the carboxyl group in pyruvate and atmospheric CO2 could also be explained by reverse flux through pyruvate dehydrogenase or pyruvate formate lyase. However, the presence of a pyruvate dehydrogenase complex has never been reported in C. acetobutylicum or in any other Clostridium species (5). Additionally, isotopic tracer experiments with aerobically and anaerobically grown E. coli strains indicate that neither pyruvate dehydrogenase nor pyruvate formate lyase is capable of producing the exchange between the carboxyl group in pyruvate and CO2 that we observe in C. acetobutylicum (see Fig. S3 in the supplemental material). Complete bifurcated TCA cycle.After gaining an understanding of the pathways that catabolize glucose to trioses, we examined the TCA cycle (Fig. 2). Feeding of [U-13C]glucose (100%) resulted in labeling patterns of oxaloacetate, malate, and fumarate which closely matched the labeling pattern of pyruvate, with the appearance of close to equimolar amounts of isotopomers with two or three 13C carbons (Fig. 2B). This observation is consistent with the synthesis of oxaloacetate from pyruvate and atmospheric CO2 (which is nonlabeled) and with the production of malate and fumarate from oxaloacetate by running the TCA cycle in the reductive (counterclockwise) direction. Succinate's labeling pattern, however, did not fully agree with that of pyruvate. Although succinate showed the same predominant labeled forms, their ratios were different, with nearly twice as much succinate containing three 13C carbons than two 13C carbons (Fig. 2B). This suggested that although succinate may be produced from fumarate, there must also be another source to account for the enhanced triple 13C labeling. Complete bifurcated TCA cycle in C. acetobutylicum. (A) The diagram represents the proposed bifurcated TCA cycle in C. acetobutylicum. α-Ketoglutarate is produced from oxaloacetate and acetyl-CoA via citrate. Succinate can be produced reductively from fumarate or oxidatively from α-ketoglutarate. Gray boxes show the fate of the carbons in the incoming acetyl group from acetyl-CoA, and dotted boxes show the fate of the carbons in the carboxyl group from pyruvate. The unusual stereospecificity of citrate synthesis was confirmed by MS/MS analysis (see Fig. S5 in the supplemental material). Panels B and C show the dynamic incorporation of [U-13C]glucose (100%) and [U-13C]acetate (in the presence of nonlabeled glucose) into TCA metabolites and glutamate. The x axis represents minutes after the switch from unlabeled to 13C-labeled medium, and the y axis represents the fraction of the observed compound of the indicated isotopic form. There was no detectable labeling of oxaloacetate, malate, or fumarate in the [U-13C]acetate experiments (not shown). These results are consistent with a bifurcated TCA cycle in which oxaloacetate flows to succinate both through citrate/α-ketoglutarate and via malate/fumarate as shown in panel A. 
Panels D and E show the long-term labeling patterns of TCA metabolites when cells are grown in glucose minimal medium supplemented with [U-13C]aspartate or with [U-13C]glutamate. These data corroborate the results obtained for panels B and C and the existence of a bifurcated TCA cycle. AKG, α-ketoglutarate. In all experiments, the environmental CO2 comprised 5% of the anaerobic gaseous environment and was ∼99% nonlabeled. In panels B through D, the error bars show SD (n = 2 to 4 independent experiments). The labeling pattern for α-ketoglutarate differed from that of pyruvate or succinate. If, as previously hypothesized (14, 23), α-ketoglutarate is produced from succinate and CO2 by running the TCA cycle reductively, α-ketoglutarate containing two and three 13C carbons should have appeared. However, the predominant form of α-ketoglutarate had four 13C carbons. The actual route of α-ketoglutarate production was revealed by the labeling patterns in citrate (Fig. 2B). Despite the putative lack of citrate synthase, there was a measurable intracellular pool of citrate that labeled rapidly after feeding of [U-13C]glucose. Citrate (a molecule with six carbons) was produced in two major isotopic forms, containing either four or five 13C carbons. This labeling pattern is consistent with citrate's production from oxaloacetate (with two or three 13C carbons) and acetyl-CoA (where the 2-carbon acetyl moiety is fully 13C labeled). The labeling pattern of α-ketoglutarate was then readily explained based on its production via citrate. Citrate containing either four or five 13C carbons produces α-ketoglutarate with four 13C carbons because the additional 13C carbon in citrate corresponds to the carboxyl group that is lost during oxidative decarboxylation of isocitrate to α-ketoglutarate. The labeling of glutamate matched that of α-ketoglutarate, consistent with its formation by reductive amination of α-ketoglutarate driven by either ammonia or glutamine. To rule out the previous hypothesis that glutamate could be synthesized from ornithine by running the arginine biosynthesis pathway in reverse (28), we added [U-13C]ornithine to the medium. While arginine pathway compounds downstream of ornithine became labeled, we observed no production of labeled glutamate (see Fig. S4 in the supplemental material). The lack of production of succinate containing four 13C carbons initially suggested that there was no production of succinate via α-ketoglutarate. However, experiments with additional 13C-labeled nutrients proved that this does occur. When cells were grown in unlabeled glucose plus [U-13C]acetate, 13C was assimilated into acetyl-CoA. Consistent with turning of the TCA cycle in the oxidative direction, citrate, α-ketoglutarate, and glutamate with two 13C carbons were produced, but the cycle was incomplete; there was no detectable labeling in oxaloacetate, malate, or fumarate. Interestingly, however, we observed the production of succinate with one 13C carbon (Fig. 2C). This labeling of succinate is consistent with its production from α-ketoglutarate, but the stereospecificity of citrate synthase was the opposite of that found in common bacterial model organisms and eukaryotes. This unusual Re-stereospecificity of citrate synthase was confirmed by examining the positions of 13C carbons within glutamate and proline by MS/MS analysis (see Fig. S5 in the supplemental material). 
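The two-source reasoning taken up in the next paragraph, i.e. apportioning succinate between the reductive (malate/fumarate) route and the oxidative (citrate/α-ketoglutarate) route from its isotopomer pattern, can be sketched as a small mixing calculation. The isotopomer fractions used below are hypothetical placeholders (normalized over the two major forms), not the measured values.

```python
import numpy as np

# Two-source mixing for succinate under [U-13C]glucose, using only the two major
# isotopomers (M+2, M+3), normalized to sum to 1. All fractions are illustrative.
reductive_route = np.array([0.5, 0.5])  # via malate/fumarate: mirrors the oxaloacetate pattern
oxidative_route = np.array([0.0, 1.0])  # via citrate/alpha-ketoglutarate: yields M+3 succinate
observed        = np.array([0.3, 0.7])  # hypothetical observed succinate pattern

# observed = f * reductive_route + (1 - f) * oxidative_route  ->  solve for f by least squares
A = (reductive_route - oxidative_route).reshape(-1, 1)
b = observed - oxidative_route
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"reductive route: {f[0]:.0%}   oxidative route: {1 - f[0]:.0%}")
```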
Production of succinate from α-ketoglutarate explains the succinate labeling patterns in the [U-13C]glucose labeling experiments; succinate was synthesized both via fumarate (producing succinate containing two and three 13C carbons) and via α-ketoglutarate (producing succinate containing three 13C carbons). With glucose as the sole carbon source, the relative contributions from each route to succinate production are ∼60% and 40%, respectively. Aspartate and glutamate can be deaminated to oxaloacetate and α-ketoglutarate, respectively, to enter the TCA cycle. When [U-13C]aspartate is added to the medium (in the presence of unlabeled glucose), a large fraction (>80%) of the malate, fumarate, succinate, and citrate pools becomes quadrupoly 13C labeled. α-Ketoglutarate and glutamate become triply 13C labeled (Fig. 2D). When cells are grown in the presence of [U-13C]glutamate plus unlabeled glucose, both α-ketoglutarate and succinate become fully 13C labeled. In this case, oxaloacetate, malate, and fumarate are not 13C labeled (Fig. 2E). These observations corroborate the existence of a complete bifurcated TCA cycle in C. acetobutylicum. Amino acid biosynthetic pathways and C1 metabolism.By analyzing the labeling patterns of amino acids and key biosynthetic intermediates, we were able to resolve most of the amino acid biosynthesis pathways in C. acetobutylicum. The observed labeling patterns were consistent with those expected based on canonical amino acid biosynthesis pathways, with the exception of isoleucine and glycine production (see Table S2 in the supplemental material). The labeling patterns in isoleucine indicated that it is not synthesized by the canonical pathway via threonine but are instead consistent with its production from acetyl-CoA and pyruvate via the citramalate pathway (see Table S2). For glycine, there are two alternative pathways (Fig. 3 A). The more common pathway involves synthesis of glycine from serine by the enzyme serine hydroxymethyltransferase, which transfers the methanol group from serine to tetrahydrofolate (THF). The resulting methyl-folate species provide C1 units for the biosynthesis of purines, thymidine, and methionine. Alternatively, in Saccharomyces cerevisiae and some bacteria, glycine can be synthesized by degradation of threonine, e.g., into acetaldehyde and glycine (12, 21). When cells were grown on [U-13C]glucose, serine (synthesized via 3-phosphoglycerate) became fully labeled, whereas threonine (synthesized via pyruvate) was ∼50% triply 13C labeled and ∼50% doubly 13C labeled. Consistent with its predominant formation from threonine but not from serine, glycine was ∼50% fully labeled and ∼50% singly 13C labeled (Fig. 3B). The synthesis of glycine from threonine was further corroborated by growing cells in [U-13C]glucose plus nonlabeled aspartate and observing that both the threonine and glycine pools are mostly nonlabeled while serine was largely fully labeled (Fig. 3C). One-carbon metabolism in C. acetobutylicum. (A) Proposed network of one-carbon metabolism in C. acetobutylicum. Blue arrows highlight the major production routes for glycine, serine, and one-carbon units (C1 folates). The fate of the carbons originating from pyruvate is highlighted by gray and dotted boxes. (B) Dynamic incorporation of [U-13C]glucose (100%) into the amino acids serine, glycine, and threonine. The labeling patterns observed in glycine indicate that its primary route of production is via threonine and not serine. 
(C) The synthesis of glycine from threonine was confirmed by growing cells on [U-13C]glucose plus nonlabeled aspartate and observing that both the threonine and glycine pools are largely nonlabeled while the serine pool remains largely labeled. (D) Cells grown in [1,2-13C]glucose (100%) showed less than a 5% label in C1 units, even though the precursor carbon in glycine (methylene group highlighted in gray in panel A) is ∼50% labeled. (E) Correlation between the labeled fractions of the carboxylic acid carbon of pyruvate and the labeled fractions of C1 units across diverse labeling experiments. The addition of unlabeled aspartate to cells growing in [U-13C]glucose (100%), which results in the production of unlabeled glycine (C), does not affect labeling of C1 units. (F) Cells grown with increasing concentrations of NaH13CO3 showed increasing labeling of C1 units that closely followed the labeling in the carboxyl group of pyruvate but not the labeling of CO2 present in the medium. The fraction of labeled CO2 medium was determined based on labeling of CO2 assimilated into pyrimidines. In panels B through E, the environmental CO2 comprised 5% of the anaerobic gaseous environment and was ∼99% nonlabeled. In panel F, the experiments were performed in liquid closed-vessel cultures to minimize the interchange between atmospheric 12CO2 and NaH13CO3. Since glycine is synthesized primarily from threonine, C1 units must be obtained from a precursor other than serine. Glycine is also commonly used as a precursor of C1 units, but we also found that this route is nearly inactive in C. acetobutylicum. When cells were grown in [1,2-13C]glucose, less than 5% of C1 units were 13C labeled, even though the methylene group in glycine (the precursor of C1 units) was ∼50% labeled (Fig. 3D). Conversely, when cells were grown in either [3-13C]glucose or [4-13C]glucose, the methylene group in glycine was nonlabeled but ∼25% of the C1 unit pool was 13C labeled (Fig. 3E). The possibility that C1 units could be synthesized from the carboxyl group of glycine via some noncanonical pathway was ruled out by the observation that the percentage of 13C-labeled C1 units is essentially unchanged between cells grown in [U-13C]glucose and cells grown in [U-13C]glucose plus nonlabeled aspartate (Fig. 3E), even though the 13C label in the carboxyl group of glycine decreases from ∼50% to ∼10% (Fig. 3B and C). These experiments show that there is minimal production of C1 units via serine or glycine. We found, however, a strong correlation between the labeled fraction of the carboxylic group of pyruvate and the labeled fraction of C1 units across all these labeling experiments (Fig. 3E). In addition, when NaH13CO2 was added to the medium, 13CO2 was incorporated into C1 units. The fraction of labeled C1 units did not correspond directly to the fraction of labeled CO2 but did correspond to the fraction of labeled CO2 that was incorporated into pyruvate (Fig. 3F). Our data therefore indicate that in C. acetobutylicum, C1 units are derived primarily (>90%) from the carboxylic group of pyruvate, likely through the combined action of pyruvate formate lyase and formate-tetrahydrofolate ligase. Metabolic flux quantitation.Among the most important characteristics of a biochemical network are the in vivo reaction rates. 
To achieve a quantitative understanding of the fluxes in the newly proposed metabolic network, we developed an ordinary differential equation (ODE) model that describes the isotope labeling kinetics of metabolites following the addition of universally labeled [13C]glucose (see Materials and Methods and Fig. S7 in the supplemental material). Given the model equations, we employed a nonlinear global search algorithm to identify the fluxes that can quantitatively reproduce the laboratory data (8). In addition to the labeling kinetics, inputs to the model included intracellular concentrations of glycolysis and TCA cycle intermediates and amino acids (see Table S3 in the supplemental material), nutrient uptake rates, excretion rates (see Fig. S6 in the supplemental material), and specific flux branch point data obtained from the steady-state labeling experiments discussed previously. The details of the cost function used for model fitting are presented in Table S1 in the supplemental material. To avoid overfitting the data, any simulations which fell within the 95% confidence limits of the laboratory data were considered acceptable; only more severe misfits were penalized during the search. A total of 1,000 well-fitting sets of fluxes were identified and used to estimate flux confidence intervals. Figure 4 shows representative results for the ODE model fitting and a map of the identified median flux values in central metabolism. The complete results are presented in Fig. S8 and S9 in the supplemental material. The ODE model fits all of the observed data. Most of the identified fluxes, with the exception of several exchange fluxes, were tightly constrained, indicating that they are reliably defined by the available laboratory data. The results show that glycolytic flux predominates and is directed primarily toward acid production. Other significant fluxes include aspartate production via pyruvate/oxaloacetate, fatty acid production from dihydroxyacetone phosphate, and ribose-phosphate production from glycolytic intermediates. Within the TCA cycle, the flux through the oxidative branch is slightly larger than through the reductive branch. The production of succinate from succinyl-CoA can occur via succinyl-CoA synthetase but is also expected to occur via the canonical methionine and lysine pathways. The computational results indicate that the median contribution of the succinyl-CoA synthetase flux to the total succinyl-CoA flux into succinate is about 25%, while the methionine and lysine pathways combined contribute to ∼75% of the total flux (see Fig. S9). Quantitation of fluxes in central metabolism. (A) Ordinary differential equation (ODE) model fitting (lines) to the [U-13C]glucose dynamic labeling data (error bars) for three representative metabolites. Complete results are in Fig. S8 in the supplemental material. (B) Metabolic fluxes identified from the ODE model. Arrow sizes indicate absolute values (in logarithmic scale) of net fluxes. The fluxes shown are median values of 1,000 sets of identified fluxes, whose distributions are plotted in Fig. S9 in the supplemental material. The flux from succinyl-CoA into succinate is a combination of the flux through succinyl-CoA synthetase (∼25% median contribution) and the fluxes through the methionine and lysine biosynthesis pathways that are coupled with the conversion of succinyl-CoA into succinate (∼75% median contribution). 
Hexose-P, combined pools of glucose-1-phosphate, glucose-6-phosphate, and fructose-6-phosphate; FBP, fructose-1,6-bisphosphate; DHAP, combined pools of dihydroxyacetone phosphate and glyceraldehyde-3-phosphate; 3PG, combined pools of glycerate-3-phosphate and glycerate-2-phosphate; PEP, phosphoenolpyruvate; Pentose-P, combined pools of ribose-5-phosphate, xylulose-5-phosphate, and ribulose-5-phosphate; OAA, oxaloacetate; αKG, α-ketoglutarate; SucCoA, succinyl-CoA; Asp, aspartate; Glu, glutamate; Gln, glutamine. In addition to providing quantitative flux values, the ODE model also helped to resolve an ambiguity in the network structure that was not adequately addressed by qualitative analysis of the isotope labeling patterns alone. Malate and fumarate production can occur directly via the reductive TCA cycle or alternatively from passage of carbon from aspartate to fumarate, which would then be oxidized to malate (see the alternative pathway in Fig. S7 in the supplemental material). Both pathways result in qualitatively indistinguishable labeling patterns. To distinguish them, we constructed ODE models for the two alternative pathways and performed flux identification using the procedure described above. Both models were able to describe the quantitative dynamic data following [U-13C]glucose labeling. However, in the second model, because fumarate is partly used for the production of malate, the contribution of fumarate to succinate production is smaller than that in the first model. Quantitatively, the percentage of succinate produced from fumarate is ∼54% for the first model and ∼6% for the second model. Compared with the experimentally measured value of ∼60%, the computational results indicate that the second model is inaccurate and the first one is correct, meaning that malate is produced primarily reductively from oxaloacetate rather than oxidatively from fumarate. Comparative genome sequence analysis has become the predominant tool for genome-scale reconstruction of the metabolic network of microorganisms. Frequently, however, due to incomplete annotation or undocumented functional genes, there are gaps and uncertainties within the metabolic network that need to be resolved experimentally. These limitations get in the way of a comprehensive understanding of their metabolism and interfere with the creation of quantitative genome-scale models of metabolism. This hinders the ability to rationally modulate metabolism for biotechnological or medical purposes. Here, we used 13C-labeled tracer experiments to elucidate the in vivo function of the TCA cycle and other primary metabolic pathways in C. acetobutylicum. In contrast to the prevailing hypothesis, we found that this organism has a complete, albeit bifurcated, TCA cycle; oxaloacetate flows to succinate both through citrate/α-ketoglutarate and via malate/fumarate. Although there is currently no gene annotated as citrate synthase in C. acetobutylicum, our data revealed the presence of a citrate synthase with Re-stereospecificity. An Re-citrate synthase has recently been identified in Clostridium kluyveri as the product of a gene predicted to encode isopropylmalate synthase (17). The corresponding protein in C. acetobutylicum, CAC0970, has a 64% amino acid sequence identity and is one candidate for the Re-citrate synthase in this organism. While aconitase and isocitrate dehydrogenase were not annotated when the genome sequence of C. 
acetobutylicum was first released (23), the genes CAC0971 and CAC0971 are now annotated as such in the Kyoto Encyclopedia of Genes and Genomes (KEGG). The products of these genes, however, have not yet been characterized in C. acetobutylicum or in any other clostridia. The α-ketoglutarate dehydrogenase complex is missing in the C. acetobutylicum genome, but it has been hypothesized that a putative 2-oxoacid ferredoxin oxidoreductase (CAC2458) could catalyze succinyl-CoA formation from α-ketoglutarate (23). There are still no candidate genes encoding fumarate reductase/succinate dehydrogenase or succinyl-CoA synthetase. Initially defined by a set of broad phenotypic characteristics such as rod-like morphology, Gram-positive cell walls, endospore formation, and strict anaerobic metabolism, Clostridium is one of the most heterogeneous bacterial genera (5). In a sequence-based species tree, there are a number of independent and deeply branching sublines within the Clostridium subdivision, which also includes many nonclostridial species (4). Among the clostridia, C. kluyveri shows a unique metabolism; it grows anaerobically on ethanol and acetate as sole energy sources (27). Only about half of the genes in C. kluyveri show more than 60% similarity in C. acetobutylicum (3). The similarities that we observe between C. acetobutylicum and C. kluyveri regarding the oxidative production of α-ketoglutarate and one-carbon metabolism (as discussed further below) are therefore noteworthy. In both the initial genome sequencing and a subsequent genome-scale reconstruction of the C. acetobutylicum metabolic network, it was proposed that α-ketoglutarate is synthesized from oxaloacetate by running the TCA cycle reductively. It was argued that a reductive TCA cycle would be favored given the low redox potential of the internal anaerobic environment of C. acetobutylicum. It is therefore intriguing that C. acetobutylicum synthesizes α-ketoglutarate exclusively oxidatively. The reasons for this remain unclear, but the conversion of α-ketoglutarate into succinyl-CoA appears to be irreversible in this organism; although succinate is readily synthesized via α-ketoglutarate, there is no back-flux from succinate to α-ketoglutarate, even under conditions in which there is ample production of succinate by the reductive TCA cycle (as when cells are grown in the presence of aspartate). The irreversibility is expected if this reaction is catalyzed by a yet-to-be-identified α-ketoglutarate dehydrogenase but not if it is catalyzed, as previously proposed, by a reversible 2-oxoacid ferredoxin oxidoreductase. Given that α-ketoglutarate is synthesized solely via citrate, succinate becomes a metabolite of limited biosynthetic value. The benefit of maintaining two different routes for its production is therefore unclear. A possibility is that a bifurcated TCA cycle ending in succinate plays a role in cellular redox balance. However, the rate of succinate excretion (∼4 μmol/h/g cells [dry weight]) is very low compared to that of other fermentation products such as acetate and butyrate (∼4 mmol/h/g cells [dry weight]) (see Fig. S6 in the supplemental material). Another possibility is that this particular arrangement of the TCA cycle facilitates the utilization of certain amino acids as nitrogen sources. For example, when C. acetobutylicum is grown in glutamate or aspartate as the sole nitrogen source, large amounts of α-ketoglutarate or oxaloacetate are produced during deamination. 
While a fraction of these carbon skeletons may be used for biosynthetic purposes, most must be discarded. Their conversion to succinate, and subsequent excretion, provides a short and rapid route. These hypotheses are consistent with the data obtained from our experiments with [13C]glutamate and [13C]aspartate. In most organisms, glycine is synthesized from serine, producing a C1 unit during the process. Glycine, in turn, can also be used to produce a C1 unit. In contrast, in C. acetobutylicum, the major route (∼90%) for the production of glycine is via threonine. This necessitates C1 unit production from a precursor other than serine, and we found that C1 units are derived predominantly (∼90%) from the carboxyl group of pyruvate. A related situation has been observed in C. kluyveri, in which 67% of glycine is formed from threonine and 33% from serine, and about 25% of C1 units are synthesized from serine and 75% from CO2 (11). The production of C1 units from the carboxyl group of pyruvate (oxidation state, +3) can be viewed as a reductive pathway while their production from the methylene group of serine or glycine (oxidation state, −1) can be considered an oxidative pathway. For example, using serine as the source for C1 units, the production of 10-formyl-tetrahydrofolate (used in purine biosynthesis) is accompanied by the production of one NADH; using glycine, two NADHs are produced. However, no NADH is produced when pyruvate is used as the source of C1 units for the production of 10-formyl-tetrahydrofolate. Therefore, for an anaerobic bacterium such as C. acetobutylicum, it makes sense to derive C1 units from the carboxyl group of pyruvate. Also, the capacity to produce C1 units both reductively and oxidatively suggests that the relative utilization of these pathways may be yet another way to control cellular redox balance. Our observations strengthen the notion that pyruvate constitutes a pivotal metabolic crossroads in C. acetobutylicum, linking the TCA cycle, amino acid biosynthesis pathways, one-carbon metabolism, and acid/solvent-producing pathways. It therefore represents a control point that could be exploited to improve biofuel production. For example, decreasing the activity of pyruvate carboxylase should decrease the flux of pyruvate into the TCA cycle and associated amino acid biosynthesis pathways and increase pyruvate flux into acetyl-CoA and solvent production. The dynamic isotope labeling approach (kinetic flux profiling) that we use here is different from the steady-state isotopic approach (metabolic flux analysis) recently used in similar contexts (6, 29). One major advantage of our approach is that it provides absolute fluxes throughout the network instead of just ratios of fluxes at branch points. Additional advantages include easy data deconvolution and short labeling time. The quantitative modeling technique used in this study is generally applicable for the identification of metabolic fluxes from dynamic isotope tracer experiments (22). In addition to providing a quantitative understanding of the target metabolic networks, we have shown its ability to discriminate among competing network structures that produce qualitatively indistinguishable labeling patterns. Moreover, given the appropriate input data, the general nonlinear identification strategy can also be employed for the construction of dynamic models that reflect the regulation of metabolic fluxes (8, 32). 
Such dynamic models can enable a more comprehensive understanding and rational engineering of metabolic networks. In the case of C. acetobutylicum, for example, a model of dynamic regulation could be used to design genetic and nutrient perturbations that enhance solvent and/or biohydrogen production. This study represents the first in vivo experimental characterization of the TCA cycle and central metabolism in C. acetobutylicum and exemplifies the potential of dynamic isotope tracer studies and quantitative flux modeling in complementing genome-based metabolic network reconstruction. Received 30 April 2010. Accepted 27 June 2010. ↵▿ Published ahead of print on 9 July 2010. Bajad, S. U., W. Lu, E. H. Kimball, J. Yuan, C. Peterson, and J. D. Rabinowitz. 2006. Separation and quantitation of water soluble cellular metabolites by hydrophilic interaction chromatography-tandem mass spectrometry. J. Chromatogr. A 1125:76-88. Bennett, B. D., J. Yuan, E. H. Kimball, and J. D. Rabinowitz. 2008. Absolute quantitation of intracellular metabolite concentrations by an isotope ratio-based approach. Nat. Protoc. 3:1299-1311. Brinkac, L. M., T. Davidsen, E. Beck, A. Ganapathy, E. Caler, R. J. Dodson, A. S. Durkin, D. M. Harkins, H. Lorenzi, R. Madupu, Y. Sebastian, S. Shrivastava, M. Thiagarajan, J. Orvis, J. P. Sundaram, J. Crabtree, K. Galens, Y. Zhao, J. M. Inman, R. Montgomery, S. Schobel, K. Galinsky, D. M. Tanenbaum, A. Resnick, N. Zafar, O. White, and G. Sutton. 2010. Pathema: a clade-specific bioinformatics resource center for pathogen research. Nucleic Acids Res. 38:D408-D414. Dehal, P. S., M. P. Joachimiak, M. N. Price, J. T. Bates, J. K. Baumohl, D. Chivian, G. D. Friedland, K. H. Huang, K. Keller, P. S. Novichkov, I. L. Dubchak, E. J. Alm, and A. P. Arkin. 2010. MicrobesOnline: an integrated portal for comparative and functional genomics. Nucleic Acids Res. 38:D396-D400. Dürre, P. 2005. Handbook on clostridia. Taylor & Francis, Boca Raton, FL. Feng, X., H. Mouttaki, L. Lin, R. Huang, B. Wu, C. L. Hemme, Z. He, B. Zhang, L. M. Hicks, J. Xu, J. Zhou, and Y. J. Tang. 2009. Characterization of the central metabolic pathways in Thermoanaerobacter sp. strain X514 via isotopomer-assisted metabolite analysis. Appl. Environ. Microbiol. 75:5001-5008. Feng, X. J., S. Hooshangi, D. Chen, G. Li, R. Weiss, and H. Rabitz. 2004. Optimizing genetic circuits by global sensitivity analysis. Biophys. J. 87:2195-2202. Feng, X. J., and H. Rabitz. 2004. Optimal identification of biochemical reaction networks. Biophys. J. 86:1270-1281. Fischer, E., and U. Sauer. 2003. Metabolic flux profiling of Escherichia coli mutants in central carbon metabolism using GC-MS. Eur. J. Biochem. 270:880-891. Furdui, C., and S. W. Ragsdale. 2000. The role of pyruvate ferredoxin oxidoreductase in pyruvate synthesis during autotrophic growth by the Wood-Ljungdahl pathway. J. Biol. Chem. 275:28494-28499. Jungermann, K. A., W. Schmidt, F. H. Kirchniawy, E. H. Rupprecht, and R. K. Thauer. 1970. Glycine formation via threonine and serine aldolase. Its interrelation with the pyruvate formate lyase pathway of one-carbon unit synthesis in Clostridium kluyveri. Eur. J. Biochem. 16:424-429. Kataoka, M., M. Ikemi, T. Morikawa, T. Miyoshi, K. Nishi, M. Wada, H. Yamada, and S. Shimizu. 1997. Isolation and characterization of d-threonine aldolase, a pyridoxal-5′-phosphate-dependent enzyme from Arthrobacter sp. DK-38. Eur. J. Biochem. 248:385-393. Kraml, C. M., D. Zhou, N. Byrne, and O. McConnell. 2005. 
Enhanced chromatographic resolution of amine enantiomers as carbobenzyloxy derivatives in high-performance liquid chromatography and supercritical fluid chromatography. J. Chromatogr. A 1100:108-115. Lee, J., H. Yun, A. M. Feist, B. O. Palsson, and S. Y. Lee. 2008. Genome-scale reconstruction and in silico analysis of the Clostridium acetobutylicum ATCC 824 metabolic network. Appl. Microbiol. Biotechnol. 80:849-862. Lee, S. Y., J. H. Park, S. H. Jang, L. K. Nielsen, J. Kim, and K. S. Jung. 2008. Fermentative butanol production by clostridia. Biotechnol. Bioeng. 101:209-228. Ley, R. E., M. Hamady, C. Lozupone, P. J. Turnbaugh, R. R. Ramey, J. S. Bircher, M. L. Schlegel, T. A. Tucker, M. D. Schrenzel, R. Knight, and J. I. Gordon. 2008. Evolution of mammals and their gut microbes. Science 320:1647-1651. Li, F., C. H. Hagemeier, H. Seedorf, G. Gottschalk, and R. K. Thauer. 2007. Re-citrate synthase from Clostridium kluyveri is phylogenetically related to homocitrate synthase and isopropylmalate synthase rather than to Si-citrate synthase. J. Bacteriol. 189:4299-4304. Ljungdahl, L. G. 2003. Biochemistry and physiology of anaerobic bacteria. Springer, New York. NY. Lu, W., B. D. Bennett, and J. D. Rabinowitz. 2008. Analytical strategies for LC-MS-based targeted metabolomics. J. Chromatogr. B Analyt. Technol. Biomed. Life Sci. 871:236-242. Monot, F., J. R. Martin, H. Petitdemange, and R. Gay. 1982. Acetone and butanol production by Clostridium acetobutylicum in a synthetic medium. Appl. Environ. Microbiol. 44:1318-1324. Monschau, N., K. P. Stahmann, H. Sahm, J. B. McNeil, and A. L. Bognar. 1997. Identification of Saccharomyces cerevisiae GLY1 as a threonine aldolase: a key enzyme in glycine biosynthesis. FEMS Microbiol. Lett. 150:55-60. Munger, J., B. D. Bennett, A. Parikh, X. J. Feng, J. McArdle, H. A. Rabitz, T. Shenk, and J. D. Rabinowitz. 2008. Systems-level metabolic flux profiling identifies fatty acid synthesis as a target for antiviral therapy. Nat. Biotechnol. 26:1179-1186. Nolling, J., G. Breton, M. V. Omelchenko, K. S. Makarova, Q. Zeng, R. Gibson, H. M. Lee, J. Dubois, D. Qiu, J. Hitti, Y. I. Wolf, R. L. Tatusov, F. Sabathe, L. Doucette-Stamm, P. Soucaille, M. J. Daly, G. N. Bennett, E. V. Koonin, and D. R. Smith. 2001. Genome sequence and comparative analysis of the solvent-producing bacterium Clostridium acetobutylicum. J. Bacteriol. 183:4823-4838. Qin, J., R. Li, J. Raes, M. Arumugam, K. S. Burgdorf, C. Manichanh, T. Nielsen, N. Pons, F. Levenez, T. Yamada, D. R. Mende, J. Li, J. Xu, S. Li, D. Li, J. Cao, B. Wang, H. Liang, H. Zheng, Y. Xie, J. Tap, P. Lepage, M. Bertalan, J. M. Batto, T. Hansen, D. Le Paslier, A. Linneberg, H. B. Nielsen, E. Pelletier, P. Renault, T. Sicheritz-Ponten, K. Turner, H. Zhu, C. Yu, M. Jian, Y. Zhou, Y. Li, X. Zhang, N. Qin, H. Yang, J. Wang, S. Brunak, J. Dore, F. Guarner, K. Kristiansen, O. Pedersen, J. Parkhill, J. Weissenbach, P. Bork, and S. D. Ehrlich. 2010. A human gut microbial gene catalogue established by metagenomic sequencing. Nature 464:59-65. Rabinowitz, J. D., and E. Kimball. 2007. Acidic acetonitrile for cellular metabolome extraction from Escherichia coli. Anal. Chem. 79:6167-6173. Ragsdale, S. W. 2003. Pyruvate ferredoxin oxidoreductase and its radical intermediate. Chem. Rev. 103:2333-2346. Seedorf, H., W. F. Fricke, B. Veith, H. Bruggemann, H. Liesegang, A. Strittmatter, M. Miethke, W. Buckel, J. Hinderberger, F. Li, C. Hagemeier, R. K. Thauer, and G. Gottschalk. 2008. 
The genome of Clostridium kluyveri, a strict anaerobe with unique metabolic features. Proc. Natl. Acad. Sci. U. S. A. 105:2128-2133. Senger, R. S., and E. T. Papoutsakis. 2008. Genome-scale model for Clostridium acetobutylicum. I. Metabolic network resolution and analysis. Biotechnol. Bioeng. 101:1036-1052. Tang, Y. J., S. Yi, W.-Q. Zhuang, S. H. Zinder, J. D. Keasling, and L. Alvarez-Cohen. 2009. Investigation of carbon metabolism in "Dehalococcoides ethenogenes" strain 195 by use of isotopomer and transcriptomic analyses. J. Bacteriol. 191:5224-5231. Turnbaugh, P. J., and J. I. Gordon. 2009. The core gut microbiome, energy balance and obesity. J. Physiol. 587:4153-4158. Yuan, J., B. D. Bennett, and J. D. Rabinowitz. 2008. Kinetic flux profiling for quantitation of cellular metabolic fluxes. Nat. Protoc. 3:1328-1340. Yuan, J., C. D. Doucette, W. U. Fowler, X. J. Feng, M. Piazza, H. A. Rabitz, N. S. Wingreen, and J. D. Rabinowitz. 2009. Metabolomics-driven quantitative analysis of ammonia assimilation in E. coli. Mol. Syst. Biol. 5:302. Journal of Bacteriology Aug 2010, 192 (17) 4452-4461; DOI: 10.1128/JB.00490-10
Genetics Selection Evolution Approximated prediction of genomic selection accuracy when reference and candidate populations are related Jean-Michel Elsen1,2 Genetics Selection Evolution volume 48, Article number: 18 (2016) Cite this article Genomic selection is still to be evaluated and optimized in many species. Mathematical modeling of selection schemes prior to their implementation is a classical and useful tool for that purpose. These models include formalization of a number of entities including the precision of the estimated breeding value. To model genomic selection schemes, equations that predict this reliability as a function of factors such as the size of the reference population, its diversity, its genetic distance from the group of selection candidates genotyped, number of markers and strength of linkage disequilibrium are needed. The present paper aims at exploring new approximations of this reliability. Two alternative approximations are proposed for the estimation of the reliability of genomic estimated breeding values (GEBV) in the case of non-independence between candidate and reference populations. Both were derived from the Taylor series heuristic approach suggested by Goddard in 2009. A numerical exploration of their properties showed that the series were not equivalent in terms of convergence to the exact reliability, that the approximations may overestimate the precision of GEBV and that they converged towards their theoretical expectations. Formulae derived for these approximations were simple to handle in the case of independent markers. A few parameters that describe the markers' genotypic variability (allele frequencies, linkage disequilibrium) can be estimated from genomic data corresponding to the population of interest or after making assumptions about their distribution. When markers are not in linkage equilibrium, replacing the real number of markers and QTL by the "effective number of independent loci", as proposed earlier is a practical solution. In this paper, we considered an alternative, i.e. an "equivalent number of independent loci" which would give a GEBV reliability for unrelated individuals by considering a sub-set of independent markers that is identical to the reliability obtained by considering the full set of markers. This paper is a further step towards the development of deterministic models that describe breeding plans based on the use of genomic information. Such deterministic models carry low computational burden, which allows design optimization through intensive numerical exploration. The effectiveness of genomic selection comes from the possibility of predicting breeding values on un-phenotyped and young animals [1]. Genomic selection promised and proved to be extremely efficient and beneficial for dairy cattle (e.g. [2–7]), but debate continues for other species and production sectors (e.g. [8–12]). A key criterion to decide whether or not selection schemes (also referred to here as breeding plans) should include genomic information is the reliability of the genomic predictor. It was clearly shown that this reliability depends on the structure of the reference population and on the characteristics of the marker set used. The size of this reference population, its diversity, the genetic distance between the reference and the group of selection candidates genotyped, the number of markers, and the degree or strength of the linkage disequilibrium are the main factors that influence this reliability [13–23]. 
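For orientation, the kind of closed-form prediction referred to here, in the simplest setting of unrelated reference and candidate individuals and independent loci, is often written as
\[ r^2_{\mathrm{GEBV}} \approx \frac{N h^2}{N h^2 + M_e}, \]
where \(N\) is the size of the reference population, \(h^2\) the heritability of the trait and \(M_e\) an effective number of independent loci (or chromosome segments). The symbols here are generic rather than those defined later in this paper; the approximations developed below extend this type of expression to the case where reference and candidate populations are related.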
An extensive literature exists on the mathematical modeling of selection schemes prior to their implementation, in order, for instance, to optimize their design, or to evaluate the usefulness of new technologies such as embryo transfer, sperm selection, DNA markers and others (e.g. [24] for a review). These models account for factors such as selection intensities and maintenance or loss of genetic variability. Among these parameters, the precision of breeding value estimates is central. To model genomic selection schemes, equations that predict this reliability as a function of the factors cited above are needed (e.g. [6, 25, 26]). The quantitative influence of these factors (size of the reference population, its diversity, etc.) was assessed by simulation studies [18–21, 27, 28]. An equation that predicts the reliability of genomic evaluation in the very simple situation of independent quantitative trait loci (QTL), that are perfectly marked by single nucleotide polymorphisms (SNPs) and populations (reference and candidates) of unrelated individuals was derived [13]. This approach was extended to the case when only a part of the genetic variability is imperfectly marked by SNPs [16, 19], and the situation of non-independence between reference and candidate populations was explored [17]. It was demonstrated that genomic information captures historical linkage disequilibrium, short-term linkage between QTL and markers and additive relationships between reference and candidate individuals, the equation of the reliability accounting for these three phenomena being derived in a very simple case of one QTL marked by a single SNP [22]. A Taylor expansion of a matrix inverse involved in the reliability formula was suggested [18], which led to the algebraic development of an approximation. This approximation seems to work well in the simple situation but lacks generality. In this paper, an alternative approximation is proposed, opening a way to include non-independence between reference and candidate populations, and between markers. After a formalization of the genomic selection context, the principles that underlie these approximations are presented and their properties are compared by using a simple example. Then, the new approximation is derived when reference and candidate animals are related. This is illustrated by some numerical examples. Finally, the extension to the linkage disequilibrium situation is described. General framework Although the prediction equations derived below were based on a number of simplifying assumptions, it is important to first draw a complete description of the biological framework, as a basis to subsequently simplify the discussion. The SNP effects are estimated in a reference population, Pr, comprising n r individuals. The genomic estimated breeding values (GEBV) are calculated for a population of candidates for selection and used in breeding, Pc, comprising n c individuals. Let \( {\mathbf{\mathcal{P}}} = \left( {{\mathbf{\mathcal{P}}}_{r} ,{\mathbf{\mathcal{P}}}_{c} } \right) \) the population structure (including pedigree relationships between individuals and marker allele frequencies, but not including genotypes and phenotypes). Individuals are characterized by their genotypes at n M markers (observed) and at n Q QTL (unknown). Alleles will be noted A m and B m for the marker m and A q and B q for the QTL q. 
Let a tim ∊ {0, 1, 2} and a tiq ∊ {0, 1, 2} be the numbers of B m (and respectively, B q ) alleles that an individual i from population Pt (Pr or Pc) carries at marker m (respectively, QTL q). Let p tm and p tq be the frequencies of alleles B m and B q in Pt. Genotypic values will be assigned to the different markers and QTL genotypes. Following [18], genotypes will be coded as x tim = a tim − 2p tm and w tiq = a tiq − 2p tq . Different codifications can be proposed [15]. In particular, as described for instance in [29], genotypic values may be standardized, i.e. x tim = (a tim − 2p tm )/σ tm and w tiq = (a tiq −2p tq )/σ tq , with variances σ 2 tm = 2p tm (1 − p tm ) and σ 2 tq = 2p tq (1 − p tq ). Most of the following developments are given with the first codification here, and the results with the second codification are described in a specific section. These genotypic values are assembled in matrices X (dim (X) = (n r + n c ) × n M ) and W (dim (W) = (n r + n c ) × n Q ). Sub-matrices corresponding to sub-populations will be noted in the following way: X ′ = (X ′ r , X ′ c ) and W ′ = (W ′ r , W ′ c ). The genetic model assumes additivity of QTL effects. The additive genetic value of an individual is described as \( g_{ti} = \sum\nolimits_{q = 1}^{{n_{Q} }} {w_{tiq} \alpha_{q} } \) and, in general, \( {\mathbf{ g = W\alpha }} \). The phenotypic values when observed are y = g + ɛ. A statistical model describes the performances in the reference population as random variables for which the expectations are linear combinations of SNP effects: \( y_{ri} = \sum\nolimits_{m = 1}^{{n_{S} }} {X_{rim} \beta_{m} + e_{ri} } \) and, in general, y = Xβ + e. In these models, the SNP (or QTL) effects may be considered as fixed, or random. Since the number of SNPs is much bigger than the number of individuals (n M ≫ n r ) the second solution is generally chosen in the statistical model (but not always see [1, 13]). In the random model, a distribution \( {\mathcal{L}}\left( {{\varvec{\uptheta}}_{{\upbeta }} ,{\mathbf{V}}_{{\upbeta }} } \right) \) (respectively \( {\mathcal{L}}\left( {{\varvec{\uptheta}}_{\upalpha} ,{\mathbf{V}}_{\upalpha} } \right) \)) of the SNP (respectively QTL) effects is assumed, with θ β (respectively, θ α) being the vector of expectations and V β (respectively, V α) being the matrix of variances. For a full description of the variability, the V β and V α matrices are each subdivided into four blocks corresponding to the reference and candidate populations and their covariances. Covariances between the α and β vectors have also to be considered. Most generally, the SNP (QTL) effects are supposed i.i.d. giving V β = Iσ 2β (V α = Iσ 2α ). The interpretation of these QTL effects is nicely debated in Gianola et al. [30]. In the frequentist view, we simply have to imagine that QTL effects are randomly sampled from a distribution with a σ 2α variance. In the Bayesian context, the prior variability of the SNP effects was most generally described as heteroskedastic or even coming from mixtures of SNPs with or without an effect on the trait. The expectations \( {\varvec{\uptheta}}_{{\upbeta }} \left( {{\varvec{\uptheta}}_{{\upalpha }} } \right) \) are generally assumed equal to zero, but when information about population history is available (in particular, when we know it is a mixed population), non-zero values should be considered. The vector q = Xβ is a quantity similar but not equal to the genetic value g. Its element q ti is the molecular score of individual i in population t. 
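As a numerical illustration of these two codifications and of the molecular score q = Xβ, the following minimal numpy sketch simulates centred genotypes and i.i.d. SNP effects; the population size, number of markers, allele-frequency range and variance value are arbitrary illustrative choices, not values used in this paper.

import numpy as np

rng = np.random.default_rng(1)
n_ind, n_M = 100, 500                        # illustrative sizes
p = rng.uniform(0.05, 0.95, n_M)             # allele frequencies p_m
a = rng.binomial(2, p, size=(n_ind, n_M))    # allele counts a_im in {0, 1, 2}

X = a - 2 * p                                # first codification: x_im = a_im - 2 p_m
sig = np.sqrt(2 * p * (1 - p))
X_std = (a - 2 * p) / sig                    # second codification: standardized genotypes

sigma2_beta = 0.01
beta = rng.normal(0.0, np.sqrt(sigma2_beta), n_M)   # i.i.d. SNP effects
q = X @ beta                                 # molecular scores q = X beta

tau = np.sum(2 * p * (1 - p))                # tau = sum_m 2 p_m (1 - p_m)
print(q.var(), sigma2_beta * tau)            # empirical vs expected v(q_i)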
This vector may be segmented in two parts: \( {\mathbf{q^{\prime}}} = \left( {{\mathbf{q^{\prime}_{{{r}}}}} ,{\mathbf{q^{\prime}_{{\mathbf{c}}} }}} \right) \). Since the variances may be defined within a population, we have \( v\left( {{\mathbf{q}}_{{\mathbf{r}}} |{\mathbf{X}}} \right) = {\mathbf{X}}_{{\mathbf{r}}} {\mathbf{V}}_{{{\bf\upbeta r}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} \), and v(q c |X) = X c V βc X ′ c . The residual variance is v(e) = I σ 2 e . Assuming that the distribution of marker effects is centered (\( {\varvec{\uptheta}}_{{\varvec{\upbeta}}} = {\bf 0} \)) and i.i.d. (\( {\mathbf{V}}_{{{\upbeta r}}} = {\mathbf{I}}\sigma_{{{\upbeta r}}}^{2} \, {\text{and}} \,{\mathbf{V}}_{{{\upbeta c}}} = {\mathbf{I}}\sigma_{{{\upbeta c}}}^{2} \)), and extending Gianola et al. [30], we have \( v\left( {q_{ri} } \right) = {\text{E}}_{{\mathbf{X}}} \left[ {v\left( {q_{\text{ri}} |{\mathbf{X}}} \right)} \right] = \sigma_{\beta r}^{2} \sum\nolimits_{m} {2p_{mr} \left( {1 - p_{mr} } \right)} = \sigma_{\beta r}^{2} \sum\nolimits_{m} {\sigma_{mr}^{2} = \sigma_{\beta r}^{2} \tau_{r} } \) in the reference population, and v(q ci ) = σ 2βc τ c in the candidate population. Assuming that the distribution of the marker effects and genotypes are the same in Pr and Pc, i.e. \( p_{rm} = p_{cm} = p_{m} , p_{rq} = p_{cq} = p_{q} \), thus τ r = τ C = τ and σ 2 βr = σ 2 βc = σ 2 β , we define σ 2 q = τσ 2β . Thus, \( v\left( {{\mathbf{q}}|{\mathbf{X}}} \right) = \frac{1}{\tau }{\mathbf{XX^{\prime}}}\sigma_{{q}}^{2} \). These equations hold even if the markers are in linkage disequilibrium (LD) as shown in Eq. A2 from Gianola et al. [30]. We note σ 2 as the total phenotypic variance, i.e. σ 2 = σ 2 q + σ 2 e , and ν 2 as the proportion of this variance explained by the molecular score \( \left( {\nu^{2} = \frac{{\sigma_{q}^{2} }}{{\sigma^{2} }}} \right) \). The ratio \( \frac{{\sigma_{q}^{2} }}{{\sigma_{e}^{2} }} \) will be noted γ. The SNP effects β may be estimated in different ways. The genomic best linear unbiased prediction (BLUP) will only be considered here, with \( {\hat{\varvec{\upbeta} }} = {\text{cov}}\left( {{\varvec{\upbeta}},{\mathbf{y}}} \right){\text{var}}\left( {\mathbf{y}} \right)^{ - 1} {\mathbf{y}} \). Classically, this equation becomes \( {\hat{\varvec{\upbeta }}} = \sigma_{{{\upbeta }}}^{2} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} \left[ {\sigma_{{{\upbeta }}}^{2} \left( {{\mathbf{X}}_{{\mathbf{r}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} + {\mathbf{I}}{{\uplambda }}_{{{\upbeta }}} } \right)} \right]^{ - 1} {\mathbf{y}} = \left( {{\mathbf{X}}^{\prime}_{{\mathbf{r}}} {\mathbf{X}}_{{\mathbf{r}}} + {\mathbf{I}}{{\uplambda }}_{{{\upbeta }}} } \right)^{ - 1} {\mathbf{X}}^{\prime}_{{\mathbf{r}} }{\mathbf{y}} \) with λβ = σ 2e /σ 2β . The linear combination \( \hat{\varvec{q}}_{\varvec{c}} = {\mathbf{X}}_{\varvec{c}} {\hat{\varvec{\upbeta}}} \) is the GBLUP vector for candidates in Pc. It must be emphasized that these estimations and predictions are conditional on the genotypic structures defined by X (X r and \( {\mathbf{X}}_{\varvec{c}} \)). Given X, the reliability of the GBLUP is \( r^{2} \left( {g_{ci} ,\hat{q}_{ci} |{\mathbf{X}}} \right) = \frac{{cov^{2} \left( {g_{ci} ,\hat{q}_{ci} |{\mathbf{X}}} \right)}}{{v\left( {g_{ci} |{\mathbf{X}}} \right)v\left( {\hat{q}_{ci} |{\mathbf{X}}} \right)}} \). In [16], the reliability is described (Eq. 
6 in [16]) as \( r\left( {g_{ci} ,\hat{q}_{ci} } \right) = r\left( {g_{ci} ,q_{ci} } \right) \times r\left( {q_{ci} ,\hat{q}_{ci} } \right) \), by ignoring the conditioning on X. In Goddard et al. [18], the reliability is described as \( r_{{g_{ci} ,\hat{q}_{ci} }}^{2} = \frac{{v\left( {\hat{q}_{ci} } \right)}}{{v\left( {g_{ci} } \right)}} = \frac{{v\left( {q_{ci} } \right)}}{{v\left( {g_{ci} } \right)}}\frac{{v\left( {\hat{q}_{ci} } \right)}}{{v\left( {q_{ci} } \right)}} \). In this formulation, \( \frac{{v\left( {q_{ci} } \right)}}{{v\left( {g_{ci} } \right)}} \) is the proportion of the genetic variance explained by the markers and \( \frac{{v\left( {\hat{q}_{ci} } \right)}}{{v\left( {q_{ci} } \right)}} \) is the accuracy of estimated marker effects. This is similar to the \( {\text{qr}}_{{\hat{Q}}} \) reported by Dekkers et al. [25]. All these reliability formulae are approximations since \( cov^{2} \left( {g_{ci} ,\hat{q}_{ci} } \right) = cov^{2} \left( {\sum w_{ciq} \alpha_{q} ,\sum x_{cis} \hat{\beta }_{s} } \right) \ne v\left( {\hat{q}_{ci} } \right) = v\left( {\sum x_{cis} \hat{\beta }_{s} } \right) \), in general. Situation analyzed in this paper In the following, ignoring the difficulty that was mentioned above, we will assume \( r^{2} \left( {q_{\text{ci}} ,\hat{q}_{ci} |{\mathbf{X}}} \right) = \frac{{cov^{2} \left( {q_{ci} ,\hat{q}_{ci} |{\mathbf{X}}} \right)}}{{v\left( {q_{ci} |{\mathbf{X}}} \right)v\left( {\hat{q}_{ci} |{\mathbf{X}}} \right)}} = \frac{{v\left( {\hat{q}_{ci} |{\mathbf{X}}} \right)}}{{v\left( {q_{ci} |{\mathbf{X}} } \right)}} \). We are interested in a single candidate in \( \varvec{P}_{\varvec{c}} \) with a x c vector of marker genotypes. Formulae were simplified in two ways. (1) the i index of the candidate was omitted in the following developments: the genetic value of the candidate is noted q c , estimated by \( \hat{q}_{c} = cov\left( {q_{c} ,{\mathbf{y}}} \right)v\left( {\mathbf{y}} \right)^{ - 1} {\mathbf{y}} \), and its precision is \( r^{2} \left( {q_{\text{c}} ,\hat{q}_{c} |{\mathbf{X}}} \right) = \frac{{{\text{v}}(\hat{q}_{c} |{\mathbf{X}})}}{{{\text{v}}\left( {q_{\text{c}} |{\mathbf{X}}} \right)}} \), with \( v\left( {q_{\text{c}} |{\mathbf{X}}} \right) = \sigma_{{{\upbeta }}}^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} \) and \( v\left( {\hat{q}_{c} |{\mathbf{X}}} \right) = \sigma_{{{\upbeta }}}^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} \left( {{\mathbf{X}}_{{\mathbf{r}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} + {\mathbf{I}}{{\uplambda }}_{{{\upbeta }}} } \right)^{ - 1} {\mathbf{X}}_{{\mathbf{r}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} \) (where \( {\mathbf{x}}_{{\mathbf{c}}} \) is a row vector); (2) the r indices of reference individuals were most often omitted, which resulted in y i for their phenotypes and q i for their molecular scores. In fact, our objective was to estimate the expectation of this precision across the variation domain of X r and x c given the pedigree structure \( \left( {{\mathbf{\mathcal{P}}}_{\varvec{r}} ,{\mathbf{\mathcal{P}}}_{\varvec{c}} } \right):\;{\text{E}}_{{\mathbf{X}}} \left[ {r^{2} \left( {q_{\text{c}} ,\hat{q}_{c} |{\mathbf{X}}} \right)|{\mathbf{\mathcal{P}}}} \right] \). It will be noted \( E\left[ {r_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right] \). 
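As a numerical illustration of this conditional precision, a minimal sketch is given below (sizes and variance components are arbitrary, and the candidate is simulated as unrelated to the reference individuals; these settings are illustrative, not those used in the paper). It builds the GBLUP of the SNP effects and evaluates \( r^{2}\left( q_{c},\hat{q}_{c}|{\mathbf{X}} \right) \) directly from its definition.

import numpy as np

rng = np.random.default_rng(2)
n_r, n_M = 500, 1000
sigma2_q, sigma2_e = 0.4, 0.6               # nu^2 = 0.4
p = rng.uniform(0.05, 0.95, n_M)
tau = np.sum(2 * p * (1 - p))
sigma2_beta = sigma2_q / tau
lam = sigma2_e / sigma2_beta                # lambda_beta = sigma_e^2 / sigma_beta^2

Xr = rng.binomial(2, p, size=(n_r, n_M)) - 2 * p    # reference genotypes, centred
xc = rng.binomial(2, p, size=n_M) - 2 * p           # one unrelated candidate

beta = rng.normal(0.0, np.sqrt(sigma2_beta), n_M)
y = Xr @ beta + rng.normal(0.0, np.sqrt(sigma2_e), n_r)

# GBLUP (ridge) estimate of SNP effects and candidate molecular score
beta_hat = np.linalg.solve(Xr.T @ Xr + lam * np.eye(n_M), Xr.T @ y)
q_hat_c = xc @ beta_hat

# conditional reliability r^2 = v(q_hat_c | X) / v(q_c | X)
K = np.linalg.solve(Xr @ Xr.T + lam * np.eye(n_r), Xr @ xc)
v_qhat = sigma2_beta * xc @ Xr.T @ K
v_q = sigma2_beta * xc @ xc
print("conditional reliability:", v_qhat / v_q)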
The following approximation was made: \( {\text{E}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right] = \frac{{{\text{E}}_{{\mathbf{X}}} \left[ {{\text{v}}(\hat{q}_{c} |{\mathbf{X}})} \right]}}{{{\text{E}}_{{\mathbf{X}}} \left[ {{\text{v}}\left( {q_{\text{c}} |{\mathbf{X}}} \right)} \right]}} = \frac{{{\text{E}}\left[ {{\text{v}}\left( {\hat{q}_{c} } \right)} \right]}}{{{\text{E}}\left[ {{\text{v}}\left( {q_{\text{c}} } \right)} \right]}} \). Let A be the pedigree relationship matrix between individuals in \( {\mathbf{\mathcal{P}}} \). Its blocks are \( {\mathbf{A}} = \left( {\begin{array}{*{20}c} {{\text{a}}_{\text{cc}} } & {{\mathbf{A}}_{{{\mathbf{cr}}}} } \\ {{\mathbf{A}}_{{{\mathbf{rc}}}} } & {{\mathbf{A}}_{{{\mathbf{rr}}}} } \\ \end{array} } \right) \). Let \( {\mathbf{G}}^{ *} = {\mathbf{XX^{\prime}}} = \left( {\begin{array}{*{20}c} {{\mathbf{x}}_{{\mathbf{c}}} {\mathbf{x}}^{\prime}} & {{\mathbf{x}}_{{\mathbf{c}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} } \\ {{\mathbf{X}}_{{\mathbf{r}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} } & {{\mathbf{X}}_{{\mathbf{r}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} } \\ \end{array} } \right) \), which results in \( {\mathbf{V}} = \frac{1}{\tau }{\mathbf{G}}^{ *} \sigma_{q}^{2} + {\mathbf{I}}\sigma_{e}^{2} \). It must be noted that the σ 2 e term in the diagonal of the V submatrix corresponding to the candidate population is artificial since candidates are not phenotyped. We have E[G*] = A τ. The limits of this equality will be discussed below. As indicated above, the denominator of the expected reliability \( {\text{E}}_{{\mathbf{X}}} \left[ {v\left( {q_{\text{c}} |{\mathbf{X}}} \right)} \right] \), is τσ 2β = σ 2 q . Approximating \( {\text{E}}\left[ {v\left( {\hat{q}_{c} } \right)} \right] \) by E[cov(q c , y)]E[v(y)]−1 E[cov(y, q c )] is useless because it makes an oversimplification of the relationships between the reference and the candidate population: it considers separately the marginal distributions of x c X ′ r and (X r X ′ r + Iλβ)−1, while these random matrices are correlated. Estimating directly E[cov(q c , y)v(y)−1 cov(y, q c )] seems impossible in the general case. The approach of Goddard et al. [18] avoids this difficulty, i.e. the variance \( v\left( {\hat{q}_{c} |{\mathbf{X}}} \right) = \sigma_{{{\upbeta }}}^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} + \sigma_{e}^{2} - \frac{1}{{\left\{ {{\mathbf{V}}^{ - 1} } \right\}_{{\varvec{cc}}} }} \), and V −1 is approximated by a second degree Taylor expansion (\( {\mathbf{V}}^{ - 1} \sim {\varvec{\Lambda}}\left( {\mathbf{X}} \right) \)), giving \( v\left( {\hat{q}_{c} |{\mathbf{X}}} \right)\sim \sigma_{{{\upbeta }}}^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} + \sigma_{e}^{2} - \frac{1}{{{\varvec{\Lambda}}_{\text{cc}} \left( {{\mathbf{x}}_{{\mathbf{c}}} ,{\mathbf{X}}_{{\mathbf{r}}} } \right)}} \). Alternative approximations of the reliability Extension of Goddard's formula In their "heuristic approximation for V *−1", Goddard et al. [18] considered the situation where unrelated individuals are included in the reference and candidate populations, that is E[G *] = I τ and \( {\mathbf{G}}^{ *} = {\mathbf{I}}\tau + {\mathbf{E}} \), with E, a "noise" matrix centered on the null matrix \( {\bf 0}. \) A direct extension of their development would be the following. 
The matrix \( {\mathbf{V}} = \frac{1}{\tau }{\mathbf{G}}^{ *} \sigma_{q}^{2} + {\mathbf{I}}\sigma_{e}^{2} \) can be written as: V = σ 2 e (I + A γ)[I + D γ], with \( {\mathbf{D}} = \left( {{\mathbf{I}} + {\mathbf{A}}\gamma } \right)^{ - 1} \left( {\frac{1}{\tau }{\mathbf{G}}^{ *} - {\mathbf{A}}} \right) = {\mathbf{T}}\left( {\frac{1}{\tau }{\mathbf{G}}^{ *} - {\mathbf{A}}} \right) \), and \( \gamma = \frac{{\sigma_{q}^{2} }}{{\sigma_{e}^{2} }} \). Thus, \( {\mathbf{V}}^{ - 1} = \frac{1}{{\sigma_{e}^{2} }}\left[ {{\mathbf{I}} + {\mathbf{D}}\gamma } \right]^{ - 1} {\mathbf{T}} \). The inverse matrix [I + D γ]−1 will be approximated using a Taylor series. It must be emphasized that the Taylor series I − D γ + (D γ)2 − (D γ)3 + ··· converges towards [I + D γ]−1 only if the highest Eigen value of D γ is smaller than 1, i.e. if \( \left( {{\mathbf{D}}\gamma } \right)^{\text{t}} \to {\bf 0} \) when t → ∞. The second order approximation of V −1 is equal to \( \frac{1}{{\sigma_{e}^{2} }}\left( {{\mathbf{I}} - {\mathbf{D}}\gamma + {\mathbf{D}}^{2} \gamma^{2} } \right){\mathbf{T}} \). As E[D] = 0 and \( {\text{E}}\left[ {{\mathbf{D}}^{2} } \right] = {\mathbf{T}}\left( {\frac{1}{{\tau^{2} }}{\text{E}}\left[ {{\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} } \right] - {\mathbf{ATA}} } \right), \) its expectation \( {\text{E}}\left[ {\varvec{\Lambda}} \right] = \frac{1}{{\sigma_{e}^{2} }}\left( {{\mathbf{I}} - {\text{E}}\left[ {\mathbf{D}} \right]\gamma + {\text{E}}\left[ {{\mathbf{D}}^{2} } \right]\gamma^{2} } \right){\mathbf{T}} \) i.e. \( {\text{E}}\left[ {\varvec{\Lambda}} \right] = \frac{1}{{\sigma_{e}^{2} }}\left( {{\mathbf{I}} - \gamma^{2} {\mathbf{TATA}} + \frac{{\gamma^{2} }}{{\tau^{2} }}{\text{E}}\left[ {{\mathbf{TG}}^{ *} {\mathbf{TG}}^{ *} } \right]} \right){\mathbf{T}} \). Finally, the reliability of the candidate GBLUP is approximated by: $$ {\tilde{\text{E}}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right]\sim \frac{1}{{\nu^{2} }} - \frac{1}{{\gamma {\mathbf{T}}_{\text{cc}} - \gamma^{3} \left\{ {{\mathbf{TATAT}}} \right\}_{\text{cc}} + \frac{{\gamma^{3} }}{{\tau^{2} }}\left\{ {{\mathbf{T}}{\text{E}}\left[ {{\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} } \right]{\mathbf{T}}} \right\}_{\text{cc}} }} . $$ A difficulty with this approximation comes from the T term. As an example, consider a reference population composed of n r half-sibs of the candidate, \( {\mathbf{T}} = {{\upxi }}{\mathbf{I}} + {{\uppsi }}{\mathbf{J}} \) with \( {{\upxi }} = \frac{4}{4 + 3\gamma } \). As \( {\mathbf{T}}^{\varvec{t}} = {{\upxi }}^{\varvec{t}} {\mathbf{I}} + \left[ {n_{r}^{t} {{\upxi }}^{\varvec{t}} + \cdots } \right]{\mathbf{J}} \), the J coefficient will tend to ∞ as soon as \( n_{r} {{\upxi }} = \frac{{4n_{r} }}{4 + 3\gamma } > 1 \), a very realistic situation. Thus, the convergence of the Taylor series will be a balance between the increase of \( {\mathbf{T}}^{\varvec{t}} \) and decrease of \( \left[ {{\mathbf{D}}\gamma } \right]^{\varvec{t}} \). 
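This convergence condition can be checked numerically; the sketch below (unrelated individuals, arbitrary sizes, ν² = 0.1 chosen so that the series converges) builds D, verifies that the largest eigenvalue of Dγ is below one, and compares the second-order expansion of V⁻¹ with the exact inverse.

import numpy as np

rng = np.random.default_rng(3)
n, n_M = 200, 2000
sigma2_q, sigma2_e = 0.1, 0.9               # nu^2 = 0.1 favours convergence
gamma = sigma2_q / sigma2_e

p = rng.uniform(0.05, 0.95, n_M)
tau = np.sum(2 * p * (1 - p))
X = rng.binomial(2, p, size=(n, n_M)) - 2 * p
G_star = X @ X.T                            # realised genomic matrix G* = XX'
A = np.eye(n)                               # unrelated individuals: E[G*] = A tau

T = np.linalg.inv(np.eye(n) + gamma * A)
D = T @ (G_star / tau - A)                  # "noise" matrix with E[D] = 0

rho = np.max(np.abs(np.linalg.eigvals(gamma * D)))
print("largest |eigenvalue| of D*gamma:", rho)      # the series converges only if < 1

V = sigma2_q * G_star / tau + sigma2_e * np.eye(n)
V_inv_exact = np.linalg.inv(V)
V_inv_taylor = (np.eye(n) - gamma * D + (gamma * D) @ (gamma * D)) @ T / sigma2_e
print("max abs error of 2nd-order expansion:", np.abs(V_inv_exact - V_inv_taylor).max())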
Another approximation of the reliability Using the classical matrix inversion lemma, the variance \( v\left( {\hat{q}_{{\text{c}}} |{\mathbf{x}}_{{\mathbf{c}}} ,{\mathbf{X}}_{{\mathbf{r}}} } \right) = \sigma _{\upbeta }^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} \left( {{\mathbf{X}}_{{\mathbf{r}}} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} + {\mathbf{I}}\uplambda _{\upbeta } } \right)^{{ - 1}} {\mathbf{X}}_{{\mathbf{r}}} {\mathbf{x}}^{\prime}_{{\text{c}}} \) may also be defined as \( {\boldsymbol{v}}\left( {\hat{q}_{c} |{\mathbf{x}}_{{\mathbf{c}}} ,{\mathbf{X}}_{{\mathbf{r}}} } \right) = \sigma_{{{\upbeta }}}^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} - \sigma_{\text{e}}^{2} {\mathbf{x}}_{{\mathbf{c}}} \left( {{\mathbf{X}}^{\prime}_{{\mathbf{r}}} {\mathbf{X}}_{{\mathbf{r}}} + {\mathbf{I}}{{\uplambda }}_{{{\upbeta }}} } \right)^{ - 1} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} \). \( {\mathbf{X}}^{\prime}_{{\mathbf{r}}} {\mathbf{X}}_{{\mathbf{r}}} \) is a very large matrix \( \left( {n_{M} \times n_{M} } \right) \) that describes the LD between markers: its elements tend to be smaller when they are more distant from the diagonal. Elements of E[X ′ r X r ] are the following: \( {\text{E}}\left[ {{\mathbf{X}}^{\prime}_{{\mathbf{r}}} {\mathbf{X}}_{{\mathbf{r}}} } \right]_{\text{ml}} = {\text{E}}\left[ {\mathop {\sum\nolimits_{\text{i}} {\left( {a_{im} - 2p_{m} } \right)\left( {a_{il} - 2p_{l} } \right)} }\limits_{{}} } \right] = 2n_{r} \Updelta_{ml}, \) with \( \Updelta_{ml} \) the LD between loci m and l. E[X ′ r X r ]mm = E[ ∑ i(a im − 2p m )2] = n r σ 2m . Let \( {\mathbf{C}} = {\mathbf{I}}{{\uplambda }}_{{{\upbeta }}} + n_{r} {\text{diag}}\left[ {\sigma_{1}^{2} , \ldots ,\sigma_{{n_{M} }}^{2} } \right] \), the X ′ r X r + Iλβ matrix may be written as: \( {\mathbf{X_r^\prime}} {\mathbf{X}}_{{\mathbf{r}}} + {\mathbf{I}}{{\uplambda }}_{{{\upbeta }}} = \left[ {\left( {{\mathbf{X}}^{\prime}_{{\mathbf{r}}} {\mathbf{X}}_{{\mathbf{r}}} - n_{r} {\text{diag}}\left[ {\sigma_{1}^{2} , \ldots ,\sigma_{{n_{M} }}^{2} } \right]} \right){\mathbf{C}}^{ - 1} + {\mathbf{I}}} \right]{\mathbf{C}} \), which results in: X ′ r X r + Iλβ = [I + B]C. The convergence of the Taylor series I − B + B 2 − B 3 + ··· to (I + B)−1 depends on the structure of the B matrix, which varies depending on the sample. However, we can examine the case of its expectation E[B]. E[B] mm = 0 and \( {\text{E}}\left[ {\mathbf{B}} \right]_{ml} = \frac{{2n_{r} \Updelta_{ml} }}{{{{\uplambda }}_{{{\upbeta }}} + n_{r} \sigma_{\text{l}}^{2} }} \). The ratio λβ is proportional to the number of markers (\( {{\uplambda }}_{{{\upbeta }}} = n_{M} \frac{{\bar{\sigma }_{\text{m}}^{2} \sigma_{e}^{2} }}{{\sigma_{\text{q}}^{2} }}) \) and dominates the denominator when n M ≫ n r . The (m, l) term in E[B]2, i.e. \( {\text{E}}\left[ {\mathbf{B}} \right]_{{\varvec{ml}}}^{2} = \sum\nolimits_{k} {\frac{{4n_{r}^{2} \Updelta_{mk} \Updelta_{kl} }}{{\left( {{{\uplambda }}_{{{\upbeta }}} + n_{r} \sigma_{\text{k}}^{2} } \right)\left( {{{\uplambda }}_{{{\upbeta }}} + n_{r} \sigma_{\text{l}}^{2} } \right)}}} \), is of order \( \frac{1}{{n_{M} }} \). Thus, we expect the Taylor series to converge to (I + E[B])−1. 
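Both expressions of \( v(\hat{q}_{c}|{\mathbf{x}}_{\mathbf{c}},{\mathbf{X}}_{\mathbf{r}}) \) and the first-order expansion based on B can be verified numerically; a sketch assuming unrelated individuals and markers simulated in linkage equilibrium (sizes and variances are arbitrary illustration choices):

import numpy as np

rng = np.random.default_rng(4)
n_r, n_M = 500, 1000
sigma2_q, sigma2_e = 0.4, 0.6

p = rng.uniform(0.05, 0.95, n_M)
sig2_m = 2 * p * (1 - p)
tau = sig2_m.sum()
sigma2_beta = sigma2_q / tau
lam = sigma2_e / sigma2_beta

Xr = rng.binomial(2, p, size=(n_r, n_M)) - 2 * p
xc = rng.binomial(2, p, size=n_M) - 2 * p

# matrix-inversion-lemma identity: the two expressions of v(q_hat_c | X) coincide
lhs = sigma2_beta * xc @ Xr.T @ np.linalg.solve(Xr @ Xr.T + lam * np.eye(n_r), Xr @ xc)
rhs = sigma2_beta * xc @ xc - sigma2_e * xc @ np.linalg.solve(Xr.T @ Xr + lam * np.eye(n_M), xc)
print(lhs, rhs)

# first-order Taylor approximation based on B = (Xr'Xr - n_r diag(sig2_m)) C^{-1}
C_inv = 1.0 / (lam + n_r * sig2_m)                  # C is diagonal
B = (Xr.T @ Xr - n_r * np.diag(sig2_m)) * C_inv     # right-multiplication by diag(C^{-1})
approx = sigma2_beta * xc @ xc - sigma2_e * (xc * C_inv) @ ((np.eye(n_M) - B) @ xc)
print(approx)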
First order approximation At the first order, \( v\left({\hat{q}_{c} |{\mathbf{x}}_{{\mathbf{c}}} ,{\mathbf{X}}_{{\mathbf{r}}} } \right) \sim \sigma_{{{\upbeta }}}^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} - \sigma_{\text{e}}^{2} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{C}}^{ - 1} \left( {{\mathbf{I}} - {\mathbf{B}}} \right){\mathbf{x}}^{\prime}_{{\mathbf{c}}} \) and the expectation of the reliability of the candidate GBLUP is approximated by \( \tilde{\tilde{\text{E}}}\left[ r_{q_{{\text{c}}} ,\hat{q}_{{\text{c}}} }^{2} \right] = 1 - \frac{{\sigma _{{\text{e}}}^{2} {\text{E}}\left[ {{\mathbf{x}}_{{\mathbf{c}}} {\mathbf{C}}^{{ - 1}} \left( {{\mathbf{I}} - {\mathbf{B}}} \right){\mathbf{x}}^{\prime}_{{\mathbf{c}}} } \right]}}{{\sigma _{\upbeta }^{2} {\text{E}}\left[ {{\mathbf{x}}_{{\mathbf{c}}} {\mathbf{x}}^{\prime}_{{\mathbf{c}}} } \right]}} \). $$ \begin{aligned} {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{C}}^{-1} \left( {{\mathbf{I}} - {\mathbf{B}}}\right){\mathbf{x^{\prime}_{{\mathbf{c}}}}} &= \mathop \sum \limits_{m} \frac{{x_{cm}^{2}}}{{\lambda_{\beta } + n_{r} \sigma_{m}^{2} }} + n_{r} \mathop \sum \limits_{m} \frac{{\sigma_{m}^{2} x_{cm}^{2} }}{{\left( {\lambda_{\beta } + n_{r} \sigma_{m}^{2} } \right)^{2} }} \\ &\quad- {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{C}}^{ - 1} \left( {\mathbf{X_r^\prime}}{\mathbf{X}}_{{\mathbf{r}}} \right){\mathbf{C}}^{ - 1} {\mathbf{x^{\prime}_{{\mathbf{c}}}}} .\end{aligned}$$ Using \( {\mathbf{x}}_{{\mathbf{c}}} {\mathbf{C}}^{ - 1} {\mathbf{X}}^{\prime}_{{\mathbf{r}}} = \left({\begin{array}{*{20}l} {\sum\nolimits_{\text{m}} {\frac{{{\text{x}}_{\text{cm}} {\text{X}}_{{{\text{r}}1{\text{m}}}} }}{{{{\uplambda }}_{{{\upbeta }}} + {\text{n}}_{\text{r}} {{\sigma }}_{\text{m}}^{2} }}} } & \cdots & {\sum\nolimits_{\text{m}} {\frac{{{\text{x}}_{\text{cm}} {\text{X}}_{{{\text{rn}}_{\text{r}} {\text{m}}}} }}{{{{\uplambda }}_{{{\upbeta }}} + {\text{n}}_{\text{r}} {{\sigma }}_{\text{m}}^{2} }}} } \\ \end{array} }\right) \), the last term is: \( \sum\nolimits_{i} {\left( {\mathop \sum \nolimits_{m} \frac{{x_{cm} {\text{X}}_{rim} }}{{\lambda_{\beta } + n_{r} \sigma_{m}^{2} }}} \right)^{2} } .\) Finally, the expectation is: $$ \begin{aligned} {{\tilde{\tilde{E}}}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right] =& \,1 - \frac{{\lambda_{\beta } }}{\tau }\left\{ {\sum\nolimits_{m} {\left[ {\frac{{\sigma_{m}^{2} }}{{\lambda_{\beta } + n_{r} \sigma_{m}^{2} }} + \frac{{n_{r} \sigma_{m}^{4} }}{{\left( {\lambda_{\beta } + n_{r} \sigma_{m}^{2} } \right)^{2} }}} \right]} } \right. \hfill \\ & \left. { - \sum\nolimits_{i} {\sum\nolimits_{m} {\left[ {\frac{{{\text{E}}\left[ {x_{cm}^{2} X_{rim}^{2} } \right]}}{{\left( {\lambda_{\beta } + n_{r} \sigma_{m}^{2} } \right)^{2} }} + \sum\nolimits_{l \ne m} {\frac{{{\text{E}}\left[ {x_{cm} X_{rim} x_{cl} X_{ril} } \right]}}{{\left( {\lambda_{\beta } + n_{r} \sigma_{m}^{2} } \right)\left( {\lambda_{\beta } + n_{r} \sigma_{l}^{2} } \right)}}} } \right]} } } \right\} \hfill \\. \end{aligned} $$ Application in the case of independent markers This situation either assumes low density marker information, or corresponds to the idea of an effective number of loci that was developed by Goddard [16, 31]. In the first case, the proportion of the genetic variance explained by the markers \( \frac{{v\left( {q_{ci} } \right)}}{{v\left( {g_{ci} } \right)}} \) is small and this quantity should be considered when estimating the genomic precision. 
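Such expected reliabilities can also be approached by brute-force Monte Carlo over genotype configurations; the sketch below (independent markers, unrelated individuals, arbitrary sizes and variances) produces the reference value that the closed-form approximations derived next aim to reproduce.

import numpy as np

rng = np.random.default_rng(5)
n_r, n_M, n_rep = 300, 800, 30
sigma2_q, sigma2_e = 0.4, 0.6

p = rng.uniform(0.05, 0.95, n_M)        # independent markers, fixed frequencies
tau = np.sum(2 * p * (1 - p))
sigma2_beta = sigma2_q / tau
lam = sigma2_e / sigma2_beta

r2 = []
for _ in range(n_rep):
    Xr = rng.binomial(2, p, size=(n_r, n_M)) - 2 * p
    xc = rng.binomial(2, p, size=n_M) - 2 * p
    v_qhat = sigma2_beta * xc @ Xr.T @ np.linalg.solve(
        Xr @ Xr.T + lam * np.eye(n_r), Xr @ xc)
    r2.append(v_qhat / (sigma2_beta * xc @ xc))
print("Monte-Carlo estimate of E[r^2]:", np.mean(r2))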
First approximation

Using the notation X ′ = (x ′ c , X ′ r ), the (i, j) element of \( {\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} \) is: \( \left\{ {{\mathbf{XX^{\prime}TXX^{\prime}}}} \right\}_{\text{ij}} = \sum\nolimits_{l} {\sum\nolimits_{k} {t_{kl} \left( {\sum\nolimits_{m} {X_{im} X_{km} } } \right)\left( {\sum\nolimits_{m} {X_{jm} X_{lm} } } \right)} }. \) Thus, elements of E[XX ′ TXX ′] involve expectations of fourth-order moments of the within-m joint distributions of the X im : \( {\text{E}}\left[ {X_{im} X_{jm} X_{km} X_{lm} } \right] \), \( {\text{E}}\left[ {X_{im}^{2} X_{jm} X_{km} } \right] \), \( {\text{E}}\left[ {X_{im}^{2} X_{jm}^{2} } \right] \), \( {\text{E}}\left[ {X_{im}^{3} X_{jm} } \right] \) and \( {\text{E}}\left[ {X_{im}^{4} } \right] \). Defining \( \tau _{2} = \sum _{m} [2p_{m} (1 - p_{m} )]^{2} \) and \( a_{ij} \) the coancestry coefficient between individuals i and j, we found that [See Additional file 1]: \( \left\{ {{\text{E}}\left[ {{\mathbf{XX^{\prime}TXX^{\prime}}}} \right]} \right\}_{\text{ij}} = \sum\nolimits_{l} {\sum\nolimits_{k} {t_{kl} \left( {\frac{1}{2}\tau \alpha_{ijkl}^{1111} - \frac{1}{4}\tau_{2} \gamma_{ijkl}^{1111} + 4a_{ik} a_{jl} \left[ {\tau^{2} - \tau_{2} } \right]} \right)} }, \) where parameters \( \alpha_{ij \cdots K}^{{d_{i} d_{j} \cdots d_{K} }} \) and \( \gamma_{ij \cdots K}^{{d_{i} d_{j} \cdots d_{K} }} \) are functions of the probabilities of the identity states between gametes of \( ij \cdots K \) individuals at marker m (Table 1). In the summations above, when individuals are repeated (e.g. i = j), the corresponding exponents are summed (e.g. \( \alpha_{iiil}^{1111} = \alpha_{il}^{31} \)). The resulting X im moments are in Table 2.

Table 1 Coefficients describing the genotypes' distributions moments when using the relation \( {\text{E}}\left[ {X_{im}^{{d_{i} }} X_{jm}^{{d_{j} }} \cdots X_{Km}^{{d_{K} }} } \right] = p_{m} \left( {1 - p_{m} } \right)\alpha_{ij \cdots K}^{{d_{i} d_{j} \cdots d_{K} }} - \left[ {p_{m} \left( {1 - p_{m} } \right)} \right]^{2} \gamma_{ij \cdots K}^{{d_{i} d_{j} \cdots d_{K} }} \) from Additional file 1

Table 2 Moments of genotypes' distributions depending on genotype codification

Second approximation

The expectations \( {\text{E}}[ {x_{cm}^{2} x_{rim}^{2} }]\) and \({\text{E}}\left[ {x_{{cm}}^{2} x_{{rim}} x_{{cl}} x_{{ril}} } \right] \) are also obtained from the coefficients in Table 1, i.e.: \( {\text{E}}\left[ {x_{cm}^{2} x_{rim}^{2} } \right] = \frac{1}{2}\sigma_{m}^{2} \alpha_{ci}^{22} - \frac{1}{4}\sigma_{m}^{4} \gamma_{ci}^{22} \) and, when markers are independent, \( {\text{E}}\left[ {x_{cm} X_{rim} x_{cl} X_{ril} } \right] = {\text{E}}\left[ {x_{cm} X_{rim} } \right] \cdot {\text{E}}\left[ {x_{cl} X_{ril} } \right] = 4a_{ci}^{2} \sigma_{m}^{2} \sigma_{l}^{2} \). Let \( \rho_{m} = \frac{{n_{r} \sigma_{m}^{2} }}{{\lambda_{\beta } + n_{r} \sigma_{m}^{2} }} \). After some algebra, it appears that:

$$\begin{aligned} \tilde{\tilde{E}}\left[ {r_{{q_{{\text{c}}} ,\hat{q}_{{\text{c}}} }}^{2} } \right] = &1 - \frac{{\lambda _{\beta } }}{{n_{r} \tau }}\left\{ {\left( {\sum\limits_{m} {\rho _{m} } } \right) + \left( {\sum\limits_{m} {\rho _{m}^{2} } } \right)\left( {1 + 4\bar{a}_{{ci}}^{2} + \frac{1}{4}\bar{\gamma }_{{ci}}^{{22}} } \right)} \right. \\ & \left. {\quad - \left( {\sum\limits_{m} {\rho _{m} } } \right)^{2} \left( {4\bar{a}_{{ci}}^{2} } \right) - \left( {\sum\limits_{m} {\frac{{\rho _{m}^{2} }}{{\sigma _{m}^{2} }}} } \right)\left( {\frac{1}{2}{\bar{\alpha}}_{{ci}}^{{22}} } \right)} \right\} \\ \end{aligned} $$

where \( \bar{a}_{ci}^{2} \), \( \bar{\alpha}_{ci}^{22} \) and \( \bar{\gamma }_{ci}^{22} \) are the means of the corresponding coefficients, considering all possible i reference individuals.
The parameters τ = ∑ m σ 2 m and τ 2 = ∑ m σ 4 m that appear in the first approximation, and the parameters \( \sum\nolimits_{m} {\rho_{m} } \), ∑ m ρ 2 m and \( \sum\nolimits_{m} {\frac{{\rho_{m}^{2} }}{{\sigma_{m}^{2} }}} \) that appear in the second approximation, are unknown. Their expectations can be derived by making assumptions about the distribution of the marker allele frequencies. They were derived assuming either a uniform distribution of allele frequencies or the U-shaped distribution of allelic frequencies proposed by Goddard [16]: \( f( p) = {k \mathord{\left/ {\vphantom {k {2{\text{p}}\left( {1 - {\text{p}}} \right) }}} \right. \kern-0pt} {2{\text{p}}\left( {1 - {\text{p}}} \right) }} \) with the constant k estimated as 1/log2N e , N e being the effective size of the reference population. The expectations of the parameters are in Table 3. The corresponding algebra is detailed in Additional file 2. Table 3 Expectation of elements involved in precision formulae when a uniform \( ( f ( p ) = 1 ) \) or a U shaped distribution of allelic frequencies is assumed \( \left( {f( p) = {k \mathord{\left/ {\vphantom {k {2p\left( {1\text{ - }p} \right)}}} \right. \kern-0pt} {2p\left( {1-p} \right)}}} \right) \) The parameters τ and τ 2 are linked to the number M e of independent segments. This quantity M e was defined by Goddard [16] as the number of independent chromosomal segments which would give the same variance of genomic covariances \( c_{ij} \) between individuals i and j as that observed, i.e. when LD exists. Conditional on the genotypic observation, the genomic covariance between two individuals is cov(q i ,q j |X) = σ 2β ∑qXiqXjq = c ij . Thus, vX(c ij ) = σ 4β v[∑qXiqXjq], or vX(c ij ) = σ 4β (∑qv(XiqXjq) + ∑q∑q′≠qcov(XiqXjq,Xiq′Xjq′)). When the markers are in linkage equilibrium, the covariance term is null, and \( {\text{v}}_{\text{X}} \left( {c_{ij} } \right) = \sigma_{{{\upbeta }}}^{4} \left[ {\frac{1}{2}\tau \alpha_{ij}^{22} - \frac{1}{4}\tau_{2} \gamma_{ij}^{22} - \frac{1}{4}{\text{a}}_{\text{ij}}^{2} \tau_{2} } \right] \). If individuals are unrelated, α 22 ij = 0, γ 22 ic = −4 and aij = 0. Thus, vX(c ij ) = σ 4β τ 2. As σ 2 q = σ 2β τ, \( {\text{v}}_{\text{X}} \left( {c_{ij} } \right) = \sigma_{q}^{4} \frac{{\tau_{2} }}{{\tau^{2} }} \). From the appendix in the paper of Goddard [16], this variance is vX(c ij ) = σ 4 q /M e . Thus: $$ M_{e} = \tau^{2} /\tau_{2} . $$ It must be emphasized that M e , which depends on the variability of allele frequencies, is not the number of markers n M . The case of unrelated individuals The first approximation gives results similar to Goddard et al. [18] when individuals are not related. In this case, A = I then \( {\mathbf{T}} = \frac{1}{1 + \gamma }{\mathbf{I}} = \frac{{\sigma_{e}^{2} }}{{\sigma_{q}^{2} + \sigma_{e}^{2} }}{\mathbf{I}} = \frac{{\sigma_{e}^{2} }}{{\sigma^{2} }}{\mathbf{I}} \). The ratio \( \frac{\gamma }{1 + \gamma } = \frac{{\sigma_{q}^{2} }}{{\sigma^{2} }} = \nu^{2} \) is the proportion of the phenotypic variance explained by the molecular score. 
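Referring back to the definition \( M_{e} = \tau^{2}/\tau_{2} \) given just above, for a given vector of allele frequencies the effective number of independent segments is immediate to compute; a short sketch (frequencies drawn uniformly here, purely for illustration):

import numpy as np

rng = np.random.default_rng(6)
n_M = 5000
p = rng.uniform(0.05, 0.95, n_M)       # illustrative allele frequencies
sig2 = 2 * p * (1 - p)
tau = sig2.sum()                       # tau   = sum_m 2 p_m (1 - p_m)
tau2 = (sig2 ** 2).sum()               # tau_2 = sum_m [2 p_m (1 - p_m)]^2
M_e = tau ** 2 / tau2                  # effective number of independent segments
print(M_e, "independent segments out of", n_M, "markers")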
$$ \begin{aligned} {\text{E}}\left[ {{\varvec{\Lambda}}_{\text{cc}} } \right] = \frac{1}{{\sigma_{e}^{2} }}\left\{ {{\mathbf{T}}_{\text{cc}} - \gamma^{2} \left\{ {{\mathbf{TATAT}}} \right\}_{\text{cc}} + \frac{{\gamma^{2} }}{{\tau^{2} }}\left\{ {{\mathbf{T}}{\text{E}}\left[ {{\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} } \right]{\mathbf{T}}} \right\}_{\text{cc}} } \right\} \hfill \\ = \frac{1}{{\sigma_{e}^{2} }}\left\{ {\frac{1}{1 + \gamma } - \frac{{\gamma^{2} }}{{\left( {1 + \gamma } \right)^{3} }} + \frac{{\gamma^{2} }}{{\tau^{2} \left( {1 + \gamma } \right)^{2} }}{\text{E}}\left[ {{\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} } \right]_{\text{cc}} } \right\} \hfill \\ \end{aligned} $$ $$ \left\{ {{\text{E}}\left[ {{\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} } \right]} \right\}_{\text{cc}} = \sum\nolimits_{l} {\sum\nolimits_{k} {t_{kl} \left( {\frac{1}{2}\tau \alpha_{cckl}^{1111} - \frac{1}{4}\tau_{2} \gamma_{cckl}^{1111} + 4a_{ck} a_{cl} \left[ {\tau^{2} - \tau_{2} } \right]} \right)} } $$ T being diagonal, this equation simplifies to \( \left\{ {{\text{E}}\left[ {{\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} } \right]} \right\}_{\text{cc}} = \sum\nolimits_{k} {t_{kk} \left( {\frac{1}{2}\tau \alpha_{ck}^{22} - \frac{1}{4}\tau_{2} \gamma_{ck}^{22} + 4a_{ck}^{2} \left[ {\tau^{2} - \tau_{2} } \right]} \right)}, \) with \( t_{kk} = \frac{1}{1 + \gamma } \), \( \alpha_{ck}^{22} = 0 \;{\text{and}}\; \gamma_{ck}^{22} = - 4\;{\text{if}} \;c \ne k \), \( \alpha_{cc}^{22} = \alpha_{c}^{4} = 2 \;{\text{and}} \;\gamma_{c}^{4} = 0 \), \( {\text{a}}_{\text{cc}} = {\text{a}}_{\text{kk}} = \frac{1}{2} \) and ack = 0. Hence \( \left\{ {{\text{E}}\left[ {{\mathbf{G}}^{ *} {\mathbf{TG}}^{ *} } \right]} \right\}_{\text{cc}} = \frac{1}{1 + \gamma }\left\{ {\tau + \tau^{2} - \tau_{2} + n_{r} \tau_{2} } \right\} \), and \( {\text{E}}\left[ {{\varvec{\Lambda}}_{\text{cc}} } \right] = \frac{1}{{\sigma_{e}^{2} }}\left\{ {\frac{1}{1 + \gamma } + \frac{{\gamma^{2} }}{{\left( {1 + \gamma } \right)^{3} }}\left( {\frac{1}{\tau } + \left( {n_{r} - 1} \right)\frac{{\tau_{2} }}{{\tau^{2} }}} \right)} \right\} \). \( \begin{aligned} {\text{E}}\left[ {v\left( {\hat{q}_{c} } \right)} \right] &= {\text{E}}\left[ {{\mathbf{V}}_{\text{cc}}^{ *} } \right] - \frac{1}{{{\text{E}}\left[ {{\varvec{\Lambda}}_{\text{cc}} } \right]}}\\ &= \sigma^{2} - \sigma^{2} \frac{1}{{1 + \nu^{4} \left( {\frac{1}{\tau } + \frac{{n_{r} - 1}}{{M_{e} }}} \right)}} \\ & = \sigma^{2} \frac{{\nu^{4} \left( {\frac{1}{\tau } + \frac{{n_{r} - 1}}{{M_{e} }}} \right)}}{{1 + \nu^{4} \left( {\frac{1}{\tau } + \frac{{n_{r} - 1}}{{M_{e} }}} \right)}}. \\ \end{aligned} \) If we neglect \( \frac{1}{\tau } - \frac{1}{{M_{e} }} \) and use \( \nu^{2} = \frac{{\sigma_{q}^{2} }}{{\sigma^{2} }} \), we get \( {\text{E}}\left[ {v\left( {\hat{q}_{c} } \right)} \right] = \sigma_{q}^{2} \frac{{\nu^{2} \frac{{n_{r} }}{{M_{e} }}}}{{1 + \nu^{4} \frac{{n_{r} }}{{M_{e} }}}} \), which is similar but not identical to the equation in Goddard et al. [18] (\( \sigma_{q}^{2} \frac{{\nu^{2} \frac{{n_{r} }}{{M_{e} }}}}{{1 + \nu^{2} \frac{{n_{r} }}{{M_{e} }}}} \)). 
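A sketch evaluating this closed form against the corresponding expression of Goddard et al. [18], which differs only through the ν⁴ versus ν² term in the denominator (M_e, ν² and the reference population sizes below are arbitrary inputs chosen for illustration):

def reliability_first_approx(nu2, n_r, M_e):
    # nu^2 * (n_r / M_e) / (1 + nu^4 * n_r / M_e), as derived above
    x = n_r / M_e
    return nu2 * x / (1.0 + nu2 ** 2 * x)

def reliability_goddard(nu2, n_r, M_e):
    # nu^2 * (n_r / M_e) / (1 + nu^2 * n_r / M_e), expression of Goddard et al. [18]
    x = n_r / M_e
    return nu2 * x / (1.0 + nu2 * x)

for n_r in (500, 1000, 2500):
    print(n_r,
          round(reliability_first_approx(0.4, n_r, 1000), 3),
          round(reliability_goddard(0.4, n_r, 1000), 3))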
Finally, the precision is estimated as: $$ {\tilde{\text{E}}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{c} }}^{2} } \right] = \frac{{\nu^{2} \frac{{n_{r} }}{{M_{e} }}}}{{1 + \nu^{4} \frac{{n_{r} }}{{M_{e} }}}} .$$ In this situation of unrelatedness between the candidate and the reference population, the second approximation simplifies to \( {{\tilde{\tilde{E}}}}\left[ {r_{{q_{\text{c}} ,\, \hat{q}_{c} }}^{2} } \right] = 1 - \lambda_{\beta } \frac{{E\left[ {\mathop \sum \nolimits_{m} \rho_{m} } \right]}}{{n_{r} \tau }} \). From Table 3, we have \( E\left[ {\sum\nolimits_{m} {\rho_{m} } } \right] = n_{M} \frac{k}{\omega }\theta \) with \( \theta = \log \left( {\left| {\frac{1 + \omega }{1 - \omega }} \right|} \right),\omega = \sqrt {1 + 4h} \), \( h = {{{{\uplambda }}_{{{\upbeta }}} } \mathord{\left/ {\vphantom {{{{\uplambda }}_{{{\upbeta }}} } {2n_{r} }}} \right. \kern-0pt} {2n_{r} }} \) and k = 1/ log 2N e . As λ β = τ/γ, we found: $$ {{\tilde{\tilde{E}}}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{c} }}^{2} } \right] = 1 - \frac{{n_{M} {\text{k}}\theta }}{{\gamma n_{r} \omega }} . $$ Non-independence between reference and candidate population, a simple example We consider the situation of a candidate that is the son of one of the n r individuals in Pr (say the first in the list) while still assuming that reference individuals are unrelated. In this situation, the pedigree relationship matrix is \( \left( {\begin{array}{*{20}c} {\begin{array}{*{20}c} 1 & {0.5 } \\ {0.5} & 1 \\ \end{array} } & {\bf 0} \\ {\bf 0} & {{\mathbf{I}}_{{{\text{n}}_{\text{r}} - 1}} } \\ \end{array} } \right) \), which results in a T matrix \( \left( {\begin{array}{*{20}c} {\begin{array}{*{20}c} a & b \\ b & a \\ \end{array} } & {\bf 0} \\ {\bf 0} & {\frac{1}{1 + \gamma }{\mathbf{I}}_{{{\text{n}}_{\text{r}} - 1}} } \\ \end{array} } \right) \) with \( \gamma = \frac{{\sigma_{q}^{2} }}{{\sigma_{e}^{2} }}, a = \frac{1 + \gamma }{{\left( {1 + \gamma } \right)^{2} - {1 \mathord{\left/ {\vphantom {1 {4\gamma^{2} }}} \right. \kern-0pt} {4\gamma^{2} }}}} \) and \( b = - \frac{{{\gamma \mathord{\left/ {\vphantom {\gamma 2}} \right. \kern-0pt} 2}}}{{\left( {1 + \gamma } \right)^{2} - {1 \mathord{\left/ {\vphantom {1 {4\gamma }}} \right. \kern-0pt} {4\gamma }}}} \). Applications of formulae (2) and (3) are described in Additional file 3. The expected approximate precision with the first approach is: $$ {\tilde{\text{E}}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right] \sim \frac{1}{{\nu^{2} }} - \frac{1}{{\gamma {\text{a}} + \gamma^{3} \frac{{{{\tau }} - {{\tau }}_{2} }}{{{{\tau }}^{2} }}{\text{c}}1 + \gamma^{3} \frac{{{{\tau }}_{2} }}{{{{\tau }}^{2} }}\left[ {{\text{c}}2 + \frac{{{\text{n}}_{\text{r}} - 1}}{1 + \gamma }{\text{c}}3} \right]}} , $$ where \( c1 = \left( {{\text{a}} + {\text{b}}} \right)^{3} + \left( {{\text{a}}^{2} + {\text{b}}^{2} } \right)\frac{1}{2}{\text{a}} \), \( c2 = \frac{1}{4}{\text{a}}\left( {{\text{b}}^{2} - {\text{a}}^{2} } \right) \) and \( c3 = {\text{a}}^{2} + {\text{b}}^{2} + \frac{1}{2} {\text{ab}} \). And with the second approach: $$ {{\tilde{\tilde{E}}}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{c} }}^{2} } \right] = 1 - \frac{{{\text{n}}_{\text{M}} k\theta }}{{\gamma {\text{n}}_{\text{r}} \omega }} - \frac{{{\text{n}}_{\text{M}} k}}{{4\gamma n_{r}^{2} \omega^{2} }}\left( {5\theta \omega - \frac{{\left( {10h + 2} \right)\theta }}{\omega } - 5 - {\text{n}}_{\text{M}} {\text{k}}\theta^{2} - \frac{1}{h}} \right) . 
$$ Alternative genotypes codification In all the previous developments, genotypes were coded x tim = a tim − 2p tm and w tiq = a tiq − 2p tq . Alternatively, we could define x tim = (a tim − 2p tm )/σ tm and w tiq = (a tiq − 2p tq )/σ tq . The relation between genetic and marker variances becomes σ 2 q = n M σ 2β and the relation between pedigree and genomic matrices becomes E[G*] = A n M . Thus, formulae (1) and (2) are still valid when replacing τ by n M . The \( {\text{E}}\left[ {X_{im}^{{d_{i} }} X_{jm}^{{d_{j} }} \cdots X_{Km}^{{d_{K} }} } \right] \) elements derived in Additional file 1, need to be divided by \( \sigma_{m}^{{d_{i} + d_{j} + \cdots + d_{K} }} \). Table 2 gives the expectations with this alternative codification of genotypes. The quantity {E[XX ′ TXX ′]}ij has to be changed, using \( {{\zeta }} = \frac{1}{{n_{M} }}\sum\nolimits_{m} {\frac{1}{{\sigma_{m}^{2} }}} \). We have: \( \sum\nolimits_{\text{m}} {{\text{E}}\left[ {X_{im} X_{km} X_{jm} X_{lm} } \right] = \frac{{n_{M} }}{2}{{\zeta \alpha }}_{\text{ijkl}}^{1111} - \frac{{n_{M} }}{4}{{\gamma }}_{\text{ijkl}}^{1111} } \), ∑mE[X im X km ] = 2n M a ik , and ∑m(E[X im X km ]E[X jm X lm ]) = 4n M a ik a jl . Thus: $$ \begin{aligned}& \left\{ {{\text{E}}\left[ {{\mathbf{XX^{\prime}TXX^{\prime}}}} \right]} \right\}_{\text{ij}} \\&\quad= \sum\nolimits_{l} {\sum\nolimits_{k} {t_{kl} \left( {\frac{{n_{M} }}{2}{{\zeta }}\alpha_{ijkl}^{1111} - \frac{{n_{M} }}{4}\gamma_{ijkl}^{1111} + 4n_{M} \left( {n_{M} - 1} \right)a_{ik} a_{jl} } \right)} }. \\ \end{aligned} $$ When applied to the case of unrelated individuals and no LD, i.e. when \( t_{kk} = \frac{1}{1 + \gamma } \), \( \alpha_{ck}^{22} = 0 \;{\text{and}}\; \gamma_{ck}^{22} = - 4 \;{\text{if}}\; c \ne k \), \( \alpha_{cc}^{22} = \alpha_{c}^{4} = 2 \;{\text{and}}\; \gamma_{c}^{4} = 0 \), \( {\text{a}}_{\text{cc}} = {\text{a}}_{\text{kk}} = \frac{1}{2} \) and ack = 0, we have: $$ \begin{aligned} {\text{E}}\left[ {{\varvec{\Lambda}}_{\text{cc}} } \right] &= \frac{1}{{\sigma_{e}^{2} }}\left\{ {\frac{1}{1 + \gamma } - \frac{{\gamma^{2} }}{{\left( {1 + \gamma } \right)^{3} }} + \frac{{\gamma^{2} }}{{n_{M}^{2} \left( {1 + \gamma } \right)^{2} }}} \right. . \hfill \\&\quad \left. {\sum\nolimits_{k} {\frac{1}{1 + \gamma }\left( {\frac{{n_{M} }}{2}{{\zeta }}\alpha_{cckk}^{1111} - \frac{{n_{M} }}{4}\gamma_{cckk}^{1111} + 4n_{M} \left( {n_{M} - 1} \right)a_{ck} a_{ck} } \right)} } \right\}, \hfill \\ \end{aligned} $$ which gives: $$ {\text{E}}\left[ {{\varvec{\Lambda}}_{\text{cc}} } \right] = \frac{1}{{\sigma_{e}^{2} \left( {1 + \gamma } \right)}}\left\{ {1 - \frac{{\gamma^{2} }}{{\left( {1 + \gamma } \right)^{2} }}\left( {1 - \frac{{{{\zeta }} + n_{R} + n_{M} - 1}}{{n_{M} }}} \right)} \right\} $$ i.e. \( {\text{E}}\left[ {{\varvec{\Lambda}}_{\text{cc}} } \right] = \frac{1}{{\sigma^{2} }}\left\{ {1 + \nu^{4} \left( {\frac{{{{\zeta }} + n_{R} - 1}}{{n_{M} }}} \right)} \right\} \) and \( {\tilde{\text{E}}}\left[ {r_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right] = \frac{{\nu^{2} \frac{{{{\zeta }} + n_{R} - 1}}{{n_{M} }}}}{{1 + \nu^{4} \frac{{{{\zeta }} + n_{R} - 1}}{{n_{M} }}}} \). Based on Additional file 2, the expectation of ζ parameter is \( \frac{k}{4}\left[ {2\log \left( {N_{e} - 1} \right) + 2\frac{{N_{e} \left( {N_{e} - 2} \right)}}{{N_{e} - 1}}} \right] \) for a U-shaped distribution of alleles frequencies and log (N e − 1) for a uniform distribution. 
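Before turning to markers in linkage disequilibrium, the closed-form second approximation for unrelated individuals [formula (6) above] can be evaluated directly; in the sketch below, τ is set to n_M/log(2N_e), i.e. approximately n_M·k, which is an illustrative assumption about the allele-frequency distribution rather than a value taken from the paper.

import numpy as np

def reliability_unrelated_2nd(n_M, n_r, N_e, gamma, tau):
    # formula (6): second approximation, unrelated reference and candidate
    k = 1.0 / np.log(2 * N_e)
    lam_beta = tau / gamma               # lambda_beta = tau / gamma
    h = lam_beta / (2 * n_r)
    omega = np.sqrt(1 + 4 * h)
    theta = np.log(abs((1 + omega) / (1 - omega)))
    return 1.0 - n_M * k * theta / (gamma * n_r * omega)

N_e, n_M, n_r = 200, 2000, 1000
gamma = 0.4 / 0.6                        # sigma_q^2 / sigma_e^2, i.e. nu^2 = 0.4
tau = n_M / np.log(2 * N_e)              # assumption: tau ~ n_M * k under the U-shaped density
print(reliability_unrelated_2nd(n_M, n_r, N_e, gamma, tau))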
The case of markers in linkage disequilibrium

So far, following Goddard [16], we considered the situation of n M independent segments that each carries a single QTL in LD with a single marker. More typically, the genomic information consists of a large number of non-independent markers. This non-independence comes from long-term effects due to bottlenecks, mutations, migrations, etc. and short-term effects due to family structure.

Effective and equivalent numbers of independent loci

We based our developments on the very fruitful concept of the effective number of loci that Goddard defined as "the number of independent loci that gives the same variance of realized relationships as that obtained in the more realistic situation" (Goddard [16] appendix). Since our objective was to predict the reliability of GEBV, we now suggest the alternative definition of an "equivalent number of independent loci", which would give a reliability of GEBV for unrelated individuals, computed on a sub-set of independent markers, identical to the reliability obtained when considering the full set of markers. From the derivation of the reliability given previously, defining \( {\mathbf{x}}_{{\mathbf{c}}}^{{\mathbf{i}}} \) and \( {\mathbf{X}}_{{\mathbf{r}}}^{{\mathbf{i}}} \) as the genotype vector and matrix when only this sub-set of independent markers is considered, this equivalent number is such that \( {\text{E}}_{{{\mathbf{x}}_{{\mathbf{c}}} ,{\mathbf{X}}_{{\mathbf{r}}} }} \left[ {v\left( {\hat{q}_{c} |{\mathbf{x}}_{{\mathbf{c}}} ,{\mathbf{X}}_{{\mathbf{r}}} } \right)} \right] = {\text{E}}_{{{\mathbf{x}}_{\text{c}}^{\text{i}} ,{\mathbf{X}}_{\text{r}}^{\text{i}} }} \left[ {v\left( {\hat{q}_{c} |{\mathbf{x}}_{{\mathbf{c}}}^{{\mathbf{i}}} ,{\mathbf{X}}_{{\mathbf{r}}}^{{\mathbf{i}}} } \right)} \right]. \) With a few simplifying assumptions (identical distribution of genotypes in the reference and candidate populations and equal genotypic variance at all loci) a simple formula can be derived [see Additional file 4]:

$$ n_{{M_{i} }} = n_{M} \frac{1 + \gamma }{\gamma }\left( {1 - tr\left[ {\left( {{\text{E}}\left[ {{\mathbf{X}}^{\prime}_{{\mathbf{r}}} {\mathbf{X}}_{{\mathbf{r}}} } \right] + {{\uplambda }}_{{{\upbeta }}} {\mathbf{I}}} \right)^{ - 1} } \right]\frac{{\mathop \sum \nolimits_{m} \sigma_{m}^{2} /n_{M} }}{\gamma }} \right) , $$

where tr[M] is the trace of matrix M. Once marker allele frequencies and between-marker LD are estimated in a population of interest, the equivalent number of independent loci can be estimated from formula (9), and this parameter can be used in models that predict the genetic gain expected from a genomic selection scheme applied to this population. In the more general situation, prior to the observation of the X r matrix, a simple approximation for \( n_{{M_{i} }} \) is obtained assuming equal variances σ 2 m = s 2, and using the relation between expected LD and effective population size N e as derived by Sved [32]: \( {\text{E}}\left[ {2\Updelta_{ml} } \right] = \sigma_{m} \sigma_{l} /\sqrt {1 + 4N_{e} d_{lm} } \), with d lm the distance between ordered loci l and m, such that \( d_{lm} = \left| {l - m} \right|L/n_{M} \), with L the genome length in Morgan. With those hypotheses, let \( U = tr\left[ {\left( {\gamma n_{R} {\mathbf{R}} + n_{M} {\mathbf{I}}} \right)^{ - 1} } \right] \) with \( {\mathbf{R}}_{ml} = \sqrt {n_{M} /\left( {n_{M} + 4N_{e} \left| {l - m} \right|L} \right)} \).
In this simplified situation, the equivalent number of loci is [See Additional file 4]:

$$ n_{{M_{i} }} = n_{M} \frac{{n_{R} \gamma \left( {1 - U} \right)}}{{n_{R} \gamma - n_{M} \left( {1 - U} \right)}} . $$

Towards an exact treatment of linkage disequilibrium

For a complete treatment of the LD situation, it is necessary to estimate the expectations of the product of four genetic values. For instance, with the second approximation [formula (2)], we need to compute E[x cm X rim x cl X ril ]. Let \( X_{im} = g_{imf} + g_{imd} \), where \( g_{imf} \) and \( g_{imd} \) are the "values" of the alleles transmitted to individual i by its father and its dam, with \( g_{imf} \) and \( g_{imd} = \left( {0 \;{\text{or}}\; 1} \right) - p_{m} \). They will be called allelic values in the following. Equivalent terms are defined for \( x_{cl} \), \( x_{cm} \) and \( X_{il} \). The random variable \( M_{cls} \) is the allele of individual c received from parent s (f or d) at locus l. \( M_{cmt} , M_{ilu}\, {\text{and}}\, M_{imv} \) are defined similarly.

$$ \begin{aligned} & {\text{E}}\left[ {x_{cl} x_{cm} X_{il} X_{im} } \right] \\ &\quad= \sum\limits_{{s \in \left\{ {f,d} \right\}}} {\sum\limits_{{t \in \left\{ {f,d} \right\}}} {\sum\limits_{{u \in \left\{ {f,d} \right\}}} {\sum\limits_{{v \in \left\{ {f,d} \right\}}} {{\text{E}}\left[ {g_{cls} g_{cmt} g_{ilu} g_{imv} } \right]} } } } \hfill \\ \end{aligned}. $$

For the candidate c as for the reference individual i, the pair of allelic values may originate from the same parent (and be carried by the same chromosome) or not, giving four types of \( \left( {g_{cls} ,g_{cmt} ,g_{ilu} ,g_{imv} } \right) \) vectors. In type 1 (\( s = t \;{\text{and}}\; u = v \)), both alleles (belonging to loci m and l) of each pair of loci (one for c and one for i) are on the same chromosome (which may be from the two fathers, the two dams, c's father and i's dam, or i's father and c's dam). In type 2 (\( s = t \;{\text{and}}\; u \ne v \)), both alleles (belonging to loci m and l) of the candidate are on the same chromosome, while alleles of the reference individual i are not on the same chromosome. Type 3 (\( s \ne t \;{\text{and}}\; u = v \)) is the reverse of type 2. In type 4 (\( s \ne t \;{\text{and}}\; u \ne v \)), alleles of loci m and l of both individuals c and i are on different chromosomes. For each of these situations, the identity by descent (IBD) status between the alleles at locus m on chromosomes ct and iv, and between the alleles at locus l on chromosomes cs and iu, is considered. There are four such IBD configurations \( {\mathcal{S}}_{k} \): IBD at both loci, IBD at locus m only, IBD at locus l only, and IBD at neither locus. Combining the four parental-origin types with these four IBD states, the 16 terms involved in E[x cl x cm X il X im ] are obtained as sums of the conditional expectations \( {\text{E}}\left[ {g_{cls} g_{cmt} g_{ilu} g_{imv} |{\mathcal{S}}_{k} } \right] \) weighted by the probabilities \( \varphi_{k}^{stuv} \) of the IBD states. As described in Additional file 5, only seven \( {\text{E}}\left[ {g_{cls} g_{cmt} g_{ilu} g_{imv} |{\mathcal{S}}_{k} } \right] \) are non-null (Table 4). Principles on which the probabilities \( \varphi_{k}^{stuv} \) are estimated, and basic examples, are described in Additional file 5.

Table 4 Expectations of products of four allelic values received by two individuals at two loci depending on the IBD status and parental origins of the alleles

As an illustration, we consider again the situation of a candidate (c) that is the son of one of the n r individuals in Pr, and assume that c's dam is unrelated to the sire. In formula (2), the summation over the reference individuals i comprises a single term for the sire of the candidate and n r − 1 terms for the members of the reference population that are unrelated to c.
Based on Additional files 1 and 5, the expectations involved in the precision formula (2) are: \( {\text{E}}\left[ {x_{cm}^{2} X_{rim}^{2} } \right] = p_{m} \left( {1 - p_{m} } \right) \), and

\( \begin{aligned} {\text{E}}\left[ {x_{cl} x_{cm} X_{il} X_{im} } \right] \hfill \\ = \left( {1 - p_{m} } \right)\left( {1 - p_{l} } \right)p_{m} p_{l} + \Updelta_{lm} \left( {1 - 2p_{l} } \right)\left( {1 - 2p_{m} } \right) \hfill \\ + 2\Updelta_{lm}^{2} \left[ {r_{ml} \left( {\frac{{p_{m}^{3} + \left( {1 - p_{m} } \right)^{3} }}{{p_{m} \left( {1 - p_{m} } \right)}} + \frac{{p_{l}^{3} + \left( {1 - p_{l} } \right)^{3} }}{{p_{l} \left( {1 - p_{l} } \right)}}} \right) + \left( {1 - r_{ml} } \right)\left( {1 - 2p_{m} } \right)\left( {1 - 2p_{l} } \right)} \right], \hfill \\ \end{aligned} \)

when i is the sire of c; and \( {\text{E}}\left[ {x_{cm}^{2} X_{rim}^{2} } \right] = 4\left[ {p_{m} \left( {1 - p_{m} } \right)} \right]^{2} \) and \( {\text{E}}\left[ {x_{cl} x_{cm} X_{il} X_{im} } \right] = 4\Updelta_{lm}^{2} \left( {1 - 2p_{m} } \right)\left( {1 - 2p_{l} } \right) \), when i and c are unrelated.

Numerical evaluation

Simulation of allele frequencies

In the following numerical evaluation of the formulae derived above, allele frequencies were simulated following an inverse transform sampling (e.g. [32]): \( n_{M} \) allele frequency cumulative distribution function values u m were simulated in a uniform \( {\mathcal{U}}\left( {0,1} \right) \), and the corresponding allele frequencies p m , i.e. such that \( u_{m} = \int_{{1/2n_{r} }}^{{p_{m} }} {f\left( p \right)dp} \), were computed by \( p_{m} = \frac{{\left( {2n_{r} - 1} \right)^{{\left( {2u_{m} - 1} \right)}} }}{{1 + \left( {2n_{r} - 1} \right)^{{\left( {2u_{m} - 1} \right)}} }} \).

Basic situation: no LD and unrelated individuals

Convergence of the Taylor series and quality of the expectation of the reliability approximations were tested for different population sizes (\( n_{r} = 500, 1000, 1500 \;{\text{and}}\; 2500 \)), numbers of markers (\( n_{M} = 50, 100, 250, 1000, 1500, 2000 \;{\text{and}}\; 2500 \)) and proportions of the phenotypic variance explained by the molecular score (\( \nu^{2} = 0.1, 0.4 \;{\text{and}} \;0.7 \)). Given the set of allele frequencies \( p_{m} \left( {m = 1 \ldots n_{M} } \right) \), genotypes X of n r + 1 individuals were generated and the G matrix was built. The reliability of the candidate individual GEBV, \( r^{2} = \frac{{{\text{v}}(\hat{q}_{c} |{\mathbf{X}})}}{{{\text{v}}\left( {q_{\text{c}} |{\mathbf{X}}} \right)}} \), was computed as described in the section «Situation analyzed», as well as approximations considering 1–10 elements in the Taylor series \( {\mathbf{I}} - {\mathbf{D}}\gamma + {\mathbf{D}}^{2} \gamma^{2} - {\mathbf{D}}^{3} \gamma^{3} \cdots \) The convergence of the series, as predicted by the value (lower or higher than 1) of the matrix's largest eigenvalue, was checked numerically by estimating the mean of this largest eigenvalue from five simulations in each case studied (\( n_{r} = 200\; {\text{to}} \;1000 ;n_{M} = 100 \;{\text{to}}\; 2000 \;{\text{and}} \;\nu^{2} = 0.1, 0.4, 0.7 \)). This limited number of replications was chosen after observation of a very limited variance of this eigenvalue. Finally, the asymptotic values of the suggested approximations [formulae (5) and (6)] were computed using the number of independent segments as described by [4]. The process was repeated 50 times and the means of those exact or approximated reliabilities computed. Figure 1a and b illustrates the convergence of the Taylor series when 2000 markers are used, and Tables 5 and 6 give the results for both approximations when \( \nu^{2} = 0.4 \).
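A sketch of the inverse-transform sampler described above (the value of n_r is an arbitrary illustration choice); it draws allele frequencies from the U-shaped density and checks that they stay within [1/(2n_r), 1 − 1/(2n_r)]:

import numpy as np

rng = np.random.default_rng(9)
n_M, n_r = 2000, 1000
u = rng.uniform(0.0, 1.0, n_M)        # u_m ~ U(0, 1)
base = (2 * n_r - 1) ** (2 * u - 1)
p = base / (1 + base)                 # p_m = (2 n_r - 1)^(2 u_m - 1) / (1 + (2 n_r - 1)^(2 u_m - 1))
print(p.min(), p.max())               # frequencies lie in [1/(2 n_r), 1 - 1/(2 n_r)]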
The Taylor series converged when the proportion \( \nu^{2} \) of the phenotypic variance explained by the molecular score was low, with oscillations and divergence observed when \( \nu^{2} = 0.4 \) or 0.7 with the first approximation and \( \nu^{2} = 0.7 \) with the second approximation. These observations were in accordance with the deviation from one of the largest eigenvalue of the matrix involved in the series (Fig. 2a, b). When the series converged, the approximations rapidly reached a plateau, at the 3rd (respectively, 2nd) order for the first (respectively, second) approximation.

Fig. 1 Convergence of the Taylor series as a function of heritability and reference population size (\( n_{M} = 2000 \)). a First approximation. b Second approximation

Table 5 Performances of the first approximation \( \left( {\tilde{r}_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right) \) for an unrelated reference population as a function of the number of markers (n M ) and reference population size (n R ), assuming ν 2 = 0.4

Table 6 Performances of the second approximation \( \left( {\tilde{r}_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right) \) for an unrelated reference population as a function of the number of markers (n M ) and reference population size (n R ), assuming ν 2 = 0.4

Fig. 2 Largest eigenvalue of the noise matrix \( {\mathbf{D}}\varvec{\gamma} \) involved in the Taylor expansion of the phenotypic variance matrix \( {\mathbf{V}} \) as a function of heritability, reference population size and number of markers. a First approximation. b Second approximation

Table 6 shows that the second Taylor series always converges when \( \nu^{2} = 0.4 \). The proposed approximation was generally biased upwards. This over-estimation of the precision was generally limited but increased as the number of markers and the reference population size decreased. The maximum over-estimation observed was 37.5 % (0.22 instead of 0.16, with a standard error less than 0.02). Based on the results in Table 5, it appears that when the first Taylor series converges, the proposed approximation is also slightly over-estimated. The expectations of the approximations, as given in formulae (5) and (6), are very close to the observations.

No LD and non-independence between reference and candidate population

The quality of the approximation was tested as above, by considering the case of a candidate having one of its parents in the reference population and all other individuals being unrelated. Tables 7 and 8, which summarize the results of the simulation, show that the second approximation is still the most efficient (systematic convergence of the Taylor series and consistency between the first order approximation and its expectation). Again, an overestimation of about 20 % is observed with this approximation.
Table 7 Performances of the first approximation \( \left( {\hat{r}_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right) \) when the parents of the candidate belong to the reference population as a function of the number of markers (n M ) and reference population size (n R ), assuming ν 2 = 0.4

Table 8 Performances of the second approximation \( \left( {\tilde{r}_{{q_{\text{c}} ,\hat{q}_{\text{c}} }}^{2} } \right) \) when the parents of the candidates belong to the reference population as a function of the number of markers (n M ) and reference population size (n R ), assuming ν 2 = 0.4

Example of the use of the second approach

As an illustration of formula (3), different situations that differ in the relationships between the candidate and reference populations were compared. Coefficients of formula (3) were estimated using the elements in Table 3. An effective reference population size of 200, the genotyping of 10,000 markers and a heritability of 0.4 were assumed. Scenarios included no individuals related to the candidate in the reference population, its sire, both parents, 1–10 half-sibs (or uncles), and a combination of parental and half-sib information. The results are in Fig. 3. The precision increases linearly with the number of half-sibs, which is consistent with the approximation but unsatisfactory, as discussed below.

Fig. 3 Example of approximated precision [from Eq. (3)] corresponding to various relations between the candidate and reference populations (\( n_{R} = 1000; n_{M} = 10,000; \nu^{2} = 0.4 \))

Equivalent number of independent loci

This number was computed using formula (8), for various effective population sizes (\( N_{e} = 100 \;{\text{to}} \;1000 \)), heritabilities (\( h^{2} = 0.1 \;{\text{to}}\; 0.5 \)), total numbers of loci (\( n_{M} = 1000 \;{\text{to}}\; 10,000 \)) and reference population sizes (\( n_{R} = 1000 \;{\text{to}}\; 2500 \)). Figure 4 shows how equivalent numbers of independent loci (\( n_{{M_{i} }} \)) vary with the total number of markers (\( n_{M} \)) and reference population size (n R ). As n M increases, the number \( n_{{M_{i} }} \) rapidly converges to a value which strongly depends on the size of the reference population. This dependence of the equivalent number of independent loci on n R does not exist for Goddard's effective number of loci and clearly shows the difference in nature between these two concepts. Three phenomena, observed when considering the extreme case of two markers (see Additional file 5), explain this behavior: (1) the trace T of (E[X ′ r X r ] + λβ I)−1 is a decreasing function of n r ; as a consequence, the larger the population size, the smaller is T, which is proportional to the conditional variances of the marker effects \( v\left(\varvec{\beta}\right) - cov\left( {\varvec{\beta},{\mathbf{y}}} \right)v\left( \varvec{y} \right)^{ - 1} cov( {{\mathbf{y}},\varvec{\beta}}) \), and the higher is the variance of the estimated molecular score \( {\text{v}}\left( {q_{c} |{\mathbf{y}}} \right) = {\mathbf{x}}_{{\mathbf{c}}} cov\left( {\varvec{\beta},{\mathbf{y}}} \right)v\left( \varvec{y} \right)^{ - 1} cov\left( {{\mathbf{y}},\varvec{\beta}} \right){\mathbf{x}}^{\prime}_{{\mathbf{c}}} \); (2) the trace T is always higher in the situation of LD than for independent markers \( \left( {{\text{T}}_{LD} > {\text{T}}_{LE} } \right) \); (3) the rate of decrease is higher for T LD than for T LE .
On the whole, for a given number of observed markers, the reliability corresponds to the reliability that would be reached with a larger number of independent loci when the reference population is larger.

Number of equivalent markers [from Eq. (8)] as a function of the total number of markers (\( n_{M} \)) and reference population size (\( n_{R} \)) (\( N_{e} = 200; \nu^{2} = 0.4 \))

Figure 5 indicates that the equivalent number of independent loci increases with heritability and effective population size. This last observation was expected since, with larger effective population sizes, the LD between two loci decreases, which increases the effective number of loci. The effect of heritability is less direct.

Number of equivalent markers [from Eq. (8)] as a function of the effective population size (\( N_{e} \)) and heritability (\( \nu^{2} \)) (\( n_{M} = 5000; n_{R} = 2000 \))

Discussion

The objective of this paper was to explore approximations of the precision of genomic selection when the selection candidate has relatives in the reference population. Two approximations were developed and numerically compared. These approximations were based on Taylor expansions of a matrix inverse \( \mathbf{M}^{-1} \). In both cases, the initial matrix is the sum of the identity matrix and a perturbation (\( \mathbf{M} = \mathbf{I} + \mathbf{E} \)). Convergence of these series is not guaranteed and depends on the behavior of the perturbation (\( \mathbf{I} - \mathbf{E} + \mathbf{E}^{2} - \mathbf{E}^{3} + \cdots \to (\mathbf{I} + \mathbf{E})^{-1} \) if \( \mathbf{E}^{t} \to 0 \) when \( t \to \infty \)). With the first approximation, derived from the appendix in [18], this convergence failed when the number of markers was too small (less than 1500 in our example) or the heritability was greater than 0.1. This was only observed when \( \nu^{2} = 0.7 \) with the second approximation. This is fully consistent with the deviation from one of the largest eigenvalue of the \( \mathbf{E} \) matrix. The expectation of the proposed approximation, when data were simulated with the model corresponding to the hypotheses underlying its algebraic development, was very close to the mean value over 50 simulations. Thus, extremely fast estimation of the precision is possible, which allows intensive optimization and comparison of selection schemes.

When individuals are unrelated and markers are in linkage equilibrium, we obtain an estimation of the GEBV accuracy which differs from that of Goddard et al. [18]. This is surprising since that approach was said to be based on the Taylor approximation used here. Their formula may be obtained in a simpler way [see Additional file 6]. However, relaxing the assumption of "absence of between-individual relationships" is not straightforward using this approach.

A strong limit of our new approximation comes from the restriction to the first-order term of the Taylor series. Deriving the algebra was only possible at this stage. The side effect is that no genotypic covariance terms between reference individuals appear in this approximation. As a consequence, only the direct relationships between the candidate and reference individuals play a role in the estimation, but not the structure within the reference population. This is unfortunate, because the accuracies of genomic prediction are obviously affected by the construction of the reference population. Our last numerical example, in which there is a linear trend with the number of half-sibs, reveals this drawback: two half-sibs of the candidate are treated as unrelated and the information they carry is simply double that of a single half-sib.
Future developments should focus on this limitation, for instance to derive the expectation of the \( \mathbf{x}_{c}\mathbf{C}^{-1}\mathbf{B}_{2}\mathbf{x}'_{c} \) term.

The U-shaped density function \( f(p) \) of allele frequencies was defined as in [16]. A Beta distribution \( \mathcal{B}(\phi_{a},\phi_{b}) \) for the allele frequencies was assumed by Gianola et al. [30], following Wright [34]. Assuming that the frequency distribution is centered on 0.5, i.e. \( \phi_{a} = \phi_{b} = \phi \), this quantity \( \phi \) can be adjusted to fit Goddard's distribution. Using a chi-squared statistic as the fitting criterion, we observed that the adjusted \( \hat{\phi} \) rapidly decreased as the population size increased (Fig. 6), with a slower and slower evolution as the population size grew larger (with \( n_{r} = 200{,}000 \) the adjusted \( \hat{\phi} \) is 0.975). Using a Beta distribution could give more generality to the results. While the expectations of \( \tau \) and \( \tau_{2} \) are easily derived from the moment generating function of the Beta distribution (\( \mathrm{E}[\tau] = \frac{n_{r}a}{2a+1} \) and \( \mathrm{E}[\tau_{2}] = n_{r}\frac{4a^{2}+16a+18}{4a^{2}+8a+3} \)), deriving the expectations of the parameters \( \sum_{m}\rho_{m} \), \( \sum_{m}\rho_{m}^{2} \) and \( \sum_{m}\frac{\rho_{m}^{2}}{\sigma_{m}^{2}} \) is not simple. However, these quantities are quite easily obtained by numerical integration. Thus, adjusting a Beta distribution to observed allele frequencies and numerically computing the parameters of formula (3) would be a feasible and more versatile implementation of our second approximation of genomic precision.

Parameter of the Beta distribution \( \mathcal{B}(\phi,\phi) \) that best fits Goddard's distribution of allele frequencies

Our work focused on the BLUP precision of the molecular score \( r^{2}(q_{ci},\hat{q}_{ci}|\mathbf{X}) = \frac{v(\hat{q}_{ci})}{v(q_{ci})} \) but left aside the proportion of the genetic variance that is captured by the markers \( \left(\frac{v(q_{ci})}{v(g_{ci})}\right) \). This last term could be treated as in Goddard et al. [18]: \( \frac{v(q_{ci})}{v(g_{ci})} = b = \frac{n_{M}}{n_{M}+M_{e}} \), with \( M_{e} \) the number of independent segments. As noted in the section on the general framework, the quantity \( \frac{v(\hat{q}_{ci})}{v(g_{ci})} = b \times r^{2}(q_{ci},\hat{q}_{ci}|\mathbf{X}) \) is only an approximation of the GEBV reliability, i.e. \( r^{2}(g_{ci},\hat{q}_{ci}|\mathbf{X}) = \frac{cov^{2}(g_{ci},\hat{q}_{ci}|\mathbf{X})}{v(g_{ci}|\mathbf{X})\,v(\hat{q}_{ci}|\mathbf{X})} \). Equality between these quantities is obtained when \( \mathbf{X} = \mathbf{W} \) (identity between the statistical and genetic models), a condition assumed in Goddard [16], where markers and QTL are modeled as a series of uncorrelated pairs.

All the developments shown in this paper are based on the hypothesis that the reliability of GEBV based on non-independent markers, for a trait controlled by \( n_{Q} \) QTL that are in incomplete LD with the markers, can be approached by the reliability of GEBV when there are \( n_{M} \) independent segments, each carrying a single QTL in LD with a single marker. A few difficulties arose when applying this approach proposed by Goddard [16]. How many independent markers should be considered?
The reasoning in Goddard [16] was based on the idea of an effective number of loci (\( M_{e} \)) corresponding to a given variance of realized relationships. Here, we proposed the alternative equivalent number of independent loci (\( M_{i} \)), which corresponds to a given reliability. We showed that this number \( M_{i} \) depends on the size of the reference population and on heritability, a dependence that does not occur with \( M_{e} \). If we invert the argument, controlling the level of the variance of realized relationships with the effective number of loci (\( M_{e} \)) does not seem to be a good approach to control the estimated GEBV reliability. As detailed by Hayes et al. [17], the effective number of independent chromosome segments depends on the population structure. The higher the mean relationship level, the smaller is this effective number. However, we suggest the use of this number as estimated from a set of unrelated individuals, or of its expectation prior to any observation, assuming independence between individuals. Without formal proof, the idea is that long-term LD is accounted for by using an effective (or equivalent) number of independent loci, while short-term non-independence is taken into account by our formalization of the matrices' expectations developed in Additional file 1. A complete proof of the procedure is still needed.

Regardless of the definition of \( M_{e} \) or \( M_{i} \), there is no reason that the number of independent loci must equal the number of QTL, which is unknown, contrary to the hypothesis about marker–QTL pairs (in practice, since the QTL effects are random variables, many segments will have only very small effects on the trait, thus simulating the more likely situation of a limited number of "real" QTL).

Equating \( \mathbf{X} \) and \( \mathbf{W} \), as well as \( \sigma_{\beta}^{2} \) and \( \sigma_{\alpha}^{2} \), has no clear justification. The variance \( v(\hat{q}_{c}|\mathbf{X}) \) of the molecular score should not be \( \sigma_{\beta}^{2}\mathbf{x}_{c}\mathbf{X}'_{r}(\mathbf{X}_{r}\mathbf{X}'_{r} + \mathbf{I}\lambda_{\beta})^{-1}\mathbf{X}_{r}\mathbf{x}'_{c} \) but \( \sigma_{\alpha}^{2}\mathbf{x}_{c}\mathbf{X}'_{r}(\mathbf{X}_{r}\mathbf{X}'_{r} + \mathbf{I}\lambda_{\beta})^{-1}(\mathbf{W}_{r}\mathbf{W}'_{r} + \mathbf{I}\lambda_{\alpha})(\mathbf{X}_{r}\mathbf{X}'_{r} + \mathbf{I}\lambda_{\beta})^{-1}\mathbf{X}_{r}\mathbf{x}'_{c} \). This other formula assembles two sets of unknown parameters: the variances \( \sigma_{\alpha}^{2} \) and \( \sigma_{\beta}^{2} \), and the genotype matrices \( \mathbf{X} \) and \( \mathbf{W} \). It is often assumed that \( \sigma_{\beta}^{2} = \sigma_{g}^{2}/(n_{M}\bar{\tau}) \) (e.g. [1]), which results in an overestimation of the \( \lambda_{\beta} \) parameter since LD is not considered. Working with the number of independent loci (\( M_{e} \) or \( M_{i} \)) apparently solves this difficulty. The QTL variance \( \sigma_{\alpha}^{2} = \sigma_{g}^{2}/(n_{Q}\bar{\tau}) \) could be derived based on a hypothesis about the number of QTL. The situation is more difficult for the genotype matrices, since the \( \mathbf{W}_{r} \) matrix is not observed. If the framework considered so far (\( n_{M} \) marker–QTL pairs with strong LD within pairs and no LD between pairs) is partly retained, a slight improvement is possible by considering the element b, the proportion of the genetic variability explained by the SNPs. The idea would be to replace, in the formulae used in this paper, \( \sigma_{q}^{2} \) by \( b \times \sigma_{g}^{2} \). The element b can be derived by considering that the marker (β) and QTL (α) effects are fixed in the genetic and statistical models.
Leaving aside the singularity of \( \mathbf{X}'_{r}\mathbf{X}_{r} \) when the number of SNPs is large, the marker effects are now estimated by \( \hat{\boldsymbol{\beta}} = (\mathbf{X}'_{r}\mathbf{X}_{r})^{-1}\mathbf{X}'_{r}\mathbf{y} \) and the molecular score is defined as \( \hat{q} = \mathbf{X}_{r}\hat{\boldsymbol{\beta}} \), while the genetic value is \( \mathbf{g} = \mathbf{W}_{r}\boldsymbol{\alpha} \). Given the genotype matrices, the sample genetic variability is \( v_{g} = \boldsymbol{\alpha}'\mathbf{W}'_{r}\mathbf{W}_{r}\boldsymbol{\alpha} \) and the sample molecular score variability is \( \mathbf{y}'\mathbf{X}_{r}(\mathbf{X}'_{r}\mathbf{X}_{r})^{-1}\mathbf{X}'_{r}\mathbf{y} \), with expectation \( v_{q} = \boldsymbol{\alpha}'\mathbf{W}'_{r}\mathbf{X}_{r}(\mathbf{X}'_{r}\mathbf{X}_{r})^{-1}\mathbf{X}'_{r}\mathbf{W}_{r}\boldsymbol{\alpha} \). The part of the genetic variability explained by the SNPs is the ratio \( b = v_{q}/v_{g} \). The expectations of the elements of the matrix product \( \{\mathbf{X}'_{r}\mathbf{X}_{r}\}_{ml} \) are \( 2n_{r}\Delta_{ml} \) off the diagonal and \( 2n_{r}p_{m}(1-p_{m}) = n_{r}\sigma_{m}^{2} \) on the diagonal, with similar expressions for the elements of \( \mathbf{W}'_{r}\mathbf{X}_{r} \) and \( \mathbf{W}'_{r}\mathbf{W}_{r} \). Following Goddard [16], approximating the expectations of functions of the matrices by the functions of their expectations, and assuming that (1) markers are independent, (2) each QTL q is in LD with only one marker m(q), with LD value \( \Delta_{qm(q)} \), and (3) individuals are unrelated, we get \( v_{g} = n_{r}\sum_{q}\alpha_{q}^{2}\sigma_{q}^{2} \), \( v_{q} \sim 4n_{r}\sum_{q}\frac{\Delta_{qm(q)}^{2}}{\sigma_{m(q)}^{2}}\alpha_{q}^{2} = n_{r}\sum_{q}r_{qm(q)}^{2}\alpha_{q}^{2}\sigma_{q}^{2} \), and \( b = \frac{\sum_{q}r_{qm(q)}^{2}\alpha_{q}^{2}\sigma_{q}^{2}}{\sum_{q}\alpha_{q}^{2}\sigma_{q}^{2}} \), corresponding to Eq. (4) in [16]. The ratio b is the weighted mean of the LD \( r^{2} \). Unfortunately, neither \( \alpha_{q}^{2} \) nor \( \sigma_{q}^{2} \) are known. The unweighted mean \( \frac{\sum_{q}r_{qm(q)}^{2}}{n_{q}} = \bar{r}^{2} \) may be a fruitful approximation. Following Sved [33], the expectation of \( r_{qm(q)}^{2} \) is \( \frac{1}{1+4N_{e}c} \), with c the distance, in Morgan, between the QTL and its marker. Let L be the total length of the genome and assume an equal distance \( L/n_{M} \) between successive markers; then \( b \sim \int_{0}^{L/2n_{M}}\frac{1}{1+4N_{e}c}\,\frac{1}{L/2n_{M}}\,dc = \frac{n_{M}}{2N_{e}L}\log\left(1 + 2N_{e}L/n_{M}\right) \).

The expectation of the reliability \( \mathrm{E}[r_{q_{c},\hat{q}_{c}}^{2}] \), which is the expectation of a ratio of variances \( \mathrm{E}_{\mathbf{X}}[v(\hat{q}_{c}|\mathbf{X})/v(q_{c}|\mathbf{X})] \), was approximated by the ratio of the variance expectations \( \mathrm{E}_{\mathbf{X}}[v(\hat{q}_{c}|\mathbf{X})]/\mathrm{E}_{\mathbf{X}}[v(q_{c}|\mathbf{X})] \). The usual second-degree approximation (\( \mathrm{E}[N/D] = \mathrm{E}[N]/\mathrm{E}[D] - cov[N,D]/\mathrm{E}^{2}[D] + v[D]\,\mathrm{E}[N]/\mathrm{E}^{3}[D] \)) could not be used here due to the complexity of the algebra.
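The size of this ratio-of-expectations approximation error can be probed with a small simulation that is not taken from the paper: under a GBLUP model with independent markers and unrelated individuals, both \( v(\hat{q}_{c}|\mathbf{X}) \) and \( v(q_{c}|\mathbf{X}) \) have closed forms for each simulated genotype matrix, so the mean of the ratio can be compared with the ratio of the means. Population sizes, marker numbers and allele frequencies below are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_r, n_m = 200, 500                   # reference population size, number of markers (toy values)
var_q, var_e = 0.4, 0.6               # nu^2 = 0.4 split of the phenotypic variance
p = 0.5
var_beta = var_q / (n_m * 2 * p * (1 - p))   # per-marker variance under sigma_q^2 = n_M * tau_bar * sigma_beta^2

ratios, num, den = [], [], []
for _ in range(200):
    X = rng.binomial(2, p, size=(n_r, n_m)) - 2 * p      # centred reference genotypes
    x_c = rng.binomial(2, p, size=(1, n_m)) - 2 * p      # centred candidate genotypes
    V = var_beta * X @ X.T + var_e * np.eye(n_r)          # phenotypic covariance matrix
    cov_qy = var_beta * x_c @ X.T                         # cov(q_c, y)
    v_qhat = (cov_qy @ np.linalg.solve(V, cov_qy.T)).item()   # v(q_c_hat | X)
    v_q = var_beta * (x_c @ x_c.T).item()                     # v(q_c | X)
    ratios.append(v_qhat / v_q); num.append(v_qhat); den.append(v_q)

print("E[ratio of variances]   :", round(np.mean(ratios), 4))
print("ratio of E[variances]   :", round(np.mean(num) / np.mean(den), 4))
```

In this toy setting the two quantities are typically very close, in line with the small underestimation reported in Table 9 below.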
However, in the case of unrelated individuals and independent markers, numerical evaluation of the difference between exact and approximated results for various reference population sizes and numbers of markers shows a very small underestimation of the reliability (Table 9).

Table 9 Expectation of the ratio of variances vs. the ratio of the variance expectations considering different reference population sizes and numbers of markers (\( \nu^{2} = 0.4 \), 50 simulations)

The theory presented here was developed by considering a single selection candidate. When candidates are diversely related to the reference population, as suggested in Goddard et al. [18], the candidates should be examined one by one. Moreover, non-independence between candidates should be considered. A further step towards the modeling of genomic selection could be an approximation of the mean genetic values of selected individuals when GEBV reliabilities are heterogeneous. A few other hypotheses were made in this paper, including additivity and i.i.d. QTL effects, and the use of GBLUP. As long as the objective is to model and optimize breeding plans, only relative values are of interest and we assumed that these hypotheses were not critical.

Conclusions

The objective of this paper was to provide a further step towards the development of deterministic models that describe genomic breeding plans. Such deterministic models carry a low computational burden and thus allow design optimization through intensive numerical exploration. We proposed two alternative approximations of GEBV reliability in the case of non-independence between the candidate and reference populations. Both were derived from the Taylor series heuristic approach suggested by Goddard [16]. A numerical exploration of their properties showed that the series were not equivalent in terms of convergence to the exact reliability, that the approximations may overestimate GEBV precision, and that they perfectly converged toward their theoretical expectations. Formulae derived for these approximations were simple to handle in the case of independent markers. A few parameters that describe the markers' genotypic variability (allele frequencies, linkage disequilibrium) can be estimated from genomic data corresponding to the population of interest, or estimated after assumptions about their distribution. When markers are not in linkage equilibrium (i.e. there is LD), replacing the real number of markers and QTL by an effective or equivalent number of independent loci, as proposed by Goddard [16] and Hayes et al. [17], is a practical solution. Research efforts are still needed to overcome some strong limits of this approach.

Meuwissen THE, Hayes BJ, Goddard ME. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001;157:1819–29. Schaeffer LR. Strategy for applying genome-wide selection in dairy cattle. J Anim Breed Genet. 2006;123:218–23. König S, Simianer H, Willam A. Economic evaluation of genomic breeding programs. J Dairy Sci. 2009;92:382–91. McHugh N, Meuwissen THE, Cromie AR, Sonesson AK. Use of female information in dairy cattle genomic breeding programs. J Dairy Sci. 2011;94:4109–18. de Roos APW, Schrooten C, Veerkamp RF, van Arendonk JAM. Effects of genomic selection on genetic improvement, inbreeding, and merit of young versus proven bulls. J Dairy Sci. 2011;94:1559–67. Pryce JE, Goddard ME, Raadsma HW, Hayes BJ. Deterministic models of breeding scheme designs that incorporate genomic selection. J Dairy Sci. 2010;93:5455–66.
Meuwissen THE, Hayes BJ, Goddard ME. Accelerating improvement of livestock with genomic selection. Annu Rev Anim Biosci. 2013;1:221–37. Sonesson AK, Meuwissen THE. Testing strategies for genomic selection in aquaculture breeding programs. Genet Sel Evol. 2009;41:37. Ibánẽz-Escriche N, Fernando RL, Toosi A, Dekkers JCM. Genomic selection of purebreds for crossbred performance. Genet Sel Evol. 2009;41:12. Wolc A, Arango J, Settar P, Fulton JE, O'Sullivan NP, Preisinger R, et al. Persistence of accuracy of genomic estimated breeding values over generations in layer chickens. Genet Sel Evol. 2011;43:33. Tribout T, Larzul C, Phocas F. Efficiency of genomic selection in a purebred pig male line. J Anim Sci. 2012;45:4164–76. Shumbusho F, Raoul J, Astruc JM, Palhiere I, Elsen JM. Potential benefits of genomic selection on genetic gain of small ruminant breeding programs. J Anim Sci. 2013;91:3644–57. Daetwyler HD, Villanueva B, Woolliams JA. Accuracy of predicting the genetic risk of disease using a genome-wide approach. PLoS One. 2008;3:e3395. Legarra A, Robert-Granié C, Manfredi E, Elsen JM. Performance of genomic selection in mice. Genetics. 2008;180:611–8. Van Raden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23. Goddard ME. Genomic selection: prediction of accuracy and maximisation of long term response. Genetica. 2009;136:245–57. Hayes B, Visscher P, Goddard M. Increased accuracy of artificial selection by using the realized relationship matrix. Genet Res. 2009;91:47–60. Goddard ME, Hayes BJ, Meuwissen THE. Using the genomic relationship matrix to predict the accuracy of genomic selection. J Anim Breed Genet. 2011;128:409–21. Buch LH, Kargo M, Berg P, Lassen J, Sørensen AC. The value of cows in reference populations for genomic selection of new functional traits. Animal. 2011;6:880–6. Clark SA, Hickey JM, Daetwyler HD, van der Werf JHJ. The importance of information on relatives for the prediction of genomic breeding values and the implications for the makeup of reference data sets in livestock breeding schemes. Genet Sel Evol. 2012;44:4. Wientjes YCJ, Veerkamp RF, Calus MPL. The effect of linkage disequilibrium and family relationships on the reliability of genomic prediction. Genetics. 2013;193:621–31. Habier D, Fernando RL, Garrick DJ. Genomic BLUP decoded: a look into the black box of genomic prediction. Genetics. 2013;194:597–607. Erbe M, Gredler B, Seefried FR, Bapst B, Simianer H. A function accounting for training set size and marker density to model the average accuracy of genomic prediction. PLoS One. 2013;8:e81046. Weller JI. Economic aspects of animal breeding. London: Chapman & Hall; 1994. Dekkers JCM. Prediction of response to marker-assisted and genomic selection using selection index theory. J Anim Breed Genet. 2007;124:331–41. Daetwyler HD, Pong-Wong R, Villanueva B, Woolliams A. The impact of genetic architecture on genome-wide evaluation methods. Genetics. 2010;185:1021–31. Habier D, Fernando RL, Dekkers JCM. The impact of genetic relationship information on genome-assisted breeding values. Genetics. 2007;177:2389–97. Pszczola M, Strabel T, Mulder HA, Calus MPL. Reliability of direct genomic values for animals with different relationships within and to the reference population. J Dairy Sci. 2012;95:389–400. Wientjes YCJ, Veerkamp RF, Bijma P, Bovenhuis H, Schrooten C, Calus MPL. Empirical and deterministic accuracies of across-population genomic prediction. Genet Sel Evol. 2015;47:5. 
Gianola D, de los Campos G, Hill WG, Manfredi E, Fernando R. Additive genetic variability and the Bayesian alphabet. Genetics. 2009;183:347–63. Visscher PM, Medland SE, Ferreira MAR, Morley KI, Zhu G, Cornes BK, et al. Assumption-free estimation of heritability from genome-wide identity-by-descent sharing between full siblings. PLoS Genet. 2006;2:e41. Sigman K. Lecture notes: introduction to discrete-time Markov chains. http://www.columbia.edu/~ks20/stochastic-I/stochastic-I-MCI.pdf. 2009. Sved JA. Linkage disequilibrium and homozygosity of chromosome segments in finite populations. Theor Popul Biol. 1971;2:125–41. Wright S. The distribution of gene frequencies in populations. Proc Natl Acad Sci USA. 1937;23:307–20. Gillois M. La relation d'identité en génétique. Ann Inst Henri Poincaré. 1964;B2:1–94. Harris DL. Genotypic covariances between inbred relatives. Genetics. 1964;50:1319–48. Jacquard A. Logique du calcul des coefficients d'identité entre deux individus. Population. 1966;21:751–76.

This work was partly done when the author was on sabbatical leave in the Animal Genetics and Breeding Unit (AGBU) in Armidale, Australia. This sabbatical was supported by a grant from AGBU and from INRA (métaprogramme SelGen). Andrew Swan, Julius van der Werf, Mike Goddard, Anne Ricard and Bruno Goffinet are thanked for their many useful comments. Rob Banks is particularly thanked for his help at many levels.

GenPhySE (Génétique, Physiologie et Systèmes d'Elevage), INRA, 31326, Castanet-Tolosan, France

Jean-Michel Elsen

Animal Genetics and Breeding Unit, University of New England, Armidale, Australia

Correspondence to Jean-Michel Elsen.

Additional file 1. Computation of \( \mathrm{E}[X_{im}^{d_{i}}X_{jm}^{d_{j}}\cdots X_{Km}^{d_{K}}] \) as a function of between-chromosome identity coefficients, in the case of independent markers. Using an extension of the identity coefficients theory, the document shows how to compute the elements of the expectation \( \mathrm{E}[\mathbf{XX}'\mathbf{T}\mathbf{XX}'] \) (i.e. \( \mathrm{E}[X_{i}X_{j}] \), \( \mathrm{E}[X_{i}X_{j}^{2}] \), \( \mathrm{E}[X_{i}X_{j}^{3}] \), \( \mathrm{E}[X_{i}^{2}X_{j}^{2}] \), \( \mathrm{E}[X_{i}X_{j}X_{k}^{2}] \) and \( \mathrm{E}[X_{i}X_{j}X_{k}X_{l}] \)) when markers are independent and individuals are related.

Additional file 2. Expectations of \( \sum_{m}\sigma_{m}^{2} \), \( \sum_{m}\sigma_{m}^{4} \), \( \sum_{m}\rho_{m} \), \( \sum_{m}\rho_{m}^{2} \), \( \sum_{m}\rho_{m}^{2}/\sigma_{m}^{2} \) and \( \sum_{m}1/\sigma_{m}^{2} \). The expectations of the listed quantities are computed assuming either a U-shaped or a uniform distribution of allele frequencies.

Additional file 3. Precision formulae when the candidate is related to reference individuals. The approximated formulae derived in the main text are applied to the case of a candidate whose sire belongs to the reference population.

Additional file 4. Equivalent numbers of independent loci. (1) Equation of the equivalent number of independent loci which gives the precision \( \mathrm{E}[r_{q_{c},\hat{q}_{c}}^{2}] \sim \frac{\mathrm{E}_{\mathbf{X}}[v(\hat{q}_{c}|\mathbf{X})]}{\mathrm{E}_{\mathbf{X}}[v(q_{c}|\mathbf{X})]} \) obtained with the total number of non-independent markers; (2) a simple approximation in a very simplified situation; (3) relations between the number of independent markers and the size of the reference population.
Additional file 5. The case of markers in linkage disequilibrium. Derivation of the expectation of the crossed terms of genomic values (\( \mathrm{E}[x_{cl}x_{cm}X_{il}X_{im}] \)) in the situation of LD between loci l and m. Many examples are given for diverse situations.

Additional file 6. Another demonstration of the accuracy of Goddard et al. [18]. A complete demonstration is given using the notations of the present paper.

Elsen, JM. Approximated prediction of genomic selection accuracy when reference and candidate populations are related. Genet Sel Evol 48, 18 (2016). https://doi.org/10.1186/s12711-016-0183-3

Keywords: genomic selection; candidate population; genomic estimated breeding values (GEBV); independent loci; reference population size
International Journal for Equity in Health

Health-related quality of life by household income in Chile: a concentration index decomposition analysis

Rodrigo Severino, Manuel Espinoza & Báltica Cabieses

International Journal for Equity in Health, volume 21, Article number: 176 (2022)

Health inequities have a profound impact on all dimensions of people's lives, with invariably worse results among the most disadvantaged, transforming them into a more fragile and vulnerable population. These unfair inequalities also affect dimensions focused on subjectivity, such as health-related quality of life (HRQoL), which has been positioned, in recent decades, as an important outcome in health decision-making. The main objective of this study is to estimate socioeconomic inequality in the HRQoL of the Chilean population by household income. Secondary analysis of the National Health Survey (ENS 2016–2017, Chile). This survey includes a nationally representative, stratified, and multistage household sample of people aged 15 and above. Socioeconomic inequality in HRQoL (EQ-5D) is estimated by the concentration index (CI) ranked by household income. Decomposition analysis is conducted to examine potential explanatory sociodemographic factors. The CI for household income inequality in HRQoL was -0.063. The lower the household income, the worse the HRQoL reported in Chile. The decomposition analysis revealed that socioeconomic position contributes 75.7% to inequality in quality of life, followed by educational level (21.8%), female gender (17.3%), type of health insurance (15%), age (-19.7%) and residence (-10.8%). Less than 1% corresponds to the unexplained residual component. Our findings suggest the existence of a disproportionate concentration of worse HRQoL in the most disadvantaged socioeconomic groups in Chile. This inequality is largely, yet not completely, associated with household income. Other significant factors associated with this inequality are education, gender, and healthcare insurance. These results suggest the need to strengthen efforts to reduce socioeconomic gaps in health outcomes in Chile, as a means to achieve social justice and equity in health and healthcare.

Health inequalities have been defined as measurable differences in health experience and outcomes between different population groups, according to socioeconomic level, geographic area, age, disability, gender, ethnic group and others [1]. Reducing health inequalities has become a frequent topic of debate in the past decades. The United Nations (UN) includes their reduction among the Sustainable Development Goals for 2030 [2], a goal that has received significant support in most countries of the world. These health inequalities are not a natural random phenomenon and they have a profound impact on all dimensions of people's lives, given their strong influence on population health. As vastly documented, worse health results, and a concentration of risk factors, are found among those in socioeconomic disadvantage, revealing that the way societies are structured and stratified can have pervasive consequences for population health [3]. A large body of literature has explored the relationship between socioeconomic status and health outcomes, using a range of indicators to measure this association [4]. Of them all, income has consistently been shown to have a strong influence on life opportunities [5]. Income inequality may have direct and indirect influences on population health.
In terms of direct effects, it has been related to unequal opportunities of obtaining good-quality living conditions, avoiding harmful environmental exposures, and buying quality food [6]. Indirect effects may be related to a subjective sense of self-esteem and social value, socially learned individual health-risk behaviours, and others [7]. Income inequality has also been strongly linked to unequal opportunities of accessing health care and to the quality of healthcare services received [8]. The scientific evidence is strong: societies with higher levels of inequality in income, education and other social determinants have higher rates of infant mortality, illicit drug abuse, violence, obesity, imprisonment and adolescent pregnancy, along with less trust between people and less social cohesion [9]. Socioeconomic inequalities in population health have been measured for some health outcomes in Chile. In past decades, Chile has become a prosperous nation in the Latin American region [10]. However, despite experiencing an improvement in various indicators related to poverty and income, this country remains one of the most unequal countries in the region (e.g., Gini coefficient of 0.47 in 2017) [11]. In terms of population health, the SALURBAL ecological study shows differences of up to 9 years in life expectancy in the city of Santiago for men, and 17.7 years for women, with worse results in the most vulnerable sectors [12]. Cabieses et al., using a representative national survey, found a significant concentration of good or very good self-reported general health amongst those with higher household incomes in the country [13]. Other studies have reported socioeconomic and income inequality in a number of health outcomes, such as self-perceived oral health among adults in Chile, or income-related inequality in health and health care utilization in Chile [14]. The most disadvantaged older adults also have worse health outcomes: the prevalence and incidence of functional limitation follow a clear socioeconomic gradient among those older than 60 years [15]. The same occurs when analyzing other outcomes such as life expectancy or disability-free life expectancy, where it has been documented that people with a better socioeconomic position live longer and healthier lives compared to the more vulnerable population [16]. Even diseases as prevalent as type 2 diabetes mellitus follow a marked socioeconomic gradient when analyzed by quintiles of household income and educational level, which also remain significantly associated with the presence of complications and attendance at health check-ups [17]. Health-related quality of life (HRQoL) has been positioned as an important outcome in healthcare decision-making because it complements other, more objective health indicators such as mortality or clinical effectiveness [18,19,20]. In aging populations with growing rates of chronic conditions, such as Chile and many other countries, HRQoL has emerged as a useful measure of population health and a good predictor of the need for interventions and of rising healthcare costs [21]. There are relevant studies of socioeconomic inequalities in HRQoL globally. A study in the UK conducted by Davies et al. found that low socioeconomic position is a risk factor for hospital death as well as for other indicators of potentially poor-quality end-of-life care [22]. Other studies have been published with similar findings in many countries, including some in the Latin American region [23,24,25].
Among the many tools available to measure HRQoL, the EQ-5D has become a widely used instrument. It is a generic questionnaire and has been used in several countries due to its simplicity, being included in some studies that have measured socioeconomic inequality in HRQoL in adult populations [26,27,28,29]. To the best of our knowledge, however, no published study has analysed income inequality in HRQoL using the EQ-5D instrument in the adult population in Chile. Therefore, our main objective is to measure the existing inequality in HRQoL, measured through the EQ-5D, by household income in the adult population in Chile. For this, we estimated the concentration index and carried out a decomposition analysis to identify the variables associated with this income-related inequality measure.

Data and sample

Secondary analysis of cross-sectional data from the National Health Survey of Chile (ENS 2016–2017); a nationally representative survey with a random, stratified, and multistage sample of 6233 people aged 15 and over [13]. Because the survey has a complex sampling design, all estimates were obtained using the corresponding weights and expansion factors. We assessed the social gradient of HRQoL in the adult population in Chile by socioeconomic status, particularly ranked household income, through the concentration index. In addition, we explored the potential effect of several covariates as determinants of these inequalities through a decomposition analysis of the concentration index. The ENS 2016–2017 survey is an anonymous database available upon request from the Ministry of Health of Chile.

Variables and measures

The dependent variable was HRQoL, measured by the EQ-5D-3L and converted into cardinal values (also called health utilities) using the social value set validated for Chile in 2009 [30]. The EQ-5D-3L assesses health status according to a descriptive system (questionnaire) and a Visual Analogue Scale (VAS). The dimensions evaluated by this questionnaire are Mobility, Self-Care, Usual Activities, Pain & Discomfort and Anxiety & Depression. Each dimension has three levels of severity: no problems, moderate problems, and serious problems, thus providing a final set of 243 health states [31]. The range of this variable includes negative values, which is a limitation for estimating the concentration index [32]. For this reason, the utilities obtained were transformed into disutilities, representing the decrease in utility (valued quality of life) due to a particular symptom, condition, or complication, as follows:

$$\mathrm{Disutility} = 1 - \mathrm{Utility}$$

The independent variable was household income per capita. This was estimated from the ENS 2016–2017 by dividing household income by the number of members of the corresponding household. Control variables included in the decomposition analysis were educational level, household income, sex, age, area of residence (urban or rural), and health insurance system (public or private).

Inequality measurement

We analysed income-related health inequalities examining gaps and gradients. First, we calculated the absolute and relative (20:20) gaps. Then, we depicted the inequality gradient using concentration curves.
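A minimal sketch of these two steps, not taken from the paper: the file and column names (`utility`, `income_pc`, `weight`) are hypothetical stand-ins for the ENS variables, with the survey weights playing the role of the expansion factors.

```python
import pandas as pd
import numpy as np

# One row per respondent with hypothetical columns:
# 'utility' (EQ-5D index), 'income_pc' (household income per capita), 'weight' (expansion factor)
ens = pd.read_csv("ens_2016_2017.csv")          # hypothetical file name

ens["disutility"] = 1.0 - ens["utility"]        # transform utilities into disutilities

# Approximate weighted income quintiles (Q1 = poorest 20 %, Q5 = richest 20 %)
ens = ens.sort_values("income_pc")
cum_w = ens["weight"].cumsum() / ens["weight"].sum()
ens["quintile"] = np.minimum((cum_w * 5).astype(int) + 1, 5)

def wmean(g):
    return np.average(g["disutility"], weights=g["weight"])

by_q = ens.groupby("quintile").apply(wmean)     # mean disutility per quintile
gap = by_q[1] - by_q[5]                         # absolute 20-20 gap
ratio = by_q[1] / by_q[5]                       # relative 20:20 ratio
print(by_q.round(3))
print(f"Q1-Q5 gap = {gap:.3f}, Q1:Q5 ratio = {ratio:.2f}")
```

These quintile means, gaps and ratios correspond to the quantities reported later in Fig. 1 and Table 3.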
The Concentration Curve (CC) provides a means to evaluate the degree of inequality related to socioeconomic position in the distribution of a health variable, plotting the cumulative percentage of the health variable (here, disutilities) on the y-axis against the cumulative percentage of the sample on the x-axis, ordered by socioeconomic position, starting with the poorest and ending with the richest (as measured in this study by ranked household income per capita) [32]. While the concentration curve is useful for depicting income-related inequality at each point of the income distribution for a health outcome of interest, it cannot be used to quantify the magnitude of such income-related inequality. The magnitude of inequality in HRQoL is therefore estimated using the Concentration Index (CI). The CI quantifies the degree of socioeconomic inequality of a health variable, displaying the health gradient across multiple subgroups with a natural ordering or ranking. The CI expresses the extent to which an indicator of health is concentrated among the socially disadvantaged or the favoured [33]. The CI is defined as twice the area between the Concentration Curve and the equality line. The CI takes values between -1 and +1, where a positive value indicates that the health variable is disproportionately concentrated among the most favoured people, a negative value indicates the opposite (the variable is disproportionately concentrated among the most disadvantaged people), and a value of zero means that there is no inequality [34]. The sign of the concentration index indicates the direction of the relationship between the health variable and socioeconomic position, and its magnitude reflects both the strength of the relationship and the degree of variability in the health variable. The formula to calculate the concentration index is:

$$CI = \frac{2}{\mu}\,cov(\gamma_{i}, R_{i})$$

where \( \gamma_{i} \) denotes the health variable (EQ-5D results) of the i-th individual, \( \mu \) its mean, and \( R_{i} \) denotes the fractional rank of the i-th individual with respect to the socioeconomic position of their household. Given that we used disutilities instead of utilities, the CC in this analysis is expected to be drawn above the diagonal line and the CI is expected to be negative.

Decomposition approach

According to Wagstaff et al., the CI can be decomposed into the individual factors which contribute to - or are associated with - socioeconomic-status-related health inequality [35]. Each contribution corresponds to the product of the sensitivity of health with respect to that factor and the degree of income-related inequality in that factor. In our study, to reveal the association of each explanatory variable with inequality in HRQoL, an additive linear regression model was used that links the results of the EQ-5D to a set of k determinants:

$$\gamma_{i} = \alpha + \sum_{\kappa}\beta_{\kappa}\,\chi_{\kappa i} + \varepsilon_{i}$$

where \( \chi_{\kappa i} \) is the set of k determining variables for the i-th individual, \( \beta_{\kappa} \) is the corresponding regression coefficient and \( \varepsilon_{i} \) is the error term.
Given the relationship between \( \gamma_{i} \) and \( \chi_{\kappa i} \) in Eq. (2), the concentration index for \( \gamma \) (CI) can be written as:

$$CI = \sum_{\kappa}\left(\beta_{\kappa}\,\bar{\chi}_{\kappa}/\mu\right)C_{\kappa} + GC_{\varepsilon}/\mu$$

where \( \mu \) is the mean of \( \gamma \), \( \bar{\chi}_{\kappa} \) is the mean of \( \chi_{\kappa} \), \( C_{\kappa} \) is the concentration index for \( \chi_{\kappa} \) (defined exactly as the concentration index for disutility), \( \beta_{\kappa}\bar{\chi}_{\kappa}/\mu \) is the elasticity of disutility with respect to each explanatory variable, and \( GC_{\varepsilon} \) is the generalized concentration index for the residual component \( \varepsilon_{i} \).

Sample description

The descriptive statistics are presented in Tables 1 and 2. The average age of the population under study was 49 years (95% CI 48.42–49.38) and women represented 62.9%. Only 11.15% of people lived in rural areas of Chile. Regarding educational level, 24% had less than 8 years of formal education, and only 22% reported more than 12 years of education. Household income per capita was described by quintiles. The difference between the averages of the most favoured and the least favoured quintile was USD 1,502 per capita. Further, 78.6% of the population were beneficiaries of the public health insurance. Regarding HRQoL measured by the EQ-5D, the average utility for the study population was 0.786, while the average disutility (its complement) was 0.214.

Table 1 Summary statistics about HRQoL and its determinants in Chile (ENS 2016–2017)

Social gradient in HRQoL

Figure 1 shows the average disutility for each quintile of income per capita. It can be observed that the social gradient in health runs from the top to the bottom of the socioeconomic spectrum, concentrating the worst HRQoL in the most disadvantaged quintiles of the adult population in Chile.

Distribution of disutilities by income per capita quintiles

Absolute and relative inequality

Absolute and relative differences were calculated using the most extreme subgroups of socioeconomic position (Q1 and Q5), classified by age groups. In Table 3 we can see how each age interval concentrated the disutilities in the poorest quintile, representing 81% more for the 25–44 age range. This effect was somewhat diluted when the analysis was carried out considering all ages, where the concentration of disutilities in the most disadvantaged quintile represented an increase of 48%.

Table 3 Inequality ratio (20:20) and inequality gap (20–20) of mean disutility in Chile (ENS 2016–2017) by age groups

Concentration Curve (CC) and Concentration Index (CI)

Figure 2 illustrates the HRQoL concentration curve. As mentioned above, the CC lies above the equality line, implying that the worst results are disproportionately concentrated in the most disadvantaged individuals. This finding is consistent with the results of the estimation of the Concentration Index shown in Table 4, which had a value of -0.063 (95% CI -0.091 to -0.035), supporting the notion that disutilities were concentrated in the most disadvantaged quintiles of the adult population in Chile.

Table 4 Concentration Index (95% confidence interval, standard error and P-value) for HRQoL in Chile (ENS 2016–2017)

Health-related quality of life concentration curve (SES: socioeconomic status)
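Before turning to the decomposition results, a minimal sketch of how the concentration index and its Wagstaff-type decomposition can be computed; this is not the authors' code, it reuses the hypothetical `ens` data frame from the earlier snippet with hypothetical dummy variables, and it ignores the complex survey design for brevity.

```python
import numpy as np
import statsmodels.api as sm

def concentration_index(h, rank_by, w):
    """Weighted concentration index of h, ranked by the living-standard variable."""
    order = np.argsort(rank_by, kind="mergesort")
    h, w = np.asarray(h, float)[order], np.asarray(w, float)[order]
    w = w / w.sum()
    rank = np.cumsum(w) - 0.5 * w                       # weighted fractional rank
    mu = np.average(h, weights=w)
    cov = np.average((h - mu) * (rank - np.average(rank, weights=w)), weights=w)
    return 2 * cov / mu

ci_total = concentration_index(ens["disutility"], ens["income_pc"], ens["weight"])

# Wagstaff decomposition: regress disutility on determinants, then combine each
# elasticity with the determinant's own concentration index.
X = ens[["female", "age", "rural", "public_insurance", "low_education"]]   # hypothetical columns
fit = sm.WLS(ens["disutility"], sm.add_constant(X), weights=ens["weight"]).fit()
mu = np.average(ens["disutility"], weights=ens["weight"])

contributions = {}
for k in X.columns:
    elasticity = fit.params[k] * np.average(X[k], weights=ens["weight"]) / mu
    c_k = concentration_index(X[k], ens["income_pc"], ens["weight"])
    contributions[k] = elasticity * c_k                 # contribution of factor k to the CI

residual = ci_total - sum(contributions.values())       # generalized CI of the residual term
print(f"CI = {ci_total:.3f}")
print({k: round(v, 4) for k, v in contributions.items()}, f"residual = {residual:.4f}")
```

Each factor's contribution divided by the total CI gives the percentage contributions of the kind reported in the following paragraphs.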
When analysing factors associated with the CI, and according to the detailed decomposition displayed in Tables 5 and 6, we found that income, as a proxy of socioeconomic position, contributes 76% to inequality in HRQoL, followed by educational level (22%), female gender (17%), and type of health insurance (15%). Of the total decomposition, less than 1% corresponded to the unexplained residual component, suggesting an adequate overall fit of the decomposition model.

Table 5 Decomposition of the Concentration Index for inequalities in HRQoL in Chile, ENS 2016–2017

Elasticities show the percentage change in disutility relative to the average disutility, compared to a reference category. Thus, for example, the analysis of socioeconomic status indicates that all quintiles between Q1 and Q4, compared to Q5, present an increase in disutility. The maximum change is observed between Q2 and Q5 (6.3%), which is consistent with the distribution of disutilities presented in Fig. 1.

Discussion

This research tested the hypothesis that worse HRQoL is concentrated in the least advantaged socioeconomic groups of the adult population in Chile, as measured by ranked household income per capita. Our findings support this idea, as we found a disproportionate concentration of worse HRQoL in the most disadvantaged socioeconomic groups in Chile, with a Concentration Index of -0.063. In the decomposition approach, we found that socioeconomic position (household income per capita) was the main factor associated with this inequality (76%), but other factors also came to light in this analysis: educational level (22%), female gender (17%), and type of health insurance (15%) were also associated with the CI of HRQoL in the adult population under study. In addition, the decomposition shows that the contribution of different covariates to the total inequality is mainly explained by some sub-groups of the population. For example, for socioeconomic status, the poorest quintiles (Q1 and Q2) show a negative concentration index, whereas the remaining upper quintiles show positive concentration indices. This means that in Q1 and Q2 the disutility is concentrated in the relatively poorer people, whereas in Q3 to Q5 disutility is concentrated in those who are relatively better-off. Indeed, we found that this concentration of disutility among the relatively richer people of each quintile increases from Q3 to Q5. This interesting finding has been reported in previous studies [36].

This study adds to current knowledge in the field of socioeconomic inequality in HRQoL, and its findings are consistent with evidence from other countries and regions. For example, a study conducted in Iran showed a pro-rich distribution of better HRQoL among adults in the capitals of Kermanshah and Kurdistan Provinces [37]. A second study, in China, showed that SES was positively associated with HRQoL [38]. Similar findings have been reported in Chile for other health outcomes. For example, Cabieses et al. found a disproportionate distribution of good/very good self-reported general health in the adult population in Chile using data from the CASEN survey, which supports the relevance of monitoring social inequalities in population health for different outcomes over time [13].

Our study has strengths and limitations. A strength is that the analysis was performed using an anonymous, nationally representative dataset.
It contained relevant variables for the analysis, had very few missing values and covered urban and rural areas of the territory. The statistical analysis used in this study is appropriate for the estimation of social gradients in health and their related factors at the population level. Hence, the findings can inform pending national challenges regarding social inequalities in population health. As limitations, we recognise the cross-sectional nature of the dataset, which restricts the interpretation of findings to associations rather than causal inference. Some relevant variables, such as individual risk behaviours and health preferences, were not included in the analysis and could be incorporated in future work using other datasets. Also, household income per capita could be complemented with other measures of socioeconomic status, such as those recommended by the OECD and the World Bank; these were not available for this study because the ENS 2016–2017 does not contain information on the individual characteristics of households (the age of their members or their particular needs), which makes it impossible to generate measures based on economies of scale, such as corrected equivalent income.

In addition, our findings are relevant to health authorities, clinical practitioners and public health researchers in the country and the region. We found pervasive social inequalities in a complex and multidimensional health outcome, HRQoL. This is informative for health-system monitoring and surveillance of population health, as multiple costly health interventions are developed over time to reduce these and other social gaps in healthcare. This is the case of population-based screening programs, which are highly expensive but have the potential to impact positively on health inequalities when effective coverage is achieved [39,40,41]. In this context, because preventive services also show inequalities of access, monitoring their use is essential for an adequate interpretation [42]. The study is also relevant as a baseline for future similar studies in the country and the region, as HRQoL has attracted great attention and proved useful as a subjective health measure in aging populations with a high prevalence of chronic conditions.

The findings are also significant for decision making in healthcare. As indicated in this study, inequalities decrease the HRQoL of people throughout Chile, with potentially powerful economic, physical, and psychosocial effects. Therefore, the reduction of these social inequalities becomes an ethical imperative that must be translated into robust public policies. Part of these inequalities relates to the performance of the healthcare system, but they are also determined by the organization of the system, which promotes more structural inequalities in healthcare [43]. Nevertheless, because the determinants of inequalities (i.e., income, level of education, gender) cannot be tackled only by actions of the healthcare system, these policies need an intersectoral approach that points directly towards the most structural socioeconomic determinants of population health in Chile and the world.

Conclusions

Evidence has demonstrated that income inequalities have harmful consequences for population health, including subjective dimensions such as HRQoL. Our study supports this idea, as we found a disproportionate concentration of worse HRQoL in the most disadvantaged socioeconomic groups in Chile, with a Concentration Index of -0.063.
Furthermore, in the decomposition approach, we found that socioeconomic position was the factor most strongly associated with this inequality (76%), but other factors also came to light in this analysis: educational level (22%), female gender (17%), and type of health insurance (15%) were also associated with the CI of HRQoL in the population under study. These results provide a measure of health inequality with which to monitor progress towards social justice in health policy making in Chile, and also offer a relevant benchmark for international comparative analysis, especially with other middle- and low-income countries.

All data generated or analysed during this study are included in this published article.

HRQoL: Health-related quality of life; EQ-5D: Euro-Quality of Life – Five Dimensions; ENS: Encuesta Nacional de Salud (National Health Survey); UN: United Nations; CC: Concentration Curve; CI: Concentration Index; Q1–Q5: Quintile 1 – Quintile 5; OECD: Organisation for Economic Cooperation and Development

Whitehead M. The concepts and principles of equity and health. Health Promot Int. 1991;6(3):217–28. Morton S, Pencheon D, Squires N. Sustainable Development Goals (SDGs), and their implementation: A national global framework for health, development and equity needs a systems approach at every level. British Medical Bulletin. 2017;124(1):81–90. https://doi.org/10.1093/bmb/ldx031. Phelan JC, Link BG, Tehranifar P. Social conditions as fundamental causes of health inequalities: theory, evidence, and policy implications. J Health Soc Behav. 2010;51(1_Suppl):S28–40. Krieger N, Williams DR, Moss NE. Measuring social class in US public health research: concepts, methodologies, and guidelines. Annu Rev Public Health. 1997;18(1):341–78. Marmot M, et al. Closing the gap in a generation: health equity through action on the social determinants of health. The Lancet. 2008;372(9650):1661–9. Lynch J, et al. Is income inequality a determinant of population health? Part 1. A systematic review. The Milbank Quarterly. 2004;82(1):5–99. Truesdale BC, Jencks C. The health effects of income inequality: averages and disparities. Annu Rev Public Health. 2016;37:413–30. Galobardes B, et al. Indicators of socioeconomic position (part 1). J Epidemiol Community Health. 2006;60(1):7–12. Pickett K, Wilkinson R. The spirit level: Why equality is better for everyone. Penguin UK; 2010. Programa de las Naciones Unidas para el Desarrollo. Desiguales. Orígenes, cambios y desafíos de la brecha social en Chile. 2017. World Bank, World Development Indicators, Gini Index Chile; 2017. https://datos.bancomundial.org/indicator/SI.POV.GINI?locations=CL. Accessed 28 March 2022. Bilal U, et al. Inequalities in life expectancy in six large Latin American cities from the SALURBAL study: an ecological analysis. The Lancet Planetary Health. 2019;3(12):e503–10. Cabieses B, et al. Did socioeconomic inequality in self-reported health in Chile fall after the equity-based healthcare reform of 2005? A concentration index decomposition analysis. PLoS ONE. 2015;10(9):e0138227. Vásquez F, Paraje G, Estay M. Income-related inequality in health and health care utilization in Chile, 2000–2009. Rev Panam Salud Publica. 2013;33:98–106. Fuentes-Garcia A, et al. Socioeconomic inequalities in the onset and progression of disability in a cohort of older people in Santiago (Chile). Gac Sanit. 2013;27(3):226–32. Moreno X, et al. Socioeconomic inequalities in life expectancy and disability-free life expectancy among Chilean older adults: evidence from a longitudinal study. BMC Geriatr. 2021;21(1):1–7. Ortiz MS, et al.
Disentangling socioeconomic inequalities of type 2 diabetes mellitus in Chile: a population-based analysis. PLoS ONE. 2020;15(9):e0238534. Hay JW, et al. A US population health survey on the impact of COVID-19 using the EQ-5D-5L. J Gen Intern Med. 2021;36(5):1292–301. Wilson IB, Cleary PD. Linking clinical variables with health-related quality of life: a conceptual model of patient outcomes. JAMA. 1995;273(1):59–65. Sitlinger A, Zafar SY. Health-related quality of life: the impact on morbidity and mortality. Surg Oncol Clin. 2018;27(4):675–84. Antol DD, Hagan A, Nguyen H, Li Y, Haugh GS, Radmacher M, ... Shrank WH. Change in self-reported health: A signal for early intervention in a medicare population. In Healthcare (Vol. 10, No. 1, p. 100610). Elsevier; 2022. Davies JM, et al. Socioeconomic position and use of healthcare in the last year of life: a systematic review and meta-analysis. PLoS Med. 2019;16(4):e1002782. Jelin E, Motta R, Costa S. (Eds.). Global entangled inequalities: Conceptual debates and evidence from Latin America. Routledge; 2017. Piovesan C, et al. Impact of socioeconomic and clinical factors on child oral health-related quality of life (COHRQoL). Qual Life Res. 2010;19(9):1359–66. Höfelmann DA, et al. Chronic diseases and socioeconomic inequalities in quality of life among Brazilian adults: findings from a population-based study in Southern Brazil. The European Journal of Public Health. 2018;28(4):603–10. Kind P, R Brooks, R Rabin. EQ-5D concepts and methods. A Developmental History. 2005. p. 2005. Devlin NJ, Brooks R. EQ-5D and the EuroQol group: past, present and future. Appl Health Econ Health Policy. 2017;15(2):127–37. Janssen B, Szende A. Population norms for the EQ-5D. Self-reported population health: an international perspective based on EQ-5D. 2014. p. 19–30. Balestroni G, Bertolotti G. L'EuroQol-5D (EQ-5D): uno strumento per la misura della qualità della vita [EuroQol-5D (EQ-5D): an instrument for measuring quality of life]. Monaldi Arch Chest Dis. 2012;78(3):155-9. Zarate V, et al. Social valuation of EQ-5D health states: the Chilean case. Value Health. 2011;14(8):1135–41. Devlin N, Parkin D, Janssen B. Methods for analysing and reporting EQ-5D data (p. 102). Springer Nature; 2020. Wagstaff A, O'Donnell O, Van Doorslaer E, Lindelow M. Analyzing health equity using household survey data: a guide to techniques and their implementation. World Bank Publications; 2007. Wagstaff A, Doorslaer EV. Overall versus socioeconomic health inequality: a measurement framework and two empirical illustrations. Health Econ. 2004;13(3):297–301. Kakwani N, Wagstaff A, Van Doorslaer E. Socioeconomic inequalities in health: Measurement, computation, and statistical inference. J Econ. 1997;77(1):87–103. Wagstaff A, Van Doorslaer E, Watanabe N. On decomposing the causes of health sector inequalities with an application to malnutrition inequalities in Vietnam. J Econ. 2003;112(1):207–23. Rezaei S, et al. Socioeconomic inequalities in poor health-related quality of life in Kermanshah, Western Iran: A decomposition analysis. J Res Health Sci. 2018;18(1):405. Rezaei S, et al. What explains socioeconomic inequality in health-related quality of life in Iran? a Blinder-Oaxaca decomposition. J Prev Med Public Health. 2018;51(5):219. Wang H, Kindig DA, Mullahy J. Variation in Chinese population health related quality of life: results from a EuroQol study in Beijing. China Quality of life research. 2005;14(1):119–32. Arrospide A, et al. 
Cost-effectiveness and budget impact analyses of a colorectal cancer screening programme in a high adenoma prevalence scenario using MISCAN-Colon microsimulation model. BMC Cancer. 2018;18(1):1–11. Chugh Y, et al. Cost-effectiveness and budget impact analysis of facility-based screening and treatment of hepatitis C in Punjab state of India. BMJ Open. 2021;11(2):e042280. Garay OU, et al. Cost-effectiveness and budget impact analysis of primary screening with human papillomavirus test with genotyping in Argentina. Value in Health Regional Issues. 2021;26:160–8. Rotarou ES, Sakellariou D. Determinants of utilisation rates of preventive health services: evidence from Chile. BMC Public Health. 2018;18(1):1–11. Rotarou ES, Sakellariou D. Neoliberal reforms in health systems and the construction of long-lasting inequalities in health care: A case study from Chile. Health Policy. 2017;121(5):495–503.

This research was funded by project FONDECYT 11190780, ANID, Chile.

Unidad de Evaluación de Tecnologías en Salud, Centro de Investigación Clínica, Pontificia Universidad Católica de Chile, Facultad de Medicina, Santiago, Chile: Rodrigo Severino & Manuel Espinoza

Department of Public Health, Faculty of Medicine, Pontificia Universidad Católica de Chile, Diagonal Paraguay 362, Piso 2, Santiago, Chile: Manuel Espinoza

Instituto de Ciencias e Innovación en Medicina (ICIM), Facultad de Medicina, Universidad del Desarrollo, Santiago, Chile: Báltica Cabieses

RS, ME and BC contributed to the design, calculations, drafting and final approval of the manuscript before submission.

Correspondence to Manuel Espinoza.

Severino, R., Espinoza, M. & Cabieses, B. Health-related quality of life by household income in Chile: a concentration index decomposition analysis. Int J Equity Health 21, 176 (2022). https://doi.org/10.1186/s12939-022-01770-w

Keywords: Socioeconomic inequalities; EQ-5D

Inequities in health and health systems in Latin America and the Caribbean
Ambiguity and Logic

In automata theory (finite automata, pushdown automata, ...) and in complexity, there is a notion of "ambiguity". An automaton is ambiguous if there is a word $w$ with at least two distinct accepting runs. A machine is $k$-ambiguous if for every word $w$ accepted by the machine there are at most $k$ distinct runs to accept $w$. This notion is also defined over context-free grammars: a grammar is ambiguous if there exists a word that can be derived in two different ways.

It is also known that many languages have a nice logical characterization over finite models. (If a language $L$ is regular, there exists a monadic second-order formula $\phi$ over words such that every word $w$ of $L$ is a model of $\phi$; similarly, NP is equivalent to existential second-order formulae, i.e. formulae in which every second-order quantifier is existential.)

Hence, my question is at the edges of the two domains: is there any result, or even a canonical definition, of "ambiguity" of formulae of a given logic? I can imagine a few definitions: $\exists x \phi(x)$ is non-ambiguous if there exists at most one $x$ such that $\phi(x)$ holds and $\phi(x)$ is itself non-ambiguous; $\phi_0\lor\phi_1$ would be ambiguous if there exists a model of both $\phi_0$ and $\phi_1$, or if some $\phi_i$ is ambiguous; a SAT formula would be non-ambiguous iff it has at most one satisfying assignment. Hence, I wonder whether this is a well-known notion; if not, it may be interesting to do research on this topic. If the notion is known, could anyone give me keywords I could use to search for information on the matter (because "logic ambiguity" gives a lot of unrelated results), or book/PDF/article references?

lo.logic automata-theory regular-language nondeterminism finite-model-theory
Arthur MILCHIOR

Rules in a grammar and inference rules in logic can both be thought of as production rules which give us "new stuff" from "known stuff". Just as there may be many ways to produce (or parse) a word with respect to a grammar, so may there be many ways to produce (or prove) a logical formula. This analogy can be drawn further. For example, certain logical systems admit normal forms of proofs. Likewise, certain grammars admit canonical parse trees. So I'd say your examples from logic are going in the wrong direction. The correct analogy is "parse tree" : "word" = "proof" : "logical formula". In fact, a sufficiently general kind of grammar will be able to express typical inference rules of logic, so that the grammatically correct words will be precisely the provable formulas. In this case the parse trees will actually be the proofs. In the opposite direction, if we are willing to think of very general inference rules (which do not necessarily have a traditional logical flavor), then every grammar will be expressible as a system of axioms (terminals) and inference rules (productions). And once again we will see that a proof is the same thing as a parse tree.
Andrej Bauer

$\begingroup$ I had not really thought about proofs. I am more used to (finite) model theory. We care about figuring out which sets are models of a formula, and which sets are not. (Especially, for a formula, what is the complexity of deciding whether a set is a model of it; for provable formulae, hence tautologies, the complexity is O(1) since every set is a model.) But thank you a lot for your answer. $\endgroup$ – Arthur MILCHIOR Mar 31 '11 at 14:46

$\begingroup$ Well, to add an analogy: model theory is to logic what semantics is to languages.
Model theory assigns meaning to logical theories, while semantics assigns meaning to languages. Sometimes it is best not to mix apples and oranges, even if you're used to it. $\endgroup$ – Andrej Bauer Mar 31 '11 at 21:43

Just two remarks. I hope they help.

The standard definitions of semantics of a logic and of truth follow Tarski's presentation, proceeding by induction on formula structure. Another possibility is to give game-based semantics as suggested by Hintikka. Truth and satisfiability are all defined in terms of strategies in a game. For first-order formulae, one can prove that a formula is true under Tarski's notion if and only if there exists a winning strategy in the Hintikka game. Towards formalising your question, one can ask if the game admits multiple strategies. There is also the interesting question about whether the strategies should be deterministic. Hintikka required them to be deterministic. The proof that Hintikka's original and Tarski's semantics are equivalent requires the Axiom of Choice. One can also formalise truth in terms of games with non-deterministic strategies with fewer complications.

Your language theory example brought to mind determinism, simulation relations and language acceptance. A simulation relation between automata implies language inclusion between their languages, though the converse is not true. For deterministic automata the two notions coincide. One can ask if it is possible to extend simulation relations in a 'smooth' manner to capture language equivalence for non-deterministic automata. Kousha Etessami has a really nice paper showing how to do this using k-simulations (A Hierarchy of Polynomial-Time Computable Simulations for Automata). Intuitively, the 'k' reflects the degree of non-determinism the simulation relation can capture. When 'k' equals the level of non-determinism in the automaton, simulation and language equivalence coincide. That paper also gives a logical characterisation of k-simulations in terms of polyadic modal logic and a bounded variable fragment of first-order logic. You get language inclusion, determinism, games, modal logic and first order logic, all in one bumper package.
Vijay D

This started as a comment under Andrej Bauer's answer, but it got too big. I think an obvious definition of ambiguity from a Finite Model Theory point of view would be: $\mathrm{ambiguous}(\phi) \iff \exists M_1, M_2, \psi \mid M_1 \vDash \phi \wedge M_2 \vDash \phi \wedge M_1 \vDash \psi \wedge M_2 \nvDash \psi$. In words, there exist distinct models of your grammar encoded as a formula $\phi$ that can be distinguished by some formula $\psi$, perhaps a sub-formula of $\phi$. You can connect this to Andrej's response about proofs through Descriptive Complexity. The combination of the existence of an encoding of a particular model plus its acceptance by an appropriate TM as a model of a given formula IS a proof that the axioms and inferences (and hence an equivalent grammar) encoded in that formula are consistent. To make this fully compatible with Andrej's answer, you would have to say that the model is "generated" by the formula acting as a filter on the space of all possible finite models (or something like that), with the encoding and action of filtering on the input model as the "proof". The distinct proofs then witness the ambiguity. This may not be a popular sentiment, but I tend to think of finite model theory and proof theory as the same thing seen from different angles.
;-)
Marc Hamann

$\begingroup$ "Of your grammar encoded a formula $\phi$" — I beg your pardon, I do not understand. Do you mean "as a formula"? As far as I can tell, you can always distinguish two different finite models. $\endgroup$ – Arthur MILCHIOR Mar 31 '11 at 21:18

$\begingroup$ Yes, that should have been "as a formula". I've fixed it. As for distinguishing finite models, the other situation is that there is only one accepted finite model for your language (possibly up to some notion of isomorphism). That is the opposite of ambiguity. $\endgroup$ – Marc Hamann Apr 1 '11 at 1:04

$\begingroup$ I guess that would indeed be "ambiguity". I just had not thought about it like this, mostly because as far as languages are concerned this would not really be interesting. But from a logical point of view it makes sense. $\endgroup$ – Arthur MILCHIOR Apr 1 '11 at 18:10

$\begingroup$ I'm not sure that the language part has to be boring. I have more ideas about this, but I think it would take us beyond the scope of this forum. ;-) $\endgroup$ – Marc Hamann Apr 1 '11 at 19:06

Not sure about the question applied to CS, but try searching for the term Vagueness and logic. In philosophy of logic, ambiguity is usually made distinct from vagueness (see here for instance), and I think what you are after is vagueness (as vagueness is defined as terms where there are borderline cases). The major book in this area is Timothy Williamson's Vagueness (but also see the bibliography on the Stanford site above).
DanielC

$\begingroup$ Thank you for your answer. But as you can tell, I do not really see the relation with computer science. Especially, a universe is or is not a model of a formula; there is not really any vagueness here. Instead, for automata, ambiguity is something that is well-defined, and there are known algorithms to decide whether an automaton is ambiguous, k-ambiguous or unambiguous (only for some kinds of automata). $\endgroup$ – Arthur MILCHIOR Mar 31 '11 at 21:12

$\begingroup$ You are quite right, I probably shouldn't have jumped in on this question and stuck to lurking. I'm only a noob at CS (about to finish my undergrad in logic/philosophy of science and pure math). Thanks for the information though. $\endgroup$ – DanielC Apr 2 '11 at 4:05

I (also) agree with Andrej. I think descriptive complexity is a computation-less characterization (which makes it interesting in its own way) and therefore the computational ambiguity examples from formal language theory (automata/grammars/...) that you gave look to be in a quite different domain. In descriptive complexity, languages correspond to complexity classes and queries (in a language) correspond to computational problems (not algorithms). There is no intended way of checking/computing a query AFAIK, so if you are not looking for computational ambiguity, IMHO those examples are misleading.
Kaveh

$\begingroup$ Kaveh, I'm not sure that I agree that the computation-less characterization of descriptive complexity is 100% right. The computational details are very important to understanding how a particular logic captures a complexity class. The advantage is that, once you have done your proofs and understand how it works, you can set the computation aside, and focus on the logical details using standard logical methods. $\endgroup$ – Marc Hamann Apr 1 '11 at 14:56

$\begingroup$ Same remark to Marc.
Descriptive complexity is also known as database theory, a vocabulary being a structure of a database, and the models of the theory being the content of the database. Hence it is fortunate that we can compute and figure out whether a database respects a formula. $\endgroup$ – Arthur MILCHIOR Apr 1 '11 at 18:09

$\begingroup$ @Marc, but there is no intended way of computation, it is a purely descriptive characterization. Of course you can connect it to algorithms (and their computations) in other settings, but that is secondary to its nature. As I said, complexity classes (e.g. $\mathbf{AC^0}$) correspond to descriptive languages (e.g. $\mathbf{FO}$), computational problems correspond to queries, but AFAIK there is nothing corresponding to algorithms or computations in descriptive complexity (which is not surprising considering it is also part of model theory). $\endgroup$ – Kaveh Apr 1 '11 at 20:35

$\begingroup$ @Kaveh, I'm making a slightly subtle point, but one that I think is important, since it seems to be frequently misunderstood (for example by failed P=NP? attempts). There is an underlying, fairly brute-force algorithm that underlies the correspondence of a logical language and a complexity class. Working with the logic allows you not to have to think about the details of this algorithm every second, but the beauty and genius of the proofs by Fagin, Immerman, Vardi and others lies exactly in describing these algorithms. People who lose sight of them completely generally end up in trouble. $\endgroup$ – Marc Hamann Apr 1 '11 at 21:05

$\begingroup$ @Kaveh, I think we understand each other, and share our respect for the field. "Brute-force" was not intended as a slight on the underlying algorithms, just making clear that we are talking about something slightly more abstract than what someone who does, say, algorithmic optimization work might think of as an algorithm. $\endgroup$ – Marc Hamann Apr 1 '11 at 22:32
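A sketch for orientation (not taken from any of the answers above; it assumes a plain NFA without $\varepsilon$-transitions): writing $\mathrm{amb}_A(w)$ for the number of distinct accepting runs of $A$ on $w$,
$$A \text{ is } k\text{-ambiguous} \iff \mathrm{amb}_A(w) \le k \text{ for every } w \in L(A), \qquad A \text{ is unambiguous} \iff A \text{ is } 1\text{-ambiguous}.$$
For a trim NFA $A$ (every state reachable from an initial state and co-reachable to an accepting state), ambiguity is decidable via the product automaton $A \times A$: $A$ is ambiguous iff $A \times A$ has a reachable and co-reachable state $(p,q)$ with $p \neq q$, since such a state lies on an accepting path of $A \times A$ whose two component runs differ. The logical analogue proposed in the question can be phrased the same way: $\exists x\,\phi(x)$ is unambiguous over a structure $M$ iff $\#\{a : M \vDash \phi(a)\} \le 1$, and the SAT version corresponds to unique satisfiability, so "unambiguous automata", "unambiguous nondeterminism" (the class UP), "unique SAT" and "Valiant-Vazirani" may be useful search keywords.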
Hamiltonian structures for projectable dynamics on symplectic fiber bundles
Guillermo Dávila-Rascón and Yuri Vorobiev, Department of Mathematics, University of Sonora, Hermosillo, C.P. 83000, Mexico
Discrete & Continuous Dynamical Systems - A, March 2013, 33(3): 1077-1088. doi: 10.3934/dcds.2013.33.1077
Received April 2011; Revised October 2011; Published October 2012
The Hamiltonization problem for projectable vector fields on general symplectic fiber bundles is studied. Necessary and sufficient conditions for the existence of Hamiltonian structures in the class of compatible symplectic structures are derived in terms of invariant symplectic connections. In the case of a flat symplectic bundle, we show that this criterion leads to the study of the solvability of homological type equations.
Keywords: Ehresmann connection, Poisson tensor, invariant connection, symplectic bundle, coupling, symplectic connection, Hamiltonization problem, projectable dynamics.
Mathematics Subject Classification: Primary: 53D05, 53D17; Secondary: 70H06, 70H0.
Citation: Guillermo Dávila-Rascón, Yuri Vorobiev. Hamiltonian structures for projectable dynamics on symplectic fiber bundles. Discrete & Continuous Dynamical Systems - A, 2013, 33 (3): 1077-1088. doi: 10.3934/dcds.2013.33.1077
Sternberg, "Symplectic Techniques in Physics,'', Cambridge, (1984). Google Scholar V. Guillemin, E. Lerman and S. Sternberg, "Symplectic Fibrations and Multiplicity Diagrams,'', Cambridge Univ. Press., (1996). doi: 10.1017/CBO9780511574788. Google Scholar M. V. Karasev and Yu. M. Vorobjev, Adapted connections, Hamilton dynamics, geometric phases and quantization over isotropic submanifolds,, Amer. Math. Soc. Transl. (2), 187 (1998), 203. Google Scholar V. V. Kozlov, "Symmetries, Topology, and Resonances in Hamiltonian Mechanics,'', Springer-Verlag, (1996). doi: 10.1007/978-3-642-78393-7. Google Scholar J. E. Marsden, R. Montgomery and T. Ratiu, Reduction, symmetry and phases in mechanics,, Memoirs of the AMS, 88 (1990), 1. Google Scholar J. R. Marsden, T. S. Ratiu and G. Raugel, Symplectic connections and the linearization of Hamiltonian systems,, Proc. Roy. Soc. Edinburgh, 117 (1991), 329. Google Scholar J. E. Marsden and T. S. Ratiu, "Introduction to Mechanics and Symmetry,'', Spinger-Verlag, (1994). Google Scholar D. McDuff and D. Salamon, "Introduction to Symplectic Topology,'', Oxford Mathematical Monographs, (1998). Google Scholar P. Michor, "Topics in Differential Geometry,'', Graduate Studies in Mathematics, (2008). Google Scholar R. Montgomery, J. E. Marsden and T. Ratiu, Gauged Lie-Poisson structures,, Cont. Math. AMS, 28 (1984), 101. Google Scholar R. Montgomery, The connection whose holonomy is the classical adiabatic angles of Hannay and Berry and its generalization to the non-integrable case,, Commun. Math. Phys., 120 (1988), 269. doi: 10.1007/BF01217966. Google Scholar A. Neishtadt, Averaging method and adiabatic invariants,, in, (2008), 53. Google Scholar S. Sternberg, Minimal coupling and the symplectic mechanics of a classical particle in the presence of a Yang-Mills field,, Proc. Nat. Acad. Sci., 74 (1977), 5253. doi: 10.1073/pnas.74.12.5253. Google Scholar Y. M. Vorobiev, Hamiltonian structures of the first variation equations,, Sbornik: Mathematics, 191 (2000), 447. Google Scholar Y. M. Vorobjev, Coupling tensors and Poisson geometry near a single symplectic leaf,, Lie Algebroids, 54 (2001), 249. Google Scholar Y. M. Vorobjev, Poisson structures and linear Euler systems over symplectic manifolds,, Amer. Math. Soc. Transl., 216 (2005), 137. Google Scholar A. Weinstein, "Lectures on Symplectic Manifolds,'', CBMS Lecture Notes 29, (1977). Google Scholar N. M. J. Woodhouse, "Integrability, Self-Duality and Twistor Theory,'', Clarendon Press, (1996). Google Scholar Roderick S. C. Wong, H. Y. Zhang. On the connection formulas of the third Painlevé transcendent. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 541-560. doi: 10.3934/dcds.2009.23.541 Otávio J. N. T. N. dos Santos, Emerson L. Monte Carmelo. A connection between sumsets and covering codes of a module. Advances in Mathematics of Communications, 2018, 12 (3) : 595-605. doi: 10.3934/amc.2018035 Marie-Claude Arnaud. When are the invariant submanifolds of symplectic dynamics Lagrangian?. Discrete & Continuous Dynamical Systems - A, 2014, 34 (5) : 1811-1827. doi: 10.3934/dcds.2014.34.1811 Shunfu Jin, Wuyi Yue, Zhanqiang Huo. Performance evaluation for connection oriented service in the next generation Internet. Numerical Algebra, Control & Optimization, 2011, 1 (4) : 749-761. doi: 10.3934/naco.2011.1.749 Pavel I. Etingof. Galois groups and connection matrices of q-difference equations. Electronic Research Announcements, 1995, 1: 1-9. Marta Strani. 
Existence and uniqueness of a positive connection for the scalar viscous shallow water system in a bounded interval. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1653-1667. doi: 10.3934/cpaa.2014.13.1653 Roger E. Khayat, Martin Ostoja-Starzewski. On the objective rate of heat and stress fluxes. Connection with micro/nano-scale heat convection. Discrete & Continuous Dynamical Systems - B, 2011, 15 (4) : 991-998. doi: 10.3934/dcdsb.2011.15.991 Vinicius Albani, Adriano De Cezaro. A connection between uniqueness of minimizers in Tikhonov-type regularization and Morozov-like discrepancy principles. Inverse Problems & Imaging, 2019, 13 (1) : 211-229. doi: 10.3934/ipi.2019012 Michael Entov, Leonid Polterovich, Daniel Rosen. Poisson brackets, quasi-states and symplectic integrators. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1455-1468. doi: 10.3934/dcds.2010.28.1455 Daniel Guan. Classification of compact homogeneous spaces with invariant symplectic structures. Electronic Research Announcements, 1997, 3: 52-54. Fasma Diele, Carmela Marangi. Positive symplectic integrators for predator-prey dynamics. Discrete & Continuous Dynamical Systems - B, 2018, 23 (7) : 2661-2678. doi: 10.3934/dcdsb.2017185 Pablo G. Barrientos, Artem Raibekas. Robustly non-hyperbolic transitive symplectic dynamics. Discrete & Continuous Dynamical Systems - A, 2018, 38 (12) : 5993-6013. doi: 10.3934/dcds.2018259 Marie-Claude Arnaud. A nondifferentiable essential irrational invariant curve for a $C^1$ symplectic twist map. Journal of Modern Dynamics, 2011, 5 (3) : 583-591. doi: 10.3934/jmd.2011.5.583 Rafael de la Llave, Jason D. Mireles James. Parameterization of invariant manifolds by reducibility for volume preserving and symplectic maps. Discrete & Continuous Dynamical Systems - A, 2012, 32 (12) : 4321-4360. doi: 10.3934/dcds.2012.32.4321 Luis García-Naranjo. Reduction of almost Poisson brackets and Hamiltonization of the Chaplygin sphere. Discrete & Continuous Dynamical Systems - S, 2010, 3 (1) : 37-60. doi: 10.3934/dcdss.2010.3.37 Santiago Cañez. Double groupoids and the symplectic category. Journal of Geometric Mechanics, 2018, 10 (2) : 217-250. doi: 10.3934/jgm.2018009 Chungen Liu, Qi Wang. Symmetrical symplectic capacity with applications. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2253-2270. doi: 10.3934/dcds.2012.32.2253 Mads R. Bisgaard. Mather theory and symplectic rigidity. Journal of Modern Dynamics, 2019, 15: 165-207. doi: 10.3934/jmd.2019018 P. Balseiro, M. de León, Juan Carlos Marrero, D. Martín de Diego. The ubiquity of the symplectic Hamiltonian equations in mechanics. Journal of Geometric Mechanics, 2009, 1 (1) : 1-34. doi: 10.3934/jgm.2009.1.1 Björn Gebhard. A note concerning a property of symplectic matrices. Communications on Pure & Applied Analysis, 2018, 17 (5) : 2135-2137. doi: 10.3934/cpaa.2018101 Guillermo Dávila-Rascón Yuri Vorobiev
Enhanced Phase Mixing of Torsional Alfvén Waves in Stratified and Divergent Solar Coronal Structures, Paper II: Nonlinear Simulations
By Callum Boocock, David Tsiklauri
We use MHD simulations to detect the nonlinear effects of torsional Alfvén wave propagation in a potential magnetic field with exponentially divergent field lines, embedded in a stratified solar corona. In Paper I we considered solutions to the linearised governing equations for torsional Alfvén wave propagation and showed, using a...
Laser-Driven, Ion-Scale Magnetospheres in Laboratory Plasmas. II. Particle-in-cell Simulations
Ion-scale magnetospheres have been observed around comets, weakly-magnetized asteroids, and localized regions on the Moon, and provide a unique environment to study kinetic-scale plasma physics, in particular in the collisionless regime. In this work, we present the results of particle-in-cell simulations that replicate recent experiments on the Large Plasma Device at the University of California, Los Angeles. Using high-repetition rate lasers, ion-scale magnetospheres were created to drive a plasma flow into a dipolar magnetic field embedded in a uniform background magnetic field. The simulations are employed to evolve idealized 2D configurations of the experiments, study highly-resolved, volumetric datasets and determine the magnetospheric structure, magnetopause location and kinetic-scale structures of the plasma current distribution. We show the formation of a magnetic cavity and a magnetic compression in the magnetospheric region, and two main current structures in the dayside of the magnetic obstacle: the diamagnetic current, supported by the driver plasma flow, and the current associated with the magnetopause, supported by both the background and driver plasmas with some time-dependence. From multiple parameter scans, we show a reflection of the magnetic compression, bounded by the length of the driver plasma, and a higher separation of the main current structures for lower dipolar magnetic moments.
Simulating a Catalyst-induced Quantum Dynamical Phase Transition of a Heyrovsky reaction with different models for the environment
Fabricio S. Lozano-Negro, Marcos A. Ferreyra-Ortega, Denise Bendersky, Lucas Fernández-Alcázar, Horacio M. Pastawski.
Through an appropriate selection of the molecular orbital basis, we show analytically that the molecular dissociation occurring in a Heyrovsky reaction can be interpreted as a Quantum Dynamical Phase Transition, i.e., an analytical discontinuity in the molecular energy spectrum induced by the catalyst. The metallic substrate plays the role of an environment that produces an energy uncertainty on the adatom. This broadening induces a critical behavior not possible in a closed quantum system. We use suitable approximations on symmetry, together with both Lanczos and canonical transformations, to give analytical estimates for the critical parameters of molecular dissociation. This occurs when the bonding to the surface is $\sqrt{2}$ times the molecular bonding. This value is slightly weakened for less symmetric situations. However simple, this conclusion involves a high-order perturbative solution of the molecule-catalyst system. This model is further simplified to discuss how an environment-induced critical phenomenon can be evaluated through an idealized perturbative tunneling microscopy set-up.
In this case, the energy uncertainties in one or both atoms are either Lorentzian or Gaussian. The former results from the Fermi Golden Rule, i.e., a Markovian approximation. The Gaussian uncertainty, associated with non-Markovian decoherent processes, requires the introduction of a particular model of a spin bath. The partially coherent tunneling current is obtained from the Generalized Landauer-Büttiker Equations. The resonances observed in these transport parameters reflect, in many cases, the critical properties of the resonances in the molecular spectrum. CHEMISTRY・ 14 DAYS AGO Recent advance in phase transition of vanadium oxide based solar reflectors and the fabrication progress Vanadium dioxide (VO2) as a phase-change material controls the transferred heat during phase transition process between metal and insulator states. At temperature above 68C, the rutile structure VO2 keeps the heat out and increases the IR radiation reflectivity, while at the lower temperature the monoclinic structure VO2 acts as the transparent material and increase the transmission radiation. In this paper, we first present the metal-insulator phase transition (MIT) of the VO2 in high and low temperatures. Then we simulate the meta-surface VO2 of metamaterial reflector by Ansys HFSS to show the emittance tunability of the rutile and monoclinic phase of the VO2. In next section, we will review the recent progress in the deposition of thermochromic VO2 on glass and silicon substrate with modifying the pressure of sputtering gases and temperature of the substrate. Finally, we present the results of the in-situ sputtered VOx thin film on thick SiO2 substrate in different combination of oxygen and argon environment by V2O5 target at temperature higher than 300C and then, analyze it with x-ray diffraction (XRD) method. The thermochromic VO2 based metamaterial structures open a new route to the passive energy-efficient optical solar reflector in the past few years. Weighing the Galactic disk using phase-space spirals IV. Tests on a 3d galaxy simulation In this fourth article on weighing the Galactic disk using the shape of the phase-space spiral, we have tested our method on a billion particle three-dimensional N-body simulation, comprised of a Milky Way like host galaxy and a merging dwarf satellite. The main purpose of this work was to test the validity of our model's fundamental assumptions: that the spiral inhabits a locally static and vertically separable gravitational potential. These assumptions might be compromised in the complex kinematic system of a disturbed three-dimensional disk galaxy; in fact, the statistical uncertainty and any potential biases related to these assumptions is expected to be amplified for this simulation, which differs from the Milky Way in that it is more strongly perturbed and has a phase-space spiral that inhabits higher vertical energies. We constructed 44 separate data samples from different spatial locations in the simulated host galaxy. Our method produced accurate results for the vertical gravitational potential of these 44 data samples, with an unbiased distribution of errors with a standard deviation of 7 %. We also tested our method under severe and unknown spatially dependent selection effects, also with robust results; this sets it apart from traditional dynamical mass measurements that are based on the assumption of a steady state, which are highly sensitive to unknown or poorly modelled incompleteness. 
Hence, we will be able to make localised mass measurements of distant regions in the Milky Way disk, which would otherwise be compromised by complex and poorly understood selection effects.
Magnetic Dual Chiral Density Wave: A Candidate Quark Matter Phase for the Interior of Neutron Stars
In this review, we discuss the physical characteristics of the magnetic dual chiral density wave (MDCDW) phase of dense quark matter and argue why it is a promising candidate for the interior matter phase of neutron stars. The MDCDW condensate occurs in the presence of a magnetic field. It is a single-modulated chiral density wave characterized by two dynamically generated parameters: the fermion quasiparticle mass $m$ and the condensate spatial modulation $q$. The lowest Landau level quasiparticle modes in the MDCDW system are asymmetric about the zero energy, a fact that leads to the topological properties and anomalous electric transport exhibited by this phase. The topology makes the MDCDW phase robust against thermal phonon fluctuations, and as such, it does not display the Landau-Peierls instability, a staple feature of single-modulated inhomogeneous chiral condensates in three dimensions. The topology is also reflected in the presence of the electromagnetic chiral anomaly in the effective action and in the formation of hybridized propagating modes known as axion-polaritons. Taking into account that one of the axion-polaritons of this quark phase is gapped, we argue how incident $\gamma$-ray photons can be converted into gapped axion-polaritons in the interior of a magnetar star in the MDCDW phase, leading the star to collapse, a phenomenon that can serve to explain the so-called missing pulsar problem in the galactic center.
A model of double coronal hard X-ray sources in solar flares
A number of double coronal X-ray sources have been observed during solar flares by RHESSI, where the two sources reside on different sides of the inferred reconnection site. However, where and how these X-ray-emitting electrons are accelerated remains unclear. Here we present the first model of the double coronal hard X-ray (HXR) sources, where electrons are accelerated by a pair of termination shocks driven by bi-directional fast reconnection outflows. We model the acceleration and transport of electrons in the flare region by numerically solving the Parker transport equation using velocity and magnetic fields from the macroscopic magnetohydrodynamic simulation of a flux rope eruption. We show that electrons can be efficiently accelerated by the termination shocks and high-energy electrons mainly concentrate around the two shocks. The synthetic HXR emission images display two distinct sources extending to $>$100 keV below and above the reconnection region, with the upper source much fainter than the lower one. The HXR energy spectra of the two coronal sources show similar spectral slopes, consistent with the observations. Our simulation results suggest that the flare termination shock can be a promising particle acceleration mechanism in explaining the double-source nonthermal emissions in solar flares.
Beyond Gaussian pair fluctuation theory for strongly interacting Fermi gases II: The broken-symmetry phase We theoretically study the thermodynamic properties of a strongly interacting Fermi gas at the crossover from a Bardeen-Cooper-Schrieffer (BCS) superfluid to a Bose-Einstein condensate (BEC), by applying a recently outlined strong-coupling theory that includes pair fluctuations beyond the commonly-used many-body $T$-matrix or ladder approximation at the Gaussian level. The beyond Gaussian pair fluctuation (GPF) theory always respects the exact thermodynamic relations and recovers the Bogoliubov theory of molecules in the BEC limit with a nearly correct molecule-molecule scattering length. We show that the beyond-GPF theory predicts quantitatively accurate ground-state properties at the BEC-BCS crossover, in good agreement with the recent measurement by Horikoshi \textit{et al.} in Phys. Rev. X \textbf{7}, 041004 (2017). In the unitary limit with infinitely large $s$-wave scattering length, the beyond-GPF theory predicts a reliable universal energy equation of state up to 0.6$T_c$, where $T_c$ is the superfluid transition temperature at unitarity. The theory predicts a Bertsch parameter $\xi \simeq 0.365$ at zero temperature, in good agreement with the latest quantum Monte Carlo result $\xi = 0.367(7)$ and the latest experimental measurement $\xi = 0.367(9)$. We attribute the excellent and wide applicability of the beyond-GPF theory in the broken-symmetry phase to the reasonable re-summation of Feynman diagrams following a dimensional $\epsilon$-expansion analysis near four dimensions ($d=4-\epsilon$), which gives rise to accurate predictions at the second order $\mathcal{O}(\epsilon^2)$. Our work indicates the possibility of further improving the strong-coupling theory of strongly interacting fermions based on the systematic inclusion of large-loop Feynman diagrams at higher orders $\mathcal{O}(\epsilon^n)$ with $n\ge 3$. SCIENCE・ 8 DAYS AGO Gravitational Waves from an Inflation Triggered First-Order Phase Transition Large excursion of the inflaton field can trigger interesting dynamics. One important example is a first-order phase transition in a spectator sector which couples to the inflaton. Gravitational waves (GWs) from such a first-order phase transition during inflation, an example of an instantaneous source, have an oscillatory feature. In this work, we show that this feature is generic for a source in an era of accelerated expansion. We also demonstrate that the shape of the GW signal contains information about the evolution of the early universe following the phase transition. In particular, the slope of the infrared part of the GW spectrum is sensitive to the evolution of the Hubble parameter when the GW modes reenter the horizon after inflation. The slope of the profile of the intermediate oscillatory part and the ultraviolet part of the GW spectrum depend on the evolution of the Hubble parameter when the modes exit horizon during the inflation and when they reenter the horizon during the reheating. The ultraviolet spectrum also depends on the details of the dynamics of the phase transition. We consider the GW signal in several models of evolution during and after inflation, and compare them with the minimal scenario of quasi-de Sitter inflation followed by radiation domination after a fast reheating, and demonstrate that the shape of the GW can be used to distinguish them. 
In this way, the GW signal considered in this paper offers a powerful probe to the dynamics of the early universe which is otherwise difficult to explore directly through CMB, large scale structure, big bang nucleosynthesis (BBN), and other well-studied cosmological observables. Indirect Adaptive Control of Nonlinearly Parameterized Nonlinear Dissipative Systems In this note we address the problem of indirect adaptive (regulation or tracking) control of nonlinear, input affine dissipative systems. It is assumed that the supply rate, the storage and the internal dissipation functions may be expressed as nonlinearly parameterized regression equations where the mappings (depending on the unknown parameters) satisfy a monotonicity condition -- this encompasses a large class of physical systems, including passive systems. We propose to estimate the system parameters using the "power-balance" equation, which is the differential version of the classical dissipation inequality, with a new estimator that ensures global, exponential, parameter convergence under the very weak assumption of interval excitation of the power-balance equation regressor. To design the indirect adaptive controller we make the standard assumption of existence of an asymptotically stabilizing controller that depends -- possibly nonlinearly -- on the unknown plant parameters, and apply a certainty-equivalent control law. The benefits of the proposed approach, with respect to other existing solutions, are illustrated with examples. Model-Free Nonlinear Feedback Optimization Feedback optimization is a control paradigm that enables physical systems to autonomously reach efficient operating points. Its central idea is to interconnect optimization iterations in closed-loop with the physical plant. Since iterative gradient-based methods are extensively used to achieve optimality, feedback optimization controllers typically require the knowledge of the steady-state sensitivity of the plant, which may not be easily accessible in some applications. In contrast, in this paper we develop a model-free feedback controller for efficient steady-state operation of general dynamical systems. The proposed design consists in updating control inputs via gradient estimates constructed from evaluations of the nonconvex objective at the current input and at the measured output. We study the dynamic interconnection of the proposed iterative controller with a stable nonlinear discrete-time plant. For this setup, we characterize the optimality and the stability of the closed-loop behavior as functions of the problem dimension, the number of iterations, and the rate of convergence of the physical plant. To handle general constraints that affect multiple inputs, we enhance the controller with Frank-Wolfe type updates. COMPUTERS・ 14 DAYS AGO Structural Phase Transitions in SrTiO3 from Deep Potential Molecular Dynamics Strontium titanate (SrTiO3) is regarded as an essential material for oxide electronics. One of its many remarkable features is subtle structural phase transition, driven by antiferrodistortive lattice mode, from a high-temperature cubic phase to a low-temperature tetragonal phase. Classical molecular dynamics (MD) simulation is an efficient technique to reveal atomistic features of phase transition, but its application is often limited by the accuracy of empirical interatomic potentials. 
Here, we develop an accurate deep potential (DP) model of SrTiO3 based on a machine learning method using data from first-principles density functional theory (DFT) calculations. The DP model has DFT-level accuracy, capable of performing efficient MD simulations and accurate property predictions. Using the DP model, we investigate the temperature-driven cubic-to-tetragonal phase transition and construct the in-plane biaxial strain-temperature phase diagram of SrTiO3. The simulations demonstrate that strain-induced ferroelectric phase is characterized by two order parameters, ferroelectric distortion and antiferrodistortion, and the ferroelectric phase transition has both displacive and order-disorder characters. This works lays the foundation for the development of accurate DP models of other complex perovskite materials. CHEMISTRY・ 3 DAYS AGO The unipotent radical of the Mumford-Tate group of a very general mixed Hodge structure with a fixed associated graded The family of all mixed Hodge structures on a given rational vector space $M_\mathbb{Q}$ with a fixed weight filtration $W_\cdot$ and a fixed associated graded Hodge structure $Gr^WM$ is naturally in a one to one correspondence with a complex affine space. We study the unipotent radical of the very general Mumford-Tate group of the family. We do this by using general Tannakian results which relate the unipotent radical of the fundamental group of an object in a filtered Tannakian category to the extension classes of the object coming from the filtration. Our main result shows that if $Gr^WM$ is polarizable and satisfies some conditions, then outside a union of countably many proper Zariski closed subsets of the parametrizing affine space, the unipotent radical of the Mumford-Tate group of the objects in the family is equal to the unipotent radical of the parabolic subgroup of $GL(M_\mathbb{Q})$ associated to the weight filtration on $M_\mathbb{Q}$ (in other words, outside a union of countably many proper Zariski closed sets the unipotent radical of the Mumford-Tate group is as large as one may hope for it to be). Note that here $Gr^WM$ itself may have a small Mumford-Tate group. MATHEMATICS・ 7 DAYS AGO A three-field phase-field model for mixed-mode fracture in rock based on experimental determination of the mode II fracture toughness In this contribution, a novel framework for simulating mixed-mode failure in rock is presented. Based on a hybrid phase-field model for mixed-mode fracture, separate phase-field variables are introduced for tensile (mode I) and shear (mode II) fracture. The resulting three-field problem features separate length scale parameters for mode I and mode II cracks. In contrast to the classic two-field mixed-mode approaches it can thus account for different tensile and shear strength of rock. The two phase-field equations are implicitly coupled through the degradation of the material in the elastic equation, and the three fields are solved using a staggered iteration scheme. For its validation, the three-field model is calibrated for two types of rock, Solnhofen Limestone and Pfraundorfer Dolostone. To this end, double-edge notched Brazilian disk (DNBD) tests are performed to determine the mode II fracture toughness. The numerical results demonstrate that the proposed phase-field model is able to reproduce the different crack patterns observed in the DNBD tests. 
A final example of a uniaxial compression test on a rare drill core demonstrates, that the proposed model is able to capture complex, 3D mixed-mode crack patterns when calibrated with the correct mode I and mode II fracture toughness. Nonintegrability of Forced Nonlinear Oscillators In recent papers by the authors (S.~Motonaga and K.~Yagasaki, Obstructions to integrability of nearly integrable dynamical systems near regular level sets, submitted for publication, and K.~Yagasaki, Nonintegrability of nearly integrable dynamical systems near resonant periodic orbits, submitted for publication), two different techniques which allow us to prove the real-analytic or complex-meromorphic nonintegrability of forced nonlinear oscillators having the form of time-periodic perturbations of single-degree-of-freedom Hamiltonian systems were provided. Here the concept of nonintegrability in the Bogoyavlenskij sense is adopted and the first integrals and commutative vector fields are also required to depend real-analytically or complex-meromorphically on the small parameter. In this paper we review the theories and continue to demonstrate their usefulness. In particular, we consider the periodically forced damped pendulum and prove its nonintegrability in the above meaning. High-resolution observations with ARTEMIS/JLS and the NRH: IV. Imaging spectroscopy of spike-like structures near the front of type-II bursts S. Armatas, C. Bouratzis, A. Hillaris, C.E. Alissandrakis, P. Preka-Papadema, A. Kontogeorgos, P. Tsitsipis, X. Moussas. Narrowband bursts (spikes) appear on dynamic spectra from microwave to decametric frequencies. They are believed to be manifestations of small-scale energy release through magnetic reconnection. We study the position of the spike-like structures relative to the front of type-II bursts and their role in the burst emission. We used high-sensitivity, low-noise dynamic spectra obtained with the acousto-optic analyzer (SAO) of the ARTEMIS-JLS radiospectrograph, in conjunction with images from the Nançay Radioheliograph (NRH) in order to study spike-like bursts near the front of a type-II radio burst during the November 3, 2003 extreme solar event. The spike-like emission in the dynamic spectrum was enhanced by means of high-pass-time filtering. We identified a number of spikes in the NRH images. Due to the lower temporal resolution of the NRH, multiple spikes detected in the dynamic spectrum appeared as single structures in the images. These spikes had an average size of ~200" and their observed brightness temperature was 1.4-5.6x10^9K, providing a significant contribution to the emission of the type-II burst front. At variance with a previous study on the type-IV associated spikes, we found no systematic displacement between the spike emission and the emission between spikes. At 327.0 MHz, the type II emission was located about 0.3 RSUN above the pre-existing continuum emission, which, was located 0.1 RSUN above the western limb. This study indicates that the spike-like chains aligned along the type II burst MHD shock front are not a perturbation of the type II emission, as in the case of type IV spikes, but a manifestation of the type II emission itself. The preponderance of these chains, together with the lack of isolated structures or irregular clusters, points towards some form of small-scale magnetic reconnection, organized along the type-II propagating front. 
Calculations of neutron fluxes and isotope conversion rates in a thorium-fuelled MYRRHA reactor, using GEANT4 and MCNPX, Nuclear Engineering and Design Neutronics calculations have been performed of the MYRRHA ADS Reactor with a thorium-based fuel mixture, using the simulation programs MCNPX (Waters, 2002) and Geant4 (Agostinelli, 2003). Thorium is often considered for ADS systems, and this is the first evaluation of the possibilities for thorium based fuels using a reactor design which has been developed in detail. It also extends the application of the widely-used Geant4 program to the geometry of MYRRHA and to thorium. An asymptotic 232Th/233U mixture is considered, together with the standard MOX fuel and a possible 232Th/MOX starter. Neutron fluxes and spectra are calculated at several regions in the core: fuel cells, IPS cells and the two (Mo and Ac) isotope production cells. These are then used for simple calculations of the fuel evolution and of the potential for the incineration of minor actinide waste. Results from the two programs agree and support each other and show that the thorium fuel is viable, and has good evolution/breeding properties, and that minor actinide incineration, though it will not take place on a significant scale, will be demonstrable. INDUSTRY・ 1 DAY AGO Analysis of the distribution, rotation and scale characteristics of solar wind switchbacks: comparison between the first and second encounters of Parker Solar Probe The S-shaped magnetic structure in the solar wind formed by the twisting of magnetic field lines is called a switchback, whose main characteristics are the reversal of the magnetic field and the significant increase in the solar wind radial velocity. We identify 242 switchbacks during the first two encounters of Parker Solar Probe (PSP). Statistics methods are applied to analyze the distribution and the rotation angle and direction of the magnetic field rotation of the switchbacks. The diameter of switchbacks is estimated with a minimum variance analysis (MVA) method based on the assumption of a cylindrical magnetic tube. We also make a comparison between switchbacks from inside and the boundary of coronal holes. The main conclusions are as follows: (1) the rotation angles of switchbacks observed during the first encounter seem larger than those of the switchbacks observed during the second encounter in general; (2) the tangential component of the velocity inside the switchbacks tends to be more positive (westward) than in the ambient solar wind; (3) switchbacks are more likely to rotate clockwise than anticlockwise, and the number of switchbacks with clockwise rotation is 1.48 and 2.65 times of those with anticlockwise rotation during the first and second encounters, respectively; (4) the diameter of switchbacks is about 10^5 km on average and across five orders of magnitude (10^3 -- 10^7 km). SCIENCE・ 1 DAY AGO High-power laser experiment forming a supercritical collisionless shock in a magnetized uniform plasma at rest Ryo Yamazaki, S. Matsukiyo, T. Morita, S. J. Tanaka, T. Umeda, K. Aihara, M. Edamoto, S. Egashira, R. Hatsuyama, T. Higuchi, T. Hihara, Y. Horie, M. Hoshino, A. Ishii, N. Ishizaka, Y. Itadani, T. Izumi, S. Kambayashi, S. Kakuchi, N. Katsuki, R. Kawamura, Y. Kawamura, S. Kisaka, T. Kojima, A. Konuma, R. Kumar, T. Minami, I. Miyata, T. Moritaka, Y. Murakami, K. Nagashima, Y. Nakagawa, T. Nishimoto, Y. Nishioka, Y. Ohira, N. Ohnishi, M. Ota, N. Ozaki, T. Sano, K. Sakai, S. Sei, J. Shiota, Y. Shoji, K. Sugiyama, D. Suzuki, M. 
Takagi, H. Toda, S. Tomita, S. Tomiya, H. Yoneda, T. Takezaki, K. Tomita, Y. Kuramitsu, Y. Sakawa. Temperature-assisted Piezoresponse Force Microscopy: Probing Local Temperature-Induced Phase Transitions in Ferroics Combination of local heating and biasing at the tip-surface junction in temperature-assisted piezoresponse force microscopy (tPFM) opens the pathway for probing local temperature induced phase transitions in ferroics, exploring the temperature dependence of polarization dynamics in ferroelectrics, and potentially discovering coupled phenomena driven by strong temperature- and electric field gradients. Here, we analyze the signal formation mechanism in tPFM and explore the interplay between thermal- and bias-induced switching in model ferroelectric materials. We further explore the contributions of the flexoelectric and thermopolarization effects to the local electromechanical response, and demonstrate that the latter can be significant for "soft" ferroelectrics. These results establish the framework for quantitative interpretation of tPFM observations, predict the emergence the non-trivial switching and relaxation phenomena driven by non-local thermal gradient-induced polarization switching, and open a pathway for exploring the physics of thermopolarization effects in various non-centrosymmetric and centrosymmetric materials. CHEMISTRY・ 1 DAY AGO Modeling time-resolved kinetics in solids induced by extreme electronic excitation We present a concurrent Monte Carlo (MC) - molecular dynamics (MD) approach to modeling of matter response to excitation of its electronic system. The two methods are combined on-the-fly at each time step in one code, TREKIS-4. The MC model describes arrival of irradiation, which in the current implementation can consist of a photon, an electron, or a fast ion. It also traces induced cascades of excitation of secondary particles, electrons and holes, and their energy exchange with atoms due to scattering. The excited atomic system is simulated with an MD model. We propose a simple and efficient way to account for nonthermal effects in the electron-atom energy transfer in covalent materials via conversion of potential energy of the ensemble into the kinetic energy of atoms, which can be straightforwardly implemented into an MD simulation. Such a combined MC-MD approach enables us time-resolved tracing of the excitation kinetics of both, electronic and atomic systems, and their simultaneous response to a deposited dose. As a proof-of-principle example, we show that proposed method describes atomic dynamics after X-ray irradiation in a good agreement with tight-binding MD, with much more affordable computational demands. The new model also allows us to gain insights into behavior of the atomic system during the energy deposition from a nonequilibrium electronic system excited by an ion impact. MATHEMATICS・ 1 DAY AGO
Parasites & Vectors Modelling the impact of insecticide-based control interventions on the evolution of insecticide resistance and disease transmission Susana Barbosa1,2, Katherine Kay1,3, Nakul Chitnis4,5 & Ian M. Hastings ORCID: orcid.org/0000-0002-1332-742X1 Parasites & Vectors volume 11, Article number: 482 (2018) Cite this article Current strategies to control mosquito-transmitted infections use insecticides targeted at various stages of the mosquito life-cycle. Control is increasingly compromised by the evolution of insecticide resistance but there is little quantitative understanding of its impact on control effectiveness. We developed a computational approach that incorporates the stage-structured mosquito life-cycle and allows tracking of insecticide resistant genotypes. This approach makes it possible to simultaneously investigate: (i) the population dynamics of mosquitoes throughout their whole life-cycle; (ii) the impact of common vector control interventions on disease transmission; (iii) how these interventions drive the spread of insecticide resistance; and (iv) the impact of resistance once it has arisen and, in particular, whether it is sufficient for malaria transmission to resume. The model consists of a system of difference equations that tracks the immature (eggs, larvae and pupae) and adult stages, for males and females separately, and incorporates density-dependent regulation of mosquito larvae in breeding sites. We determined a threshold level of mosquitoes below which transmission of malaria is interrupted. It is based on a classic Ross-Macdonald derivation of the malaria basic reproductive number (R0) and may be used to assess the effectiveness of different control strategies in terms of whether they are likely to interrupt disease transmission. We simulated different scenarios of insecticide deployment by changing key parameters in the model to explore the comparative impact of insecticide treated nets, indoor residual spraying and larvicides. Our simulated results suggest that relatively low degrees of resistance (in terms of reduced mortality following insecticide contact) can induce failure of interventions, and the rate of spread of resistance is faster when insecticides target the larval stages. The optimal disease control strategy depends on vector species demography and local environmental conditions but, in our illustrative parametrisation, targeting larval stages achieved the greatest reduction of the adult population, followed by targeting of non-host-seeking females, as provided by indoor residual spraying. Our approach is designed to be flexible and easily generalizable to many scenarios using different calibrations and to diseases other than malaria. Approximately 17% of human infectious diseases are transmitted by vectors such as mosquitoes, ticks and fleas [1, 2] and many are controlled by public health interventions using insecticides to target the vector. Malaria is the most serious example of a vector-borne infection and caused an estimated 212 million clinical cases and 429,000 deaths in 2016 [3]. Deploying insecticides against Anopheline mosquitoes, primarily in the form of insecticide-treated nets (ITNs) and indoor residual spraying (IRS), has been highly successful (see for example [4,5,6,7]) and are credited with contributing 68% and 13%, respectively, to recent dramatic reductions in falciparum malaria in Africa [7]. 
These successes come at a cost: large amounts of insecticides have to be deployed, and it is estimated that more than 50% of the population in sub-Saharan Africa was protected by at least one vector control intervention in 2015 [8]. A near-inevitable consequence has been the emergence and spread of insecticide resistance (IR) in mosquito vector species [9]. Almost two thirds of countries with ongoing malaria transmission now report resistance to one or more classes of insecticide [10,11,12] and this is widely recognised as a major threat to the sustainable impact of malaria control programmes (reviewed in [9]). Similar patterns of insecticide resistance are noted in other mosquito populations under public health control, notably the Aedes mosquitoes that transmit dengue. The threat posed by insecticide resistance in mosquito populations has stimulated a series of theoretical papers to investigate the processes. They have been of two main forms. The first relates to evolutionary genetic and/or mathematical models exploring resistance management strategies designed to minimise selection for resistance (e.g. [13,14,15,16,17,18,19,20]). These models simply regarded insecticide resistance as something to be avoided and sought ways to understand, avoid or slow its evolution; this meant they usually had to ignore the most important operational factor of IR, i.e. its quantitative impact on undermining insecticide-based control of human disease transmission. A second suite of models does investigate the impact of insecticide resistance on mosquito population demography and hence on disease transmission (e.g. [19, 21, 22]). These could assess the impact of IR on control (using a 'with' vs 'without' comparison) but neglected the dynamics by which IR evolved and spread, and how it might be potentially delayed. The purpose of this paper is to close this methodological disconnect between the two approaches and demonstrate how they can be combined to simultaneously quantify the likely impact of insecticide deployment and resistance on malaria transmission potential. We developed a demographic/genetic model for mosquito population dynamics that tracks overlapping generations and runs in discrete time steps of one day. It focuses on malaria transmission by its key vectors, Anopheles, although it can easily be modified to accommodate the bionomics of other species. The model incorporates the stage-structured mosquito life-cycle, i.e. eggs, larvae, pupae and adults. Modelling the adult stage allows mortality rates to differ between sexes (males do not blood-feed) and between the feeding and digesting/oviposition stages of the adult female. Density-dependent competition, and hence population regulation, is assumed to occur at the larval stage such that the emergence rate of new mosquitoes includes the non-linear impact of insecticides on reducing the population size. We integrated insecticide resistance into the model and allowed differential survival of mosquitoes depending on their genotypes (SS, SR and RR where S is the sensitive allele and R is the resistant), sex and the stage of the life-cycle (egg, larvae, pupae, adults). We then show how to interrogate this demography to calculate the R0 of the mosquito population; if vector R0 is less than 1 then the mosquito population will go extinct and disease transmission will cease. If extinction does occur we can then predict whether the presence (or importation) of resistance will be sufficient to re-establish the vector population, i.e. 
whether its R0 in the presence of resistance is greater than 1. We then used a Ross-Macdonald model to investigate situations where vector R0 > 1 to predict whether malaria transmission will continue despite control interventions reducing adult female population size and longevity and/or whether transmission will re-emerge once resistance is present in vector populations. The model is, therefore, designed to simultaneously answer a series of questions that arise naturally from control programmes: (i) What impact do insecticides have on the mosquito population: will it be driven to extinction and, if not, how will insecticide deployment affect mosquito numbers and adult female longevity? (ii) What impact will these changes in mosquito demography have on disease transmission: assuming the mosquito populations are not eliminated, will there still be ongoing transmission? (iii) How will different patterns of insecticide deployment select for resistance? (iv) How will the spread of insecticide resistance affect mosquito populations and compromise attempts to reduce disease transmission? We focus on malaria transmission, but Ross-Macdonald is a generic model for vector-borne disease transmission and, in principle, our methodology is equally applicable to other mosquito-borne diseases such as dengue. The anopheline mosquitoes that transmit malaria undergo complete metamorphosis through four distinct life-cycle stages: egg, larva, pupa and adult. Adult females feed on a vertebrate host and lay eggs in water bodies. Eggs hatch, within one or two days to a week or more, into larvae that breathe air through tubes, eating floating organic matter. Larvae moult four times until they become pupae. Pupae live near the surface of the water and do not eat, breathing through siphons on their back, and after a few days emerge as adults. The adult lives for a few days to several weeks [23]. The juvenile stages are similar in males and females, but the adult stages differ significantly in their behaviours as only females seek and feed on vertebrate hosts. A more detailed description of the life-cycle from a modelling perspective can be found in [24]. Note that because males do not bite and transmit infections they can be ignored in models that deal solely with transmission (e.g. [24, 25]) but they must be included here because they contribute half the genes to the next generation and their behaviour means adult males often inhabit largely insecticide-free "refugia" with corresponding low selection for resistance [19]. Figure 1 outlines the model structure designed to reflect this life-cycle, and its parameterisation. It was constructed as a discrete-time, stage-structured model using a system of difference equations. The inclusion of the stage-structure allows realistic modelling of the life-cycle and selection of resistance at appropriate points within that life-cycle. Population regulation was assumed to occur through larval competition. The model was implemented in R [26] and used discrete time steps of one day to capture the circadian nature of mosquito behaviour.
A schematic of our mosquito stage-structured model. The adult stage dynamics are considerably different in male and female mosquitoes primarily because male mosquitoes do not feed on vertebrate hosts and hence do not enter a host-seeking phase. Male adults are composed of newly emerged individuals plus the adult males that survived the previous day.
Female adults are grouped in three classes: (i) unfed individuals that are currently host-seeking (newly emerged individuals, individuals that did not find a host the previous day, and individuals that laid eggs the previous day and are starting a new gonotrophic cycle), Eq. 12; (ii) fed individuals, Eq. 13; and (iii) resting individuals, Eq. 14. The model tracks the three potential genotypes j ∈ (SS, RS, RR) of the individuals through their developmental stages. The total number of eggs laid by all females is Λ (Eqs. 15 to 17), of which (1 − φ)Λ are males and φΛ are females. We assume adult females mate once upon emergence, while males can mate multiple times. The θ parameters refer to the duration of each stage in days, and ρ to the proportion of individuals that survive per day in a given stage (e, eggs; l, larvae; p, pupae) We assumed resistance is encoded at a single gene with two alleles encoding resistance and sensitivity. We simultaneously ran this model in parallel for the three genotypes i.e. SS, SR and RR. This allows the genotypes to have different patterns of mortality depending on their level of insecticide resistance. Note that larval competition directly occurs between all three genotypes and that adults mate (at random) between the three genotypes. We assumed that males can mate multiple times but female mosquitoes mate once, immediately after emergence from pupae, and carry the sperm for the rest of their lives. We explicitly tracked the genotype of the sperm each female carries. A demographic/genetic model of mosquitoes under insecticide control We use two superscripts in the notations: the first to denote gender (f for females and m for males) and the second to denote the mosquito genotype j, where j is one of SS, RS, or RR. We append a third superscript, k, to adult female mosquitoes, where k is one of SS, SR or RR and denotes the genotype of the male mosquito that she mated with. We describe the model parameters, and their specific values, for the life-cycle in Table 1. Table 1 Parameters used in the mosquito demographic simulations Tracking the mosquito juvenile population Development through the juvenile life-cycle is tracked using the index i to represent days since the egg was laid (i = 1 denotes a newly laid egg): θe is the duration of the egg stage, θl is the duration of the larval stage and θp is the duration of the pupal stage (all measured in days). The total duration of the juvenile stages is therefore ζ where ζ = θe + θl + θp and we denote the female juvenile mosquito population of genotype j at time t as xfj(t) where \( {x}_i^{fj}(t) \) for 1 ≤ i ≤ θe denotes the number of female egg stages, of genotype j, of age i, at time t, \( {x}_i^{fj}(t) \) for (θe + 1) ≤ i ≤ (θe + θl) denotes the number of female larval stages of genotype j, of age i, at time t, \( {x}_i^{fj}(t) \) for (θe + θl + 1) ≤ i ≤ ζ denotes the number of female pupal stages of genotype j, of age i, at time t. The male juvenile population is described in an analogous manner with a superscript m instead of f. Note that the symbol \( {x}_i^{--}(t) \) denotes the number of juveniles at stage "i" at the end of day "t". The equations in this section therefore all function in the same way. They calculate the number of mosquitoes coming into stage "i" at the start of the current day [i.e. 
from the previous day and stage, \( {x}_{i-1}^{--}\left(t-1\right) \)], allowing for factors such as density dependence and mating, and then multiplying this number by the survival probability of that stage to obtain the number surviving at the end of that day, i.e. \( {x}_i^{--}(t) \). We describe the dynamics of the juvenile male and female mosquito populations of genotype j in Eqs. 1 to 8. After each iteration, mosquitoes are moved forward in chronological time (to t + 1) and in developmental time (to i + 1). The juvenile female mosquito population of genotype j at time t, xfj(t) was tracked by first determining the number of newly laid female eggs i.e. the first day of the egg stage, i = 1: $$ {x}_1^{fj}(t)={\Lambda}^j\left(t-1\right)\varphi {\rho}_e^{fj} $$ where Λj(t − 1) is the total number of eggs of genotype j laid at time t – 1 (see later discussion of Eqs. 15 to 17) and φ is the proportion of female eggs (always set to 0.5 here). The developing eggs after the first day were tracked using: $$ {x}_i^{fj}(t)={x}_{i-1}^{fj}\left(t-1\right){\rho}_e^{fj}\mathrm{for}\ 2\le i\le {\theta}_e $$ where eggs develop over θe days and progress is dependent on the daily egg survival probability, ρe. The larval stages were tracked as: $$ {x}_i^{fj}(t)={x}_{i-1}^{fj}\left(t-1\right)\left[\frac{1}{1+{c}_i^{fj}\ \frac{L\left(t-1\right)}{Z}}\right]{\rho}_l^{fj}\mathrm{for}\ \left({\theta}_e+1\right)\le i\le \left({\theta}_e+{\theta}_l\right) $$ where larval stages persist for θl days and progress is dependent on the daily larval survival probability ρl. In this model, density-dependent population regulation (DDPR) occurs in the larval stages of both sexes and is represented by the factor encoded in square brackets. This factor is described in more detail below in Eqs. 9 and 10. The pupal stages were tracked as: $$ {x}_i^{fj}(t)={x}_{i-1}^{fj}\left(t-1\right){\rho}_p^{fj}\mathrm{for}\ \left({\theta}_e+{\theta}_l+1\right)\le i\le \zeta $$ The juvenile male mosquito population of genotype j at time t, xmj(t) was similarly defined for number of male eggs, developing eggs, larval stages and pupal stages as $$ {x}_1^{mj}(t)={\Lambda}^j\left(t-1\right)\left(1-\varphi \right){\rho}_e^{mj} $$ $$ {x}_i^{mj}(t)={x}_{i-1}^{mj}\left(t-1\right){\rho}_e^{mj}\mathrm{for}\ 2\le i\le {\theta}_e $$ $$ {x}_i^{mj}(t)={x}_{i-1}^{mj}\left(t-1\right)\left[\frac{1}{1+{c}_i^{mj}\ \frac{L\left(t-1\right)}{Z}}\right]{\rho}_l^{mj}\mathrm{for}\ \left({\theta}_e+1\right)\le i\le \left({\theta}_e+{\theta}_l\right) $$ $$ {x}_i^{mj}(t)={x}_{i-1}^{mj}\left(t-1\right){\rho}_p^{mj}\mathrm{for}\ \left({\theta}_e+{\theta}_l+1\right)\le i\le \zeta $$ Implementing density-dependent population regulation (DDPR) The DDPR was incorporated into the larval populations in Eqs. 3 and 7 using the Leslie Gower population growth model, analogous to Beverton-Holt (B-H), which is a classic discrete time population growth model whose continuous-time equivalent is logistic growth towards a carrying capacity [27]. The B-H equation is: $$ {x}_{t+1}={x}_t{R}_0\left[\frac{1}{1+\frac{x_t}{Z}}\right] $$ where xt is the number of individuals at generation t, R0 is the per capita growth rate per generation and Z is a number that determines the carrying capacity of the population, K, as K = (R0 − 1)Z. We extend the B-H model in Eqs. 3 and 7 with a change in scale from individual animals (Z in Eq. 9) to amount of larval resources to account for competition between different genotypes (in this manuscript). The DDPR described within square brackets in Eqs. 
3 and 7 is analogous to that in Eq. 9 with this change of scale. The total amount of larval resources is user-defined as a constant Z in arbitrary, undefined units, which sets the carrying capacity of the population. L(t) is the current amount of larval resources being consumed at time t (see below) hence the ratio L(t)/Z in Eqs. 3 and 7 plays exactly the same role as x(t)/Z in Eq. 9; it is simply that Eq. 7 defines the approach to carrying capacity in units of resources while Eq. 9 defines it in units of population. The only remaining difference between Eqs. 7 and 9 is the extra term c in Eq. 7 that describes relative competitive ability of the genotypes, age and sex of the larvae. The competitive ability of larvae, \( {c}_i^{fj} \) and \( {c}_i^{mj} \), may differ depending on their genotype (for example, resistant forms may pay a fitness penalty for carrying the resistance mutation) and the resource consumption (denoted ωfj and ωmj, see below) of each genotype may vary (for example, resistant forms may be larger and consume more resources). Similarly, older larvae are likely to consume more food and may be more resilient to competition. The total larval consumption of resources by male and female larvae of all genotypes is obtained simply by summing over the sexes, genotypes, and stages, i.e. $$ L(t)=\sum \limits_{j\in \left\{ SS, RS, RR\right\}}\left(\sum \limits_{i={\theta}_e+1}^{\theta_e+{\theta}_l}{\omega}_i^{fj}{x}_i^{fj}(t)+{\omega}_i^{mj}{x}_i^{mj}(t)\right) $$ where ωi is the relative resource consumption of the larval sex/genotype, the latter being indicated by its superscript, j, of age i. Isolating the DDPR as a distinct factor in Eqs. 3 and 7 means it is simple to substitute other forms of DDPR if required (e.g. [24, 25]) or other functions such as the Ricker function [28]. Tracking the mosquito adult population The adult male population of genotype j at time t, ymj(t) is: $$ {y}^{mj}(t)=\left[{y}^{mj}\left(t-1\right)+{x}_{\zeta}^{mj}\left(t-1\right)\right]{\rho}_d^{mj} $$ which is the number of male adults that survived from the previous day (ymj(t − 1)) augmented by male adults that emerged from pupae \( \left({x}_{\zeta}^{mj}\left(t-1\right)\right) \), scaled by the probability that they survive the day (\( {\rho}_d^{mj} \)). Females mate once when they emerge and store the sperm to fertilise all their future egg production while males may mate multiple times (see for example [29]). Female anophelines need to blood-feed to produce eggs, so their behaviour differs significantly from those of males (who do not blood-feed). Fertilised females initiate their gonotrophic cycle that consists of 3 phases: (i) foraging for a host and blood-feeding; (ii) resting to allow digestion of the blood and egg maturation; and (iii) searching for a suitable oviposition site and oviposition (Fig. 1). This gonotrophic cycle is repeated throughout the female's remaining lifespan. The female adult population time t + 1 is described in Eqs. 12 to 14. Recall that adult female mosquitoes require a third superscript k (where k is one of SS, SR or RR) to denote the genotype of the male mosquito she mated with (which will be the paternal genotype for her subsequent egg production). 
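Before moving on to the adult female dynamics of Eqs. 12 to 14, the R sketch below illustrates how one day of the juvenile equations (Eqs. 1 to 4) can be advanced, including the density-dependent factor of Eqs. 3 and 10. It assumes a single genotype and age-independent competition, and every numerical value is an illustrative placeholder rather than a Table 1 default or part of the published implementation.

```r
## Illustrative one-day update of the female juvenile stages (Eqs. 1 to 4) for a
## single genotype, with the Leslie-Gower density-dependent factor of Eqs. 3 and 10.
## All values below are placeholders, not the defaults given in Table 1.

theta_e <- 2; theta_l <- 10; theta_p <- 2        # stage durations in days (assumed)
zeta    <- theta_e + theta_l + theta_p           # total juvenile duration
rho_e <- 0.90; rho_l <- 0.94; rho_p <- 0.55      # daily survival probabilities (assumed)
Z     <- 1e5                                     # total larval resources, arbitrary units
c_l   <- 1; omega <- 1                           # competition and consumption, age-independent here

step_juveniles <- function(x, new_eggs) {
  # x: vector of length zeta, x[i] = number of females of age i days at the end of the previous day
  # new_eggs: phi * Lambda(t - 1), the newly laid female eggs
  L <- sum(omega * x[(theta_e + 1):(theta_e + theta_l)])  # resources consumed, Eq. 10
  ddpr <- 1 / (1 + c_l * L / Z)                           # density-dependent factor
  x_new <- numeric(zeta)
  x_new[1] <- new_eggs * rho_e                            # Eq. 1
  for (i in 2:zeta) {
    if (i <= theta_e) {
      surv <- rho_e                                       # Eq. 2, developing eggs
    } else if (i <= theta_e + theta_l) {
      surv <- rho_l * ddpr                                # Eq. 3, larvae under competition
    } else {
      surv <- rho_p                                       # Eq. 4, pupae
    }
    x_new[i] <- x[i - 1] * surv
  }
  x_new
}

x <- numeric(zeta)                     # start from an empty juvenile population
x <- step_juveniles(x, new_eggs = 500) # one day with 500 newly laid female eggs
```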
The number of host-seeking unfed females in the current gonotrophic cycle is: $$ {y}_1^{fj k}(t)=\kern0.5em \left[{x}_{\zeta}^{fj}\left(t-1\right)\frac{\sigma^k\left({y}^{mk}\left(t-1\right)+{x}_{\zeta}^{mk}\left(t-1\right)\right){\rho}_d^{mk}}{\sum_{h\varepsilon \left\{ SS, RS, RR\right\}}{\sigma}^H\left({y}^{mk}\left(t-1\right)+{x}_{\zeta}^{mk}\left(t-1\right)\right){\rho}_d^{mk}}+\kern0.5em {y}_1^{fj k}\left(t-1\right)\left(1-{H}^j\right)\kern0.5em +\kern0.5em {y}_{\tau}^{fj k}\left(t-1\right)\right]{\rho}_s^{fi} $$ where the first term describes the number of newly emerged female adults f, of genotype j \( \left({x}_{\zeta}^{fj}\left(t-1\right)\right) \), that will mate with a male of genotype k \( i.e.\left(\ {y}^{mk}\left(t-1\right)+{x}_{\zeta}^{mk}\left(t-1\right)\right){\rho}_d^{mk} \), which has a mating viability σk (this term is normalised by dividing by the total adult male population weighted by their mating viability). The second term, \( {y}_1^{fjk}\left(t-1\right)\left(1-{H}^j\right) \), refers to other female mosquitoes still in the host-seeking state that were unfed adults the previous day and unsuccessful in finding a host on the previous day (H is the probability of successfully finding a host and feeding). The third term, \( {y}_{\tau}^{fjk}\left(t-1\right) \) represents females that successfully laid eggs, completing their gonotrophic cycle, and are now seeking a host in their new gonotrophic cycle. These terms are then scaled by the probability that they survive this day of host-seeking, i.e.\( {p}_s^{fj}. \) The number of female mosquitoes entering the second adult phase of the gonotrophic cycle (resting and fed the previous day) corresponds to individuals in y1 that survived and successfully fed (a proportion Hj) and is described as: $$ {y}_2^{fj k}(t)={y}_1^{fj k}\left(t-1\right){H}^j{p}_n^{fj} $$ The number of females in the remaining days of this "resting" phase of digestion of the blood and egg maturation, was found using: $$ {y}_i^{fj k}(t)={y}_{i-1}^{fj k}\left(t-1\right){\rho}_n^{fj}\mathrm{for}\ 3\le i\le \tau $$ if the duration of the resting stage is sufficiently long i.e. (τ ≥ 3). We assume, for simplicity, that the probability rested females successfully survive while finding an oviposition site and mating (the third phase of the female gonotrophic cycle) is the same as their daily probability of survival while resting, i.e. \( {\rho}_n^j \). This factor enters the equations describing egg laying, i.e. Eqs. 15 to 17 below. Tracking the spread of resistance The frequency of resistance is defined at the start of the simulations and is assumed to be equal in males and females, with genotypes in Hardy-Weinberg equilibrium [30]. Male and female genotypes are tracked separately in the simulations (Fig. 1) because their exposure to insecticides as adults will differ and hence genotype frequencies may differ between males and females in the adult, breeding population. Mating is assumed to occur at random and inheritance is by standard Mendelian genetics. We can therefore calculate the proportion of genotypes in the next generation according to the following three equations where βj is the number of eggs laid by genotype j. 
The number of homozygous susceptible eggs laid at time t is $$ {\Lambda}^{SS}(t)={\beta}^{SS}{\rho}_n^{SS}\left({y}_{\tau}^{fSSSS}(t)+\frac{1}{2}{y}_{\tau}^{fSSRS}(t)\right)+{\beta}^{RS}{\rho}_n^{RS}\left(\frac{1}{2}{y}_{\tau}^{fRSSS}(t)+\frac{1}{4}{y}_{\tau}^{fRSRS}(t)\right) $$ The number of heterozygous eggs laid at time t is $$ {\wedge}^{RS}(t)=\kern0.5em {\beta}^{SS}{\rho}_n^{SS}\left(\frac{1}{2}{y}_{\tau}^{fSSRS}(t)+\kern0.5em {y}_{\tau}^{fSSRR}(t)\right)+{\beta}^{RS}{\rho}_n^{RS}\left(\frac{1}{2}{y}_{\tau}^{fRSSS}(t)+\frac{1}{2}{y}_{\tau}^{fRSRS}(t)+\frac{1}{2}{y}_{\tau}^{fRSRR}(t)\right)+{\beta}^{RR}{\rho}_n^{RR}\left({y}_{\tau}^{fRRSS}(t)+\frac{1}{2}{y}_{\tau}^{fRRRS}(t)\right) $$ The number of homozygous resistant eggs laid at time t is $$ {\Lambda}^{RR}(t)={\beta}^{RS}{\rho}_n^{RS}\left(\frac{1}{4}{y}_{\tau}^{fRSRS}(t)+\frac{1}{2}{y}_{\tau}^{fRSRR}(t)\right)+{\beta}^{RR}{\rho}_n^{RR}\left(\frac{1}{2}{y}_{\tau}^{fRRRS}(t)+{y}_{\tau}^{fRRRR}(t)\right) $$ The ρ parameters in these equations represent the additional mortality associated with searching for oviposition sites; for simplicity, these were assumed to be equal to that of non-host-seeking. Immigration, emigration and mutation are absent but it would be straightforward to include these effects by altering genotype frequencies at the egg stage (mutations) or by altering the number and/or genotypes of adult stages to represent immigration/emigration. Estimating population basic reproductive rate (R0) for mosquitoes A natural question considered by control programmes is whether an intervention will eliminate the local mosquito population. It is possible to run the model described above to find if a population is viable, i.e. start the demographic simulation from extremely low mosquito numbers and find if they increase over the longer term and a stable age distribution has been reached. This is computationally expensive, especially if large-scale sensitivity analyses are being run, so an algebraic expression for R0 is desirable. The R0 for female mosquitoes, ignoring differences in genotypes, is $$ {R}_0=\frac{\varphi \beta {\rho}_e^{\theta_e}{\rho}_l^{\theta_l}{\rho}_p^{\theta_p}{\rho}_sH{\rho}_n^{\tau -1}}{1-{\rho}_s\left(1-H\right)-{\rho}_sH{\rho}_n^{\tau -1}} $$ This equation can be derived in two ways (using an "intuitive" approach and a rigorous mathematical approach); both yield the same result and are described in Additional files 1 and 2, respectively. Obviously if R0 < 1, the mosquitoes are locally extinct and no disease transmission will occur, i.e. the intervention has succeeded. Estimating population basic reproductive rate (R0) for malaria and human malaria prevalence Assuming a viable mosquito population remains despite the intervention (i.e. R0 > 1 for mosquitoes, see above), the next step is to predict whether this mosquito population is able to transmit malaria. The basic reproductive rate of malaria, R0m, using the approach attributed to Ross and Macdonald (R-M) [31] is as follows (although we note there are several variations of this basic equation [32]): $$ {R}_{0m}=\frac{m{a}^2{b}_1{b}_2}{gr}{\rho}_i=\frac{M}{N}\bullet \frac{a^2{b}_1{b}_2}{gr}{\rho}_i $$ m=M/N is the number of female mosquitoes per human host where N is the size of the human population and M is the size of the female adult mosquito population (i.e. Af(t) in our models, see later description of Eq. 
23); a is the rate of biting on humans by a single mosquito (number of bites per unit time); b1 is the probability of infection transmission from infectious mosquitoes to susceptible humans; b2 is the probability of infection transmission from infectious humans to susceptible mosquitoes; r is the per capita rate of recovery for humans (so 1/r is the average duration of infection in the human host); g is the per capita constant mortality rate for female mosquitoes (so 1/g is the average life time of a mosquito). This is usually obtained as − ln(p) where p is the daily survival rate (Box 2 of [32]); ρi is the probability of surviving the "extrinsic incubation period", i.e. the time period between the mosquito biting a malaria-infected human and that mosquito becoming infectious to other humans (i.e. the presence of sporozoites in her mouthparts). The R-M approach does not differentiate between females in different stages of their gonotrophic cycle whereas our approach explicitly defines different death rates according to the behaviour of the female on any given day (i.e. actively host-seeking or resting). We now illustrate how the R-M approach may be used with differential female survivorship in different states. We will assume, for convenience, locally intense transmission of malaria by An. gambiae, a vector that feeds almost exclusively on humans and bites approximately every 4 days (see Additional file 3). Assuming the female always completes her cycle in these 4 days and then proceeds to the next cycle, the approximate adult female daily survival probability is the geometric mean of daily survival rates during the cycle, so that the daily mortality rate is: $$ g=-\mathit{\ln}\left[\sqrt[4]{\rho_s^j{\left({\rho}_n^j\right)}^3}\right]=-\mathit{\ln}\left[{\rho}_s^j{\left({\rho}_n^j\right)}^3\right]/4 $$ The duration of the extrinsic incubation period depends on temperature but assuming ideal conditions, we will let it be ~10 days; we also assume that the adult always finds a human on the day she starts host-seeking. The mosquitoes will have fed at the start of the extrinsic incubation period so the extrinsic incubation period will consist of 3 days resting, another day host-seeking, 3 days resting, another day feeding and 3 days resting until she is ready to feed again and transmit the infection, i.e. 9 days resting and 2 days host-seeking, so we can estimate the probability of surviving the extrinsic incubation period as: $$ {\rho}_i={\left({\rho}_n^j\right)}^9{\left({\rho}_s^j\right)}^2 $$ These calculations were, as mentioned above, based on ideal situations for mosquitoes [i.e. they always find hosts (so H = 1), extrinsic incubation only lasts 10 days, and so on]. Mosquitoes can be age-dated by parity in the field so we can revise Eq. 21 to obtain the probability a mosquito survives 3 or more feeding/parity cycles, each cycle of 3 days resting and one feeding, as \( {\left({0.96}^3\times 0.71\right)}^3=0.25 \). This is rather high but not unrealistic. Gilles & Wiles [33] found 20% and 23% of An. gambiae and An. funestus, respectively, were "3-parous and older" in Muheza, Tanzania; these data came from 1965 when there was much less insecticide being deployed in public health. Background mortality rates will be higher in contemporary settings with widespread insecticide deployment although incorporating this background exposure greatly complicates extraction of basal mortality rates (see [34] for a recent example).
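The arithmetic in this paragraph is easy to reproduce; the short R sketch below evaluates Eqs. 20 and 21 and the three-parity survival quoted above, assuming the daily survival values implied by that worked example (0.96 while resting, 0.71 while host-seeking), which are not necessarily the Table 1 defaults.

```r
## Check of Eqs. 20 and 21 and the parity calculation, using the survival values
## implied by the worked example in the text (assumed, not the Table 1 defaults).

rho_n <- 0.96   # daily survival while resting
rho_s <- 0.71   # daily survival while host-seeking

# Eq. 20: daily mortality rate over a 4-day gonotrophic cycle (1 day seeking, 3 resting)
g <- -log(rho_s * rho_n^3) / 4          # approximately 0.116 per day

# Eq. 21: probability of surviving the extrinsic incubation period
# (9 days resting plus 2 days host-seeking, as argued above)
rho_i <- rho_n^9 * rho_s^2              # approximately 0.35

# Probability of surviving 3 or more complete feeding/parity cycles
p_3parous <- (rho_n^3 * rho_s)^3        # approximately 0.25, as quoted in the text
```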
More realistic calculations may be used to incorporate 'non-perfect' conditions in the mosquito populations, for example wide-scale ITN coverage may mean mosquitoes take two or even three days of host-searching to obtain a blood meal. Equations 20 and 21 can be updated to reflect these new combinations of days spent searching and resting. In reality, a range of different combinations will occur in the mosquito population and the solutions to the equations will be a type of weighted mean across the combinations [35]. We omitted these complications in the interests of simplicity, because the calculations only serve as illustrative target reductions for interventions (see later) and to avoid duplication of previous work [35]. This approach does enable us to obtain the parameters required to calculate M′, the target number of adult mosquitoes that results in R0m < 1 and hence elimination of malaria as $$ {M}^{\prime }<N\frac{rg}{a^2{b}_1{b}_2{\rho}_i} $$ These equations do not distinguish between the genotypes with differing levels of insecticide resistance (which results in the different genotypes having different survival probabilities). It is straightforward to incorporate resistant genotypes by regarding them as equivalent to different "species"; since malaria is often spread by more than one vector species, methods for calculating R0 in the presence of several species are well worked out [36, 37]. The value of R0m allows the equilibrium prevalence, \( \widehat{P} \), of malaria in humans to be calculated algebraically. Anderson & May [31], for example, calculated it as \( \widehat{P}=\left({R}_0-1\right)/\left({R}_0+a{b}_2/g\right) \). The problem with R-M applied to malaria is that it does not allow for super-infection or for acquired immunity so prevalences obtained algebraically should be interpreted with caution. An alternative, and probably more robust approach, is to obtain R0m as described above and then obtain malaria prevalence using empirical relationships estimated from field surveys; for example, using figure 2 of [38] to convert R0 to entomological inoculation rate (EIR) and then figure 1 of [39] to convert EIR to prevalence.
Partial rank correlation coefficients (PRCC) between equilibrium adult female population size and selected model parameters. The parameter symbols on the y-axes are defined in Table 1 and the horizontal error bars delimit the 95% confidence intervals. In order of importance: ω is resource consumption of larvae, ρl is larval survival (per day), Z is total larval resources, c is the impact of larval competition, ρp is pupal survival (per day), ρn is adult female survival (per day) when resting, ρs is adult female survival (per day) when host-seeking, ρe is egg survival (per day), H is the proportion of adult females that successfully find a host (per day), and ρd is adult male survival (per day)
Simulating mosquito populations
When tracking the spread and impact of resistance in the simulation, the starting frequency of resistance was assumed to be 0.5 in all cases. The initial frequencies will be much lower at the start of most real-life interventions so these simulations show the last stages of resistance spread following interventions. These intermediate frequencies of alleles reduce the impact of stochastic frequency changes, allowing better estimation of selection coefficients; these coefficients are key summary measures in population genetic theory allowing results to be generalised.
For example, selection coefficients determine the rate of geographical migration of resistance, and the chance of resistance alleles first emerging in the populations (e.g. [40]). The genotypes were introduced in Hardy-Weinberg equilibrium, i.e. 25, 50 and 25% of the SS, SR and RR genotypes, respectively. We calibrate the models for an area of high malaria transmission and use a value of 110 for mosquito density (i.e. number of adult female mosquitoes per human host; m in the Ross-Macdonald model) as justified in Additional file 3. A useful starting point for identifying high-impact interventions is to use the calculations developed above to identify those parameters, such as larval survival probability, in which small changes may have a disproportionately large impact on adult female population size. This enables us to identify key parameters which are prime candidates to be targeted by insecticides. The demographic/genetic model described above was run to equilibrium adult population size using 3000 randomly generated combinations of parameter values drawn from the parameter space described in Table 1 with no genetic differences in resistance levels. The output allowed us to perform a sensitivity analysis of the influence of the parameters on female population size. The total number of adult females at time t, Af(t), is the sum of the number of mosquitoes in each day of the feeding cycle as described by Eqs. 12 to 14, i.e. $$ {A}_f(t)=\sum \limits_{i=1}^{\tau}\kern0.5em \sum \limits_{j\in \left\{ SS, SR, RR\right\}}\sum \limits_{k\in \left\{ SS, SR, RR\right\}}{y}_i^{fjk}(t) $$ Mann-Whitney and t-tests were used to compare the mean parameter values that generated a viable mosquito population with those that led to extinction. Partial rank correlation coefficients (PRCC) were then calculated as a sensitivity analysis of the model using only the simulations that generated viable populations. The PRCC were only calculated between parameters expected to be affected by vector control measures (i.e. ρe, ρl, ρp, ρs, ρn, c, ω, Z, ρd and H, assuming no differences between males or females in parameter values in the non-adult stages); the magnitude of the absolute PRCC values can be used to rank the relative importance of the 10 input parameters.
The impact of insecticide deployment and threat posed by resistance
Our main goal with the development of this model was to address operational issues of insecticide deployment and how it both drives, and is compromised by, resistance. We focus on exploring the generic issues concerning the application of insecticides rather than attempting to parameterise a particular setting because there are limited data on many of the key parameters, particularly for differential survival of the different sensitive/resistant genotypes. The combination of default parameters given in Table 1 resulted in a viable population in the absence of insecticide deployment; we then investigated the likely impact of insecticide deployment (and resistance) by changing the values of parameters that are likely to be affected by the intervention. In particular, we ran simulations that mimic larvicides, ITNs and IRS and their impact on the mosquito population. The total adult female mosquito population at equilibrium (Eq. 23) for the default parameters in Table 1 is Af = 135,878. The equivalent number of humans for this default setting was then obtained as N = 1235 using the value of m = 110 (Additional file 3).
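To make the next step concrete, the R sketch below evaluates the threshold M′ of Eq. 22 for this default setting. Af and m are taken from the text; the values of a, b1, b2 and r are illustrative placeholders rather than the Table 2 calibration, and g and ρi follow Eqs. 20 and 21 with the survival values used in the earlier worked example.

```r
## Illustrative evaluation of the transmission-interruption threshold of Eq. 22.
## Af and m are taken from the text; a, b1, b2 and r are assumed placeholder values,
## not the Table 2 calibration.

Af <- 135878                 # equilibrium adult female population (default parameters)
m  <- 110                    # adult female mosquitoes per human host
N  <- round(Af / m)          # ~1235 humans, as in the text

a  <- 1 / 4                  # bites on humans per mosquito per day (one feed every ~4 days)
b1 <- 0.5                    # mosquito-to-human transmission probability (assumed)
b2 <- 0.5                    # human-to-mosquito transmission probability (assumed)
r  <- 1 / 100                # human recovery rate per day (assumed)
g     <- -log(0.71 * 0.96^3) / 4   # Eq. 20, with the survival values used above
rho_i <- 0.96^9 * 0.71^2           # Eq. 21

# Eq. 22: the adult female population below which R0m < 1
M_prime <- N * r * g / (a^2 * b1 * b2 * rho_i)

# An intervention is judged to interrupt transmission in this framework when the
# post-intervention female population falls below M_prime.
```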
It is now possible to use this value of N together with the numbers of adult female mosquitoes when under control measures to obtain M′ using Eq. 22 and hence to predict whether disease transmission is possible. We then investigated how the emergence and spread of resistance would impact insecticide-based interventions and, in particular, whether the spread of resistance would allow mosquito populations to recover to the extent that malaria transmission would restart, i.e. R0m > 1. We present the worst-case scenarios in terms of spread of resistance, because we assume resistance to be completely dominant, i.e. we assume the survival probabilities of the heterozygote and homozygote resistant genotypes to be equal. When considering the interventions below we assume that those targeting the non-adult stages have the same impact on both males and females as there is little, if any, sexual difference in exposure in these stages, e.g. an intervention reducing the female larvae daily survival probability by 50% would also reduce male larval survival probability by 50%. Interventions targeting adults are assumed to only affect females. This avoids having to define a differential impact on the two sexes that will almost certainly arise due to behavioural differences; for example, IRS may reduce adult female resting survival by 80% but adult male survival by only 5%. Defining this differential impact on adults is also unnecessary because male adult survival has no impact on overall population size (because all females are assumed to mate successfully irrespective of male population size). Ignoring the potential impact of IRS and ITN on male mortality slows the rate at which resistance spreads (because there is no selective pressure on males by IRS or ITN) but we regard this as a reasonable simplification that could be relaxed later. Note that we do include male mortality at the larvae stages because they contribute to DDPR; their deaths lessen competition at this stage and help reduce the impact of female larval mortality on the eventual adult population size. Single-insecticide interventions We initially simulate the dynamics of the mosquito population under reduced survival imposed by the use of insecticides that target single stages of the life-history and without the emergence of resistance. We use the default values given on Table 1 as the baseline values that lead to a viable population. We assume that ITNs act by decreasing the survival probability of female adults while host-seeking, ρs; IRS reduces female adult survival while resting, ρn; larvicides reduce the survival probabilities of larvae, ρl; and a pupacide that kills only pupae, ρp (we are unaware of any agents that do this but include this hypothetical example for methodological completeness). Henceforth we will be using the intervention name and the parameter that we assume it affects interchangeably. We reduced each survival probability by 10, 30, 40 and 80% of the original value to explore the impact of different degrees of intervention effectiveness. We tracked the number of adult female mosquitoes post-intervention and the intervention was considered to be successful if the number of females was reduced below a threshold value obtained using Eq. 22, below which malaria transmission would be theoretically interrupted. Combined-insecticide interventions Interventions often use combinations of insecticides that target two or more stages of the mosquito life-history. 
The impact of these interventions was investigated, as for single interventions, by reducing the survival probabilities of the affected life-stages by 10%, 30%, 40% and 80% of the original value and tracking the number of adult female mosquitoes. We investigated three specific interventions as listed below. They are designed to illustrate our approach rather than to simulate specific, well-calibrated examples (see later discussion around calibration). The interventions are as follows: (i) ITNs and IRS: these interventions reduce the survival probabilities of host-seeking adult females (ITN: ρs) and resting females (IRS: ρn); (ii) larviciding and IRS: larviciding (for example with temephos) is assumed to affect both larvae and pupae (ρl and ρp) while IRS, as above, reduces the survival of non-host-seeking adult females (ρn); and (iii) larviciding and ITNs: as above, larviciding is assumed to reduce ρl and ρp while ITN reduces the survival of host-seeking adult females, ρs. The combination of interventions was considered successful if the number of females was reduced below the critical threshold value below which malaria transmission is theoretically interrupted. We ran the model using 3000 randomly generated combinations of parameters from the distributions described in Table 1 which resulted in viable mosquito populations at equilibrium in 103 (3.4%) of these runs. Statistical analysis (two-tailed t-tests and Mann-Whitney U-tests) on the parameters used in the sensitivity analysis (Table 1) showed that the following parameters were highly significantly (P < 0.0001 in both tests after correcting for multiple testing using the BH method) higher in simulations that resulted in viable populations compared to those that went extinct: the daily survival probabilities of the immature stages (i.e. eggs, larvae and pupae), of females seeking a host, and of females resting (ρe, ρl, ρp, ρs, ρn respectively). In contrast, parameters describing the effect of larval competition (c), relative resource consumption (ω), resource availability (Z), the daily probability that a female successfully finds a host and feeds (H), and the daily survival probability of adult males (ρd) were not statistically different (P > 0.05). Among the non-significant parameters, the first three are associated with the larval competition that is absent when populations are at very low densities and are therefore expected to have no effect on determining whether a population is viable or not (although they will, of course, affect the equilibrium size of viable populations). The fourth factor, H, is non-significant, presumably because a female that fails to find a host one day can survive and successfully feed the next. Finally, ρd is daily adult male survivorship which again is not expected to affect whether a population is viable because we assume females always find a male and that males can mate multiple times. These t-tests and Mann-Whitney tests reveal whether a factor has an impact on whether a mosquito population is viable, but it is the PRCC analyses that reveal the parameters with the largest impact in determining the size of the adult female population. Density-dependent population regulation in our models is assumed to occur by larval competition so it is not surprising that those factors with the largest impact on adult population size were those controlling the intensity of larval competition, i.e.
the total larval resources available (Z), the relative resource consumptions of larvae (ω), the impact of larval competition (c), and daily larval survivorship (ρl), see Fig. 2. Note that the PRCC results on final population size are consistent with the t-test and Mann-Whitney results described in the previous paragraph on whether a population is viable, i.e. the daily survival probabilities are all highly significant (with the exception of male survival) while the probability that a female successfully finds a host and feeds (H) and male survival are non-significant. The only difference is in factors associated with resource availability (ω, Z and c) which affect final population size but, for reasons described above, have no impact on whether or not a population is viable.
The impact of insecticide deployment prior to the emergence of resistance
An equilibrium adult female population size (Af(t)) of 135,878 was reached using the default parameterization given in Table 1; the ability of the controlled population to transmit disease can be investigated using a Ross-Macdonald model calibrated as described in Table 2. This equilibrium population size of adult females served as the baseline adult female population size in the absence of intervention in all simulations/scenarios (i.e. Figs. 3, 4, 5 and 6). Table 2 Parameters used in the Ross-Macdonald transmission calculations
Simulations of the impact of insecticidal interventions on the female adult population size. Interventions were simulated by decreasing the survival that would plausibly occur at five different stages: larval (ρl), pupal (ρp), adult females host-seeking (ρs), adult females resting (ρn) and adult males (ρd). The legend shows the percentage of decreased survival imposed on each parameter, i.e. 0 (the initial group), -10, -30, -40 and -80%. Abbreviations: IRS, indoor residual spraying; ITN, insecticide-treated net
Simulations of the impact of combined insecticidal interventions on the female adult population size. Interventions were simulated by decreasing the survival that would plausibly occur in three combined interventions. The legend on each panel shows the percentage of decreased survival imposed on each parameter by the intervention. IRS combined with ITNs reduces female survival while resting (ρn) and host-seeking (ρs). Larviciding combined with IRS reduces survival of both sexes in the larvae and pupal stages (ρl and ρp) and in adult females resting stages (ρn). Larviciding combined with ITNs reduces survival of both sexes in the larvae and pupal stages (ρl and ρp) and adult females while host-seeking (ρs). Abbreviations: IRS, indoor residual spraying; ITN, insecticide-treated net
The potential impact of resistance on single-insecticide control interventions. The legend on each panel gives the percentage of decreased survival caused by the intervention for each genotype. The blue line shows the resistant allele frequency over time and the black line shows the number of female adult mosquitoes. The rapid decline in adult population size post-intervention shows that the magnitude of the resistance phenotype was not sufficient to prevent a population crash (although the smaller, resistant populations were sufficient to allow malaria transmission; see Table 5 and main text for details). Resistance was assumed to be dominant, interventions started with a resistance allele frequency of 50% and the three genotypes in Hardy-Weinberg equilibrium.
Abbreviations: IRS, indoor residual spraying; ITN, insecticide-treated net; SS, the homozygous sensitive genotype; SR, the heterozygous sensitive/resistant genotype; RR, the homozygous resistant genotype
The potential impact of insecticide resistance on combined-insecticide control interventions. The legend in each panel shows the percentage of decreased survival imposed by the intervention on each parameter for each of the three genotypes. The blue line shows the resistant allele frequency over time and the black line shows the total number of female adult mosquitoes. As in Fig. 5, the magnitude of the resistance phenotype was not sufficient to prevent a population crash although the smaller, resistant, populations may be sufficiently large to allow malaria transmission (see Table 6 and main text for details). Resistance was assumed to be dominant, interventions started with a resistance allele frequency of 50% and the three genotypes in Hardy-Weinberg equilibrium. Abbreviations: IRS, indoor residual spraying; ITN, insecticide-treated net; SS, the homozygous sensitive genotype; SR, the heterozygous sensitive/resistant genotype; RR, the homozygous resistant genotype
We investigated the likely impact of single-insecticide interventions by assuming insecticide deployment decreases survival probabilities in various parts of the mosquitoes' life-cycle by 10, 30, 40 or 80% (Table 3). Whether these impacts are sufficient to interrupt malaria transmission can be investigated using Eq. 22 with the calibration developed above (as summarised in Tables 1 and 2) to identify the threshold density of mosquitoes below which malaria transmission cannot be sustained. The equilibrium number of females present after the intervention is given in Table 3. A 30% reduction of the larval daily survival (0.94 to 0.66) resulted in extinction of the mosquito population, and hence interruption of malaria transmission. The pupal survival probability, ρp, would have to be lowered by 40% (0.55 to 0.33) to drive the population to extinction. Note that we modelled pupae independently from larvae, because pupae do not feed and therefore are believed not to incur density-dependent regulation, but in practice both stages share the same physical space and interventions such as larviciding may affect both stages. This scenario of decreasing survival of only the pupal stage is, therefore, very unlikely to be used in the field. However, it serves to show that theoretically we would have to reduce pupal daily survival more than larval survival to achieve the same level of reduction in the adult female population, the underlying reason being that the pupal stage is shorter so daily survival must be much lower to achieve comparable overall killing to the longer larval stage. Targeting adult females only in the non-feeding, resting stage, ρn, requires a more modest decrease of 30% (0.96 to 0.67) to generate near-extinction of the mosquito population with consequent cessation of malaria transmission. ITNs target adult females seeking a host for a blood meal and are one of the most widespread malaria interventions; our results suggest it would be necessary to decrease survival during the seeking stage, ρs, by around 40% (0.96 to 0.58) to eliminate the mosquito population. Table 3 The impact of insecticide-based interventions that target one stage of the life-cycle
Figure 3 illustrates the dynamics of these interventions summarised in Table 3, taking the pre-intervention population of 135,878 as its starting point.
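As an illustration of how such proportional reductions in survival can be screened against the viability criterion, the sketch below codes the mosquito R0 of Eq. 18 and applies a 30% reduction to daily larval survival. The stage durations and survival values follow numbers quoted in the text where available; the remaining values (β, ρe, θe, θp, H) are assumed for illustration and are not the Table 1 defaults behind Table 3.

```r
## Mosquito R0 of Eq. 18, used to ask whether a proportional cut in one survival
## probability renders the population inviable (R0 < 1). Several values below
## (beta, rho_e, theta_e, theta_p, H) are assumptions; the rest follow figures
## quoted in the text.

mosquito_R0 <- function(phi, beta, rho_e, theta_e, rho_l, theta_l,
                        rho_p, theta_p, rho_s, H, rho_n, tau) {
  num <- phi * beta * rho_e^theta_e * rho_l^theta_l * rho_p^theta_p *
         rho_s * H * rho_n^(tau - 1)
  den <- 1 - rho_s * (1 - H) - rho_s * H * rho_n^(tau - 1)
  num / den
}

base <- list(phi = 0.5, beta = 60, rho_e = 0.90, theta_e = 2,
             rho_l = 0.94, theta_l = 10, rho_p = 0.55, theta_p = 2,
             rho_s = 0.96, H = 0.7, rho_n = 0.96, tau = 4)

larvicide <- base
larvicide$rho_l <- base$rho_l * (1 - 0.30)   # 30% cut in daily larval survival

do.call(mosquito_R0, base)        # pre-intervention R0 (> 1: viable population)
do.call(mosquito_R0, larvicide)   # post-intervention R0 (< 1 implies local extinction)
```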
The bottom panel of Fig. 3 shows the impact of reducing adult male survival. As expected, it is not possible to decrease the female adult population by targeting the male population alone because our model assumes males can mate multiple times and so changes in male number caused by reduced ρd have no impact on the size of the next generation unless they are so large as to eliminate all males. We use a similar approach to investigate the impact of combined-insecticide interventions by assuming the insecticides reduced life-cycle survivals by 10 or 30%. The results are summarised in Table 4. The hypothetical example of combining IRS and ITNs is sufficient to drive the mosquito population to extinction or to a very small size that is well below the threshold for interruption of malaria transmission assuming a decrease in survival of 10% in the non-host-seeking females (ρn = 0.87) and 30% in the host-seeking females (ρs = 0.5), or vice versa (Table 4). Combining larviciding with either IRS or ITN suggests that small reductions (10%) in both parameters are sufficient to render the mosquito population inviable and to interrupt malaria transmission (Table 4). Table 4 The impact of insecticide-based interventions that target two stages of the life-cycle The dynamics of the interventions shown in Figs. 3 and 4 suggest that interventions of this magnitude may have a rapid effect acting on a timescale of weeks. Note, however, that our simulations assumed instantaneous deployment of the insecticide-based interventions and so illustrate its fastest possible impact on the local mosquito population. In reality, an intervention may take days, weeks or even months to deploy and in this case the reduction in population size will be much slower. Importantly, the final equilibrium population size will not be affected by how rapidly the intervention is deployed and the proportionate reduction in population size can be obtained from Table 3 noting that the original population size was 135,878 (so, for example, Table 3 shows that if larviciding decreases larval survival by 10%, this will reduce the population size to 10,711 which is a 92% reduction in population size). The impact of resistance on insecticide-based interventions The simulations shown in Figs. 3 and 4 assumed only a single genotype was present, i.e. the homozygous sensitive, SS, genotype. We introduced resistance SR and RR genotypes and re-ran these simulations to illustrate the potential impact of resistance on insecticide-based control programmes. The resistant allele, R, was assumed to be present at a frequency of 50% and was assumed to be dominant. It is important to note, given our simplifying assumption that no fitness cost is associated with resistance, that if resistance spreads from a starting frequency of 50% it will spread from any starting frequency, including very low ones. Consequently, our results and conclusions are unaffected by choice of initial resistance frequency. The reason we chose a starting frequency of 50% was to emphasise how rapidly resistance spreads and potentially undermines control, once it reaches detectable frequencies (if we start with lower initial frequencies, or recessive gene action, then there is a long period before resistance reaches significant frequencies). We start with the equilibrium population size that was obtained under the default parameters (i.e. 
135,878 adult females) then impose interventions that have illustrative, differential effects on the sensitive and resistant genotypes (as defined in the panels of Figs. 5 and 6). Examples of hypothetical single-insecticide interventions are shown in Fig. 5. In all cases, resistance spread rapidly during the intervention. However, the magnitude of the resistance phenotype was insufficient to prevent the mosquito population from undergoing rapid, large and sustained reductions post-intervention. Despite this apparent success, Table 5 suggests this "crashed" population was sufficiently large that malaria transmission would be maintained. The adult population sizes in the absence of resistance (i.e. if only SS genotypes were present) would be zero in each example (second column of Table 5), but the presence of resistance may allow a viable mosquito population to be maintained once resistance has been fixed (fourth column of Table 5) that is sufficiently large so that malaria transmission is possible. Table 5 The impact of resistance on control interventions that target a single stage of the life-cycle. This is quantified by adult mosquito population size
The analogous example of combined-insecticide interventions in the presence of resistance is shown in Fig. 6 and summarised in Table 6. The same basic dynamics occurred as for single-insecticide interventions, i.e. a rapid increase in resistance and an immediate fall in the adult female population. The impact of the latter was more heterogeneous. All interventions would have reduced mosquito populations to negligible sizes and blocked transmission (column 2 of Table 6). However, the spread of resistance allowed mosquito population sizes to recover sufficiently that disease transmission would re-start in 2 of the 6 scenarios (columns 4 to 6 of Table 6). Table 6 The impact of resistance on control interventions that target two or more stages of the life-cycle
Insecticides are used in many contexts to reduce insect-borne disease transmission. We have combined mosquito demographics, genetics and malaria epidemiology to provide a methodology to simultaneously investigate the impacts of insecticide deployments in reducing or preventing the transmission of infections and the threat posed by resistance. To our knowledge, this synthesis has not been attempted prior to this study although many previous studies have addressed individual aspects of these requirements (space precludes a detailed discussion of this previous work but access to the modelling literature can be obtained, for example, from [16, 19, 21, 24, 25, 35, 41] and recent reviews such as [42]). We have taken a standard demographic model and added three novel factors: the ability to track the spread of insecticide resistance within the mosquito demography; an equation for the R0 of mosquitoes that predicts whether interventions will drive local mosquito populations to extinction; and a Ross-Macdonald equation for the R0 of malaria, derived from the parameters of insect demography, that indicates whether the reduction in mosquito numbers and/or their longevity is likely to be sufficient to interrupt malaria transmission. In summary, we have described a transparent methodology that allows researchers to investigate specific scenarios, while being sufficiently flexible that the genetic component can also be used to investigate other systems such as sex-linked resistance and genetic control mechanisms [18].
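As a small illustration of the genetic component summarised above, the resistance-allele frequency (and a crude per-generation selection coefficient of the kind returned to below) can be read directly from the tracked genotype counts. The counts in this R sketch are invented for illustration, and the estimator shown is a standard genic-selection approximation rather than a procedure taken from this study.

```r
## Resistance-allele frequency from tracked genotype counts (SS, RS, RR), and a
## crude per-generation selection coefficient from the change in allele ratio.
## Counts are invented for illustration only.

allele_freq_R <- function(n_SS, n_RS, n_RR) {
  (2 * n_RR + n_RS) / (2 * (n_SS + n_RS + n_RR))
}

p0 <- allele_freq_R(n_SS = 2500, n_RS = 5000, n_RR = 2500)   # 0.50 at the Hardy-Weinberg start
p1 <- allele_freq_R(n_SS = 1600, n_RS = 4800, n_RR = 3600)   # 0.60 one hypothetical generation later

# Genic-selection approximation: p1/q1 = (p0/q0) * (1 + s), hence
s_hat <- (p1 / (1 - p1)) / (p0 / (1 - p0)) - 1               # 0.5 with these invented counts
```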
We developed a basal model as proof-of-principle which is sufficiently flexible to allow alternative control strategies to be incorporated and evaluated. One such example is the proposal to target male mating swarms to reduce mosquito population size [43]. We assumed above that males could effectively inseminate an infinite number of females; hence, the number of males made no difference to the population size (and hence male survival was immaterial; lower panel of Fig. 3). This is a simplifying assumption, often made in ecology/demography, that recognises that female number is the usual determinant of population size. We could relax this assumption. For example, if males are believed to be unable to inseminate more than ten females per night, we could restrict the number of mated females per night to less than ten times the adult male population size. Similarly, for species where mating occurs in a male swarm (such as An. gambiae), if the male population size is reduced to the extent that females find it difficult to locate a swarm then the female mating probability can be reduced. We ignored these potential complications in this manuscript to focus on the basic genetics and demography but note that they can be included in modelling directed at more specific intervention scenarios. Similarly, we assume a single genetic locus encodes resistance but the methodology could be extended, albeit with a substantial increase in complexity, to include two genetic loci which would allow users to investigate the impact of joint-insecticide strategies such as the use of mixtures (e.g. [18, 20, 44]). This assumption that one gene encodes resistance to an insecticide has been commonly made throughout the literature (e.g. [45,46,47,48,49] and subsequent work). The results presented herein are also valid if resistance is coded by polygenes (i.e. resistance level is modulated by a large number of genes, each with a very small effect). For example, Tables 5 and 6 and Figs. 5 and 6 show the impact of a reduction in mortality on mosquito population size and disease transmission caused by IR. The genetic basis of the degree of IR is immaterial for this impact, e.g. a reduction in larval mortality by 10% has the same impact irrespective of whether its genetic basis is a single gene or many genes. The dynamics of spread will be very different between single- and poly-genetic resistance [50] but the impact of resistance on control can be investigated in the same way. A final, strategic application is to simulate interventions, quantify how rapidly resistance spreads, and use these dynamics to extract the selective advantage of resistance which is a key input parameter for calibrating genetic models of IR evolution. Operationally, ITNs and IRS may have two additional effects not captured in our model: repelling and possibly diverting mosquitoes to alternative hosts due to insecticide irritation (e.g. [19]) and/or the physical barrier of the net, and lengthening the duration of the gonotrophic cycle leading to a reduced oviposition rate [24]. The methodology can also incorporate behavioural changes that may evolve in response to insecticide resistance [51,52,53,54], for example, a reduced tendency to rest indoors after feeding, which will lower mortality rates during the female gonotrophic cycle.
These factors can be brought into insecticide resistance modelling but we have ignored these possible effects for simplicity; in particular, a formal sensitivity analysis (discussed later) would reveal the extent to which behavioural changes may affect the evolution of resistance and its impact on disease transmission.

The Ross-Macdonald (R-M) approach is the easiest algebraic method of predicting whether disease transmission will cease (see [32] for an extensive review of R-M). It is also flexible: for example, we assume a female always finds a mate on the first day of emergence, and that the adult female feeding cycle is quantified as in Eq. 21, but heterogeneity in such factors can be regarded as occurring in different mosquito 'species' and the overall R0 calculated from the relative frequencies of these different 'species'. The two main criticisms of R-M, that it does not allow super-infection or acquired human immunity, do not apply in our usage because cessation of malaria transmission at R0 < 1 implies no infections and hence no super-infection, and no acquired immunity. The drawback of R-M is that it generates a simple yes/no prediction of whether the mosquito population has the capacity to sustain malaria transmission, but it is not a robust method to quantitatively predict the intensity of malaria transmission nor its epidemiological impact; the latter depends on factors such as malaria super-infection in humans, levels of human acquired immunity, malaria importation rates and so on [32]. The methodology developed here is focused on mosquito demography and, if malaria transmission is identified as being viable, then these details of mosquito demography need to be passed to more sophisticated, individual-based simulation models of malaria transmission that do incorporate the human elements of malaria epidemiology (e.g. [55, 56]) to simulate the impact on human populations.

The results of the interventions targeting single stages of the mosquitoes' life-cycle (using the PRCC values in Fig. 2 and examples in Fig. 3) indicate that the most effective method of controlling the mosquito population, all other factors being equal, would be to target the larval and the adult resting stages. These results reflect the belief that larval survival has a great impact on adult population density although, as pointed out by White et al. [24], targeting larvae does not kill adult mosquitoes that are potentially infectious, so it may have a smaller impact on disease transmission (i.e. the female adult death rate is not affected). Alternatively, it may be better to target the host-seeking female mosquitoes to reduce disease transmission; this may make little difference to overall mosquito population size, but their reduced longevity makes a substantial difference to malaria transmission. The strategy with the most impact will also depend on individual species demography and local environmental conditions. In our parameterisation, larvae were a good intervention target because they spend ten days in this stage, so mortality at this stage operates over ten days. Conversely, we assume a ten-day extrinsic incubation period (EIP), so mortality in resting females operates over nine days (Eq. 21). However, if temperature falls such that the EIP increases to 20 days, then female mortality while resting will operate over 18 days and this stage may become a far more effective point of control. In reality, "all other factors" are not equal as there are operational and financial differences associated with each strategy. 
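The arithmetic behind this point is that a constant per-day mortality compounds over the number of days spent in a stage, so lengthening the exposure window (for example through a longer EIP) makes that stage a more attractive control point. The sketch below uses purely illustrative daily survival values, not the model's parameterisation.

```python
def stage_survival(daily_survival, days_exposed):
    """Probability of surviving a stage when a constant per-day survival
    probability applies over `days_exposed` days."""
    return daily_survival ** days_exposed

# Illustrative per-day survival under an intervention (not the paper's values).
daily = 0.90
scenarios = [
    ("larval stage, 10 days of exposure", 10),
    ("resting females, EIP = 10 days (mortality over 9 days)", 9),
    ("resting females, EIP = 20 days (mortality over 18 days)", 18),
]
for label, days in scenarios:
    print(f"{label}: stage survival = {stage_survival(daily, days):.2f}")
```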
An obvious example, for larviciding, is the need to identify a substantial proportion of the breeding sites (e.g. [57]), because these depend on local mosquito ecology and may vary widely even within a species. Despite this requirement to identify breeding sites, larviciding is likely to become increasingly important as the most plausible insecticide-based method of targeting the outdoor-biting mosquito species responsible for "residual" malaria transmission once the primary indoor resting/biting species have been controlled. Our methodology is therefore capable of providing insight into how control may be optimised by balancing operational difficulty against likely impact; for example, contrasting a low-impact, operationally simple and hence widespread intervention against an operationally complex, more focussed approach with high local impact on mosquito populations.

We emphasise that this manuscript primarily describes methodological advances, tying together the separate strands of insecticide deployment, insect demography and bionomics, the evolution of resistance and the impact of resistance on disease transmission. The conclusions described above, for example the high impact of larviciding, are correct for the specific instances we investigated but could not yet be used as a basis for general policy recommendations. Such recommendations would need to be based on a far more detailed sensitivity analysis than the rather arbitrary one used here (Additional file 3). Full exploration of plausible parameter space may well conclude that no one strategy is universally superior, but that the optimal strategy depends on local conditions (as occurred, for example, in our recent work on whether insecticides should be deployed sequentially or in mixtures [18]).

There is increasing emphasis on the need for rational, co-ordinated efforts to control disease vectors, and integrated vector management (IVM) schemes are now an integral part of WHO policy [58]. Achieving the goals of programmes such as Roll Back Malaria may require an integrated approach combining disease treatment and interventions against both adult and larval stages of the vector [25]. IVM strategies often deploy combinations of interventions targeting two or more stages of the life-cycle. Combinations are intuitively likely to be more effective than interventions targeting a single stage. In reality, there are a number of important confounding factors that can affect the effectiveness of combined insecticide interventions. A comprehensive review of these factors can be found in [59] but they include, for example, (i) whether the insecticides act independently or may interfere or synergise with each other, (ii) whether the durations of insecticide persistence are matched or whether one decays more rapidly, leaving the other to act alone for extended periods, and (iii) the behaviour of the vector, such as the extent to which it is anthropophilic and/or endophilic. As a real example, data from the Solomon Islands [60] suggested that house spraying (with DDT) was more effective than ITNs but that the amount of insecticide required would be reduced if ITNs were also used. However, the same study was not able to associate reduction in malaria cases with larviciding (with temephos) in combination with other interventions. 
In particular, the use of IRS and ITNs in combination is thought to increase the probability of a mosquito meeting an insecticide, and to help reach and maintain high coverage levels that are often difficult to attain with single deployment strategies [61, 62]. Similarly, the addition of larviciding to ITN deployment has been shown to be highly beneficial [63], as has larval source management, although this depends on the ability to identify a large proportion of breeding sites [64]. The problem is that the more effective an intervention, the greater the selection for resistance; trading the short-term benefits of reduced disease transmission against the longer-term impact of driving IR means that both processes should ideally be combined in the same model, as was done here.

Resistance is a constant threat to interventions and our results suggest that when deploying a single intervention, even a small increase in survival due to insecticide resistance may be sufficient to restore a mosquito population to sustainable levels (Tables 5 and 6). The results presented above suggest that, in terms of reducing adult female population size, larviciding seems an effective option either alone or in combination, although, unlike ITNs and IRS, it will not reduce the longevity of adult females. Importantly, it is likely that resistance will spread faster if insecticides target the larval stages rather than the adult stages. This occurs for two reasons. First, because insecticides have a bigger impact on larval survival: their effects are compounded over the ten days of larval life, so selection for resistance may be higher. Second, because larviciding applies selection pressure on both sexes; in contrast, adulticides used in IRS or ITNs differentially target females, leaving the exophilic males as a kind of unexposed refugium shielded from selection pressure [19, 65].

There are frequent calls to 'model resistance' (e.g. [66]) and our modelling approach describes the parameters required to fully calibrate the system, which constitutes a type of 'shopping list' of variables that should be collected in the field. It is important to note that we are not attempting here to investigate and evaluate specific insecticide-based interventions, but are concentrating on developing the methodology by which this may be done. The main impediment to investigating specific interventions is that many of the required parameter values are largely unknown. As a specific example, the number of male mosquitoes entering homes (and hence potentially encountering insecticides on walls and ITNs) is often unknown because many researchers simply discard males from their collections as they play no role in malaria transmission. We therefore recognise that accurate calibration of individual ecological/epidemiological settings is currently impossible. We have focused on developing the model, obtaining preliminary, illustrative results, and anticipate that its main use will be in future sensitivity analyses. These analyses recognise that accurate calibration is often impossible and instead explore a single, plausible parameter space (e.g. [18]); the key operational issue then is to identify what interventions are best (however 'best' is defined, e.g. cost, simplicity, short- or long-term impact on transmission) and whether the 'best' policy depends on local vector bionomics and patterns of transmission. Our results are, therefore, preliminary and serve the purpose of demonstrating the potential of our computational approach. 
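To illustrate why both-sex, compounded larval exposure selects harder than a female-only adulticide, the sketch below runs one generation of the standard single-locus selection recursion with invented fitness values; it is not the paper's stage-structured model and the numbers carry no empirical weight.

```python
def next_resistance_freq(p, w_rr, w_rs, w_ss):
    """One generation of selection at a single locus under random mating
    (standard population-genetics recursion for allele frequency p)."""
    q = 1.0 - p
    mean_w = p * p * w_rr + 2 * p * q * w_rs + q * q * w_ss
    return (p * p * w_rr + p * q * w_rs) / mean_w

p0 = 0.01  # starting resistance allele frequency

# Invented relative fitnesses: larviciding hits both sexes and its effect is
# compounded over ten larval days, so sensitive genotypes fare far worse;
# a female-only adulticide leaves the exophilic males unexposed.
larvicide = dict(w_rr=1.0, w_rs=0.8, w_ss=0.2)
adulticide_females_only = dict(w_rr=1.0, w_rs=0.9, w_ss=0.6)

print(f"larvicide:  p after one generation = {next_resistance_freq(p0, **larvicide):.3f}")
print(f"adulticide: p after one generation = {next_resistance_freq(p0, **adulticide_females_only):.3f}")
```

Under these assumptions the resistance allele roughly quadruples in frequency in one generation under the larvicide but less than doubles under the female-only adulticide, which is the qualitative contrast argued for above.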
If one policy always performs better irrespective of underlying parameters, then it is a robust conclusion to use that policy. If some policies work better in certain situations and worse in others, then analysis of the models can show under which conditions (i.e. parameter combinations) each policy works best (most obviously using classification trees, e.g. [67]) and hence identify policies appropriate to local conditions. The illustrative analyses we performed explored the comparative impact of ITNs, IRS and larvicides, and quantified the benefits that can be achieved by combining these interventions.

We develop and describe a stand-alone model that simultaneously incorporates mosquito demography and the genetics of resistance, to simulate the impact on disease transmission and the extent to which this impact is threatened by the spread of resistance. Future development would be to link the model to one of health economics to investigate the cost-effectiveness of each intervention and the extent to which short-term gains in control might be offset by longer-term losses due to resistance. There is currently intense interest in modelling malaria to underpin elimination efforts [66], and models such as the one developed here, linking demography and the genetics of resistance, have a key role to play in designing sustainable control and elimination strategies.

DDPR: Density-dependent population regulation; EIP: Extrinsic incubation period; IR: Insecticide resistance; IRS: Indoor residual spraying; ITN: Insecticide-treated net; IVM: Integrated vector management; R0: Basic reproductive number; RR: The homozygous resistant genotype; SR: The heterozygous sensitive/resistant genotype; SS: The homozygous sensitive genotype

Camejo A. Control issues. Trends Parasitol. 2016;32:169–71. World Health Organization. A global brief on vector-borne diseases. Geneva: World Health Organization; 2014. World Health Organization. World Malaria Report 2017. Geneva: World Health Organization; 2017. Eisele TP, Larsen D, Steketee RW. Protective efficacy of interventions for preventing malaria mortality in children in Plasmodium falciparum endemic areas. Int J Epidemiol. 2010;39(Suppl. 1):i88–i101. Christian L. Insecticide-treated bed nets and curtains for preventing malaria. Cochrane Database Syst Rev. 2004;2:CD000363. Pluess B, Tanser FC, Lengeler C, Sharp BL. Indoor residual spraying for preventing malaria. Cochrane Database Syst Rev. 2010;4:CD006657. Bhatt S, Weiss DJ, Cameron E, Bisanzio D, Mappin B, Dalrymple U, et al. The effect of malaria control on Plasmodium falciparum in Africa between 2000 and 2015. Nature. 2015;526:207–11. Ranson H, Lissenden N. Insecticide resistance in African Anopheles mosquitoes: a worsening situation that needs urgent action to maintain malaria control. Trends Parasitol. 2016;32:187–96. World Health Organization. Global plan for insecticide resistance management in malaria vectors. Geneva: World Health Organization; 2012. World Health Organization. Test procedures for insecticide resistance monitoring in malaria vector mosquitoes. Geneva: World Health Organization; 2013. Ranson H, Abdallah H, Badolo A, Guelbeogo WM, Kerah-Hinzoumbé C, Yangalbé-Kalnoné E, et al. Insecticide resistance in Anopheles gambiae: data from the first year of a multi-country study highlight the extent of the problem. Malar J. 2009;8:299. Koella JC. On the use of mathematical models of malaria transmission. Acta Trop. 1991;49:1–25. 
McKenzie FE, Samba EM. The role of mathematical modeling in evidence-based malarial control. Am J Trop Med Hyg. 2004;75(Suppl. 2):94–6. Read AF, Lynch PA, Thomas MB. How to make evolution-proof insecticides for malaria control. PLoS Biol. 2009;7:e1000058. Gourley SA, Liu R, Wu J. Slowing the evolution of insecticide resistance in mosquitoes: a mathematical model. P Roy Soc A-Math Phy. 2011;467:2127–48. Glunt KD, Thomas MB, Read AF. The effects of age, exposure history and malaria infection on the susceptibility of Anopheles mosquitoes to low concentrations of pyrethroid. PLoS One. 2011;6:e24968. Levick B, South A, Hastings IM. A two-locus model of the evolution of insecticide resistance to inform and optimise public health insecticide deployment strategies. PLoS Comp Biol. 2017;13:e1005327. Article CAS Google Scholar Birget PLG, Koella JC. A genetic model of the effects of insecticide-treated bed nets on the evolution of insecticide resistance. Evo Med Public Health. 2015;2015:205–15. Curtis CF. Theoretical models of the use of insecticide mixtures for the management of resistance. Bull Entomol Res. 1985;75:259–65. Blayneh KW, Mohammed-Awel J. Insecticide-resistant mosquitoes and malaria control. Math Biosci. 2014;252:14–26. Briët OJT, Penny MA, Hardy D, Awolola TS, Bortel WV, Corbel V, et al. Effects of pyrethroid resistance on the cost effectiveness of a mass distribution of long-lasting insecticidal nets: a modelling study. Malar J. 2014;12:77. Clements AN. The Biology of Mosquitoes: Volume 1: Development, Nutrition and Reproduction. London: Chapman & Hall; 1992. White M, Griffin J, Churcher T, Ferguson N, Basanez M-G, Ghani A. Modelling the impact of vector control interventions on Anopheles gambiae population dynamics. Parasit Vectors. 2011;4:153. Hancock P, Godfray HC. Application of the lumped age-class technique to studying the dynamics of malaria-mosquito-human interactions. Malar J. 2007;6:98. R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2014. Cushing JM, Costantino RF, Dennis B, Desharnais RA, Henson SM. Chaos in Ecology (Theoretical Ecology Series Volume 1). San Diego: Academic Press/Elsevier; 2003. Caswell H. Matrix Population Models: Construction, Analysis and Interpretation. 2nd ed. Sunderland: Sinauer Associates; 2001. Diabaté A, Yaro AS, Dao A, Diallo M, Huestis DL, Lehmann T. Spatial distribution and male mating success of Anopheles gambiae swarms. BMC Evol Biol. 2011;11:184. Andrews C. The Hardy-Weinberg Principle. Nature Education Knowledge. 2010;3:65. Anderson RM, May RM. Infectious Diseases of Humans: Dynamics and Control. Oxford: Oxford University Press; 1992. Smith DL, Battle KE, Hay SI, Barker CM, Scott TW, McKenzie FE. Ross, Macdonald, and a theory for the dynamics and control of mosquito-transmitted pathogens. PLoS Pathog. 2012;8:e1002588. Gillies MT, Wilkes TJ. A study of the age-composition of populations of Anopheles gambiae Giles and A. funestus Giles in north-eastern Tanzania. Bull Entomol Res. 1965;56:237. Churcher TS, Lissenden N, Griffin JT, Worrall E, Ranson H. The impact of pyrethroid resistance on the efficacy and effectiveness of bednets for malaria control in Africa. eLife. 2016;5:e16090. Chitnis N, Smith T, Steketee R. A mathematical model for the dynamics of malaria in mosquitoes feeding on a heterogeneous host population. J Biol Dyn. 2008;2:259–85. van den Driessche P, Watmough J. 
Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math Biosci. 2002;180:29–48. Diekmann O, Heesterbeek JA, Roberts MG. The construction of next-generation matrices for compartmental epidemic models. J Roy Soc Interface. 2010;47:873–85. Smith DL, McKenzie FE, Snow RW, Hay SI. Revisiting the basic reproductive number for malaria and its implications for malaria control. PLoS Biol. 2007;5:e42. Smith D, Dushoff J, Snow R, Hay S. The entomological inoculation rate and Plasmodium falciparum infection in African children. Nature. 2005;438:492–5. Hartl DL, Clark AG. Principles of Population Genetics. 4th ed. Sunderland: Sinauer Associates; 2007. Lu J, Li J. Dynamics of stage-structured discrete mosquito population models. J Appl Anal Comput. 2011;1:53–67. Reiner RC, Perkins TA, Barker CM, Niu T, Chaves LF, Ellis AM, et al. A systematic review of mathematical models of mosquito-borne pathogen transmission: 1970–2010. J Roy Soc Interface. 2013;10:81. Diabate A, Tripet F. Targeting male mosquito mating behaviour for malaria control. Parasit Vectors. 2015;8:347. South A, Hastings IM. Insecticide resistance evolution with mixtures and sequences: a model-based explanation. Malar J. 2018;17:80. Gould F. Simulation models for predicting durability of insect-resistant germ plasm: a deterministic diploid, two-locus model. Environ Entomol. 1986;15:1–10. Tabashnik BE. Managing resistance with multiple pesticide tactics: theory, evidence, and recommendations. J Econ Entomol. 1989;82:1263–9. Tabashnik BE. Insecticide resistance. Trends Ecol Evol. 1995;10:164. Roush RT. Designing resistance management programs - how can you choose. Pestic Sci. 1989;26:423–41. Mani GS. Evolution of resistance in the presence of 2 insecticides. Genetics. 1985;109:761–83. Via S. Quantitative genetic models and the evolution of pesticide resistance. In: Committee on Strategies for the Management of Pesticide Resistant Pest Populations, editors. Pesticide resistance: Strategies and tactics for management. Washington DC: National Academy Press; 1986. p. 222–35. Killeen G, Chitnis N. Potential causes and consequences of behavioural resilience and resistance in malaria vector populations: a mathematical modelling analysis. Malar J. 2014;13:97. Briet O, Chitnis N. Effects of changing mosquito host searching behaviour on the cost effectiveness of a mass distribution of long-lasting, insecticidal nets: a modelling study. Malar J. 2013;12:215. Sokhna C, Ndiath MO, Rogier C. The changes in mosquito vector behaviour and the emerging resistance to insecticides will challenge the decline of malaria. Eur J Clin Microbiol Infect Dis. 2013;19:902–7. Gatton ML, Chitnis N, Churcher T, Donnelly MJ, Ghani AC, Godfray HCJ, et al. The importance of mosquito behavioural adaptations to malaria control in Africa. Evolution. 2013;67:1218–30. Griffin JT, Hollingsworth TD, Okell LC, Churcher TS, White M, Hinsley W, et al. Reducing Plasmodium falciparum malaria transmission in Africa: a model-based evaluation of intervention strategies. PLoS Med. 2010;7:e1000324. Smith T, Maire N, Ross A, Penny M, Chitnis N, Schapira A, et al. Towards a comprehensive simulation model of malaria epidemiology and control. Parasitology. 2008;135:1507–16. Fillinger U, Sombroek H, Majambere S, van Loon E, Takken W, Lindsay SW. Identifying the most productive breeding sites for malaria mosquitoes in The Gambia. Malar J. 2009;8:62. World Health Organization. Handbook for integrated vector management. 
Geneva: World Health Organization; 2012. Okumu F, Moore S. Combining indoor residual spraying and insecticide-treated nets for malaria control in Africa: a review of possible outcomes and an outline of suggestions for the future. Malar J. 2011;10:208. Over M, Bakote'e B, Velayudhan R, Wilikai P, Graves PM. Impregnated nets or DDT residual spraying? Field effectiveness of malaria prevention techniques in Solomon Islands, 1930–1999. Am J Trop Med Hyg. 2004;71(Suppl.):214–23. Corbel V, Akogbeto M, Damien GB, Djenontin A, Chandre F, Rogier C, et al. Combination of malaria vector control interventions in pyrethroid resistance area in Benin: a cluster randomised controlled trial. Lancet Infect Dis. 2012;12:617–26. Hamel MJ, Otieno P, Bayoh N, Kariuki S, Were V, Marwanga D, et al. The combination of indoor residual spraying and insecticide-treated nets provides added protection against malaria compared with insecticide-treated nets alone. Am J Trop Med Hyg. 2011;85:1080–6. Fillinger U, Ndenga B, Githeko A, Lindsay SW. Integrated malaria vector control with microbial larvicides and insecticide-treated nets in western Kenya: a controlled trial. Bull World Health Organ. 2009;87:655–65. Tusting LS, Thwing J, Sinclair D, Fillinger U, Gimnig J, Bonner KE, et al. Mosquito larval source management for controlling malaria. Cochrane Database Syst Rev. 2013;8:CD008923. PubMed Central Google Scholar Koella JC, Lynch PA, Thomas MB, Read AF. Towards evolution-proof malaria control with insecticides. Evol Appl. 2009;2:469–80. The malERA Consultative Group on Modeling. A research agenda for malaria eradication: Modeling. PLoS Med. 2011;8:e1000403. Barbosa S, Hastings I. The importance of modelling the spread of insecticide resistance in a heterogeneous environment: the example of adding synergists to bed nets. Malar J. 2012;11:258. We thank Dr Andy South and two anonymous reviewers for many helpful comments on the manuscript. This study was funded by the Fundação para a Ciência e Tecnologia, Portugal and Siemens Portugal; the Bill and Melinda Gates Foundation; the Wellcome Trust Institutional Strategic Strengthening Fund; and the Innovative Vector Control Consortium. NC was partly supported by the project, "Increased susceptibility to Plasmodium falciparum of insecticide resistant Anopheles in West Africa" which is a partnership of Swiss TPH with West African institutions funded by the Swiss Network for International Studies. Not applicable. The article describes entirely theoretical research and calibration data were extracted from published, publicly-available literature. Parasitology Group, Liverpool School of Tropical Medicine, L3 5QA, Liverpool, UK Susana Barbosa, Katherine Kay & Ian M. Hastings Present address: Université Côte d'Azur, Centre National de la Recherche Scientifique, Institut de Pharmacologie Moléculaire et Cellulaire, Valbonne, France Susana Barbosa Present address: Department of Pharmaceutical Sciences, State University of New York at Buffalo, Buffalo, New York, 14214, USA Katherine Kay Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, Socinstrasse 57, 4002, Basel, Switzerland Nakul Chitnis University of Basel, Petersplatz 1, 4003, Basel, Switzerland Ian M. Hastings SB, NC and IH developed the algebra which was later verified by KK. SB wrote the R scripts and produced the figures. All authors contributed to the writing and revising of the manuscript. All authors read and approved the final manuscript. Correspondence to Ian M. Hastings. 
Additional file 1: A mathematically rigorous derivation of R0 for anopheline mosquitoes. (PDF 141 kb)
Additional file 2: An equivalent, intuitive derivation of R0 for anopheline mosquitoes. (DOCX 18 kb)
Additional file 3: Notes on model calibration. (DOCX 21 kb)
Barbosa, S., Kay, K., Chitnis, N. et al. Modelling the impact of insecticide-based control interventions on the evolution of insecticide resistance and disease transmission. Parasites Vectors 11, 482 (2018). https://doi.org/10.1186/s13071-018-3025-z Accepted: 18 July 2018
Settings and artefacts relevant for Doppler ultrasound in large vessel vasculitis L. Terslev1, A. P. Diamantopoulos2, U. Møller Døhn1, W. A. Schmidt3 & S. Torp-Pedersen4 Ultrasound is used increasingly for diagnosing large vessel vasculitis (LVV). The application of Doppler in LVV is very different from in arthritic conditions. This paper aims to explain the most important Doppler parameters, including spectral Doppler, and how the settings differ from those used in arthritic conditions and provide recommendations for optimal adjustments. This is addressed through relevant Doppler physics, focusing, for example, on the Doppler shift equation and how angle correction ensures correctly displayed blood velocity. Recommendations for optimal settings are given, focusing especially on pulse repetition frequency (PRF), gain and Doppler frequency and how they impact on detection of flow. Doppler artefacts are inherent and may be affected by the adjustment of settings. The most important artefacts to be aware of, and to be able to eliminate or minimize, are random noise and blooming, aliasing and motion artefacts. Random noise and blooming artefacts can be eliminated by lowering the Doppler gain. Aliasing and motion artefacts occur when the PRF is set too low, and correct adjustment of the PRF is crucial. Some artefacts, like mirror and reverberation artefacts, cannot be eliminated and should therefore be recognised when they occur. The commonly encountered artefacts, their importance for image interpretation and how to adjust Doppler setting in order to eliminate or minimize them are explained thoroughly with imaging examples in this review. Diagnosis and monitoring of large vessel vasculitis (LVV) are part of most rheumatology clinics. LVV includes giant cell arteritis with and without large vessel involvement and Takayasu arteritis. Giant cell arteritis is seen in people aged >50 years, with a predilection for the temporal and other extracranial arteries such as the axillary and subclavian arteries. Takayasu arteritis appears in younger people with disease onset before the age of 40 years. A distinctive part of LVV is an increase of the intima-media vessel wall thickness, which may result in stenosis or even occlusion (altering the flow profile and changing the blood velocity in the affected areas of the vessels). Several publications have highlighted the use of ultrasound for diagnosing LVV and monitoring disease activity [1,2,3,4,5,6] and in some institutions it has even substituted for temporal artery biopsy [6, 7]. Some of the vasculitis features, such as increased intima-media vessel wall thickness, are diagnosed with greyscale (GS) ultrasound alone but Doppler ultrasound plays a role by aiding in visualising the vessel and the wall swelling and pinpointing the areas of stenosis or occlusion and, hence, the importance of changes in flow velocity and direction. Though Doppler ultrasound is used routinely in most rheumatology departments for patients with arthritis, the Doppler settings for ultrasound in large vessels and the resulting artefacts are very different. In arthritic joints the type of vascularization is characterised by slow flow, and the vessels of interest are invisible without the Doppler mode with minimal movement of the vessel wall during the cardiac cycle. The type of flow in arteries affected by LVV is fast flow. Large vessels have considerable movement of the vessel wall during the cardiac cycle. 
These differences in flow for arthritis and vasculitis have an impact on the optimal Doppler settings for each. The correct interpretation of flow requires knowledge of the physical and technical factors influencing the Doppler signal. Artefacts caused by physical limitations of the modality or inappropriate equipment settings may result in flow characteristics that differ considerably from the actual physiologic situation, consequently leading to misinterpretation of flow information. The different factors and their impact on flow are described in the following sections.

The ability to detect flow—the Doppler shift

The ability to detect flow is caused by the Doppler shift, which is a change in the wavelength (frequency) of sound resulting from motion of a source, receiver or reflector. As the ultrasound transducer is both a stationary source and a receiver of sound, the Doppler shift arises from reflectors in motion—for all practical purposes the erythrocytes [8]. When the ultrasound pulse is reflected from moving erythrocytes, two successive Doppler shifts are involved. First, the sound from the stationary transmitter—the transducer—is received by the moving erythrocytes—the receiver in motion. Second, the erythrocytes act as moving sources of ultrasound as they re-radiate the ultrasound wave back toward the transducer—the emitter in motion. These two Doppler shifts account for the factor 2 in the Doppler equation:

$$ f_D = f_t - f_r = \frac{2 f_t v \cos \theta}{c} $$

where $f_D$ is the Doppler shift, $f_t$ is the transmitted frequency, $f_r$ is the received frequency, $v$ is the blood velocity, $\theta$ is the insonation angle (the angle between the ultrasound beam and the blood flow) and $c$ is the speed of sound. The Doppler shift is thus directly proportional to the velocity of the flow ($v$), the cosine of the insonation angle ($\cos \theta$) and the transmitted frequency of the ultrasound ($f_t$) [9]. The Doppler used is called pulsed Doppler. With a series of pulses, the phase of the returning signals is compared to the phase of the emitted signal. A change in phase translates to a change in frequency—e.g. when the returning signal is compared to the emitted one, the returning wave tops will not correspond to the emitted wave tops because the distance between the tops has changed. The number of these pulses per second is called the pulse repetition frequency (PRF).

The insonation angle and blood velocity

The insonation angle is the angle between the path of the Doppler pulses and the direction of flow in the vessel. When this angle is 90°, there will be no frequency shift because cos(90°) = 0 (Fig. 1). The maximum frequency shift of a given vessel is obtained when the direction of flow matches the direction of the Doppler pulses (directly towards or away from the transducer), giving an insonation angle of 0° or 180° and resulting in cos(θ) = ±1.

Fig. 1 The insonation angle is the angle between the path of the Doppler pulses and the direction of flow in the vessel as indicated by the orientation of the Doppler box. When this angle is 90° (top), there will be no frequency shift because cos(90°) = 0. With angle correction in this carotid artery more of the flow becomes detectable (bottom)

When measuring the velocity of the blood flow, spectral Doppler is used and the Doppler equation is rearranged:

$$ f_D = \frac{2 f_t v \cos \theta}{c} \iff v = \frac{f_D c}{2 f_t \cos \theta} $$

As seen, blood velocity can only be determined if the insonation angle is known and entered into the machine—the angle correction. 
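As a numerical illustration of this angle dependence, the short sketch below evaluates the Doppler equation for a few insonation angles; the 7.7 MHz frequency, the 0.5 m/s velocity and the assumed soft-tissue speed of sound of about 1540 m/s are illustrative values, not machine specifications.

```python
import math

def doppler_shift_hz(f_t_hz, velocity_m_s, angle_deg, c_m_s=1540.0):
    """Doppler shift f_D = 2 * f_t * v * cos(theta) / c, with c ~ 1540 m/s in soft tissue."""
    return 2.0 * f_t_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / c_m_s

# A 0.5 m/s flow insonated with a 7.7 MHz Doppler frequency:
for angle in (0, 60, 90):
    shift_khz = doppler_shift_hz(7.7e6, 0.5, angle) / 1e3
    print(f"insonation angle {angle:2d} deg -> Doppler shift {shift_khz:5.2f} kHz")
```

At 0° the full 5 kHz shift is detected, at 60° half of it, and at 90° essentially nothing, which is why flow perpendicular to the beam goes undetected.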
In synovial flow where the insonation angle is unknown as the vessels cannot be seen, the velocity is calculated based on the assumption that the Doppler angle is zero. Therefore, the displayed velocities are most often incorrect. However, the large(r) vessels assessed in LVV are easily seen and angle correction is possible, allowing for correct velocity calculation. Angle correction can only be made in spectral Doppler. In arthritis, the presence of flow is important, not direction nor velocity, and angle correction has no relevance. In LVV, correct blood velocity is important for diagnosing the degree of stenosis [10, 11]. Doppler modalities and choice of Doppler mode There are three Doppler modes: colour Doppler (CD), power Doppler (PD), and spectral Doppler. The two "colour" Doppler modalities—PD and CD—show different aspects of the flow detected superimposed on the GS image. The Doppler analysis is carried out in the Doppler box, defining the region of interest. Inside the box, the image is divided into small cells, each behaving like an independent Doppler gate with its own Doppler analysis. For both colour modalities, a Doppler shift (change in frequency) has to be detected before any Doppler information is displayed. The difference between the two modalities is the way the Doppler information is displayed and not the way the Doppler shift is detected. The ability to detect the Doppler shift determines the machine's Doppler sensitivity, which may be different for CD and PD and has to be determined in practice [12]. In CD, the mean frequency shift for each cell inside the Doppler box is displayed as a colour according to a colour code. The colours that arise from the detected Doppler shifts indicate the qualitative direction of flow and also relative velocities and is an image of the mean blood velocity. The colour "red" is most often by default set to flow towards the top of the image and blue towards the bottom. Different hues of red and blue indicate different velocities (in reality different frequency shifts). CD displays the direction and velocity of the flow and is the recommended Doppler modality for LVV. Power Doppler PD displays the energy in each cell from all the moving erythrocytes—neither direction nor velocity. Disregarding direction of flow (negative or positive frequency shift) and velocity (high- or low-frequency shift) the power (energy) of the many different frequency shifts inside a cell are added to form the PD signal. The brighter the colour the higher the energy. In PD, the power of the signal from each point relates to the number of moving erythrocytes in that sample volume, which means it depicts the amount of blood moving in each cell. PD images may be regarded as images of the detected blood pool. PD does not detect direction nor velocity and is not recommended for LVV. Spectral Doppler In spectral Doppler, a Doppler line is displayed in the GS image indicating the path of the Doppler beam. This line may be vertical or angled relative to the GS image. This is called Duplex ultrasound (GS image and spectral Doppler) and if PD/CD is present as well, it is called triplex ultrasound. On the Doppler line the measurement area—the gate—is bordered by two parallel lines. The gate (area between the two lines) can be moved up and down on the line and adjusted in size. With spectral Doppler, the detected Doppler shifts within the gate are plotted against time, and the blood velocity throughout the cardiac cycle is shown on a graph. 
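A corresponding sketch of the rearranged equation shows why the displayed velocity is wrong when the machine has to assume a zero insonation angle, as it does for invisible synovial vessels; the numbers are again illustrative only.

```python
import math

def angle_corrected_velocity(f_d_hz, f_t_hz, angle_deg, c_m_s=1540.0):
    """Blood velocity from a measured shift: v = f_D * c / (2 * f_t * cos(theta))."""
    return f_d_hz * c_m_s / (2.0 * f_t_hz * math.cos(math.radians(angle_deg)))

# The same 2.5 kHz shift measured at a 7.7 MHz Doppler frequency:
print(f"{angle_corrected_velocity(2.5e3, 7.7e6, 60):.2f} m/s  (correct 60 deg angle entered)")
print(f"{angle_corrected_velocity(2.5e3, 7.7e6, 0):.2f} m/s  (angle assumed to be zero -> underestimate)")
```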
To display the correct blood velocity, angle correction must be made. When angle correction is activated a line appears centrally in the Doppler gate—the operator aligns this line with the orientation of the vessel and the machine then determines the insonation angle (see the rearranged Doppler equation)—allowing the detected Doppler shifts (kHz) to be translated into velocities (m/s). The flow is displayed as a spectral waveform of changes in velocity during the cardiac cycle. The flow velocity is the vertical axis with the time on the horizontal axis (Fig. 2). In spectral Doppler, the flow is displayed as a spectral waveform of changes in velocity during the cardiac cycle. The flow velocity is on the vertical axis with the time on the horizontal axis. The image is from a carotid artery Inside the vessel, the erythrocytes travel with different velocities; the highest velocities are found in the centre of the vessel and the lowest velocities nearest the wall. On the graph, the velocities throughout the cardiac cycle are shown with a line with a certain thickness. The thickness of the line displays the distance between the fastest and the slowest flow inside the Doppler gate. With spectral Doppler it is possible to display the flow velocities in the vessels. Optimal Doppler settings Several adjustable parameters may improve the Doppler findings and are discussed in the following sections. Doppler frequency The Doppler frequency at which the transducer operates is selectable. As in GS ultrasound, a lower Doppler frequency allows more penetration but also a lower resolution. Thus, higher Doppler frequency gives a more detailed image of the vessels at the expense of penetration (Fig. 3). The trade-off between penetration and sensitivity is somewhat unpredictable because resolution in this context is not an issue. An inappropriate Doppler frequency will result in an inability to detect flow. A lower Doppler frequency allows more penetration but also a lower resolution. A higher Doppler frequency gives a more detailed image of the vessels at the expense of penetration. Left: carotid artery with a Doppler frequency of 7.7 MHz. Right: carotid artery with a Doppler frequency of 14.3 MHz; this cannot penetrate well enough to give optimal flow information The optimal Doppler frequency must be found in practice and not in theory. The colour box is the area where evaluation of flow is performed. The numerous Doppler analyses inside the colour box are computationally demanding on the ultrasound unit. This results in a decrease of GS frame rate and the larger the colour box, the lower the frame rate. It is recommended to adjust the box so it covers the region of interest and goes to the top of the image, even though this is not as imperative as in arthritis (see Doppler artefacts relevant in fast flow). Some vessels are horizontally orientated and almost parallel to the probe (orthogonal to the Doppler beam), which decrease the Doppler shifts and thereby the detection of flow. To improve flow detection, it is recommended to angle the Doppler box to avoid an insonation angle close to 90° but preferably below 60o [13]. PRF is the Doppler sampling frequency of the transducer and is reported in kilo Hertz (KHz). The frequency with which these pulses are emitted determines the maximum Doppler shifts obtainable. The maximum Doppler shift frequency that can be sampled without aliasing is PRF/2, called the Nyquist limit [14]. 
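The Nyquist limit can be translated into the highest velocity that a given PRF can display without aliasing; the sketch below does this for the two PRF values shown in Fig. 4, with an illustrative Doppler frequency and insonation angle.

```python
import math

def nyquist_velocity(prf_hz, f_t_hz, angle_deg, c_m_s=1540.0):
    """Highest velocity displayable without aliasing:
    v_max = (PRF / 2) * c / (2 * f_t * cos(theta))."""
    return (prf_hz / 2.0) * c_m_s / (2.0 * f_t_hz * math.cos(math.radians(angle_deg)))

# Illustrative: 7.7 MHz Doppler frequency, 60 degree insonation angle.
for prf_hz in (500.0, 3500.0):
    v_max = nyquist_velocity(prf_hz, 7.7e6, 60)
    print(f"PRF {prf_hz / 1e3:.1f} kHz -> aliasing above about {v_max:.2f} m/s")
```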
The Nyquist limit may be presented on-screen as a blood velocity (the maximum measurable velocity of blood moving directly towards or away from the transducer) or in kHz (maximum measurable Doppler shift). If the blood velocity (and thereby the Doppler shift) is above the Nyquist limit, the machine will misinterpret the velocity and direction, causing aliasing (see section on Doppler artefacts relevant in fast flow). When a high PRF is chosen, it is assumed that high-velocity flow is of interest. Therefore, higher wall filters that remove noise are applied (see Doppler artefacts relevant in fast flow). Adjusting PRF results in simultaneous adjustment of these wall filters (linked control). With a high PRF, the system is insensitive to lower velocities, which has the same frequency range as noise, because the higher wall filters eliminate the information. However, pulsation in large vessels makes the vessel wall move back and forth, generating noise (Fig. 4a) which disturbs the interpretation of the image if the PRF is set too low. When a higher PRF is selected the wall filters will eliminate the noise from the moving vessel wall and only flow inside the vessel and the correct flow direction will be shown (Fig. 4b). Adjustment of pulse repetition frequency (PRF) results in simultaneous adjustment of wall filters. Left: With a low PRF the wall filters do not remove noise from the moving vessel wall as in this carotid artery (PRF 0.5 KHz). Right: When a higher PRF is selected the wall filters will eliminate the noise from the moving vessel wall and only flow inside the vessel and the correct direction of flow will be shown (PRF 3.5 KHz) The optimal PRF setting in LVV is the lowest possible that does not result in aliasing in a normal part of the vessel. With this setting, a stenosis will result in aliasing, which thereby becomes evident. With the PRF set too high a stenotic area may not be identified by the presence of aliasing and wall filters may eliminate true flow from the vessel lumen. The optimal PRF will vary depending on flow velocity in the examined vessel (higher PRF in the carotid artery than in the occipital artery). The ideal PRF may also vary between patients but will often be within the same range (Table 1). Table 1 How to adjust the Doppler settings in large vessel vasculitis The PRF in spectral Doppler is the same as in CD, where the lowest possible PRF without aliasing is preferable. The lower the PRF is, the more detailed (larger) the flow curve is. When aliasing occurs, it is seen on the flow curve as the top of the graph being cut off and displayed as coming from the bottom of the image (or vice versa). There are two ways to correct this: baseline correction or increasing the PRF—the latter will result in a smaller graph with loss of details. Baseline correction is an independent function on the machine that allows the operator to lower or elevate the baseline until there is room for the whole graph on either side of the baseline (Fig. 5). Baseline correction in spectral Doppler. The optimal PRF is without aliasing. The lower the PRF is, the more detailed (larger) the flow curve is. Left: When aliasing occurs it is seen on the flow curve as the top of the graph being cut off and displayed as coming from the bottom of the image. Right: Baseline correction in this carotid artery allows the operator to lower or elevate the baseline until there is room for the whole graph on either side of the baseline. 
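Conceptually, the unambiguous Doppler window is one PRF wide and baseline correction only re-allocates that window rather than enlarging it; the sketch below illustrates this wrap-around behaviour with invented numbers and is not how a scanner is implemented internally.

```python
def displayed_shift_hz(true_shift_hz, prf_hz, fraction_below_baseline=0.5):
    """Doppler shift as displayed by pulsed Doppler.

    The unambiguous window is one PRF wide. With the baseline centred
    (fraction_below_baseline = 0.5) it spans [-PRF/2, +PRF/2); shifting the
    baseline (e.g. 0.25) re-allocates it to [-PRF/4, +3*PRF/4) without
    changing the PRF. Shifts outside the window wrap around (aliasing).
    """
    lower_edge = -prf_hz * fraction_below_baseline
    return (true_shift_hz - lower_edge) % prf_hz + lower_edge

prf = 3500.0         # Hz, so the Nyquist limit is 1750 Hz
true_shift = 2200.0  # Hz, above the Nyquist limit

print(displayed_shift_hz(true_shift, prf))                               # -1300.0: aliased, wrong direction
print(displayed_shift_hz(true_shift, prf, fraction_below_baseline=0.25))  # 2200.0: recovered by baseline shift
```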
When baseline correction is not sufficient, then PRF must be increased In spectral Doppler, baseline correction eliminates aliasing and, when insufficient, PRF must be increased. Every Doppler instrument has high-pass filters—also called wall filters—which separate by frequency alone [15, 16]. They eliminate the lowest Doppler shifts that originate from the motion of the vessel wall and surrounding solid tissues. These unwanted shifts are called clutter or motion artefacts (low frequency and high amplitude). The machine cannot distinguish which low-frequency Doppler shifts originate from slow moving blood and which originate from tissue movement. Consequently, both will be removed when the filters are high (the PRF and wall filter are linked controls, and by increasing the PRF, the wall filters are also increased). On some machines, the filters can be adjusted manually, but the highest possible wall filter is higher for a high PRF than a low. The wall filters should be kept high in LVV to eliminate artefacts from pulsating vessel walls. Doppler gain The Doppler gain is independent of GS gain and determines the sensitivity of the system to flow. Gain is adjusted separately for CD and spectral Doppler. CD gain is adjusted by increasing the gain until random noise appears in the image and then lowering the gain until only a very few noise pixels are present. By lowering the gain, noise, blooming and motion artefacts are reduced but may result in only the centre of the large vessel being filled up, mimicking wall swelling, as slow flow signals alongside the vessel wall will go undetected [17, 18]. Too high gain settings result in random noise (Fig. 6) [19] or blooming where colour pixels may cover the artery walls, hiding a possible vasculitic wall swelling. The gain determines the sensitivity of the system to flow. In this carotid artery the gain is adjusted by increasing the gain until random noise appears in the image and then lowering the gain until only a very few noise pixels are present. Top: Correct gain. Middle: Too high gain. Bottom: Too low gain Spectral gain is adjusted by increasing the gain until the flow curve appears as clear as possible with potential background random noise (on the black background of the graph). Then the gain is lowered until the flow curve is clearly distinguishable from the background noise. The gain is adjusted correctly by increasing the gain until random noise appears in the background and then lowered until only a very few noise pixels are present. Persistence (CD) Persistence is a function that averages colour information over a number of frames—usually adjustable over a number of arbitrary units (e.g. 0 to 5) from minimum (no averaging) to maximum averaging. The more persistence, the more "afterglow" of the colour but at the expense of the dynamic nature of the flow. Having a high persistence has no advantage and should be kept low to maintain the dynamic nature of the flow. Colour priority (threshold) When colour information is obtained, GS information will also be present, and the machine has to decide whether to show one or the other. Colour priority (CP) is a function that determines this. It is only adjustable on some machines and on others it is a fixed setting or is handled by the machine as a linked control. 
A low CP allows valid GS information to override Doppler information, helping to suppress motion and blooming artefacts in the relatively hyperechoic tissue surrounding a pulsating artery (above a certain grey level, grey overrides colour). A high CP allows Doppler information to override GS information, e.g. GS reverberation artefacts inside vessels. This function explains why some Doppler artefacts apparently prefer to appear in dark regions of the image. It also explains why GS gain may influence the amount of colour in the image, as increasing GS gain may result in more GS information being above the threshold where colour is suppressed. CP when set low will allow a relatively large vessel to be displayed without blooming or artefacts due to pulsating. However, a carotid artery may often have some degree of reverberation artefact in the lumen, and the function will then result in poor filling of the vessel lumen (Fig. 7). Colour priority/threshold. A low colour priority allows valid GS information to override Doppler information and a high colour priority allows Doppler information to override GS information. In the carotid artery the low colour priority results in poor filling of the vessel lumen (top) and should therefore be set high (bottom) CP should be adjusted to high as it may have an impact on the visualisation of flow in larger vessels. Doppler artefacts relevant in fast flow Random noise Random noise is produced in all electrical circuits. When the gain is too high, this noise becomes visible in the CD and PD display. In the image, it is seen as colour foci appearing randomly in the image. It is easily identified as an artefact because the colour foci do not reappear in the same location as true flow does (Fig. 6b). By lowering the gain random noise will disappear. The random noise is used to set an optimal Doppler gain see paragraph on gain adjustment. Aliasing arises when the Doppler shift of the moving blood is higher than half of the PRF (Nyquist limit). Aliased signals are displayed with the wrong direction (red instead of blue and vice versa) and velocity (the hue of the colour) (Fig. 8). Aliasing exists only in CD and spectral Doppler, and while it has no relevance in arthritic conditions, it is important in LVV to identify areas with stenosis because these areas will have the highest blood velocities. Aliasing in spectral Doppler distorts the flow information and must be corrected by adjusting the PRF before velocities can be measured. In the axillary artery the aliased signal identifies the area with stenosis because this area has the highest blood velocities. The flow is displayed with wrong direction (red instead of blue and vice versa) and velocity (the hue of the colour) The PRF should be adjusted so normal flow does not create aliasing. Aliasing then aids in diagnosing stenotic areas. The Doppler circuitry detects motion between the transducer and the tissue. When both of these are immobile, only the moving blood will generate colours in the image. Movement of the patient, transducer, or movement of the tissue or vessel wall caused by arterial pulsation during Doppler imaging give motion relative to the transducer and produce a Doppler shift [19] (Fig. 4, left). These movements are slow and produce low frequency Doppler shifts [16] that appear as random short flashes of large confluent areas of colours. Motion artefacts are easily separated from true flow signals because of their relatively large size and because they do not respect the vessel walls. 
One way to avoid these low-frequency flash artefacts is to use wall filters which remove the noise together with slow flow [16]. Another option is to lower the CP; however, although this can decrease motion artefacts, it may result in poorer filling of the vessel lumen and is not recommended. Motion artefacts should be avoided by increasing the PRF and thereby the wall filters to eliminate noise generated by the tissue movement. Blooming artefact describes the phenomenon that the colour "bleeds" beyond the vessel wall, making it look larger than it really is (Fig. 9). Blooming is gain-dependent and lowering the Doppler gain will minimise blooming artefacts and vice versa. Blooming in a facial artery and vein. Blooming is gain-dependent and lowering the Doppler gain will minimize blooming artefacts (top), while increasing gain also increases blooming (bottom) In arthritis, blooming is accepted as a systematic error as it is flow generated and, if avoided, weak flow signals will go undetected. In LVV, blooming is more problematic and must be minimised as it potentially results in colour covering a vessel wall swelling [18]. Blooming must be minimised by lowering the gain as it may disguise a wall swelling. The Doppler image always uses a single focus point. On some machines the Doppler uses the same focus point as the GS image, and on others these two modalities have separate focus points. On some machines, there may be multiple focus points in the GS images while other machines only allow a single GS focus point when in Doppler mode. In the focal zone, the pulse is most narrow and has the highest spatial peak energy. Consequently, the echoes generated in the focal zone have higher amplitudes, and therefore the Doppler sensitivity is dependent upon focus positioning. Typically, focus points can be moved by the examiner in predetermined steps inside the Doppler box. The Doppler focus point must be adjusted within the colour box to the depth of the assessed vessel. Any highly reflecting smooth surface may act as an acoustic mirror and the Doppler image is just as prone to mirroring as the GS image. Mirror artefacts are rare in large vessels and then the mirror surface is often air (Fig. 10). The mirror artefact is easily recognised when the true image, the mirror and mirror image all are present in the image but may be more tricky to identify when only the mirror and mirror image are present. If the artefact is investigated with spectral Doppler, it will show true flow because it is a mirror image of true flow. Mirror artefact. The pleura is a smooth surface with air behind that may act as an acoustic mirror. Left: The subclavian artery (SA) is seen with the pleura as a white line (p) below. Right: The SA is mirrored below the pleura. The clavicle (c) is seen casting an acoustic shadow The mirror artefact cannot be avoided by adjustments. With spectral Doppler, the whole Doppler spectrum may be mirrored in the posterior vessel wall, resulting in an arterial Doppler spectrum on both sides of the baseline. The Doppler pulse behaves just as the GS pulse with respect to reverberation. A superficial vessel may be repeated lower in the image (simple reverberation) or display a showering of colour below the vessel (complex reverberation) (Fig. 11). This is especially relevant when scanning the vertebral artery where the more superficially located carotid artery may cause this artefact. Complex reverberation artefact in temporal artery. 
The superficial vessel displays reverberation as a showering of colour below the vessel. This artefact cannot be eliminated. By letting the colour box go to the top of the image, possible sources of reverberation can be detected. A false absence of flow may occur if the examiner presses too hard with the transducer, thereby blocking the flow. This is most relevant for the temporal arteries. It is recommended to use a sufficient amount of gel and light probe pressure.

In LVV, CD and spectral Doppler are promising tools to detect and monitor vessel wall inflammation and stenosis. The first step to obtaining more uniform examinations and correct interpretation of Doppler findings in LVV patients is to know how to optimize Doppler settings and become aware of frequently appearing artefacts as described in this review. It is generally recommended to find the correct settings in healthy controls before applying them to an LVV patient to look for pathology. This is in contrast to obtaining optimal settings in arthritis, where it is recommended to adjust the settings in patients with synovial inflammation. Our suggestions to adjust settings for LVV are summarised in Table 1. Even though Doppler ultrasound plays an important role in visualising the vessel and wall swelling and in pinpointing the areas of stenosis or occlusion, the elementary lesion of LVV is an increased intima-media wall thickness that is visualised with GS ultrasound alone, which can also be verified by a positive compression sign.

The artefacts described in this paper are a natural part of Doppler scanning and should be known by all ultrasonographers. Knowledge of artefacts allows true flow to be distinguished from false flow. Noise artefacts appear randomly in the image (perhaps with a preponderance for dark regions) whereas true flow remains geographically fixed. When adjusting the gain, there is a trade-off between the optimal setting and the risk of blooming that might cover wall swelling, and likewise for the PRF, where the correct adjustment may be too high for optimal colour filling of the vessel, hence mimicking wall swelling. The many Doppler parameters need not be adjusted at every examination and can be stored as presets. However, some parameters (e.g. PRF, focus and frequency) must be adjusted throughout the examination due to the different size of the vessels, the depth of their location and their flow profile. Currently, the Outcome Measures in Rheumatology (OMERACT) ultrasound group is validating the definitions of the elementary lesions in LVV, further enhancing the use of ultrasound for vasculitis in everyday clinical use. The Doppler artefacts outlined in this review are important and relevant sources of possible misinterpretations. Artefacts cannot completely be eliminated, but can also be used constructively to optimise Doppler settings. Knowledge of Doppler physics and settings and the most frequently appearing artefacts enable investigators to make more precise and uniform interpretations of the examinations.

CP: Colour priority; GS: Greyscale; LVV: Large vessel vasculitis; PRF: Pulse repetition frequency
Prognosis of large-vessel giant cell arteritis. Rheumatology (Oxford). 2008;47(9):1406–8. Arida RM, Scorza CA, Schmidt B, de AM, Cavalheiro EA, Scorza FA. Physical activity in sudden unexpected death in epilepsy: much more than a simple sport. Neurosci Bull. 2008;24(6):374–80 Ball EL, Walsh SR, Tang TY, Gohil R, Clarke JM. Role of ultrasonography in the diagnosis of temporal arteritis. Br J Surg. 2010;97(12):1765–71. Diamantopoulos AP, Haugeberg G, Lindland A, Myklebust G. The fast-track ultrasound clinic for early diagnosis of giant cell arteritis significantly reduces permanent visual impairment: towards a more effective strategy to improve clinical outcome in giant cell arteritis? Rheumatology (Oxford). 2016;55(1):66–70. Schmidt WA. Ultrasound in vasculitis. Clin Exp Rheumatol. 2014;32(1 Suppl 80):S71–7. Doppler CA. Über das farbige Licht der Doppelsterne and einiger anderer Gerstirne des Himmels. Abh Konigl-Böhm Ges. 1843;2:465–82. Kremkau FW. Doppler color imaging. Principles and instrumentation. Clin Diagn Ultrasound. 1992;27:7–60. Schmidt WA. Technology Insight: the role of color and power Doppler ultrasonography in rheumatology. Nat Clin Pract Rheumatol. 2007;3(1):35–42. Schmidt WA, Backhaus M. What the practising rheumatologist needs to know about the technical fundamentals of ultrasonography. Best Pract Res Clin Rheumatol. 2008;22(6):981–99. Torp-Pedersen S, Christensen R, Szkudlarek M, Ellegaard K, D'Agostino MA, Iagnocco A, et al. Power and color Doppler ultrasound settings for inflammatory flow: impact on scoring of disease activity in patients with rheumatoid arthritis. Arthritis Rheumatol. 2015;67(2):386–95. Thrush A, Hartshorne T. Peripheral vascular ultrasound. 2nd ed. Churchill Livingstone: 2005. Burns PN. Principles of Doppler and color flow. Radiol Med (Torino). 1993;85(5 Suppl 1):3–16. Jansson T, Persson HW, Lindstrom K. Movement artefact suppression in blood perfusion measurements using a multifrequency technique. Ultrasound Med Biol. 2002;28(1):69–79. Rubin JM, Spectral Doppler US. Radiographics. 1994;14(1):139–50. Martinoli C. Gain setting in power Doppler. Radiology. 1997;202:284–5. Schmidt WA. Role of ultrasound in the understanding and management of vasculitis. Ther Adv Musculoskelet Dis. 2014;6(2):39–47. Pozniak MA, Zagzebski JA, Scanlan KA. Spectral and color Doppler artifacts. Radiographics. 1992;12(1):35–44. Dr Lene Terslev acknowledges the Danish Rheumatism Association for support. Dr Lene Terslev has received funding by the Danish Rheumatism Association. Center for Rheumatology and Spinal Diseases, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark L. Terslev & U. Møller Døhn Department of Rheumatology, Hospital of Southern Norway Trust, Kristiansand, Norway A. P. Diamantopoulos Medical Centre for Rheumatology, Immanuel Krankenhaus, Berlin, Germany W. A. Schmidt Department Radiology, Copenhagen University Hospital. Rigshospitalet, Copenhagen, Denmark S. Torp-Pedersen Search for L. Terslev in: Search for A. P. Diamantopoulos in: Search for U. Møller Døhn in: Search for W. A. Schmidt in: Search for S. Torp-Pedersen in: All authors contributed to manuscript preparation, image collection and critical review before submission. All authors read and approved the final manuscript. Correspondence to L. Terslev. Terslev, L., Diamantopoulos, A.P., Døhn, U.M. et al. Settings and artefacts relevant for Doppler ultrasound in large vessel vasculitis. Arthritis Res Ther 19, 167 (2017) doi:10.1186/s13075-017-1374-1 Doppler ultrasound
Journal of the American Mathematical Society
No occurrence obstructions in geometric complexity theory
by Peter Bürgisser, Christian Ikenmeyer and Greta Panova
J. Amer. Math. Soc. 32 (2019), 163-193

The permanent versus determinant conjecture is a major problem in complexity theory that is equivalent to the separation of the complexity classes $\mathrm {VP}_{\mathrm {ws}}$ and $\mathrm {VNP}$. In 2001 Mulmuley and Sohoni suggested studying a strengthened version of this conjecture over complex numbers that amounts to separating the orbit closures of the determinant and padded permanent polynomials. In that paper it was also proposed to separate these orbit closures by exhibiting occurrence obstructions, which are irreducible representations of $\mathrm {GL}_{n^2}(\mathbb {C})$, which occur in one coordinate ring of the orbit closure, but not in the other. We prove that this approach is impossible. However, we do not rule out the general approach to the permanent versus determinant problem via multiplicity obstructions as proposed by Mulmuley and Sohoni in 2001.

N. Alon and M. Tarsi, Colorings and orientations of graphs, Combinatorica 12 (1992), no. 2, 125–134. MR 1179249, DOI 10.1007/BF01204715 Jarod Alper, Tristram Bogart, and Mauricio Velasco, A lower bound for the determinantal complexity of a hypersurface, Found. Comput. Math. 17 (2017), no. 3, 829–836. MR 3648107, DOI 10.1007/s10208-015-9300-x Peter Bürgisser, Completeness and reduction in algebraic complexity theory, Algorithms and Computation in Mathematics, vol. 7, Springer-Verlag, Berlin, 2000. MR 1771845, DOI 10.1007/978-3-662-04179-6 Peter Bürgisser, Cook's versus Valiant's hypothesis, Theoret. Comput. Sci. 235 (2000), no. 1, 71–88. Selected papers in honor of Manuel Blum (Hong Kong, 1998). MR 1765966, DOI 10.1016/S0304-3975(99)00183-8 Peter Bürgisser, The complexity of factors of multivariate polynomials, Found. Comput. Math. 4 (2004), no. 4, 369–396. MR 2097213, DOI 10.1007/s10208-002-0059-5 P. Bürgisser, Permanent versus determinant, obstructions, and Kronecker coefficients, Sém. Lothar. Combin., 75 (2015); arXiv:1511.08113 (2015). Peter Bürgisser, Matthias Christandl, and Christian Ikenmeyer, Even partitions in plethysms, J. Algebra 328 (2011), 322–329. MR 2745569, DOI 10.1016/j.jalgebra.2010.10.031 Peter Bürgisser, Matthias Christandl, and Christian Ikenmeyer, Nonvanishing of Kronecker coefficients for rectangular shapes, Adv. Math. 227 (2011), no. 5, 2082–2091. MR 2803795, DOI 10.1016/j.aim.2011.04.012 Peter Bürgisser, Christian Ikenmeyer, and Jesko Hüttenhain, Permanent versus determinant: not via saturations, Proc. Amer. Math. Soc. 145 (2017), no. 3, 1247–1258. MR 3589323, DOI 10.1090/proc/13310 Peter Bürgisser and Christian Ikenmeyer, Fundamental invariants of orbit closures, J. Algebra 477 (2017), 390–434. MR 3614157, DOI 10.1016/j.jalgebra.2016.12.035 Peter Bürgisser, Christian Ikenmeyer, and Greta Panova, No occurrence obstructions in geometric complexity theory, 57th Annual IEEE Symposium on Foundations of Computer Science—FOCS 2016, IEEE Computer Soc., Los Alamitos, CA, 2016, pp.
386–395. MR 3631001 Peter Bürgisser, J. M. Landsberg, Laurent Manivel, and Jerzy Weyman, An overview of mathematical issues arising in the geometric complexity theory approach to $\textrm {VP}\neq \textrm {VNP}$, SIAM J. Comput. 40 (2011), no. 4, 1179–1209. MR 2861717, DOI 10.1137/090765328 Jin-Yi Cai, Xi Chen, and Dong Li, Quadratic lower bound for permanent vs. determinant in any characteristic, Comput. Complexity 19 (2010), no. 1, 37–56. MR 2601194, DOI 10.1007/s00037-009-0284-2 Christophe Carré and Jean-Yves Thibon, Plethysm and vertex operators, Adv. in Appl. Math. 13 (1992), no. 4, 390–403. MR 1190119, DOI 10.1016/0196-8858(92)90018-R Arthur Cayley, The collected mathematical papers. Volume 2, Cambridge Library Collection, Cambridge University Press, Cambridge, 2009. Reprint of the 1889 original. MR 2866114 Matthias Christandl, Brent Doran, and Michael Walter, Computing multiplicities of Lie group representations, 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science—FOCS 2012, IEEE Computer Soc., Los Alamitos, CA, 2012, pp. 639–648. MR 3186652 Igor Dolgachev, Lectures on invariant theory, London Mathematical Society Lecture Note Series, vol. 296, Cambridge University Press, Cambridge, 2003. MR 2004511, DOI 10.1017/CBO9780511615436 William Fulton and Joe Harris, Representation theory, Graduate Texts in Mathematics, vol. 129, Springer-Verlag, New York, 1991. A first course; Readings in Mathematics. MR 1153249, DOI 10.1007/978-1-4612-0979-9 Fulvio Gesmundo, Christian Ikenmeyer, and Greta Panova, Geometric complexity theory and matrix powering, Differential Geom. Appl. 55 (2017), 106–127. MR 3724215, DOI 10.1016/j.difgeo.2017.07.001 B. Grenet, An upper bound for the permanent versus determinant problem, Theory of Computing (to appear). Roger Howe, $(\textrm {GL}_n,\textrm {GL}_m)$-duality and symmetric plethysm, Proc. Indian Acad. Sci. Math. Sci. 97 (1987), no. 1-3, 85–109 (1988). MR 983608, DOI 10.1007/BF02837817 Jesko Hüttenhain and Pierre Lairez, The boundary of the orbit of the 3-by-3 determinant polynomial, C. R. Math. Acad. Sci. Paris 354 (2016), no. 9, 931–935 (English, with English and French summaries). MR 3535348, DOI 10.1016/j.crma.2016.07.002 C. Ikenmeyer, Geometric Complexity Theory, Tensor Rank, and Littlewood–Richardson Coefficients, PhD thesis, Institute of Mathematics, University of Paderborn, 2012. Jesko Hüttenhain and Christian Ikenmeyer, Binary determinantal complexity, Linear Algebra Appl. 504 (2016), 559–573. MR 3502551, DOI 10.1016/j.laa.2016.04.027 Christian Ikenmeyer, Ketan D. Mulmuley, and Michael Walter, On vanishing of Kronecker coefficients, Comput. Complexity 26 (2017), no. 4, 949–992. MR 3723353, DOI 10.1007/s00037-017-0158-y Christian Ikenmeyer and Greta Panova, Rectangular Kronecker coefficients and plethysms in geometric complexity theory, Adv. Math. 319 (2017), 40–66. MR 3695867, DOI 10.1016/j.aim.2017.08.024 Harlan Kadish and J. M. Landsberg, Padded polynomials, their cousins, and geometric complexity theory, Comm. Algebra 42 (2014), no. 5, 2171–2180. MR 3169697, DOI 10.1080/00927872.2012.758268 Shrawan Kumar, A study of the representations supported by the orbit closure of the determinant, Compos. Math. 151 (2015), no. 2, 292–312. MR 3314828, DOI 10.1112/S0010437X14007660 Joseph M. Landsberg, Laurent Manivel, and Nicolas Ressayre, Hypersurfaces with degenerate duals and the geometric complexity theory program, Comment. Math. Helv. 88 (2013), no. 2, 469–484. MR 3048194, DOI 10.4171/CMH/292 J. M. 
Landsberg and Zach Teitler, On the ranks and border ranks of symmetric tensors, Found. Comput. Math. 10 (2010), no. 3, 339–366. MR 2628829, DOI 10.1007/s10208-009-9055-3 M. Larsen and R. Pink, Determining representations from invariant dimensions, Invent. Math. 102 (1990), no. 2, 377–398. MR 1074479, DOI 10.1007/BF01233432 I. G. Macdonald, Symmetric functions and Hall polynomials, 2nd ed., Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, New York, 1995. With contributions by A. Zelevinsky; Oxford Science Publications. MR 1354144 Guillaume Malod and Natacha Portier, Characterizing Valiant's algebraic complexity classes, J. Complexity 24 (2008), no. 1, 16–38. MR 2386928, DOI 10.1016/j.jco.2006.09.006 L. Manivel, Gaussian maps and plethysm, Algebraic geometry (Catania, 1993/Barcelona, 1994) Lecture Notes in Pure and Appl. Math., vol. 200, Dekker, New York, 1998, pp. 91–117. MR 1651092 Thierry Mignon and Nicolas Ressayre, A quadratic bound for the determinant and permanent problem, Int. Math. Res. Not. 79 (2004), 4241–4253. MR 2126826, DOI 10.1155/S1073792804142566 Ketan D. Mulmuley, On P vs. NP and geometric complexity theory, J. ACM 58 (2011), no. 2, Art. 5, 26. MR 2786586, DOI 10.1145/1944345.1944346 Ketan D. Mulmuley and Milind Sohoni, Geometric complexity theory. I. An approach to the P vs. NP and related problems, SIAM J. Comput. 31 (2001), no. 2, 496–526. MR 1861288, DOI 10.1137/S009753970038715X Ketan D. Mulmuley and Milind Sohoni, Geometric complexity theory. II. Towards explicit obstructions for embeddings among class varieties, SIAM J. Comput. 38 (2008), no. 3, 1175–1206. MR 2421083, DOI 10.1137/080718115 David Mumford, Algebraic geometry. I, Classics in Mathematics, Springer-Verlag, Berlin, 1995. Complex projective varieties; Reprint of the 1976 edition. MR 1344216 F. D. Murnaghan, The Analysis of the Kronecker Product of Irreducible Representations of the Symmetric Group, Amer. J. Math. 60 (1938), no. 3, 761–784. MR 1507347, DOI 10.2307/2371610 D. G. Northcott, Multilinear algebra, Cambridge University Press, Cambridge, 1984. MR 773853, DOI 10.1017/CBO9780511565939 Richard P. Stanley, Positivity problems and conjectures in algebraic combinatorics, Mathematics: frontiers and perspectives, Amer. Math. Soc., Providence, RI, 2000, pp. 295–319. MR 1754784 S. Toda, Classes of arithmetic circuits capturing the complexity of the determinant, IEICE Trans. Info. and Sys., E75-D (1992), no. 1, 116–124. L. G. Valiant, Completeness classes in algebra, Conference Record of the Eleventh Annual ACM Symposium on Theory of Computing (Atlanta, Ga., 1979) ACM, New York, 1979, pp. 249–261. MR 564634 L. G. Valiant, The complexity of computing the permanent, Theoret. Comput. Sci. 8 (1979), no. 2, 189–201. MR 526203, DOI 10.1016/0304-3975(79)90044-6 L. G. Valiant, Reducibility by algebraic projections, Logic and algorithmic (Zurich, 1980) Monograph. Enseign. Math., vol. 30, Univ. Genève, Geneva, 1982, pp. 365–380. MR 648313 Steven H. Weintraub, Some observations on plethysms, J. Algebra 129 (1990), no. 1, 103–114. MR 1037395, DOI 10.1016/0021-8693(90)90241-F A. Yabe, Bi-polynomial rank and determinantal complexity, arXiv:1504.00151 (2015). 
Peter Bürgisser. Affiliation: Technische Universität Berlin, Berlin, Germany. Email: [email protected]
Christian Ikenmeyer. Affiliation: Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany. Email: [email protected]
Greta Panova. Affiliation: University of Pennsylvania, Philadelphia, Pennsylvania; and Institute for Advanced Study, Princeton, New Jersey. Email: [email protected]
Received by editor(s) in revised form: April 20, 2018
Published electronically: October 4, 2018
Additional Notes: The first author was partially supported by DFG grant BU 1371/3-2. The third author was partially supported by NSF grant DMS-1500834.
Journal: J. Amer. Math. Soc. 32 (2019), 163-193
MSC (2010): Primary 68Q17, 05E10, 14L24
DOI: https://doi.org/10.1090/jams/908
Greyhound racing ideal trajectory path generation for straight to bend based on jerk rate minimization
Md. Imam Hossain1, David Eager1 & Paul D. Walker1

This paper presents methods for modelling and designing an ideal path trajectory between straight and bend track path segments for racing greyhounds. To do this, we numerically generate clothoid and algebraic curve segments for racing quadrupeds using a sequential vector transformation method as well as using a helper equation for approaching ideal clothoid segments that would respect greyhound kinematic parameters and boundary conditions of the track. Further, we look into the limitations of using a clothoid curve for racing dog track path design and propose a smooth composite curve for track transition design which roughly maintains G3 curvature continuity for smooth jerk to overcome limitations of a clothoid transition. Finally, we show results from race data modelling and past injury data, which provide a strong indication of clothoid curve segments improving the dynamics and safety of racing greyhounds while reducing injuries.

In the greyhound racing sports industry, injuries to dogs are highly prevalent1. The sport has grown exponentially in recent years due to live wagering accessibility and various revenue sharing programs2. As a result, it has become evident that better track design is required to reduce the likelihood of racing greyhound injuries at the tracks. Observations3 confirmed that in greyhound racing congestion occurs at the entrance to the first bend. Also, researchers theorized that a smooth-running path is required for curved track design without which quadrupeds are more likely to lose coordination at specific transitions4. Similarly, it was shown that various track shapes have considerable effects on greyhound injury rates indicating track curvature influences5. When it comes to track shapes and smooth paths, transition curves are an essential part of path design in many areas such as road design and train track designs6. Similarly, transition curves help reduce disturbances in quadruped gait symmetry4.
This is because quadrupeds are subject to a centrifugal force which induces an outward pull on the curved track path, forcing quadrupeds to deviate from navigating the track path4. Theoretically, a transition curve would also assist navigation of the body around the curved path even if it is not sufficiently banked7. Clothoid transition curves are extensively found in road and rail track designs; for example, analysis shows that the Tokaido Shinkansen high-speed rail uses a 600 m clothoid transition in one of the 2.1 km radius bends to achieve a maximum travelling speed of 270 km/hr with minimal track path camber. Clothoid curves are essential for generating continuous curvature paths with straight and perfect arc segments8. This is achieved by linking constant curvature segments with clothoid segments8. For example, a clothoid can join a line and a circle with G2 curvature continuity, where both the tangent vector and curvature at the line-circle intersection are continuous9. The performance of clothoid and other transition curve trajectories can be effectively analyzed by looking into their curvature profiles. Curvature is an important factor in trajectory designs as it affects the maximum speed a vehicle can travel without skidding or whether the pilot of an aeroplane suffers blackout as a result of g-forces10. Also, a valid curve is one which respects upper bound curvature constraints set by the kinematic properties of moving bodies11. In this paper, we illustrate numerical methods to approach clothoid curves and other transition curves to model and generate smooth running paths for greyhound racing. We also show galloping greyhound trajectory performance, relating it to injury rates and track shapes. The paper is organized as follows. Sections one and two describe greyhound trajectory and trajectory dynamics, respectively. In sections three and four, clothoid transition generation and approaching ideal clothoid transitions for racing greyhounds are presented. Ideal transition curves developed for galloping greyhounds are presented in section five. Finally, section six evaluates racing greyhound trajectory performance for existing tracks.

Trajectory of a racing greyhound
In the greyhound racing industry, the trajectory of a racing greyhound is oftentimes overlooked in track designs and injury prevention measures despite its significance in dynamic outcomes for the animal. One key parameter which determines the trajectory of a racing greyhound is the curvature of its running path. Curvature, κ(s), is the change of heading relative to distance travelled8. Also, the curvature can be thought of as the inverse of the radius of curvature, which denotes the turning radius at any point in the path12. Furthermore, a related variable, sharpness α, is the change of curvature per distance travelled, which also forms the basis for constructing continuous curvature path trajectories8. While designing a path, curvature change must remain smooth throughout the trajectory of a moving object as the centrifugal acceleration experienced is directly proportional to the path curvature12. As a result, in trajectory generation for motion planning the smoothness of a trajectory is directly related to the smoothness of its curvature profile13. Likewise, for the path to be feasible, it must conform to continuous position, heading, as well as curvature at all points8.
Now, if the path of the trajectory is defined by a function y = f(x), then the radius of curvature ρ at any given point can be found from the following equation:14

$$\rho =\frac{{\left[1+{\left(\frac{dy}{dx}\right)}^{2}\right]}^{3/2}}{\left|\frac{{d}^{2}y}{d{x}^{2}}\right|}$$

Then, the curvature is,

$$\kappa =\frac{1}{\rho }$$

However, if the path of the trajectory cannot be translated into a continuous function, then any three adjacent data points lying on the path can be used to calculate the radius of curvature at any given point using the circumradius of a triangle formula15. The circumradius formula (3) provides the radius of the circumcircle of a triangle, which is inherently cyclic16,17. The triangle is defined by the adjacent data points lying on the path, as shown below in Fig. 1.

$$\rho =R=\frac{abc}{4A}$$

Where a, b, and c denote the three sides of a triangle defined by three adjacent data points on the path, and A is the area of the triangle.

Calculating an arbitrary path's instantaneous radius of curvature using data points lying on the path.

An ideal racing greyhound trajectory would involve looking into two major control factors: greyhound heading, which deals with the curvature and sharpness of the running path, and greyhound kinetics, which deals with the acceleration/deceleration of a greyhound.

Racing greyhound trajectory dynamics
The trajectory of a racing greyhound induces dynamic greyhound conditions such as centrifugal acceleration, centrifugal jerk, and greyhound heading yaw rate. It also influences racing greyhound states such as leaning, braking forces as a result of ground reaction force, centripetal force, stride frequency, and stride length. A sharp discontinuity in any of the dynamic conditions would result in a significantly unpredictable dynamic imbalance for a racing greyhound. During racing, such a situation would put a greyhound in a considerably uncontrollable position, on top of existing racing situations such as congestion and tight bends with variable track cross-falls along the width of the tracks. To design a trajectory for racing greyhounds which would meet the specific track design goals, the trajectory performance can be evaluated by looking into two dynamic factors of racing greyhounds, namely centrifugal jerk and yaw rate. These two factors are highly sensitive to the trajectory performance of an object in motion as both are related to the radius of curvature of the trajectory.

Modelling centrifugal jerk
Jerk is the rate of change of acceleration. Like centrifugal acceleration, the effect of jerk is also experienced in the body18. Essentially, jerk is the increasing or decreasing of the force in the body18. Eager et al.18 explain the use of jerk as a measure of safety in various disciplines including mechanical engineering and civil engineering as well as its application in greyhound racing. Lower jerk values are essential as they indicate that the change in centrifugal acceleration is minimal for a greyhound while it is navigating its trajectory. For humans, maximum acceleration changes and the corresponding time durations over which these changes occur have been derived for roller coaster rides18. No such derivations exist for racing quadrupeds yet. As a result, modelling of the centrifugal jerk for racing animals becomes an essential part of optimum trajectory generation. The first step to centrifugal jerk analysis is finding the instantaneous radius of the trajectory or calculating the radius of curvature at all points in the trajectory path.
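As a rough illustration of this procedure, the following Python sketch (not the authors' implementation; the function names, the sample coordinates and the constant 19.5 m/s speed are illustrative assumptions) estimates the radius of curvature at each interior stride point with the circumradius formula (3), converts it to centripetal acceleration, and differences the result to obtain a jerk profile.

# Minimal sketch: radius of curvature from three adjacent stride points via the
# circumradius formula (3), then centripetal acceleration and finite-difference jerk.
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three (x, y) points; returns inf if collinear."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = 0.5 * (a + b + c)                      # semi-perimeter
    area_sq = s * (s - a) * (s - b) * (s - c)  # Heron's formula (squared area)
    if area_sq <= 0.0:
        return math.inf                        # collinear points: zero curvature
    return (a * b * c) / (4.0 * math.sqrt(area_sq))

def centripetal_profile(points, speeds):
    """Centripetal acceleration v^2 / rho at each interior point of the path."""
    return [speeds[i] ** 2 / circumradius(points[i - 1], points[i], points[i + 1])
            for i in range(1, len(points) - 1)]

def jerk_profile(acc, dt):
    """Finite-difference jerk between successive acceleration samples."""
    return [(acc[i + 1] - acc[i]) / dt for i in range(len(acc) - 1)]

# Example: strides roughly 5 m apart at an assumed constant 19.5 m/s (dt = s/v).
points = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.1), (14.9, 0.5), (19.6, 1.2)]
speeds = [19.5] * len(points)
acc = centripetal_profile(points, speeds)
print(jerk_profile(acc, dt=5.0 / 19.5))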
For cars and trains, the instantaneous radius of curvature is found by using geometric primitives and splines and approximated using continuous functions, as their respective heading change is continuous. However, for greyhounds, the heading change by the greyhound is not continuous and is expected to occur at every stride. Furthermore, greyhounds are known to have a stride frequency greater than 3 Hz19. This implies a greyhound would change its heading, if required, more than three times a second, where the magnitude of each heading change could vary from stride to stride. Therefore, we can gather all the location coordinate data for strides of a single racing greyhound and calculate the instantaneous radius of curvature ρ of the racing greyhound using either the circumradius formula (3) or the perpendicular bisectors method. Then, we can calculate the racing greyhound's instantaneous centrifugal acceleration from the instantaneous speed and radius of curvature. Finally, the instantaneous jerk is derived from the rate of change in the centrifugal acceleration.

Modelling yaw rate
The yaw rate is the rate of change of heading or turning. It relates a racing greyhound's angular displacement to its forward speed. It also provides an indication of the stability of the path a racing greyhound is taking. For example, it was shown from the race kinematic simulation and race data that racing greyhounds' yaw rate is not smooth immediately after jumping out from the starting boxes20. For a constant radius curve path, the yaw rate is simply the speed divided by the radius of curvature (4), which is used for calculating a vehicle's momentary radius of turn. For a racing greyhound trajectory, the yaw rate can be directly related to the sum of the lateral forces. A lower yaw rate would indicate lower lateral forces such as centrifugal force and frictional force acting on a greyhound. To maintain a smooth trajectory, a racing greyhound needs to maintain a smooth yaw rate. However, since both the speed of the racing greyhound and the lateral frictional forces from the traction ground vary over time, maintaining a smooth yaw rate would also require careful balancing of these two factors when designing tracks to facilitate a smooth trajectory for a racing greyhound.

$$\dot{\psi }=\frac{s}{\rho }$$

Clothoid track segments for deriving natural racing greyhound trajectory
The clothoid segment is a curve known for its curvature being proportional to its length21. This property of the clothoid is useful as it allows the gradual development of centrifugal acceleration or can act as a centrifugal acceleration easement, which significantly reduces the risk of accidents occurring12. Recent research shows that there are different types of curves already developed which can be used as centrifugal acceleration easement curves12. For example, quintic polynomial and B-spline functions are computationally less expensive and also able to provide curvature continuity for curve design13. However, the drawbacks of these functions are complex curvature profiles which are hard to follow as they are not necessarily smooth13. This is where clothoids are useful, as their curvature profile is a straight line, making them easy to follow13. Furthermore, clothoids are characterized by a linear curvature and allow curvature variation to be minimized, and piecewise clothoids exhibit excellent smoothness properties22.
For these fundamental reasons, clothoids are currently found extensively in road design and robot path planning to achieve smooth transitions in the trajectories22. We found that clothoids are essential at the race track not only for developing smooth path trajectories but also for reducing the likelihood of certain types of race dynamics hazards. From the race videos, it was noted that a greyhound is more likely to change lanes to a higher radius upon entering the first bend. This could be due to the track bend lacking an adequate transition to accommodate the greyhound's natural instantaneous yaw rate change and leaning rate change limits. As a result, the prospect of the greyhound bumping into another nearby greyhound increases significantly. This specific race dynamic outcome can be reduced or nearly eliminated if the track path has clothoid segments which match natural greyhound heading turning rate change limits.

Generating clothoid segments for track path design
There are many methods available for computing the clothoid. Most methods involve approximations to the clothoid21. For example, it can be approximated by high degree polynomial curves23, such as an S-power series24, as well as by an arc spline9. Also, continued fractions and rational functions are commonly used for approximations9. A more recent development in the spline primitives found in much computer-aided design software makes it easy to approximate a clothoid while respecting boundary conditions such as curvature and tangent continuity. Also, spline primitives are known for good and fast controllability with positional and tangential constraints, making them ideal for various applications22. Each of the methods available results in different degrees of accuracy and may not be suitable for efficient greyhound track path design purposes. This is mainly due to less controllability in generating a clothoid according to greyhound kinematics. Moreover, to accommodate the clothoid segment into the path design, a coordinate-respecting system must be incorporated or derived from the existing clothoid methods which respects different design boundary conditions.

Computing clothoid curves using existing methods
The most common method of computing a clothoid can be found in its definition in terms of Fresnel integrals24, where it is computed using the Fresnel sine and cosine functions, as shown in Eqs. (5a) and (5b), together with some form of Taylor series expansion of the functions, which converges for an independent variable8. Series expansion functions are extensively used because the clothoid defining formulas are transcendental functions21. The parametric plot of the Fresnel sine and cosine functions provides coordinate values of the clothoid curve. However, this does not respect any form of unit scaling or boundary conditions, nor does it allow computing the clothoid for a specific rate of change of curvature, sharpness or smoothing applications. Similarly, Eqs. (6a) and (6b) give an approximation of the Fresnel sine and cosine functions which converges for all independent variables x. Another common method involves utilizing auxiliary functions8, as shown in Eqs. (7a) and (7b).
$$S(x)={\int }_{0}^{x}\sin ({t}^{2})\,dt$$

$$C(x)={\int }_{0}^{x}\cos ({t}^{2})\,dt$$

$$S(x)={\int }_{0}^{x}\sin ({t}^{2})\,dt=\mathop{\sum }\limits_{n=0}^{\infty }{(-1)}^{n}\frac{{x}^{4n+3}}{(2n+1)!\,(4n+3)}$$

$$C(x)={\int }_{0}^{x}\cos ({t}^{2})\,dt=\mathop{\sum }\limits_{n=0}^{\infty }{(-1)}^{n}\frac{{x}^{4n+1}}{(2n)!\,(4n+1)}$$

Equations (5a) and (5b) can then be written in the auxiliary function form, as shown below8:

$$C(x)=\frac{1}{2}+f(x)\sin \left(\frac{\pi }{2}{x}^{2}\right)-g(x)\cos \left(\frac{\pi }{2}{x}^{2}\right)$$

$$S(x)=\frac{1}{2}-f(x)\cos \left(\frac{\pi }{2}{x}^{2}\right)-g(x)\sin \left(\frac{\pi }{2}{x}^{2}\right)$$

Where the auxiliary functions f and g are defined as:

$$f(x)=\left(\frac{1}{2}-S(x)\right)\cos \left(\frac{\pi }{2}{x}^{2}\right)-\left(\frac{1}{2}-C(x)\right)\sin \left(\frac{\pi }{2}{x}^{2}\right)$$

$$g(x)=\left(\frac{1}{2}-C(x)\right)\cos \left(\frac{\pi }{2}{x}^{2}\right)+\left(\frac{1}{2}-S(x)\right)\sin \left(\frac{\pi }{2}{x}^{2}\right)$$

Likewise, for the auxiliary function definition of the clothoid, a good rational approximation to compute the clothoid uses the following auxiliary functions8.

$$f(x)=\frac{1+0.926x}{2+1.792x+3.104{x}^{2}}$$

$$g(x)=\frac{1}{2+4.142x+3.492{x}^{2}+6.670{x}^{3}}$$

Moreover, researchers have recently developed more efficient numerical methods, one such method being arc length parameterisation12. While analytical methods lack parameterisation for different application case scenarios, researchers are becoming more reliant on developing numerical techniques for computing the clothoids.

A numerical approach for generating the clothoid curve transitions for racing greyhounds and other quadrupeds
It is evident that existing methods lack greyhound kinematic parameterisation for racing greyhound transition design purposes. A numerical method is generally preferred as a first approach for incorporating different parametrisation into the clothoid curves. To develop a numerical technique for the clothoid which incorporates greyhound kinematic variables, we looked into the characteristics of the mathematical model of the clothoid curve. A clothoid curve transition accomplishes a gradual transition from the straight to the circular curve of constant radius, where the curvature changes from zero to a finite value. As a result, the tangent vector ti, which lies on the clothoid curve, also gradually rotates from zero to a finite angle (Fig. 2). Furthermore, let us assume a greyhound changes its heading with every stride, as noted from the race data and the galloping gait of a greyhound. With these two crucial pieces of information relating to the clothoid curve tangent vector and the greyhound heading step-change length, we can apply vector transformation to generate a clothoid curve positional vector Pi (Fig. 2). Now, we define the clothoid tangent vector as a function of a constant, the transition segment length (taken as the greyhound stride length), and a variable, the transition deflection angle. The transition deflection angle ai defines the local rotation of the clothoid curve tangent vector at a specific transition segment location i relative to the horizontal axis. Moreover, as a clothoid curve transition would gradually increase its curvature with constant curvature acceleration, the transition deflection angle ai is a function of the transition deflection angle acceleration constant.
The transition deflection angle acceleration d defines the rate of change of curvature per transition segment length of the clothoid curve, which essentially tells us how quickly the clothoid tangent vector rotation is accelerating. Finally, once the transition deflection angle is calculated for the local ith transition segment, the clothoid curve positional vector can be calculated as shown in Fig. 2 and Eq. (11). To generate the entire clothoid curve for the number of transition segments specified by the constant n, the process of translating and then rotating the clothoid tangent vector is iterated to get the clothoid positional vectors for all the transition segments. For example, Fig. 3 shows a clothoid curve generated using this method when the transition segment length s equals 1 m, the number of transition segments n equals 250 and the transition deflection angle acceleration d is 0.02 degrees.

Racing greyhound clothoid path generation using numerical method parameterization.
A clothoid curve with curvature combs containing 250 single meter segments and with a turning acceleration of 0.02 degrees per segment.

d = transition deflection angle acceleration
ai = transition deflection angle relative to horizontal axis
s = transition segment length
n = number of transition segments
ti = transition tangent vector
i = transition segment number

$${t}_{i}=f(s,{a}_{i})=\begin{bmatrix}\cos ({a}_{i})\times s\\ \sin ({a}_{i})\times s\end{bmatrix}$$

$${P}_{i}=f({t}_{i},{P}_{i-1})={P}_{i-1}+{t}_{i}$$

$${a}_{i}=\mathop{\sum }\limits_{k=1}^{i}d\times k=1\times d+2\times d+3\times d+\ldots +i\times d$$

$$d\times i\propto \kappa $$

Where κ denotes the curvature of the clothoid curve. Now, for instance, using the numerical method explained above to generate a clothoid curve transition for racing greyhounds with a transition exit radius of approximately 52 m and a total transition length of 45 m, we would have to consider the d constant to be 0.69 degrees per transition segment, the s constant to be 5 m (assuming that the average stride length of a greyhound is 5 m) and the n constant to be 9. The curvature and jerk results of this clothoid transition curve for racing greyhounds are shown in Fig. 4. The numerical calculation of ai and Pi is shown in Table 1.

A clothoid curve transition for racing greyhounds with a total 45 m transition length having an approximately 52 m turning radius at the end of the transition.
Table 1 Numerically calculated values of ai and Pi variables for a clothoid curve.

Using this numerical method approach, we showed how an optimized clothoid curve transition could be determined numerically by tweaking curve generating factors. The controlling of initial values as set by d, s, and n allows generating any combination of clothoid curves as required for different kinematic path design goals.

Designing ideal clothoid segments for racing greyhounds and other quadrupeds
When designing clothoid segments, it is essential that the greyhound heading is not changing at the maximum performable rate, since such a heading would put a greyhound into limit-state turning while maintaining a high speed. An ideal clothoid segment would have continuous curvature to allow a greyhound to navigate the path with a minimal amount of veering effort. In the next section, we derive a helper equation which can be used for specifying ideal clothoid transitions as well as for modelling dynamics for racing greyhounds at the tracks.
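The iteration above is straightforward to script. The following Python sketch (an illustration rather than the authors' code; function and variable names are assumed) implements the translate-then-rotate recursion for Pi and reproduces the 45 m, nine-segment example with d = 0.69 degrees per segment and s = 5 m.

# Minimal sketch of the sequential vector transformation clothoid generator.
# a_i accumulates in steps of i*d (degrees), per the summation above, and each
# segment advances the position by a length-s tangent vector t_i.
import math

def clothoid_points(d_deg, s, n, start=(0.0, 0.0)):
    """Return the n+1 positional vectors P_0..P_n of a clothoid transition."""
    points = [start]
    heading = 0.0                      # accumulated deflection angle a_i, in degrees
    for i in range(1, n + 1):
        heading += i * d_deg           # a_i = a_{i-1} + i*d
        a = math.radians(heading)
        x, y = points[-1]
        points.append((x + s * math.cos(a), y + s * math.sin(a)))  # P_i = P_{i-1} + t_i
    return points

# 45 m transition: d = 0.69 deg/segment, s = 5 m segments (one stride), n = 9.
for i, p in enumerate(clothoid_points(0.69, 5.0, 9)):
    print(i, round(p[0], 2), round(p[1], 2))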
Deriving an equation for exact clothoid requirements for racing greyhounds and other quadrupeds
Using Equation (3), we can produce a relationship between greyhound kinematics, such as the heading turning angle acceleration, and the turning radius at the end of a natural clothoid transition. First, let's assume that over a clothoid transition a racing greyhound takes ns strides with a constant stride length of s meters. Now, if the total clothoid transition length is T meters, then the number of greyhound strides ns in a transition is given by Eq. (13). Again, since the length of the greyhound's strides remains unchanged in the clothoid transition, the greyhound's turning angle a in the last stride of the transition can be defined by Eq. (14) if the greyhound heading turning angle is accelerating with d degrees per stride. Now, to calculate a greyhound's heading radius of turn R near the end of the clothoid transition using Eq. (3), we use Heron's formula (17) to calculate the area A of the triangle formed by the last two greyhound strides s1 and s2. Furthermore, using the cosine rule we calculate the unknown side s of the triangle formed by the last two greyhound strides s1 and s2. Finally, by plugging in these values for R and simplifying the equation, we reach a final equation form (18) which defines a racing greyhound's turning radius R at the end of the clothoid transition in terms of the transition length T, the greyhound heading turning acceleration d and the constant greyhound stride length s. Consequently, Eq. (18) relates greyhound heading turning parameters to clothoid transition parameters, which is useful for modelling and designing ideal clothoid transitions for racing greyhounds. In the next section, we show some of the design and modelling of the clothoid transitions using Eq. (18).

d = transition deflection angle acceleration (per stride)
a = deflection angle of greyhound heading for the last greyhound stride
ns = total number of greyhound strides in the transition
s = length of a single stride
R = transition last stride turn radius
T = transition length

$$ns=\frac{T}{s}$$

$$a=d\times (ns-1)$$

$$s=\sqrt{{s}_{1}^{2}+{s}_{2}^{2}-2{s}_{1}{s}_{2}\cos (a-180)}$$

Where s1 and s2 are a racing greyhound's last two strides in the transition.

$$p=\frac{{s}_{1}+{s}_{2}+s}{2}$$

Where p is the semi-perimeter of the inscribed triangle (Fig. 1) in the circle formed by a racing greyhound's last two strides s1 and s2.

$$A=\sqrt{p(p-{s}_{1})(p-{s}_{2})(p-s)}$$

$$R=\frac{\sqrt{2}{s}^{2}\,\sqrt{2{s}^{2}\,\cos \left(\frac{\pi d\left(\frac{T}{s}-1\right)}{180}\right)+2{s}^{2}}}{2\sqrt{-{s}^{4}\left(\cos \left(\frac{\pi d\left(\frac{T}{s}-1\right)}{90}\right)-1\right)}}$$

Clothoid design for constant radius bend
Every track has a bend radius requirement as calculated from the physical infrastructure and design goals. If a track requires a 52 m radius bend at the end of the transition, then using Eq. (18), we find the expected greyhound kinematics and transition design possibilities shown in Table 2. It should be noted that there could be a large number of design outcomes for a single-parameter design, such as a design for a specific bend radius. The greyhound yaw rate at the entrance is simply the greyhound angular displacement per stride times the greyhound stride frequency. Also, in generating the following results, racing greyhound speed was assumed to be 19.5 m/s and stride frequency to be 3.5 Hz.
Table 2 Clothoid transition options for 52 m radius bend.
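As an illustration of how Eq. (18) can drive the design process, the sketch below (assumed, illustrative code, not from the paper) evaluates R for given T, d and s, checks the 45 m, 52 m example quoted above, and scans candidate values of d for a target bend radius in the spirit of Table 2 (the candidate transition lengths are arbitrary example values). Note that Eq. (18) simplifies algebraically to R = s / (2 sin(a/2)) with a expressed in radians, which is used here as a cross-check.

# Illustrative helpers around Eq. (18): end-of-transition turning radius R for a
# transition of length T, stride length s and turning acceleration d (deg/stride).
import math

def end_radius_eq18(T, d_deg, s):
    """Literal transcription of Eq. (18)."""
    a = math.radians(d_deg * (T / s - 1.0))
    num = math.sqrt(2.0) * s**2 * math.sqrt(2.0 * s**2 * math.cos(a) + 2.0 * s**2)
    den = 2.0 * math.sqrt(-(s**4) * (math.cos(2.0 * a) - 1.0))
    return num / den

def end_radius(T, d_deg, s):
    """Compact equivalent of Eq. (18): R = s / (2*sin(a/2)), a = d*(T/s - 1) in radians."""
    a = math.radians(d_deg * (T / s - 1.0))
    return s / (2.0 * math.sin(0.5 * a))

def design_for_bend(R_target, s=5.0, lengths=(30.0, 45.0, 60.0, 75.0)):
    """For each candidate transition length, find the d that gives the target end radius."""
    options = []
    for T in lengths:
        ns = T / s
        a = 2.0 * math.asin(s / (2.0 * R_target))   # required last-stride deflection (rad)
        d = math.degrees(a) / (ns - 1.0)            # invert a = d*(ns - 1)
        options.append((T, round(d, 3)))
    return options

print(round(end_radius_eq18(45.0, 0.69, 5.0), 1), round(end_radius(45.0, 0.69, 5.0), 1))  # both ~52 m
print(design_for_bend(52.0))   # candidate (T, d) pairs for a 52 m bend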
As can be seen from Table 2, each of the clothoid transition possibilities can be applied at different locations at the track based on the race requirements. For instance, the clothoid transition Design No. 3 can be applied at the home turn bend exit, since the greyhound speed and stride length would be much lower, making it possible for a greyhound to adapt to higher yaw rate and angular displacement acceleration path navigation.

Clothoid design for greyhound angular displacement rate change limits
For racing greyhounds known to have certain angular displacement rate change limits based on greyhound training and health background histories, using Eq. (18) we can enumerate possible clothoid design options. For example, if the expected racing greyhounds have a maximum angular displacement rate change limit of 0.5 deg/stride², then we can consider the clothoid transition design options shown in Table 3.
Table 3 Clothoid transition options for racing greyhound accelerating with a maximum angular heading turning of 0.5 degrees per stride².
As can be seen from Table 3, using greyhound angular displacement rate change as a design constraint exhibits more diverse clothoid transitions in terms of transition length and transition exit bend radius. Design No. 1 shows that it is possible to have a short transition for a larger radius bend. Likewise, Design No. 4 portrays a long transition for a smaller radius bend. As a result, the angular displacement rate based design approach provides excellent freedom in choosing clothoid transitions based on track requirements.

Modelling of racing greyhound jerk dynamics
It is possible to calculate the jerk exhibited by clothoid transitions using Eq. (18). Since a clothoid has uniform curvature acceleration, the jerk produced by a clothoid remains the same for the entire length of the transition. So, we can find the jerk value at any arbitrary location in a clothoid transition to find the overall jerk for the transition. For example, if we are interested in the jerk at the end of a clothoid transition, first we would calculate the radius value R for both T and T-s for the transition. Then, we would calculate the corresponding centrifugal acceleration values. Finally, since the jerk is the change in centrifugal acceleration over time, we simply divide the difference of centrifugal accelerations by the time taken by one stride. Table 4 presents some example calculations of racing greyhound jerk values for various clothoid transition designs, considering the instantaneous greyhound speed to be 19.5 m/s:
Table 4 Clothoid transitions racing greyhound's jerk modelling using Eq. (18).

An approach to designing ideal transitions for racing greyhounds
As can be seen from Fig. 4, it was found that racing greyhound clothoid transition curves have a significant flaw. Although the development of the curvature is gradual, as can be seen from the curvature plot of Fig. 4, the jerk profile is not smooth and almost jumps instantaneously from zero to a higher value (Fig. 4). This is important, as such a dramatic change of jerk would impose a high energy release in a short time, resulting in considerably unstable conditions for greyhounds navigating in and out of the transitions. Furthermore, the clothoid curve generation for racing greyhounds using the numerical method above showed that regardless of transition curve length, jerk goes through a step change within one transition segment or one racing greyhound stride.
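A quick sketch of this jerk calculation (again illustrative code with assumed names; the speed and the particular T, d and s values are example assumptions) follows the recipe stated above: evaluate the end radius for transition lengths T and T - s, convert both to centripetal acceleration at 19.5 m/s, and divide the difference by the duration of one stride.

# Illustrative jerk estimate for a clothoid transition, per the procedure in the text.
import math

def end_radius(T, d_deg, s):
    """Eq. (18) end-of-transition radius, in its compact form R = s/(2*sin(a/2))."""
    a = math.radians(d_deg * (T / s - 1.0))
    return s / (2.0 * math.sin(0.5 * a))

def clothoid_jerk(T, d_deg, s, v):
    """Change in centripetal acceleration over one stride, divided by stride time."""
    acc_end = v**2 / end_radius(T, d_deg, s)
    acc_prev = v**2 / end_radius(T - s, d_deg, s)
    stride_time = s / v
    return (acc_end - acc_prev) / stride_time

print(round(clothoid_jerk(T=45.0, d_deg=0.69, s=5.0, v=19.5), 2))  # jerk in m/s^3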
Consequently, a clothoid curve transition was deemed not to be an ideal fit for racing greyhound track path designs. The clothoid transition curve does not maintain a smooth jerk initiation for a racing greyhound. Hence the curve can only be considered G2 continuous, with matching curvature at the entrance and exit of the transition curve. This imposes several disadvantages in racing greyhound race dynamics at the tracks. For example, we can break down the disadvantages into two main categories, namely clustering related problems and path smoothing, where each is entangled with the other. The clustering of racing greyhounds is a common issue during races. This happens mainly due to single lure convergence as a result of the number of following galloping greyhounds. A tight convergence of the racing greyhound pack is noticeable at race tracks in the locations where the track path curvature change is sudden and abrupt. As clustering is a precursor to various dynamically unstable conditions, such as the bumping of one greyhound by another, maintaining a smooth path profile, such as G3 curvature continuity, where the clustering occurs becomes vital. As greyhounds follow the racing lure, they occupy different lanes such that they have different path radii and tend to cut corners, forming various individual transitions into the bend which are all unique. A G2 curvature continuity, as found in the clothoid transitions where the rate of change of the jerk is not smooth, would induce all the racing greyhounds following the lure to follow one unique transition into the bend to keep instantaneous jerk to the minimum. This is not feasible. To overcome the limitations of clothoid transitions, we applied the numerical method of generating clothoid curves discussed in the previous section to develop moderate G3 curvature continuity transition curves for racing greyhounds. Also, two different transition curve configurations were selected for generating the curves as these configurations best match the many current tracks found in Australia in terms of real estate requirements. The configurations are a 45 m transition with a transition end radius of 52 m and a 75 m transition with a transition end radius of 70 m. First, we assume ai = X and plot for different X expressions to derive different curves, where the curvature results for the curves are shown in Figs. 5 and 6. The X expression defines the nature of the curvature function as the curve length increases from the origin. As seen from the plots (Figs. 5 and 6), when the X expression is linear it is a clothoid transition where the jerk is initiated immediately within one transition segment for both 45 m and 75 m transition configurations. To get G3 curvature continuity curves, we tried X^0.6, X^1.5, X^2, and ((1.2)^X − 1) expressions. As can be seen from the plots, all the curves except the clothoid curve X and the X^0.6 curve maintain a moderate G3 curvature continuity with a smooth jerk profile. However, as the X expressions are in power and exponential function form for X^0.6, X^1.5, X^2, and ((1.2)^X − 1), these curves result in higher jerk in the second half of the transition. This suggested that X^1.5, X^2, and ((1.2)^X − 1) curves could be used to develop a G3 curvature continuity transition curve for racing greyhounds if the jerk could be maintained in the second half of the transition. Thus, we decided to use these curves as auxiliary curves which would provide smooth jerk initiation for the transition.
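For a rough numerical comparison of these candidate expressions, the sketch below is purely illustrative and rests on one possible reading of the construction, namely that the per-stride deflection increment (and hence the curvature) follows the chosen expression of the segment index; the normalisation and all constants are assumptions chosen so each profile reaches the same final increment as the 0.69 deg/stride clothoid over nine 5 m segments. The printed per-stride jerk values include the entry step from the preceding straight, which is where the differing jerk initiation behaviour shows up.

# Illustrative comparison of candidate transition-curve shapes (assumed reading of
# "a_i = X": the per-stride deflection increment follows the chosen expression of i).
import math

def deflection_increments(shape, n, final_increment_deg):
    """Per-stride deflection increments, scaled so the last one matches a target."""
    raw = [shape(i) for i in range(1, n + 1)]
    scale = final_increment_deg / raw[-1]
    return [r * scale for r in raw]

def per_stride_jerk(increments, s, v):
    """Finite-difference jerk of centripetal acceleration; the first entry is the step
    from the straight (zero curvature) into the transition."""
    acc = [0.0] + [v**2 * math.radians(inc) / s for inc in increments]  # v^2 * kappa
    return [(acc[i + 1] - acc[i]) / (s / v) for i in range(len(acc) - 1)]

n, s, v, last_inc = 9, 5.0, 19.5, 0.69 * 9   # clothoid reaches 6.21 deg on stride 9
shapes = {
    "X (clothoid)": lambda i: i,
    "X^1.5": lambda i: i**1.5,
    "X^2": lambda i: i**2,
    "1.2^X - 1": lambda i: 1.2**i - 1.0,
}
for name, shape in shapes.items():
    inc = deflection_increments(shape, n, last_inc)
    print(name, [round(j, 2) for j in per_stride_jerk(inc, s, v)])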
However, compared to the other curves, the overall jerk and smoothness performance of the X^1.5 curve is optimum.

Different smooth curves curvature and jerk results as 45 m transition curves for greyhound racing.

Here, we generate composite transition curves with various degrees of G3 curvature continuity for racing greyhound ideal path design. Each composite transition curve generated combines the X^1.5 curve as an auxiliary curve and a clothoid curve as the main curve. So, the overall transition curve generating function can be considered as a piecewise function, shown in Eq. (19), where the auxiliary curve function g is applicable until the q transition segment is reached.

$$f(x)=\begin{cases}g(x) & \text{if }\,x < q\\ z(x) & \text{otherwise}\end{cases}$$

$$x=y(d,i)$$

Figure 7 shows curvature and jerk results for four different composite curves as ideal transitions for racing greyhounds, plotted using the numerical method explained in the earlier section; the configurations for these composite curves are given in Table 5.

Four different straight to bend curvature graphs and jerk results for ideal racing greyhound transition curves.
Table 5 Kinematic and shape properties for four straight to bend composite curve transitions.

As can be seen from Fig. 7, composite transition curves have strong advantages over a pure clothoid transition in terms of curvature and jerk continuities and excellent moderate G3 continuity for the first half of the transition. The overall instantaneous jerk is significantly lower in the composite curve transitions compared to clothoid transitions. This is because the window of jerk initiation is much longer in composite curve transitions due to the gradual development of jerk; on average it is four transition segments, or four greyhound strides, compared to just one stride in the clothoid transitions.

Greyhound racing data results
A racing greyhound getting injured at the tracks provides an indication of its overall racing trajectory performance. Also, we can analyze the trajectory of a racing greyhound at the tracks to measure track path performance. Below, we present two such case scenarios by analyzing racing greyhound track data and injury rates.

Race injury data results for track path renovation
In greyhound racing track path design, it was found that only circular arcs (constant curvature) and lines (zero curvature) were used extensively, despite non-continuous curvature resulting at the segment intersections25. A discontinuity in the curvature implies that a greyhound must change its heading instantaneously and abruptly, resulting in a path which is not feasible8. Also, track survey data from Australia shows that a brief transition is applied, made of an arc spline consisting of one or more circular arcs joined with continuous tangent vectors. This particular design practice also leads to multiple discontinuities in track path curvature. We looked into one particular greyhound racing track (Track A) located in Australia and its two years of racing history. In the first year, it had a track path design with G1 continuity constituting half-circle bends and straights (Fig. 8). In the second racing year, the track was renovated with clothoid curve transitions into and out of the constant radius bends (Fig. 9). A 40 m clothoid transition was adjoined between a straight and a constant bend section for four bend and straight intersections.
The outcome of this clothoid transition incorporation into the original track path design eases the centrifugal acceleration effect on the greyhounds, with the centrifugal force raised gradually from zero to an approximate nominal 240 N (Fig. 9). The renovation at Track A definitely would have changed centrifugal jerk performance significantly, as the clothoid curve joining straights and bends would maintain G2 curvature continuity for the track path. To see whether this resulted in a significant decrease in racing injury rates, injury data for a two-year period were analyzed, containing one year of injury data before and one year after the renovation. By assuming that differences between the years in other contributing factors to injury rates, such as variations in weather, track maintenance conditions, greyhound breeds and training patterns, and race operating conditions, were minimal, the injury rates should show general trends due to the track path renovation changes. We found that before the clothoid intervention at Track A the normalized catastrophic and major injury rate per 1000 race starts was about 4.58, whereas after the clothoid intervention it was reduced to 4.22, a 7.9% reduction in this category of injury rates. Typically, this category of injury results from significant physical damage to the greyhound. However, when we took into account all types of injuries at Track A before and after the renovation, the normalized injury rates per 1000 race starts reduced to 26.71 from 44.68 injuries, a 40.2% reduction in overall injuries due to the clothoid implementation at the track. Furthermore, across all injury types, the most commonly occurring injuries happen in the greyhound forelegs, which assist turning during the dog's navigation. Metacarpal fractures and tibial fractures due to torsional stress occurring in the forelegs indicate navigational work stress on the greyhounds.

Track path curvature as shown by curvature combs for Track A with G1 curvature continuity for bends.

Curvature of racing greyhound trajectory
Like any path-following object, a racing greyhound has limitations of the radius of curvature, or extrema of curvature, for its running path. Also, when a racing greyhound runs following a track path which has curvature discontinuity or non-optimal transitions, a deviation in the greyhound's position occurs from the projected track path trajectory. This phenomenon was observed in the greyhound location data in the races. Furthermore, numerical racing greyhound simulations confirmed that when a greyhound is following the line of sight of a lure, its yaw rate gradually builds up for the bend for a track shape which is less circular5. To see if there is any difference between the racing greyhound path trajectory and the track path, we analyzed racing greyhound location tracking data for a track whose transition length is not optimal for reducing the jerk magnitude. From the racing greyhound location tracking data and track survey data, we generated curvature results for both the racing greyhound trajectory and the track path (Fig. 10). The greyhound location data for all greyhound starts were averaged, with ten races, or eighty starts, considered in plotting the results below. As can be seen from the curvature plot, there is a significant difference between the racing greyhound trajectory and the track path. This indicates racing greyhounds deviating from the track path to accommodate a more natural trajectory according to their physics.
Also, it was observed from the analysis that the transitions occurring in the racing greyhound trajectory are relatively gradual and longer, as indicated by the green dashed marker, compared to the black dashed marker for the track path.

Track path and greyhound trajectory curvature comparison.

This paper presents a numerical method for generating racing greyhound clothoid transitions for track path designs along with an equation for modelling any kind of clothoid curves. The numerical technique is robust and can be algorithmically controlled to achieve defined goals compared to existing approaches for designing racing greyhound clothoid transitions. Moreover, it can be extended to function as a generator of other curves rather than just clothoid curves. By looking into jerk modelling data, an ideal transition curve is presented suitable for racing greyhound track path designs which overcomes limitations set by clothoid transitions. The effect of clothoid transitions in an existing track was verified by measuring injury rates over a two-year period. The trajectory of racing greyhounds in an existing track with inadequate transitions was analyzed to show non-optimum track path conditions. Finally, this paper showed evidence through modelling and injury data that clothoid and other composite curves improve racing dynamics safety for racing greyhounds. Furthermore, the methods presented here can also be used in designing and modelling trajectories for other moving bodies, including but not limited to horses, vehicles and trains.

Sicard, G. K., Short, K. & Manley, P. A. A survey of injuries at five greyhound racing tracks. The Journal of small animal practice 40, 428–432, https://doi.org/10.1111/j.1748-5827.1999.tb03117.x (1999). Beer, L. M. A study of injuries in Victorian racing greyhounds 2006–2011. (2014). Hayati, H., Eager, D., Stevenson, R., Brown, T. & E., A. The impact of track related parameters on catastrophic injury rate of racing greyhounds in 9th Australian Congress on Applied Mechanics ACAM9 27–29 (Engineers Australia, Sydney, Australia 2017). Fredricson, I., Dalin, G., Drevemo, S., Hjertén, G. & Alm, L. O. A Biotechnical Approach to the Geometric Design of Racetracks. Equine Veterinary Journal 7, 91–96, https://doi.org/10.1111/j.2042-3306.1975.tb03240.x (1975). Mahdavi, F., Hossain, M. I., Hayati, H., Eager, D. & Kennedy, P. Track Shape, Resulting Dynamics and Injury Rates of Greyhounds in ASME 2018 International Mechanical Engineering Congress and Exposition. Mathew, T. V. & Rao, K. K. Introduction to Transportation engineering. (2006). Stubbs, A. K. Racetrack Design and Performance. (RIRDC, 2004). Wilde, D. K. Computing clothoid segments for trajectory generation in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2440–2445. Meek, D. S. & Walton, D. J. An arc spline approximation to a clothoid. Journal of Computational and Applied Mathematics 170, 59–77, https://doi.org/10.1016/j.cam.2003.12.038 (2004). Jia, Y. B. Curvature. 7 <http://web.cs.iastate.edu/~cs577/handouts/curvature.pdf> (2019). Chen, Y., Cai, Y. & Thalmann, D. Efficient, Accurate and Robust Approximation of Clothoids for Path Smoothing. Vázquez-Méndez, M. E. & Casal, G. The Clothoid Computation: A Simple and Efficient Numerical Algorithm in Journal of Surveying Engineering. (American Society of Civil Engineers). Delingette, H., Hebert, M. & Ikeuchi, K.
Trajectory generation with curvature constraint based on energy minimization in Proceedings IROS '91:IEEE/RSJ International Workshop on Intelligent Robots and Systems '91. 206–211 vol.201. Hibbeler, R. C. Engineering Mechanics: Dynamics (13th Edition). (Prentice Hall, 2012). Hossain, M. I., Hayati, H. & Eager, D. A Comparison of the Track Shape of Wentworth Park and Proposed Murray Bridge, University of Technology Sydney, Australia (2016). Maslanka, D. J. Circumcircles and Incircles of Triangles. <http://mypages.iit.edu/~maslanka/C&I_Tri.pdf> (2018). Kimberling, C. In The Mathematical Gazette Vol. 85 (1998). Eager, D., Pendrill, A. M. & Reistad, N. Beyond velocity and acceleration: jerk, snap and higher derivatives. European Journal of Physics 37 (2016). Hayati, H., Eager, D., Jusufi, A. & Brown, T. A Study of Rapid Tetrapod Running and Turning Dynamics Utilizing Inertial Measurement Units in Greyhound Sprinting in ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Hossain, M. I., Eager, D. & Walker, P. Simulation of Racing Greyhound Kinematics in Proceedings of the 9th International Conference on Simulation and Modeling Methodologies, Technologies and Applications - Volume 1: SIMULTECH Vol. 1 47-56 (SciTePress, Prague, Czech Republic, 2019). Meek, D. S. & Walton, D. J. A note on finding clothoids. Journal of Computational and Applied Mathematics 170, 433–453, https://doi.org/10.1016/j.cam.2003.12.047 (2004). Bertails-Descoubes, F. Super-Clothoids. Computer Graphics Forum 31, 509–518 (2012). Wang, L. Z., Miura, K. T., Nakamae, E., Yamamoto, T. & Wang, T. J. An approximation approach of the clothoid curve defined in the interval [0, π/2] and its offset by free-form curves. Computer-Aided Design 33, 1049–1058, https://doi.org/10.1016/S0010-4485(00)00142-1 (2001). Sánchez-Reyes, J. & Chacón, J. M. Polynomial approximation to clothoids via s-power series. Computer-Aided Design 35, 1305–1313, https://doi.org/10.1016/S0010-4485(03)00045-9 (2003). Hongo, T., Arakawa, H., Sugimoto, G., Tange, K. & Yamamoto, Y. An Automatic Guidance System of a Self-Controlled Vehicle. IEEE Transactions on Industrial Electronics IE-34, 5–10, doi:10.1109/TIE.1987.350916 (1987).

This work was sponsored by Greyhound Racing NSW, Australia and the Faculty of Engineering and Information Technology at the University of Technology Sydney, Australia. The authors would also like to acknowledge the support of Greyhound Racing Victoria for providing the greyhound location tracking and track survey data. The research was funded by Greyhound Racing NSW research grant "Identifying optimal greyhound race track design for canine safety and welfare Phase II".

Faculty of Engineering and Information Technology, University of Technology Sydney, Broadway, 2007, NSW, Australia: Md. Imam Hossain, David Eager & Paul D. Walker.

MIH conceived of the presented idea, developed the theoretical framework, performed the analytic calculations and performed the numerical modelling and simulations to derive the results. DE conceived of the presented idea, supervised the project and reviewed the manuscript. PW reviewed the manuscript. Correspondence to Md. Imam Hossain.

Hossain, M.I., Eager, D. & Walker, P.D. Greyhound racing ideal trajectory path generation for straight to bend based on jerk rate minimization. Sci Rep 10, 7088 (2020).
https://doi.org/10.1038/s41598-020-63678-1
The Progressive Physicist Technical Thoughts on Polling Methods Posted by Dave Goldberg on March 17, 2018 Hi Folks, just a quick note on how polls could be tallied with significantly reduced noise. I'd be very interested in hearing your thoughts (especially if your thoughts consist of: this is already done, dummy, and it's called X). Oh, and apologies to those on mobile devices for which the LaTeX parsing seems not to be working. I've been thinking that an effective way to poll people who've participated in past elections is to ask them who they voted for (or whether they voted) in the last election and then to ask who or which party they'll vote for now. Say that the last race was a nearly 50-50 split. If $f_{AB} \ll 1$ is the fraction of previous A voters switching parties ($A\rightarrow B$) and $f_{BA} \ll 1$ is the fraction going from $B\rightarrow A$, then the vote share for party A will be: $$p_A=0.5\times (1-f_{AB}+f_{BA})$$ But the uncertainty* in this will be: $$\sigma_A=\sqrt{\sigma_{AB}^2+\sigma_{BA}^2}$$ $$\sigma_{AB}^2\simeq \frac{0.5\times f_{AB}(1-f_{AB})}{N}$$ So, taking $1-f_{AB}\simeq 1$ we get: $$\sigma_A\simeq\sqrt{\frac{0.5\times(f_{AB}+f_{BA})}{N}}\approx 0.71\times \frac{\sqrt{f_{AB}+f_{BA}}}{\sqrt{N}}$$ This is to be compared with the normal error bars for simply asking "who will you vote for?", which yields $$\sigma_{A,traditional}=\frac{0.5}{\sqrt{N}}$$ So consider a district which went exactly 50-50 in the 2016 election, but in which approximately 4% of Trump voters now feel regret and would switch to voting Dem (and no Clinton voters switch). This means that a perfect poll would produce a 52-48 result, in favor of the Dem. The error bars are reduced by approximately a factor of: $$\frac{\sigma_A}{\sigma_{A,traditional}}\simeq \sqrt{2\,(f_{AB}+f_{BA})}=\sqrt{0.08}\approx 0.28$$ Put another way, you'd get roughly the same error bars by interviewing about 80 people under the new approach as you would by interviewing 1000 people under the old. Or consider a 400 person survey. Under the traditional approach, your formal uncertainty would be $\sigma_A=2.5\%$. Under the new approach, you'd expect (with 4% of Republican "switchers") to get $\sim 8$ respondents telling you that they will switch. Those are the only ones you're looking for. In this case, you'd get a formal error of only about $0.7\%$. You'd get a similar gain from the third option (in both the previous and the next election): asking whether respondents voted, or intend to vote, at all. Now, before you jump in with every conceivable objection, I realize that a major issue is that people may simply lie about who they voted for (because of virtue signalling, or to skew the results). This, indeed, was the fatal flaw of the notorious LA Times poll from the last election (which used a panel, missed by 5 points, and predicted a significant popular vote victory by Trump). For instance, if the probability of falsely claiming a previous vote for $A$ is $L_A$, and for $B$ is $L_B$, then $L_A - L_B$ would produce the same effect as $f_{BA}-f_{AB}$ on the calculation. It's possible that you'd need to correct for that using some sort of Bayesian prior, but at the moment, I don't have deep thoughts about how that would be done. But there are advantages to this approach as well, besides reducing the formal error bars. Since election polling is essentially looking for changes in behavior at or near the margins, this approach is much more sensitive to those changes in behavior. What's more, it's less sensitive to poor sampling by party. Suppose you inadvertently poll too many Dems, for instance. Traditional polling would over-estimate the Dem result in the next election, but this approach won't. * Most reporting gives the "margin of error" (MOE), which is $2\sigma$, corresponding to a 95% likelihood range. Polls Technical
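If you'd like to sanity-check the error-bar arithmetic, here is a quick Monte Carlo sketch (mine, not from the original post) using the same illustrative numbers: a 50-50 previous race, 4% of one side switching, and polls of 1000 respondents. The empirical ratio of standard errors should come out near the analytic value of roughly 0.28.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_polls=20_000, N=1000, f_ab=0.0, f_ba=0.04):
    """Simulate many polls and compare the switcher-based estimator of p_A
    with the traditional 'who will you vote for?' estimator."""
    true_p_A = 0.5 * (1 - f_ab + f_ba)
    # Previous vote: the last race was a 50-50 split, so each respondent is
    # equally likely to have voted A or B.
    prev_A = rng.binomial(N, 0.5, size=n_polls)   # previous A voters per poll
    prev_B = N - prev_A
    switch_ab = rng.binomial(prev_A, f_ab)        # A -> B switchers
    switch_ba = rng.binomial(prev_B, f_ba)        # B -> A switchers
    # (1) switcher-based estimate: the 50-50 past result is treated as known,
    #     only the switching fractions are estimated from the sample
    est_switch = 0.5 * (1 - switch_ab / prev_A + switch_ba / prev_B)
    # (2) traditional estimate: fraction of the sample currently voting A
    est_trad = ((prev_A - switch_ab) + switch_ba) / N
    return true_p_A, est_switch.std(), est_trad.std()

true_p_A, se_switch, se_trad = simulate()
print(f"true p_A = {true_p_A:.3f}")
print(f"empirical SE, switcher-based: {se_switch:.4f}")
print(f"empirical SE, traditional:    {se_trad:.4f}")
print(f"ratio = {se_switch / se_trad:.2f}  (analytic: {np.sqrt(2 * (0.0 + 0.04)):.2f})")
```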
10 Reasons Trump is Not Getting Reelected The conventional wisdom seems to be that despite his awful approval numbers, Trump is likely to win reelection in November. This seems absurd on its face, but some folks seem awfully sure about it. And it's not just blustering on the right; there are plenty on the left who've decided that we're doomed as well. This may be some sort of defense mechanism. It's easier to be the cynical cool kid who insists that we're going to lose, and then be pleasantly surprised when and if it doesn't work out that way. And a lot of people are still shell-shocked from 2016. Well, I'm not buying it. I've done a deep dive on a lot of different platforms (like and follow all of them), and I'm here to tell you that while it's not impossible, Trump's chances in November are pretty awful. No big math calculations. Just a few takeaway points, a figure or two, and some general, numerically justified, reasons for optimism. 1. Being the incumbent isn't as helpf… Predictions and Postdictions in the Philly Primaries! Tuesday was Election Day here in Philadelphia, and as we're more or less a one-party town, the primary essentially is the election. I had the pleasure of working with 3 great first-time candidates: Tiffany Palmer (who ran successfully for judge), Jen Devor (who lost in a tough race for election commissioner), and Eryn Santamoor (who missed by a couple of spots in a 30-way race for city council at large). Following the election, I have some gripes about the excessive influence of the party machine, but now's not the time. I did some quantitative modeling and analysis for their campaigns throughout, but on election night, I set up a war room, and made real-time predictions of the final outcomes. And as with the midterms, my models were remarkably stable and converged very early on. They were also surprisingly simple: I looked at the total number of voters and relative historical turnout division by division (you may know divisions as "precincts"). Looking at some h… We're in Pretty Good Shape The pundits are there for the clicks, but not me. I'm here to give you a bit of comfort, and to help you form a plan of action, and with that said, things are looking pretty goddamn good for the 2020 election. If you're anything like me, you want a progressive leading the charge, but more importantly, you want Trump out of there at all costs. And we need to beat him decisively, for a couple of reasons. First, a big enough margin will likely carry the Senate as well. It takes no imagination at all to picture McConnell preventing any and all nominations from a President Warren. Second, as has been noted since the last election, Trump will doubtless contest the legitimacy of the election no matter what happens, and he can do a lot of damage on the way out. Put another way: You think this is a guy (or narcissistic sociopath if you prefer) who will accept election results he doesn't like when he goes to the mattresses on Alabama? — David Plouffe (@davidplouffe) September 5, …
Prof. Dr.-Ing. Roland Platz Faculty of Mechanical Engineering and Mechatronics TSZ Weißenburg 09141/874669-209 Consulting time: appointment any time via e-mail. S. Kersting Roland Platz M. Kohler T. Melz Data Uncertainty (Chapter 2.1), pg. 31-34. In: Mastering Uncertainty in Mechanical Engineering. (Tracts in Mechanical Engineering book series (STME)) Approach to Assess Basic Deterministic Data and Model Form Uncertainty in Passive and Active Vibration Isolation, pg. 208-223. This contribution continues ongoing own research on uncertainty quantification in structural vibration isolation in early design stage by various deterministic and non-deterministic approaches. It takes into account one simple structural dynamic system example throughout the investigation: a one mass oscillator subject to passive and active vibration isolation. In this context, passive means that the vibration isolation only depends on preset inertia, damping, and stiffness properties. Active means that additional controlled forces enhance vibration isolation. The simple system allows a holistic, consistent and transparent look into mathematical modeling, numerical simulation, experimental test and uncertainty quantification for verification and validation. The oscillator represents fundamental structural dynamic behavior of machines, trusses, suspension legs etc. under variable mechanical loading. This contribution assesses basic experimental data and mathematical model form uncertainty in predicting the passive and enhanced vibration isolation after model calibration as the basis for further deterministic and non-deterministic uncertainty quantification measures. The prediction covers six different damping cases, three for passive and three for active configuration. A least squares minimization (LSM) enables calibrating multiple model parameters using different outcomes in time and in frequency domain from experimental observations. Its adequacy strongly depends on varied damping properties, especially in passive configuration. J. Lenz Analysis of data uncertainty using the example of passive and active vibration isolation (Chapter 4.1.1), pg. 119-123. C. Ehrett D. Brown C. Kitchens X. Xu S. Atamturktur Simultaneous Bayesian Calibration and Engineering Design With an Application to a Vibration Isolation System, vol. 6 In: Journal of Verification, Validation and Uncertainty Quantification Calibration of computer models and the use of those design models are two activities traditionally carried out separately. This paper generalizes existing Bayesian inverse analysis approaches for computer model calibration to present a methodology combining calibration and design in a unified Bayesian framework. This provides a computationally efficient means to undertake both tasks while quantifying all relevant sources of uncertainty. Specifically, compared with the traditional approach of design using parameter estimates from previously completed model calibration, this generalized framework inherently includes uncertainty from the calibration process in the design procedure. We demonstrate our approach to the design of a vibration isolation system. We also demonstrate how, when adaptive sampling of the phenomenon of interest is possible, the proposed framework may select new sampling locations using both available real observations and the computer model. This is especially useful when a misspecified model fails to reflect that the calibration parameter is functionally dependent upon the design inputs to be optimized. M.
Schaeffner B. Götz Active buckling control of a beam-column with circular cross-section using piezo-elastic supports and integral LQR control, vol. 25 In: Smart Materials and Structures DOI: 10.26083/tuprints-00017750 Buckling of slender beam-columns subject to axial compressive loads represents a critical design constraint for light-weight structures. Active buckling control provides a possibility to stabilize slender beam-columns by active lateral forces or bending moments. In this paper, the potential of active buckling control of an axially loaded beam-column with circular solid cross-section by piezo-elastic supports is investigated experimentally. In the piezo-elastic supports, lateral forces of piezoelectric stack actuators are transformed into bending moments acting in arbitrary directions at the beam-column ends. A mathematical model of the axially loaded beam-column is derived to design an integral linear quadratic regulator (LQR) that stabilizes the system. The effectiveness of the stabilization concept is investigated in an experimental test setup and compared with the uncontrolled system. With the proposed active buckling control it is possible to stabilize the beam-column in arbitrary lateral direction for axial loads up to the theoretical critical buckling load of the system. Approach to Assess Data and Model Form Uncertainty when Predicting and Comparing the Dynamic Behavior in Passive and Active Vibration Isolation via Numerical and Experimental Simulation In: Engineering Mechanics Institute Conference 2021 and Probabilistic Mechanics & Reliability Conference 2021 C. Gehb Load redistribution via semi-active guidance elements in a kinematic structure (Chapter 5.4.8), pg. 347-351. Vibration attenuation in beam truss structures via (semi-)active piezoelectric shunt-damping (Chapter 5.4.6), pg. 338-343. Bayesian inference based parameter calibration for a mathematical model of a load-bearing structure (Chapter 4.1.2), pg. 123-128. Active buckling control of compressively loaded beam-columns and trusses (Chapter 5.4.7), pg. 343-347. A. Matei S. Ulbrich Model Uncertainty (Chapter 2.2), pg. 35-39. Bayesian Inference Based Parameter Calibration of the LuGre-Friction Model, vol. 44, pg. 369-382. In: Experimental Techniques Load redistribution in smart load bearing mechanical structures can be used to reduce negative effects of damage or to prevent further damage if predefined load paths become unsuitable. Using controlled friction brakes in joints of kinematic links can be a suitable way to add dynamic functionality for desired load path redistribution. Therefore, adequate friction models are needed to predict the friction behavior. Possible models that can be used to model friction vary from simple static to complex dynamic models with increasing sophistication in the representation of friction phenomena. The LuGre-model is a widely used dynamic friction model for friction compensation in high precision control systems. It needs six parameters for describing the friction behavior. These parameters are coupled to an unmeasurable internal state variable, therefore, parameter identification is challenging. Conventionally, optimization algorithms are used to identify the LuGre-parameters deterministically. In this paper, the parameter identification and calibration is formulated to achieve model prediction that is statistically consistent with the experimental data. By use of the R2 sensitivity analysis, the most influential parameters are selected for calibration. 
Subsequently, the Bayesian inference based calibration procedure using experimental data is performed. Uncertainty represented in former wide parameter ranges can be reduced and, thus, model prediction accuracy can be increased. Adequate Mathematical Beam-Column Model for Active Buckling Control in a Tetrahedron Truss Structure, pg. 323-332. In: Model Validation and Uncertainty Quantification, Volume 3. null (Conference Proceedings of the Society for Experimental Mechanics Series) Active buckling control of compressively loaded beam-columns provides a possibility to increase the maximum bearable axial load compared to passive beam-columns. Reliable mathematical beam-column models that adequately describe the lateral dynamic behavior are required for the model-based controller synthesis in order to avoid controller instability for real testing and application. This paper presents an adequate mathematical beam-column model for the active buckling control in a tetrahedron truss structure. Furthermore, it discusses model form uncertainty arising from model simplification of the global tetrahedron model to three local beam-column models. An experimental tetrahedron truss structure that comprises three passive beams and three active beam-columns with piezo-elastic supports for active buckling control is investigated. The tetrahedron is clamped at the three base nodes and free at the top node. In the two piezo-elastic supports of each active beam-column, integrated piezoelectric stack actuators compensate lateral deflections due to increasing axial compressive loads and may, thus, prevent buckling. In previous works, active buckling control was investigated for a single beam-column that was clamped rigidly in an experimental test setup. A verified and validated single beam-column model with compliant boundary conditions was used to represent the piezo-elastic supports for active buckling control. The mathematical model of the active beam-columns is calibrated with experimental data from all three nominally identical active beam-columns to account for uncertainty in manufacturing, assembly or mounting. Subsequently, they are compared with respect to the transfer functions and the first eigenfrequencies. It is shown that the boundary conditions of the single beam-column model may be calibrated to adequately describe the boundary conditions within the tetrahedron truss structure. Thus, it will be used for the model-based controller synthesis in future investigations on the active buckling control of the tetrahedron truss structure. Quantification and Evaluation of Parameter and Model Uncertainty for Passive and Active Vibration Isolation, pg. 135-147. In: Model Validation and Uncertainty Quantification, Volume 3. Proceedings of IMAC–XXXVII A Conference and Exposition on Structural Dynamics (Jan. 28–31, 2019; Orlando, FL, USA) (Conference Proceedings of the Society for Experimental Mechanics Series) Vibration isolation is a common method used for minimizing the vibration of dynamic load-bearing structures in a region past the resonance frequency, when excited by disturbances. The vibration reduction mainly results from the tuning of stiffness and damping during the early design stage. High vibration reduction over a broad bandwidth can be achieved with additional and controlled forces, the active vibration isolation. In this context, "active" does not mean the common understanding that the surroundings are isolated against the machine vibrations. 
Also in this context, "passive" means that no additional and controlled force is present, other than the common understanding that the machine is isolated against the surroundings. For active vibration isolation, a signal processing chain and an actuator are included in the system. Typically, a controller is designed to enable a force of an actuator that reduces the system's excitation response. In both passive and active vibration isolation, uncertainty is an issue for adequate tuning of stiffness and damping in early design stage. The two types of uncertainty investigated in this contribution are parametric uncertainty, i.e. the variation of model parameters resulting in the variation of the systems output, and model uncertainty, the uncertainty from discrepancies between model output and experimentally measured output. For this investigation, a simple one mass oscillator under displacement excitation is used to quantify the parameter and model uncertainty in passive and active vibration isolation. A linear mathematical model of the one mass oscillator is used to numerically simulate the transfer behavior for both passive and active vibration isolation, thus predicting the behavior of an experimental test rig of the one mass oscillator under displacement excitation. The models' parameters that are assumed to be uncertain are mass and stiffness as well as damping for the passive vibration isolation and an additional gain factor for the velocity feedback control in case of active vibration isolation. Stochastic uncertainty is assumed for the parameter uncertainty when conducting a Monte Carlo Simulation to investigate the variation of the numerically simulated transfer functions. The experimental test rig enables purposefully adjustable insertion of parameter uncertainty in the assumed value range of the model parameters in order to validate the model. The discrepancy between model and system output results from model uncertainty and is quantified by the Area Validation Metric and an Bayesian model validation approach. The novelty of this contribution is the application of the Area Validation Metric and Bayes' approach to evaluate and to compare the two different passive and active approaches for vibration isolation numerically and experimentally. Furthermore, both model validation approaches are compared. BAYESIAN Inference Based Parameter Calibration of a Mechanical Load-Bearing Structure's Mathematical Model, pg. 337-347. Load-bearing structures with kinematic functions like a suspension of a vehicle and an aircraft landing gear enable and disable degrees of freedom and are part of many mechanical engineering applications. In most cases, the load path going through the load-bearing structure is predetermined in the design phase. However, if parts of the load-bearing structure become weak or suffer damage, e.g. due to deterioration or overload, the load capacity may become lower than designed. In that case, load redistribution can be an option to adjust the load path and, thus, reduce the effects of damage or prevent further damage. For an adequate numerical prediction of the load redistribution capability, an adequate mathematical model with calibrated model parameters is needed. Therefore, the adequacy of an exemplary load-bearing structure's mathematical model is evaluated and its predictability is increased by model parameter uncertainty quantification and reduction. 
The mathematical model consists of a mechanical part, a friction model and the electromagnetic actuator to achieve load redistribution, whereby the mechanical part is chosen for calibration in this paper. Conventionally, optimization algorithms are used to calibrate the model parameters deterministically. In this paper, the model parameter calibration is formulated to achieve a model prediction that is statistically consistent with the data gained from an experimental test setup of the exemplary load-bearing structure. Using the R2 sensitivity analysis, the most influential parameters for the model prediction of interest, i.e. the load path going through the load-bearing structure represented by the support reaction forces, are identified for calibration. Subsequently, BAYESIAN inference based calibration procedure using the experimental data and the selected model parameters is performed. Thus, the mathematical model is adjusted to the actual operating conditions of the experimental load-bearing structure via the model parameters and the model prediction accuracy is increased. Uncertainty represented by originally large model parameter ranges can be reduced and quantified. R. Locke S. Kupis Applying uncertainty quantification to structural systems: Parameter reduction for evaluating model complexity, pg. 241-256. Different mathematical models can be developed to represent the dynamic behavior of structural systems and assess properties, such as risk of failure and reliability. Selecting an adequate model requires choosing a model of sufficient complexity to accurately capture the output responses under various operational conditions. However, as model complexity increases, the functional relationship between input parameters varies and the number of parameters required to represent the physical system increases, reducing computational efficiency and increasing modeling difficulty. The process of model selection is further exacerbated by uncertainty introduced from input parameters, noise in experimental measurements, numerical solutions, and model form. The purpose of this research is to evaluate the acceptable level of uncertainty that can be present within numerical models, while reliably capturing the fundamental physics of a subject system. However, before uncertainty quantification can be performed, a sensitivity analysis study is required to prevent numerical ill-conditioning from parameters that contribute insignificant variability to the output response features of interest. The main focus of this paper, therefore, is to employ sensitivity analysis tools on models to remove low sensitivity parameters from the calibration space. The subject system in this study is a modular spring-damper system integrated into a space truss structure. Six different cases of increasing complexity are derived from a mathematical model designed from a two-degree of freedom (2DOF) mass spring-damper that neglects single truss properties, such as geometry and truss member material properties. Model sensitivity analysis is performed using the Analysis of Variation (ANOVA) and the Coefficient of Determination R2. The global sensitivity results for the parameters in each 2DOF case are determined from the R2 calculation and compared in performance to evaluate levels of parameter contribution. Parameters with a weighted R2 value less than .02 account for less than 2% of the variation in the output responses and are removed from the calibration space. 
This paper concludes with an outlook on implementing Bayesian inference methodologies, delayed-acceptance single-component adaptive Metropolis (DA-SCAM) algorithm and Gaussian Process Models for Simulation Analysis (GPM/SA), to select the most representative mathematical model and set of input parameters that best characterize the system's dynamic behavior. M. Schäffner Selection of an Adequate Model of a Piezo-Elastic Support for Structural Control in a Beam Truss Structure, pg. 41-49. DOI: 10.1007/978-3-030-47638-0_4 Axial and lateral loads of lightweight beam truss structures e.g. used in automotive engineering may lead to undesired structural vibration that can be reduced near a structural resonance frequency via resonant piezoelectric shunt-damping. In order to tune the electrical circuits to the desired structural resonance frequency within a model-based approach, an adequate mathematical model of the beam truss structure is required. Piezo-elastic truss supports with integrated piezoelectric stack transducers can transfer the axial and lateral forces and may be used for vibration attenuation of single beams or whole beam truss structures. For usage in a single beam test setup, the piezo-elastic support's casing is clamped rigidly and is connected to the beam via a membrane-like spring element that allows for rotation as well as axial and lateral displacements of the beam. In this contribution, the piezo-elastic support is integrated into a two-dimensional beam truss structure comprising seven beams, where its casing is no longer clamped rigidly but is subject to axial, lateral and rotational displacements. Based on the previously verified and validated model of the single beam test setup, two different complex mathematical models of the piezo-elastic support integrated in the two-dimensional beam truss structure are derived in this contribution. The two mathematical models differ in their number of degrees of freedom for the piezo-elastic support as well as in the assumption of rigid or compliant casing. By comparing numerically and experimentally determined structural resonance frequencies and vibration amplitudes, the model that more adequately predicts the truss structure's vibration behavior is selected on basis of the normalized root mean squared error. For future works, the more adequate model will be used to tune electrical circuits for resonant piezoelectric shunt-damping in a three-dimensional truss structure. R. Feldmann Analyzing Propagation of Model Form Uncertainty for Different Suspension Strut Models, pg. 255-263. Model form uncertainty often arises in structural engineering problems when simplifications and assumptions in the mathematical modelling process admit multiple possible models. It is well known that all models incorporate a model error that is captured by a discrepancy due to missing or incomplete physics in the mathematical model. As an example, this discrepancy can be modelled as a function based upon Gaussian processes and its confidence bounds can be seen as a measure of adequacy for the respective model. Assessment of model form uncertainty can be conducted by comparing the confidence bounds of competing discrepancy functions. In this paper, a modular active spring-damper system is considered that was designed to resemble a suspension strut as part of an aircraft landing gear and is excited by dynamic drop tests. 
In previous research about the suspension strut, different mathematical system models with respect to different linear and non-linear assumptions for damping and stiffness properties to describe the dynamic system behaviour of the suspension strut were compared by means of the confidence intervals of their discrepancy functions. The results indicated that the initial conditions used for exciting the system model were inadequate. The initial conditions themselves constitute a mathematical model, so that model form uncertainty inherent to the initial condition model can effect the system model. The propagation of model form uncertainty within the model will be analysed in this paper by considering two cases: In the first case, the system model is excited with an inadequate initial condition model, while in the second case, experimentally measured initial conditions will be employed that represent the true value except for measurement errors. The comparison of both shows how model form uncertainty propagates through the model chain from the initial condition model to the system model. Two control strategies for semi-active load path redistribution in a load-bearing structure, vol. 118, pg. 195-208. In: Mechanical Systems and Signal Processing DOI: 10.1016/j.ymssp.2018.08.044 In this paper, a two mass oscillator, a translatoric moving mass connected to a rigid beam by a spring-damper system, is used to numerically and experimentally investigate the capability of load path redistribution due to controlled semi-active guidance elements with friction brakes. The mathematical friction model will be derived by the LuGre approach. The rigid beam is embedded on two supports and is initially aligned with evenly distributed loads in beam and supports by the same stiffness condition. With the semi-active auxiliary guidance elements it is possible to provide additional forces to relieve one of the beam's supports. Two control strategies are designed and tested to induce additional forces in the auxiliary guidance elements to bypass a proportion of loading away from the spring-damper system towards the now kinetic auxiliary guidance elements. The control strategies I and II depend on the different control inputs: I beam misalignment and II desired reaction force ratio in the supports. The beam's misalignment and the supports' reaction forces are calculated numerically and measured experimentally for varying stiffness parameters of the supports and are compared with and without semi-active auxiliary kinematic guidance elements. The structure's moving mass is loaded with a force according to a step-function. Thus, undesired misalignment caused by varying stiffness as well as undesired load distribution in the structure's supports can be reduced by redistributing load between the supports during operation. S. Mallapur Uncertainty quantification in the mathematical modelling of a suspension strut using Bayesian inference, vol. 118, pg. 158-170. In the field of structural engineering, mathematical models are utilized to predict the dynamic response of systems such as a suspension strut under different boundary and loading conditions. However, different mathematical models exist based on their governing functional relations between the model input and state output parameters. For example, the spring-damper component of a suspension strut is considered. Its mathematical model can be represented by linear, nonlinear, axiomatic or empiric relations resulting in different vibrational behaviour. 
The uncertainty that arises in the prediction of the dynamic response from the resulting different approaches in mathematical modelling may be quantified with Bayesian inference approach especially when the system is under structural risk and failure assessment. As the dynamic output of the suspension strut, the spring-damper compression and the spring-damper forces as well as the ground impact force are considered in this contribution that are taken as the criteria for uncertainty evaluation due to different functional relations of models. The system is excited by initial velocities that depend on a drop height of the suspension strut during drop tests. The suspension strut is a multi-variable system with the payload and the drop height as its varied input variables in this investigation. As a new approach, the authors present a way to adequately compare different models based on axiomatic or empiric assumptions of functional relations using the posterior probabilities of competing mathematical models. The posterior probabilities of different mathematical models are used as a metric to evaluate the model uncertainty of a suspension strut system with similar specifications as actual suspension struts in automotive or aerospace applications for decision making in early design stage. The posterior probabilities are estimated from the likelihood function, which is estimated from the cartesian vector distances between the predicted output and the experimental output. Quantification of Uncertainty in the Mathematical Modelling of a Multivariable Suspension Strut Using Bayesian Interval Hypothesis-Based Approach, vol. 885, pg. 3-17. In: Applied Mechanics and Materials Mathematical models of a suspension strut such as an aircraft landing gear are utilized by engineers in order to predict its dynamic response under different boundary conditions. The prediction of the dynamic response, for example the external loads, the stress and the strength as well as the maximum compression in the spring-damper component aids engineers in early decision making to ensure its structural reliability under various operational conditions. However, the prediction of the dynamic response is influenced by model uncertainty. As far as the model uncertainty is concerned, the prediction of the dynamic behavior via different mathematical models depends upon various factors such as the model's complexity in terms of the degrees of freedom, material and geometrical assumptions, their boundary conditions and the governing functional relations between the model input and output parameters. The latter can be linear or nonlinear, axiomatic or empiric, time variant or time-invariant. Hence, the uncertainty that arises in the prediction of the dynamic response of the resulting different mathematical models needs to be quantified with suitable validation metrics, especially when the system is under structural risk and failure assessment. In this contribution, the authors utilize the Bayesian interval hypothesis-based method to quantify the uncertainty in the mathematical models of the suspension strut. A. Krzyżak Uncertainty Quantification in Case of Imperfect Models: A Non‐Bayesian Approach, vol. 45, pg. 729-752. In: Scandinavian Journal of Statistics DOI: 10.1111/sjos.12317 The starting point in uncertainty quantification is a stochastic model, which is fitted to a technical system in a suitable way, and prediction of uncertainty is carried out within this stochastic model. 
In any application, such a model will not be perfect, so any uncertainty quantification from such a model has to take into account the inadequacy of the model. In this paper, we rigorously show how the observed data of the technical system can be used to build a conservative non-asymptotic confidence interval on quantiles related to experiments with the technical system. The construction of this confidence interval is based on concentration inequalities and order statistics. An asymptotic bound on the length of this confidence interval is presented. Here we assume that engineers use more and more of their knowledge to build models with order of errors bounded by a given rate. The results are illustrated by applying the newly proposed approach to real and simulated data. Gain-Scheduled H∞ Buckling Control of a Circular Beam-Column Subject to Time-Varying Axial Loads, vol. 27, pg. 065009. In: Smart Materials and Structures DOI: 10.1088/1361-665X/aab63a For slender beam-columns loaded by axial compressive forces, active buckling control provides a possibility to increase the maximum bearable axial load above that of a purely passive structure. In this paper, an approach for gain-scheduled H∞ buckling control of a slender beam-column with circular cross-section subject to time-varying axial loads is investigated experimentally. Piezo-elastic supports with integrated piezoelectric stack actuators at the beam-column ends allow an active stabilization in arbitrary lateral directions. The axial loads on the beam-column influence its lateral dynamic behavior and, eventually, cause the beam-column to buckle. A reduced modal model of the beam-column subject to axial loads including the dynamics of the electrical components is set up and calibrated with experimental data. Particularly, the linear parameter-varying open-loop plant is used to design a model-based gain-scheduled H∞ buckling control that is implemented in an experimental test setup.
The beam-column is loaded by ramp- and step-shaped time-varying axial compressive loads that result in a lateral deformation of the beam-column due to imperfections, such as predeformation, eccentric loading or clamping moments. The lateral deformations and the maximum bearable loads of the beam-column are analyzed and compared for the beam-column with and without gain-scheduled ${{\mathscr{H}}}_{\infty }$ buckling control or, respectively, active and passive configuration. With the proposed gain-scheduled ${{\mathscr{H}}}_{\infty }$ buckling control it is possible to increase the maximum bearable load of the active beam-column by 19% for ramp-shaped axial loads and to significantly reduce the beam-column deformations for step-shaped axial loads compared to the passive structure. Bayesian Multivariate Validation Approach to Quantify the Uncertainty in the Finite Element Model of a Suspension Strut (Paper No. 248) D. Mayer G. Stevens Approach in Uncertainty Quantification to Predict the Vibration Control Performance of Tuned Absorbers in Early Design Stage (Paper No. 262) Effect of static axial loads on the lateral vibration attenuation of a beam with piezo-elastic supports, vol. 27, pg. 035011. DOI: 10.1088/1361-665X/aaa937 In this paper, vibration attenuation of a beam with circular cross-section by resonantly shunted piezo-elastic supports is experimentally investigated for varying axial tensile and compressive beam loads. The beam's first mode resonance frequency, the general electromechanical coupling coefficient and static transducer capacitance are analyzed for varying axial loads. All three parameter values are obtained from transducer impedance measurements on an experimental test setup. Varying axial beam loads manipulate the beam's lateral bending stiffness and, thus, lead to a detuning of the resonance frequencies. Furthermore, they affect the general electromechanical coupling coefficient of transducer and beam, an important modal quantity for shunt-damping, whereas the static transducer capacitance is nearly unaffected. Frequency transfer functions of the beam with one piezoe-elastic support either shunted to an RL-shunt or to an RL-shunt with negative capacitance, the RLC-shunt, are compared for varying axial loads. It is shown that the beam vibration attenuation with the RLC-shunt is less influenced by varying axial beam loads and, therefore, is more robust against detuning. Consistent approach to describe and evaluate uncertainty in vibration attenuation using resonant piezoelectric shunting and tuned mass dampers, vol. 18, pg. 108. In: Mechanics & Industry DOI: 10.1051/meca/2016011 Undesired vibration may occur in lightweight structures due to low damping and excitation. For the purpose of vibration attenuation, tuned mass dampers (TMD) can be an appropriate measure. A similar approach uses resonantly shunted piezoelectric transducers. However, uncertainty in design and application of resonantly shunted piezoelectric transducers and TMD can be caused by insufficient mathematical modeling, geometric and material deviations or deviations in the electrical and mechanical quantities. During operation, uncertainty may result in detuned attenuation systems and loss of attenuation performance. A consistent and general approach to display uncertainty in load carrying systems developed by the authors is applied to describe parametric uncertainty in vibration attenuation with resonantly shunted piezoelectric transducers and TMD. 
Mathematical models using Hamilton's principle and Ritz formulation are set up for a beam, clamped at both ends with resonantly shunted transducers and TMD to demonstrate the effectiveness of both attenuation systems and investigate the effects of parametric uncertainty. Furthermore, both approaches lead to additional masses, piezoelectric material for shunt damping and compensator mass of TMD, in the systems. It is shown that vibration attenuation with TMD is less sensitive to parametric uncertainty and achieves a higher performance using the same additional mass. Quantification and Evaluation of Uncertainty in the Mathematical Modelling of a Suspension Strut Using Bayesian Model Validation Approach, pg. 113-124. In: Proceedings of the 35th IMAC, a Conference and Exposition on Structural Dynamics 2017 (30 Jan - 2 Feb, 2017; Garden Grove, CA, USA). null (Conference Proceedings of the Society for Experimental Mechanics Series) Global Load Path Adaption in a Simple Kinematic Load-Bearing Structure to Compensate Uncertainty of Misalignment Due to Changing Stiffness Conditions of the Structure's Supports, pg. 133-144. Linear Parameter-Varying (LPV) Buckling Control of an Imperfect Beam-Column Subject to Time-Varying Axial Loads, pg. 103-112. Lateral Vibration Attenuation of a Beam with Piezo-Elastic Supports Subject to Varying Axial Tensile and Compressive Loads, pg. 1-8. Non-probabilistic Uncertainty Evaluation in the Concept Phase for Airplane Landing Gear Design, pg. 161-169. S. Li Observations by Evaluating the Uncertainty of Stress Distribution in Truss Structures Based on Probabilistic and Possibilistic Methods, vol. 2 Load-bearing mechanical structures like trusses face uncertainty in loading along with uncertainty in stress and strength, which are due to uncertainty in their development, production, and usage. According to the working hypothesis of the German Collaborative Research Center SFB 805, uncertainty occurs in processes that are not or only partial deterministic and can only be controlled in processes. The authors classify, compare, and evaluate four different direct methods to describe and evaluate the uncertainty of normal stress distribution in simple truss structures with one column, two columns, and three columns. The four methods are the direct Monte Carlo (DMC) simulation, the direct quasi-Monte Carlo (DQMC) simulation, the direct interval, and the direct fuzzy analysis with α-cuts, which are common methods for data uncertainty analysis. The DMC simulation and the DQMC simulation are categorized as probabilistic methods to evaluate the stochastic uncertainty. On the contrary, the direct interval and the direct fuzzy analysis with α-cuts are categorized as possibilistic methods to evaluate the nonstochastic uncertainty. Three different truss structures with increasing model complexity, a single-column, a two-column, and a three-column systems are chosen as reference systems in this study. Each truss structure is excited with a vertical external point load. The input parameters of the truss structures are the internal system properties such as geometry and material parameters, and the external properties such as magnitude and direction of load. The probabilistic and the possibilistic methods are applied to each truss structure to describe and evaluate its uncertainty in the developing phase. 
The DMC simulation and DQMC simulation are carried out with full or "direct" sample sets of model parameters such as geometry parameters and state parameters such as forces, and a sensitivity analysis is conducted to identify the influence of every model and state input parameter on the normal stress, which is the output variable of the truss structures. In parallel, the direct interval and the direct fuzzy analysis with α-cuts are carried out without alteration and, therefore, they are direct approaches as well. The four direct methods are then compared based on the simulation results. The criteria of the comparison are the uncertainty in the deviation of the normal stress in one column of each truss structure due to varied model and state input parameters, the computational costs, as well as the implementation complexity of the applied methods. Approach to Evaluate and to Compare Basic Structural Design Concepts of Landing Gears in Early Stage of Development Under Uncertainty, pg. 167-175. In: Model Validation and Uncertainty Quantification, Volume 3. (IMAC-XXXIV Conference and Exposition on Structural Dynamics; January 25-28, 2016; Orlando, FL, USA) (Conference Proceedings of the Society for Experimental Mechanics Series) S. Ondoua G. Enss Solid State Support Active buckling control of an imperfect beam-column with circular cross-section using piezo-elastic supports and integral LQR control, vol. 744, pg. 012165. In: Journal of Physics: Conference Series DOI: 10.1088/1742-6596/744/1/012165 For slender beam-columns loaded by axial compressive forces, active buckling control provides a possibility to increase the maximum bearable axial load above that of a purely passive structure. In this paper, the potential of active buckling control of an imperfect beam-column with circular cross-section using piezo-elastic supports is investigated numerically. Imperfections are given by an initial deformation of the beam-column caused by a constant imperfection force. With the piezo-elastic supports, active bending moments in arbitrary directions orthogonal to the beam-column's longitudinal axis can be applied at both beam-column ends. The imperfect beam-column is loaded by a gradually increasing axial compressive force resulting in a lateral deformation of the beam-column. First, a finite element model of the imperfect structure for numerical simulation of the active buckling control is presented. Second, an integral linear-quadratic regulator (LQR) that compensates the deformation via the piezo-elastic supports is derived for a reduced modal model of the ideal beam-column. With the proposed active buckling control it is possible to stabilize the imperfect beam-column in arbitrary lateral direction for axial loads above the theoretical critical buckling load and the maximum bearable load of the passive structure. A. Krzyzak Nonparametric Quantile Estimation Based on Surrogate Models, vol. 62, pg. 5727-5739. In: IEEE Transactions on Information Theory DOI: 10.1109/TIT.2016.2586080 Nonparametric estimation of a quantile $q_{m(X),\alpha}$ of a random variable $m(X)$ is considered, where $m:\mathbb{R}^d\to\mathbb{R}$ is a function which is costly to compute and $X$ is an $\mathbb{R}^d$-valued random variable with known distribution. Monte Carlo surrogate quantile estimates are considered, where in a first step, the function $m$ is estimated by some estimate (surrogate) $m_n$, and then the quantile $q_{m(X),\alpha}$ is estimated by a Monte Carlo estimate of the quantile $q_{m_n(X),\alpha}$. A general error bound on the error of this quantile estimate is derived, which depends on the local error of the function estimate $m_n$, and the rates of convergence of the corresponding Monte Carlo surrogate quantile estimates are analyzed for two different function estimates. The finite sample size behavior of the estimates is investigated in simulations.
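As a toy illustration of the surrogate idea in the abstract above (not the estimators analyzed in the paper), the sketch below fits a cheap polynomial surrogate to a handful of "expensive" model evaluations and then estimates a quantile of m(X) by Monte Carlo sampling of the surrogate only; the test function, design size and distribution of X are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def m_expensive(x):
    """Stand-in for a costly simulation model m: R -> R (toy example, d = 1)."""
    return np.sin(3.0 * x) + 0.5 * x**2

# Step 1: build a cheap surrogate m_n from a few 'expensive' evaluations.
x_design = np.linspace(-2.0, 2.0, 15)           # small design of experiments
y_design = m_expensive(x_design)
coeffs = np.polyfit(x_design, y_design, deg=6)  # simple polynomial surrogate
m_surrogate = np.poly1d(coeffs)

# Step 2: Monte Carlo quantile estimate of m(X) using only the surrogate.
alpha = 0.95
X = rng.uniform(-2.0, 2.0, size=200_000)        # X with known distribution
q_surrogate = np.quantile(m_surrogate(X), alpha)

# Reference value using the true (toy) model, just to check the example.
q_reference = np.quantile(m_expensive(X), alpha)
print(f"alpha = {alpha}: surrogate-based quantile = {q_surrogate:.3f}, "
      f"reference = {q_reference:.3f}")
```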
Active load path adaption in a simple kinematic load-bearing structure due to stiffness change in the structure's supports, vol. 744, pg. 012168. Load-bearing structures with kinematic functions enable and disable degrees of freedom and are part of many mechanical engineering applications. The relative movement between a wheel and the body of a car or a landing gear and an aircraft fuselage are examples for load-bearing systems with defined kinematics. In most cases, the load is transmitted through a predetermined load path to the structural support interfaces. However, unexpected load peaks or varying health condition of the system's supports, which means for example varying damping and stiffness characteristics, may require an active adjustment of the load path. However, load paths transmitted through damaged or weakened supports can be the reason for reduced comfort or even failure. In this paper a simplified 2D two mass oscillator with two supports is used to numerically investigate the potential of controlled adaptive auxiliary kinematic guidance elements in a load-bearing structure to adapt the load path depending on the stiffness change, representing damage of the supports. The aim is to provide additional forces in the auxiliary kinematic guidance elements for two reasons. On the one hand, one of the two supports that may become weaker through stiffness change will be relieved from higher loading. On the other hand, tilting due to different compliance in the supports will be minimized. Therefore, shifting load between the supports during operation could be an effective option. Evaluation of uncertainty in experimental active buckling control of a slender beam-column with disturbance forces using Weibull analysis, vol. 79, pg. 123-131. Buckling of slender load-bearing beam-columns is a crucial failure scenario in light-weight structures as it may result in the collapse of the entire structure. If axial load and load capacity are unknown, stability becomes uncertain. To compensate this uncertainty, the authors successfully developed and evaluated an approach for active buckling control for a slender beam-column, clamped at the base and pinned at the upper end. Active lateral forces are applied with two piezoelectric stack actuators in opposing directions near the beam-column's clamped base to prevent buckling. A Linear Quadratic Regulator is designed and implemented on the experimental demonstrator and statistical tests are conducted to prove effectivity of the active approach. The load capacity of the beam-column could be increased by 40% and scatter of buckling occurrences for increasing axial loads is reduced. Weibull analysis is used to evaluate the increase of the load capacity and its related uncertainty compensation. Description and evaluation of uncertainty in the early development phase of a beam-column system subjected to passive and active buckling control, pg. 269-280. DOI: 10.1515/9783110469240-024 Lateral vibration attenuation of a beam with circular cross-section by a support with integrated piezoelectric transducers shunted to negative capacitances, vol. 25, pg. 095045.
DOI: 10.1088/0964-1726/25/9/095045 Undesired vibration may occur in lightweight structures due to excitation and low damping. For the purpose of lateral vibration attenuation in beam structures, piezoelectric transducers shunted to negative capacitances can be an appropriate measure. In this paper, a new concept for lateral vibration attenuation by integrated piezoelectric stack transducers in the elastic support of a beam with circular cross-section is presented. In the piezoelastic support, bending of the beam in an arbitrary direction is transformed into a significant axial deformation of three stack transducers and, thus, the beam's surface may remain free from transducers. For multimodal vibration attenuation, each piezoelectric transducer is shunted to a negative capacitance. It is shown by numerical simulation and experiment that the concept of an elastic beam support with integrated shunted piezoelectric stack transducers is capable of reducing the lateral vibration of the beam in an arbitrary direction. C. Melzer Uncertainty quantification for decision making in early design phase for passive and active vibration isolation, pg. 4501-4513. O. Heuss Optimal tuning of shunt parameters for lateral beam vibration attenuation with three collocated piezoelectric stack transducers (Paper No. 149) Approach to prevent locking in a spring-damper system by adaptive load redistribution in auxiliary kinematic guidance elements, pg. 94330G. In: Industrial and Commercial Applications of Smart Structures Technologies 2015. null (SPIE Proceedings) DOI: 10.1117/12.2086491 In many applications, kinematic structures are used to enable and disable degrees of freedom. The relative movement between a wheel and the body of a car or a landing gear and an aircraft fuselage are examples for a defined movement. In most cases, a spring-damper system determines the kinetic properties of the movement. However, unexpected high load peaks may lead to maximum displacements and maybe to locking. Thus, a hard clash between two rigid components may occur, causing acceleration peaks. This may have harmful effects for the whole system. For example a hard landing of an aircraft can result in locking the landing gear and thus damage the entire aircraft. In this paper, the potential of adaptive auxiliary kinematic guidance elements in a spring-damper system to prevent locking is investigated numerically. The aim is to provide additional forces in the auxiliary kinematic guidance elements in case of overloading the spring-damper system and thus to absorb some of the impact energy. To estimate the potential of the load redistribution in the spring-damper system, a numerical model of a two-mass oscillator is used, similar to a quarter-car-model. In numerical calculations, the reduction of the acceleration peaks of the masses with the adaptive approach is compared to the Acceleration peaks without the approach, or, respectively, when locking is not prevented. In addition, the required force of the adaptive auxiliary kinematic guidance elements is calculated as a function of the masses of the system and the drop height, or, respectively, the impact energy. M. Krech L. Kristl T. Freund A. Kuttich M. Zocholl P. Groche Methodical Approaches to Describe and Evaluate Uncertainty in the Transmission Behavior of a Sensory Rod, Applied Mechanics and Materials, pg. 205-217. 
Lateral vibration attenuation of a beam with circular cross-section by supports with integrated resonantly shunted piezoelectric transducers Comparison of Methodical Approaches to Describe and Evaluate Uncertainty in the Load-Bearing Capacity of a Truss Structure In: Proceedings of the Fourth International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering. (Civil-Comp Proceedings) DOI: 10.4203/ccp.109.26 Load bearing mechanical structures like trusses face uncertainty in loading along with uncertainty in stress and strength due to uncertainty in their development, production and usage. According to the working hypothesis of the German Collaborative Research Centre SFB 805, uncertainty occurs in processes that are not, or only partial deterministic, and can only be controlled in processes. The authors classify and compare different methodical approaches to describe and to evaluate uncertainty in the development phase of three simple two-dimensional linear mathematical truss structure models in a consistent way. The truss structures are assumed to be mounted statically determined. They are each loaded by a vertical static force at a similar node. The criteria to compare the methodical approaches for uncertainty analysis are the limit load condition in one of the columns in the truss structure due to that load. For that, the authors distinguish between stochastic and non-stochastic or, respectively, probabilistic and non-probabilistic evaluation methods, depending on the quality of information of the internal system properties such as geometry and material, and external properties such as magnitude and direction of loading. As a probabilistic approach, direct Monte-Carlo simulation with full sample sets of internal and external property values is conducted exemplarily. Furthermore, the relation between the number of columns in a truss structure and the number of samples through the criteria of convergence is presented. Examples of interval and fuzzy analysis will give information about non-probabilistic uncertainty. The effectiveness and confidence intervals of the different methods will be evaluated by means of the uncertain limit load condition in one column of the truss structure due to uncertainty in internal and external system properties. Additionally, uncertainty of the safety factor for the three trusses is analysed by varying the radii of the columns and by evaluating the probability of failure. Comparison of Uncertainty in Passive and Active Vibration Isolation, pg. 15-25. In this contribution, the authors discuss a clear and comprehensive way to deepen the understanding about the comparison of parametric uncertainty for early passive and active vibration isolation design in an adequate probabilistic way. A simple mathematical one degree of freedom linear model of an automobile's suspension leg, excited by harmonic base point stroke and subject to passive and active vibration isolation, is used as an example study for uncertainty comparison. The model's parameters are chassis mass, suspension leg's damping and stiffness for passive vibration isolation, and an additional gain factor for velocity feedback control when active vibration isolation is assumed. Assuming the parameters to be normally distributed, they are non-deterministic input for Monte Carlo simulations to investigate the dynamic vibrational response due to the deterministic excitation. The model parameters are assumed to vary according to plausible assumptions from literature and own works. Taking into account three different damping levels for each passive and active vibration isolation approach, the authors investigate the numerically simulated varying dynamical output from the model's dynamic transfer function in six case studies in frequency and time domain. The cases for the output in frequency domain are (i) varying maximum vibration amplitudes at damped resonance frequencies for different passive and active damping levels, (ii) varying vibration amplitudes at the undamped resonance frequency, (iii) varying isolation frequency, (iv) varying amplitudes at the excitation frequency beyond the passive system's fixed isolation frequency, and (v) vibration amplitudes for −15 dB isolation attenuation. In time domain, case (vi) takes a closer look at the varying decaying time until steady state vibration is reached.
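To make the kind of Monte Carlo study described in that abstract concrete, here is a rough, self-contained sketch (all parameter values and scatter levels are invented placeholders, not the authors' data): a base-excited one-DOF oscillator whose transmissibility peak is evaluated under sampling of normally distributed mass, stiffness, damping and feedback gain, once passively and once with a simple velocity-feedback "active" term.

```python
import numpy as np

rng = np.random.default_rng(2)

def transmissibility(omega, m, k, c, g=0.0):
    """|X/Y| for a base-excited one-DOF oscillator
    m*x'' + c*(x'-y') + k*(x-y) = -g*x'.
    g is a velocity-feedback gain; g = 0 recovers the passive isolator."""
    num = k + 1j * omega * c
    den = -m * omega**2 + 1j * omega * (c + g) + k
    return np.abs(num / den)

n_mc = 5000
omega = np.linspace(1.0, 200.0, 2000)     # rad/s, evaluation grid

# Illustrative nominal values with a few percent scatter (assumed)
m = rng.normal(1.0, 0.02, n_mc)           # kg
k = rng.normal(27_000.0, 1_350.0, n_mc)   # N/m
c = rng.normal(30.0, 1.5, n_mc)           # N s/m
g = rng.normal(200.0, 10.0, n_mc)         # N s/m, feedback gain

peak_passive = np.empty(n_mc)
peak_active = np.empty(n_mc)
for i in range(n_mc):
    peak_passive[i] = transmissibility(omega, m[i], k[i], c[i]).max()
    peak_active[i] = transmissibility(omega, m[i], k[i], c[i], g[i]).max()

print(f"passive peak |X/Y|: {peak_passive.mean():.1f} +/- {peak_passive.std():.1f}")
print(f"active  peak |X/Y|: {peak_active.mean():.2f} +/- {peak_active.std():.2f}")
```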
The model parameters are assumed to vary according to plausible assumptions from the literature and the authors' own work. Taking into account three different damping levels for each of the passive and active vibration isolation approaches, the authors investigate the numerically simulated varying dynamic output from the model's transfer function in six case studies in the frequency and time domains. The cases for the output in the frequency domain are (i) varying maximum vibration amplitudes at the damped resonance frequencies for different passive and active damping levels, (ii) varying vibration amplitudes at the undamped resonance frequency, (iii) varying isolation frequency, (iv) varying amplitudes at the excitation frequency beyond the passive system's fixed isolation frequency, and (v) vibration amplitudes for −15 dB isolation attenuation. In the time domain, case (vi) takes a closer look at the varying decay time until steady-state vibration is reached. Consistent Comparison of Methodical Approaches to Describe and Evaluate Uncertainty in the Load-Carrying Capacity of a Truss Structure (Paper No. D10) Model verification and validation of a piezo-elastic support for passive and active structural control of beams with circular cross-section, pg. 67-77. S. Ochs K. Pitz Quantitative Description and Assessment of Uncertainty for a Load-Bearing System, pg. 29-44. In: Tagungsband zur 27. VDI-Fachtagung Technische Zuverlässigkeit TTZ 2015 (VDI-Berichte) Load Transmitting Device Influence of varying support stiffness on the load path in a 2D-truss for structural health control, pg. 3067-3074. Numerical and experimental investigation of parameter uncertainty in the working diagram and in use of a single piezoelectric stack actuator to stabilize a slender beam column against buckling, pg. 2713-2719. In: Eurodyn 2014. Proceedings of the 9th International Conference on Structural Dynamics (30 June - 2 July 2014; Porto, Portugal) (EURODYN) Uncertainty in the use of a single piezoelectric stack actuator for stabilization of a slender beam column is described and assessed via the working diagram to estimate the stack actuator's force and stroke levels. Deviations in the blocking force, in the maximum free stroke of the actuator, in the mechanical prestress load on the actuator and in the stiffness of the host structure are considered as uncertain parameters. Worst-case analyses in the working diagram are performed to assess the influence of the uncertain parameters on the actuator's force and stroke levels. Real measurements of the force and stroke levels generated by a single piezoelectric stack actuator working against a cantilever beam are performed in an experimental setup. Uncertainty in the experimental measurements of the actuator's force and stroke levels is described and discussed. It is seen that a high discrepancy between numerical and experimental results with respect to uncertainty can be quantified. Active stabilization of a slender beam-column under static axial loading and estimated uncertainty in actuator properties, pg. 235-245. Buckling of load-carrying beam-columns is a severe failure scenario in light-weight structures. The authors present an approach to actively stabilize a slender beam-column under static axial load to prevent it from buckling in its first buckling mode.
For that, controlled active counteracting forces are applied by two piezoelectric stack actuators near the column's fixed base, achieving a 40% higher axial critical load and leaving most of the column's surface free from actuation devices. However, uncertain actuator properties due to tolerances in characteristic maximum free stroke and blocking force capability have an influence on the active stabilization. This uncertainty and its effect on active buckling control is investigated by numerical simulation, based on experimental tests to determine the actual maximum free stroke and blocking force for several piezoelectric stack actuators. The simulation shows that the success of active buckling control depends on the actuator's variation in its maximum free stroke and blocking force capability. A. Hanselka T. Koch Device capable of self-propulsion along a supporting structure Nonparametric estimation of a maximum of quantiles, vol. 8, pg. 3176 - 3192. In: Electronic Journal of Statistics DOI: 10.1214/14-EJS970 A simulation model of a complex system is considered, for which the outcome is described by m(p,X), where p is a parameter of the system, X is a random input of the system and m is a real-valued function. The maximum (with respect to p) of the quantiles of m(p,X) is estimated. The quantiles of m(p,X) of a given level are estimated for various values of p from an order statistic of values m(pi,Xi) where X,X1,X2,… are independent and identically distributed and where pi is close to p, and the maximal quantile is estimated by the maximum of these quantile estimates. Under assumptions on the smoothness of the function describing the dependency of the values of the quantiles on the parameter p the rate of convergence of this estimate is analyzed. The finite sample size behavior of the estimate is illustrated by simulated data and by applying it in a simulation model of a real mechanical system. Statistical approach for active buckling control with uncertainty, pg. 291-297. In: Topics in Modal Analysis I, Volume 7. (Proceedings of IMAC–XXXII A Conference and Exposition on Structural Dynamics; Feb. 3-6, 2014; Orlando, FL, USA) (Conference Proceedings of the Society for Experimental Mechanics Series) Buckling of load-carrying column structures is an important failure scenario in light-weight structures as it may result in the collapse of the entire structure. If the actual loading is unknown, stability becomes uncertain. To investigate uncertainty, a critically loaded beam-column, subject to buckling, clamped at the base and pinned at the upper end is considered, since it is highly sensitive to small changes in loading. To control the uncertainty of failure due to buckling, active forces are applied with two piezoelectric stack actuators arranged in opposing directions near the beam-column's base to prevent it from buckling. In this paper, active buckling control is investigated experimentally. A mathematical model of the beam-column is built and a model based Linear Quadratic Regulator (LQR) is designed to stabilize the system. The controller is implemented on the experimental test setup and a statistically relevant number of experiments is conducted to prove the effect of active stabilization. It is found that the load bearing capacity of the beam-column could be increased by more than 40% for the experimental test setup using different controller parameters for three ranges of axial loading. 
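As a companion to the maximum-of-quantiles estimator summarized above (the Electronic Journal of Statistics entry), here is a small self-contained sketch of the idea rather than the paper's implementation: quantiles of m(p, X) are estimated locally from order statistics of samples whose parameter values lie close to each grid point p, and the maximum of these local estimates is reported. The toy response m() and all constants are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def m(p, x):
    # toy simulation outcome: deterministic trend in the design parameter p plus random input x
    return np.sin(3 * p) + 0.3 * p + 0.2 * x

p_i = rng.uniform(0.0, 2.0, 5000)      # scattered parameter values p_i
x_i = rng.standard_normal(5000)        # i.i.d. random inputs X_i
y_i = m(p_i, x_i)                      # observed outcomes m(p_i, X_i)

def max_quantile(p_grid, level=0.95, width=0.05):
    estimates = []
    for p in p_grid:
        local = y_i[np.abs(p_i - p) < width]             # outcomes with p_i close to p
        if local.size:
            estimates.append(np.quantile(local, level))  # order-statistic quantile estimate
    return max(estimates)

print("estimated maximum 95 % quantile:", round(max_quantile(np.linspace(0.0, 2.0, 41)), 3))

Shrinking the window width while increasing the sample size is exactly the trade-off whose convergence rate the paper analyses.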
Consistent Approach to describe and evaluate uncertainty in vibration attenuation using resonant piezoelectric shunting and tuned mass dampers, pg. 51-64. Mathematical modeling and numerical simulation of an actively stabilized beam-column with circular cross-section, pg. 90572H. In: Active and Passive Smart Structures and Integrated Systems 2014 (SPIE Proceedings) Buckling of axially loaded beam-columns represents a critical design constraint for light-weight structures. Besides passive solutions to increase the critical buckling load, active buckling control provides a possibility to stabilize slender elements in structures. So far, buckling control by active forces or bending moments has mostly been investigated for beam-columns with rectangular cross-section and with a preferred direction of buckling. The proposed approach investigates active buckling control of a beam-column with circular solid cross-section which is fixed at its base and pinned at its upper end. Three controlled active lateral forces are applied near the fixed base at angles of 120° to each other to stabilize the beam-column and allow higher critical axial loads. The beam-column is subject to supercritical static axial loads and lateral disturbance forces with varying directions and offsets. Two independent modal state space systems are derived for the bending planes in the lateral y- and z-directions of the circular cross-section. These are used to design two linear-quadratic regulators (LQR) that determine the necessary control forces, which are transformed into the directions of the active lateral forces. The system behavior is simulated with a finite element model using one-dimensional beam elements with six degrees of freedom at each node. With the implemented control, it is possible to actively stabilize a beam-column with circular cross-section in an arbitrary buckling direction for axial loads significantly above the critical axial buckling load. Effect of uncertain boundary conditions and uncertain axial loading on lateral vibration attenuation of a beam with shunted piezoelectric transducers, pg. 4495–4508. Undesired vibration may occur in lightweight structures due to excitation and low damping. For the purpose of vibration attenuation, resonantly shunted piezoelectric transducers can be an appropriate measure. In this paper, uncertainty in the design and application of resonantly shunted piezoelectric patch transducers to attenuate the vibration of a beam due to uncertain rotational support stiffness and uncertain static axial loading is investigated. A linear mathematical model of a beam with piezoelectric patch transducers using a Ritz formulation is used to calculate the vibration attenuation potential under uncertainty. Variations in the support stiffness and in the static axial loading lead to detuning and cause the resonant shunt to operate away from the desired frequency, resulting in higher vibration amplitudes. For a beam that is pinned or fixed at both ends, the attenuation effect is less sensitive to uncertainty in the support stiffness than in the case of an elastic support that is neither fully pinned nor fully fixed at both ends. A beam fixed at both ends is most robust against uncertainty in static axial loading. Approach to Evaluate Uncertainty in Passive and Active Vibration Reduction, pg. 345-352. Uncertainty is an important design constraint when configuring a dynamic mechanical system that is subject to passive or active vibration reduction.
Uncertainty can be divided into the categories unknown, estimated and stochastic uncertainty, depending on the amount of information available, e.g. on the deviation of the principal mechanical parameters governing inertia, energy dissipation and compliance and, today more and more, active energy feeding to enhance damping. In this paper, these uncertainty categories as well as solutions for uncertainty control in the early design phase are described and evaluated analytically in a simple but consistent and transparent way on the basis of a mathematical dynamic linear model. The model is a one-degree-of-freedom mass-damper-spring system representing a suspension leg supporting a vehicle's chassis that is subject to passive and active damping. The amplitude and phase responses in the frequency domain are shown analytically in case studies for different assumptions of the effective uncertainty. Amongst others, sample tests are conducted by Monte Carlo simulations when stochastic uncertainty is considered. The uncertainty examinations on vibration reduction for the selected dynamical model show promising results, indicating the statistical predominance of active over passive damping. Influence of uncertain support boundary conditions on the buckling load of an axially loaded beam-column, pg. 4675–4686. H. Hanselka Uncertainty in passive and active stabilisation of critically loaded columns, pg. 61-67. Unsicherheit im Arbeitsdiagramm eines piezoelektrischen Aktuators für die aktive Stabilisierung von Stäben (Uncertainty in the working diagram of a piezoelectric actuator to stabilize rods), pg. 261-270. In: 26. VDI-Fachtagung Technische Zuverlässigkeit 2013: Entwicklung und Betrieb zuverlässiger Produkte (Leonberg bei Stuttgart, 23.-24. April 2013) (VDI-Berichte) Uncertainty in Loading and Control of an Active Column Critical to Buckling, vol. 19 In: Shock and Vibration DOI: 10.3233/SAV-2012-0700 Buckling of load-carrying column structures is an important design constraint in light-weight structures as it may result in the collapse of an entire structure. When a column is loaded by an axial compressive load equal to its individual critical buckling load, a critically stable equilibrium occurs. When loaded above its critical buckling load, the passive column may buckle. If the actual loading during usage is not fully known, stability becomes highly uncertain. This paper presents an approach to control uncertainty in a slender flat column structure critical to buckling by actively stabilising the structure. The active stabilisation is based on controlling the first buckling mode by controlled counteracting lateral forces. This results in a bearable axial compressive load which can theoretically be almost three times higher than the actual critical buckling load of the considered system. Finally, the sensitivity of the presented system is discussed with respect to the design of an appropriate controller for stabilising the active column. Uncertainty in Mechanical Engineering Mathematical modelling of postbuckling in a slender beam column for active stabilisation control with respect to uncertainty, pg. 834119. In: Active and Passive Smart Structures and Integrated Systems 2012 (12-15 March 2012; San Diego, CA, USA) (Proceedings of SPIE) Buckling is an important design constraint in light-weight structures as it may result in the collapse of an entire structure. When a mechanical beam column is loaded above its critical buckling load, it may buckle.
In addition, if the actual loading is not fully known, stability becomes highly uncertain. To control uncertainty in buckling, an approach is presented to actively stabilise a slender flat column sensitive to buckling. For this purpose, actively controlled forces applied by piezoelectric actuators located close to the column's clamped base stabilise the column against buckling at critical loading. In order to design a controller to stabilise the column, a mathematical model of the postcritically loaded system is needed. Simulating postbuckling behaviour is important to study the effect of axial loads above the critical axial buckling load within active buckling control. Within this postbuckling model, different kinds of uncertainty may occur: (i) errors in the estimation of model parameters such as mass, damping and stiffness, (ii) non-linearities, e.g. in the assumption of the curvature of the column's deflection shapes, and more. In this paper, numerical simulations based on the mathematical model of the postcritically axially loaded column are compared to a mathematical model based on experiments on the actively stabilised, postcritically loaded real column system using closed-loop identification. The motivation for developing an experimentally validated mathematical model is to derive a model-based stabilising control algorithm for a real postcritically axially loaded beam column. J. Nuffer Unsicherheit in der Zuverlässigkeitsbewertung von aktiven Komponenten am Beispiel eines piezoelektrischen Stapelaktuators für eine aktive Knickstabilisierung (Uncertainty in evaluating the reliability of active components, exemplified by a piezoelectric stack actuator for active buckling control), pg. 145-158. In: Tagungsband zur 25. VDI-Fachtagung Technische Zuverlässigkeit TTZ 2011: Entwicklung und Betrieb zuverlässiger Produkte (VDI-Berichte) T. Eifler M. Haydn L. Mosch Prozessmodell zur systematischen Beschreibung und Verkettung von Unsicherheit (Process model for the systematic description and linking of uncertainty). Poster A survey on uncertainty in the control of an active column critical to buckling (Paper No. 17) Approach for a Consistent Description of Uncertainty in Process Chains of Load Carrying Mechanical Systems, vol. 104, pg. 133-144. DOI: 10.4028/www.scientific.net/amm.104.133 Uncertainty in load-carrying systems may result, for example, from geometric and material deviations in the production and assembly of their parts. In usage, this uncertainty may lead to not completely known loads and strength, which may lead to severe failure of parts or of the entire system. Therefore, an analysis of uncertainty is recommended. In this paper, uncertainty is assumed to occur in processes, and an approach is presented to describe uncertainty consistently within processes and process chains. This description is then applied to an example which considers uncertainty in the production and assembly processes of a simple tripod system and its effect on the resulting load distribution in its legs. The consistent description allows uncertainties to be detected and, furthermore, uncertainty propagation in process chains for load-carrying systems to be displayed. A study of uncertainties in active load carrying systems due to scatter in specifications of piezoelectric actuators, pg. 2114-2120. C. Stapp Statistical approach to evaluating reduction of active crack propagation in aluminum panels with piezoelectric actuator patches, vol. 20, pg. 1-11. Fatigue cracks in light-weight shell or panel structures may lead to major failures when used for sealing or load-carrying purposes.
This paper describes investigations into the potential of piezoelectric actuator patches that are applied to the surface of an already cracked thin aluminum panel to actively reduce the propagation of fatigue cracks. With active reduction of fatigue crack propagation, uncertainties in the cracked structure's strength, which always remain present even when the structure is used under damage tolerance conditions, e.g. airplane fuselages, could be lowered. The main idea is to lower the cyclic stress intensity factor near the crack tip with actively induced mechanical compression forces using thin low-voltage piezoelectric actuator patches applied to the panel's surface. By lowering the cyclic stress intensity, the rate of crack propagation in an already cracked thin aluminum panel is reduced significantly. First, this paper discusses the proper placement and alignment of thin piezoelectric actuator patches near the crack tip to induce the mechanical compression forces necessary for reduction of crack propagation, by means of numerical simulations. Second, the potential for crack propagation reduction is investigated statistically by an experimental sample test examining three cases: a cracked aluminum host structure (i) without, (ii) with but passive, and (iii) with activated piezoelectric actuator patches. It is seen that activated piezoelectric actuator patches lead to a significant reduction in crack propagation. A study on scatter in piezoelectric stack actuator characteristics as an uncertainty criterion in the usage process of a load-carrying system (Paper No. 41) Parameter study on an actively stabilised beam column, pg. 17-25. J. Koenen Evaluation and Control of Uncertainty in Using an Active Column System, vol. 104, pg. 187-195. Uncertainty in the usage of load-carrying systems mainly results from not fully known loads and strength. This article discusses basic approaches to control uncertainty in the usage of load-carrying systems by passive and active means. An active, low-damped column system critical to buckling is presented, in which a slender column can be stabilised actively by piezo stack actuators at one of its ends only. Uncertainty may be controlled in the active column system by temporarily enhancing the bearable axial load theoretically up to three times compared to the passive column system in case of critical loading. However, in the implementation of these approaches, system-specific uncertainty may also occur. In numerical examinations it is shown that small deviations in the measured axial loading may significantly increase the active force needed to achieve stabilisation. The increase of the applied active force might affect the lifetime of the piezo stack actuators and thus the stabilising capability of the active column system. General approach and possibility to evaluate uncertainty in estimating loads acting on a beam (Paper No. 25) Uncertainties in active stabilization of buckling An Approach to Control the Stability in an Active Load-Carrying Beam-Column by One Single Piezoelectric Stack Actuator, pg. 535-546. Euler buckling of column structures is an important design constraint in slender light-weight structures as it may result in the collapse of an entire structure. Thus, uncertainties in the usage of technical products that result from unforeseen incidents or misuse shall be identified, assessed and controlled.
The main objective of this research is to develop and validate a concept to stabilise a column structure by the use of a lateral active force induced by a piezoelectric stack actuator. A slender flat beam-column, built in vertically at the base, pinned at the upper end and loaded by an axial compressive force equal to its buckling load, is examined numerically and experimentally. The active stabilisation is based on the fact that the initial minimal deflection is superimposed with the deflection caused by an actively controlled counteracting force. This leads the structure into approximately its second bending deflection mode. With the concept shown, a simple beam-column critical to buckling is stabilised on demand. Investigating the Potential of Piezoelectric Actuator Patches for the Reduction of Fatigue Crack Propagation in Aluminum Panels An Enhanced Approach to Control Stability in an Active Column Critical to Buckling K. Habermehl T. Bedarff T. Hauer S. Schmitt Approach to validate the influences of uncertainties in manufacturing on using load-carrying structures, pg. 5319-5333. This paper gives an example of a new approach to systematically display uncertainties within the process chain to manufacture, to assemble and to use load-carrying structures, in order to, eventually, control them in the future. By controlling uncertainties, safety margins between external loading and internal strength of a load-carrying structure could be lowered, oversizing will be reduced, resources will be preserved, the range of application will be widened and economic advantages will be achieved. In this work, the influences of uncertainties in manufacturing as well as assembly processes on the usage processes, with respect to the load distribution in a simple tripod, are examined. If an equal load distribution is desired, this equality depends highly on the quality of the drilled holes for a leg connecting device. The holes vary in diameter, so the legs may be assembled differently. It is shown by example how deviations due to manufacturing may change the load distribution. Monte Carlo simulations and real experiments on a simple tripod are conducted for validation. An Approach to Quantify the Influence of Uncertainties in Model-based Usage-Monitoring of Load-Carrying Systems, pg. 857-866. In this study, a method to estimate uncertainties in model-based usage monitoring of a load-carrying system, used to determine the mechanical loading condition online during operation, is discussed. Generally, uncertainties in the determination of the loading condition occur. As an example, uncertainties in identifying a single external force on a simple cantilever beam depending on scattering input and process parameters, for example deviations of material properties or of the sensor position, are investigated with Monte Carlo simulations. These influences of scattering properties are evaluated by estimating the correlation between uncertain input and process parameters and the standard deviation of the error in the force estimation. For this application it is shown that uncertainty in the assumptions on structural damping and uncertainty in the measured signals due to a varying sensor position have a high influence on the accuracy of the force estimation. With the method shown, measures to reduce uncertainties in usage monitoring could be derived and rated. Ansätze und Maßnahmen zur Beherrschung von Unsicherheit in lasttragenden Systemen des Maschinenbaus. Controlling uncertainties in load carrying systems, pg. 55-62.
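The tripod example above lends itself to a very small Monte Carlo sketch. The toy model below is an assumption for illustration, not the authors' process-chain model: scatter in the drilled hole positions shifts the leg attachment points of a statically determinate tripod, and the resulting spread of the leg forces is sampled.

import numpy as np

rng = np.random.default_rng(1)
P, r, sigma = 900.0, 0.25, 0.002          # central load [N], nominal leg radius [m], position scatter [m]
angles = np.deg2rad([90.0, 210.0, 330.0]) # nominal leg positions, 120 degrees apart

samples = []
for _ in range(10_000):
    x = r * np.cos(angles) + rng.normal(0.0, sigma, 3)   # perturbed attachment coordinates
    y = r * np.sin(angles) + rng.normal(0.0, sigma, 3)
    A = np.vstack([np.ones(3), x, y])                    # force balance and two moment balances
    samples.append(np.linalg.solve(A, [P, 0.0, 0.0]))    # leg forces of the determinate tripod

F = np.array(samples)
print("mean leg forces [N]:           ", F.mean(axis=0).round(1))
print("99th percentile, worst leg [N]:", round(float(np.quantile(F.max(axis=1), 0.99)), 1))

With perfectly drilled holes each leg would carry exactly P/3; the percentile line quantifies how far manufacturing scatter pushes the most loaded leg above that value.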
In: Konstruktion (Zeitschrift für Produktentwicklung und Ingenieur-Werkstoffe) This paper shows a new approach for controlling uncertainties in load-carrying systems in mechanical engineering. By controlling uncertainties, for example, safety margins between mechanical loading and strength will be lowered, oversizing will be reduced, resources will be preserved, the range of application will be widened and economic advantages will be achieved. To reach these goals, the German Collaborative Research Centre SFB 805, funded by the Deutsche Forschungsgemeinschaft (DFG) and started in January 2009, examines in a first step the uncertainty potential of well-known methods and technologies to develop, to fabricate, to use and to reuse load-carrying systems. On this basis, and in a second step, uncertainties will be described and evaluated systematically in order to, eventually, control them. R. Engelhard A. Sichau H. Kloberdanz H. Birkhofer A Model to Categorise Uncertainty in Load-Carrying Systems, pg. 53-64. A survey to control uncertainties by comprehensive monitoring of load-carrying structures, pg. 76471R1–76471R9. In: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2010 (SPIE Proceedings) This paper gives a general view on some aspects of the influence of uncertainty in model-based monitoring of load-carrying structures. The advantages and relevance of monitoring for the prediction of reliability are clarified and the difference between uncertainty and reliability is discussed. Solving inverse problems is a particular challenge in monitoring systems; therefore, different categories of inverse problems are discussed. A generally valid extended difference equation, which describes the transfer behavior of the structure, is derived as the basis for the digital signal processing of model-based monitoring. This equation also considers changes in the structure's dynamic properties, e.g. due to damage or temperature. With this equation, the influence of uncertainty due to measurement noise on the functionality of monitoring is discussed, and some possibilities are shown to control this uncertainty when determining ideal sensor positions for monitoring. E. Janssen Mechatronische Stabilisierung knickgefährdeter Stäbe in lasttragenden Systemen des Maschinenbaus (Mechatronic stabilisation of beams critical to buckling), pg. 323-328. M. Matthias J. Bös Lärm- und Schwingungsminderung im Schiffbau durch adaptronische/mechatronische Lösungsansätze (Acoustic and vibration reduction in ship building with mechatronic solutions), pg. 384-399.
Technical measures for noise and vibration reduction are usually based on the reduction of vibration generation, of sound radiation, or of the transmission of vibrations, air-borne and structure-borne sound at (or as closely as possible to) the source, as well as at coupling points. The technology of adaptronics/mechatronics (i.e., smart structure technology) applies the concept of using additional forces that are adjusted with respect to frequency, phase, and amplitude at appropriate locations (e.g., bearings, fasteners, vibrating surfaces) in such a way that they counteract the unwanted vibrations in order to significantly reduce the overall vibrations. The achievable vibration reduction and the required expenses (e.g., development and system costs) depend strongly on the particular target application. This paper demonstrates the potential and possibilities of this technology for ship building applications. M. Thomaier K. Wolf FMEA for qualitative measurement of the reliability of an active interface for vibration reduction in passenger cars, pg. 317-328. In: Proceedings der 23. Tagung Technische Zuverlässigkeit TTZ 2007 (VDI-Bericht) Examination of Reliability of Piezoelectric Cantilever Beams. Poster, pg. 1-4. R. Markert Modellgestützte Diagnose von Unwuchten und Wellenrissen in Rotorsystemen (Model-based diagnosis of unbalances and shaft cracks in rotor systems), pg. 205-223. In: VDI-Schwingungstagung 2007: Schwingungsüberwachung und Diagnose von Maschinen (VDI-Berichte) A. Sekhar Health Monitoring and Crack Identification in a Rotor System, pg. 217-225. A. Büter Schadensüberwachung mit Wandlermaterialien. Health Monitoring with Smart Materials, pg. 20-27. In: Thema Forschung (Technische Universität Darmstadt) A. Friedmann Zuverlässigkeitsbewertung einer Strukturschnittstelle zur aktiven Schwingungsminderung (Evaluation of the reliability of an interface for active vibration control), pg. 7-16. In: Tagungsband zur 1. Tagung DVM - Arbeitskreis Zuverlässigkeit mechatronischer und adaptronischer Systeme (DVM-Bericht) Zuverlässigkeitsuntersuchungen an resonant erregten piezoelektrischen Biegeaktoren: Experiment und Analyse zur Degradation und Schädigung von Piezoventilatoren. Reliability studies on resonantly stimulated piezoelectric bending actuators: Experiment and analysis on the degradation and damage of piezo fans, pg. 163-173. Zuverlässigkeit aktiver Systeme: experimentelle und theoretische Methoden (Reliability of active systems: experimental and theoretical methods), pg. 19.1-19.7.
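A compact sketch of the destructive-superposition principle described in the shipbuilding abstract above: a secondary force with tuned amplitude and phase is added to a harmonic disturbance acting on a one-degree-of-freedom structure, and the residual vibration is evaluated for small actuator amplitude and phase errors. The modal parameters and error levels are assumed purely for illustration.

import numpy as np

m, c, k = 1.0, 2.0, 1.0e4                  # modal mass, damping and stiffness (assumed)
w = 2 * np.pi * 25.0                       # disturbance frequency [rad/s]
H = 1.0 / (-m * w**2 + 1j * c * w + k)     # receptance at the excitation point
F_dist = 10.0                              # disturbance force amplitude [N]

for amp_err, phase_err_deg in [(0.01, 0.5), (0.05, 2.0), (0.20, 10.0)]:
    F_ctrl = -F_dist * (1 + amp_err) * np.exp(1j * np.deg2rad(phase_err_deg))
    x_res = abs(H * (F_dist + F_ctrl))     # residual amplitude with the counteracting force
    x_ref = abs(H * F_dist)                # uncontrolled amplitude
    print(f"amplitude error {amp_err:4.0%}, phase error {phase_err_deg:4.1f} deg: "
          f"attenuation {20 * np.log10(x_res / x_ref):6.1f} dB")

Even a few percent of amplitude error or a few degrees of phase error limit the achievable reduction, which is one reason why the attainable attenuation and the necessary effort depend so strongly on the target application.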
Book (monograph) Untersuchungen zur modellgestützten Diagnose von Unwuchten und Wellenrissen in Rotorsystemen (Investigations into the model-based diagnosis of unbalances and shaft cracks in rotor systems). Dissertation, Nr. 325. In: Fortschrittberichte VDI, Reihe 11: Schwingungstechnik Untersuchungen zur Rißidentifikation in Rotoren (Investigations into crack identification in rotors), pg. 173-181. J. Jayesh Model Based Unbalance and Fatigue Crack Identification in Rotor Systems, pg. 41. Model Based Fatigue Crack Identification in Rotor Systems, pg. 145. Model based crack identification and monitoring in a rotor system passing the critical speed, pg. 877-884. M. Seidler Model Based Fault Identification in Rotor Systems by Least Squares Fitting, vol. 7, pg. 311-321. In: International Journal of Rotating Machinery DOI: 10.1155/S1023621X01000264 In the present paper a model-based method for the on-line identification of malfunctions in rotor systems is proposed. The fault-induced change of the rotor system is taken into account by equivalent loads, which are virtual forces and moments acting on the linear undamaged system model to generate a dynamic behaviour identical to the measured one of the damaged system. By comparing the equivalent loads reconstructed from current measurements to the pre-calculated equivalent loads resulting from fault models, the type, amount and location of the current fault can be estimated. The identification method is based on least squares fitting algorithms in the time domain. The quality of the fit is used to find the probability that the identified fault is present. The effect of measurement noise, measurement locations, the number of mode shapes taken into account, etc. on the identification result and quality is studied by means of numerical experiments. Finally, the method has also been tested successfully on a real test rig for some typical faults. Fault Models for On-line Identification of Malfunctions in Rotor Systems, pg. 435-446. Model Based Fault Identification in Rotor Systems by Least Squares Fitting, pg. 901-907. Validation of Online Diagnostics of Malfunctions in Rotor Systems, pg. 581-590. Hochschulschrift (thesis) Ermittlung des Dynamischen Verhaltens eines Verkehrsflugzeug-Fußbodens im Hinblick auf die Reduzierung von Schallabstrahlung in die Kabine (Determination of the dynamic behaviour of a commercial aircraft floor with regard to reducing sound radiation into the cabin). Masterarbeit (Master's thesis) in cooperation with Airbus Industries in Hamburg. Motion Dynamics and Design. Research interests: machine and structural dynamics, rotor dynamics; vibrations and experimental modal analysis; mathematical modeling, numerical simulation, design, manufacturing, experimental testing; model verification and validation, non-probabilistic and probabilistic uncertainty quantification; active/adaptive state control, structural health monitoring and control in mechatronic structural dynamic systems; teaching and advising undergraduate and graduate students. Education: 2004 Ph.D. (Dr.-Ing.), Rotor Dynamics, Mechanics, Technische Universität Darmstadt (TUD); 1998 M.S. (Dipl.-Ing.)
Mechanical Eng., Technische Universität Berlin 2019 – 2021 Penn State University, PA, USA – Visiting Scholar for Structural Dynamics and Uncertainty Quantification, College of Engineering, Architectural Engineering 2017 – 2019 Research Manager Reliability Future Mobility, Fraunhofer Institute for Structural Durability and System Reliability (LBF), Darmstadt 2008 – 2019 Senior Research Fellow in Mechatronics and Adaptronics, Fraunhofer LBF 2016 – 2017 Adjunct Professor, College of Engineering and Science, Clemson University, Clemson 2004 – 2008 Research Fellow Smart Materials/Smart Systems, Technische Universität (TU) Darmstadt, Darmstadt 1998 – 2004 Research and Teaching Assistant, Ph.D.-Student in Rotor Dynamics, TU Darmstadt, Darmstadt 1997 – 1998 Structural Engineer/Stress Analyst, Boeing Commercial Airplane Group, Seattle https://scholar.google.com/citations?user=IGJZsR8AAAAJ&hl=en
Nonagon diagonals

A diagonal is a line segment connecting two non-consecutive vertices of a polygon; it goes from one corner to another but is not an edge, and a diagonal cannot be drawn from a vertex to that same vertex. By the angle sum property of a triangle, the interior angles of a triangle add up to 180 degrees, and a triangle has no diagonals at all. Take care when counting the diagonals of a polygon to count each one only once: every vertex of an n-sided polygon can be joined to n - 3 others by a diagonal, so the total would be n(n - 3), and halving it to correct for the double count gives the formula d = n(n - 3)/2, where d is the number of diagonals and n is the number of sides. A square has 2 diagonals, a pentagon 5, a hexagon 9 (only 1 1/2 times its number of sides), a heptagon 14, an octagon 20, a nonagon 27 and a decagon 35; working around a nonagon vertex by vertex gives the pattern 6, 6, 5, 4, 3, 2, 1, 0, 0, for a grand total of 27 diagonals. In geometry, a nonagon (or enneagon) is a nine-sided polygon or 9-gon; the family runs triangle, quadrilateral, pentagon, hexagon, heptagon, octagon, nonagon, decagon, and the regular decagon (10 sides) and regular dodecagon (12 sides) continue the list. An irregular polygon has unequal sides and angles, and an irregular nonagon can take virtually infinitely many shapes, but all of them have 9 sides. A polygon is concave if all or part of at least one diagonal lies outside the polygon; in Euclidean geometry any polygon can be completely enclosed in some sufficiently large triangle (in hyperbolic geometry this is not an obvious statement). Drawing all the diagonals from one vertex splits a polygon with four or more sides into n - 2 triangles, which is how the interior angle sum (n - 2) x 180° is obtained; dividing a heptagon, a nonagon and an undecagon into triangles this way needs 4, 6 and 8 diagonals respectively. In an octagon the diagonals come in three lengths - short, medium (the height of the octagon) and long - and since each vertex supplies one long diagonal and each long diagonal is constructed twice, there are 4 long diagonals. Exercises collected from these notes: draw a pentagon, a hexagon and a heptagon; how many distinct diagonals does a pentagon have; how many diagonals does a heptagon have, and how many can be drawn from each of its vertices; how many diagonals does a nonagon have; and, from one of the source exercises, compute the ordered triple (e, a, d) for a regular nonagon. A well-known puzzle posted under the topic "Nonagon diagonals" asks: in a regular nonagon ABCDEFGHI, show that AB + AC = AE, i.e. the side plus the shortest diagonal equals the longest diagonal. One of the source pages is a programming assignment whose sample run (linux1[106] python proj1.py) prints a "Table of Regular Polygon Facts" listing, for each regular polygon, the number of sides, exterior angle, interior angle, interior angle sum and number of diagonals.
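That table can be reproduced with a few lines of Python; the helper below is a sketch in the spirit of the proj1.py sample run, not the original assignment solution.

def polygon_facts(n: int):
    exterior = 360.0 / n                # exterior angle of a regular n-gon
    interior_sum = (n - 2) * 180.0      # interior angle sum of any convex n-gon
    interior = interior_sum / n         # each interior angle of a regular n-gon
    diagonals = n * (n - 3) // 2        # n - 3 diagonals per vertex, halved for double counting
    return exterior, interior, interior_sum, diagonals

print("Table of Regular Polygon Facts")
for name, n in [("triangle", 3), ("square", 4), ("pentagon", 5), ("hexagon", 6),
                ("heptagon", 7), ("octagon", 8), ("nonagon", 9), ("decagon", 10)]:
    ext, inte, total, diag = polygon_facts(n)
    print(f"{name:9s} sides={n:2d} ext={ext:7.3f} int={inte:7.3f} sum={total:6.0f} diagonals={diag:2d}")

For the nonagon the row reads 40.000, 140.000, 1260 and 27, matching the figures worked out by hand above.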
A polygon is convex if all of its vertices point away from the interior; if at least one diagonal lies partly outside the figure, the polygon is concave. The standard names by number of sides are: 3 triangle, 4 quadrilateral, 5 pentagon, 6 hexagon, 7 heptagon, 8 octagon, 9 nonagon, 10 decagon, and in general n-gon; polygons are also classified by their sides and angles, an equilateral polygon being one in which all sides are equal. From each vertex of an n-sided polygon, the number of diagonals that can be drawn is three less than the number of sides, and summing over all vertices and dividing by two gives the total number of diagonals; put n = 9 and the number of diagonals in a nonagon comes out as 27, while a decagon has 35. Another way to reach the decagon's count is to note that picking two of its ten vertices to join can be done in C(10, 2) ways, of which 10 give sides rather than diagonals, so the result is C(10, 2) - 10 = 35. Drawing all the diagonals from one vertex of a decagon produces eight triangles, which is the picture behind the interior-angle-sum rule: for an octagon, 6 x 180° = 1,080°, and for a convex nonagon, (9 - 2) x 180° = 7 x 180° = 1,260°; because a regular (equiangular) nonagon has equal angles, each one measures 1,260°/9 = 140°, and the exterior angles of any polygon add up to 360°. A Demonstration cited in the source shows four visual proofs that a + b = c, where a, b and c are the lengths of a side, the shortest diagonal and the longest diagonal of a regular nonagon. Hexagons turn up constantly: a regular hexagon has six sides and six vertices, snowflakes grow in hexagonal patterns (as in the NASA-credited photograph by Alexey Kljatov), Saturn's pole carries a famous hexagonal cloud pattern, and hexagonal nuts and bolts are easy to grip with a wrench, which can be re-positioned every 60° if needed. The cube, cuboid, sphere, cylinder and prism are three-dimensional objects whose flat faces are 2D shapes; a right nonagonal prism has two regular nonagons as bases joined by rectangular faces, and in one of the exercises its height is greater than the length of a side of the nonagon. Further exercises collected from the source pages: find the sum of the measures of the interior angles of a convex nonagon; what sizes are the angles of a quadrilateral if they are in the ratio 8 : 9 : 10 : 13; find the number of diagonals in a nonagon and in a decagon; count the diagonals in each polygon with three through ten sides, note how many triangles are formed, make a table, and develop a formula relating the number of sides to the number of diagonals; a regular nonagon is inscribed in a circle with radius 5 units (Example 8); find the perimeter and area of an equilateral triangle inscribed in a circle (Example 9); find the perimeter P and area A of a regular nonagon with a radius of 10 inches; the area of a regular nonagon is 34 m²; calculate the perimeter of a regular nonagon (9-gon) inscribed in a circle with a radius of 13 cm; and which polygon has 42 more diagonals than sides? Finally, the diagonals of a convex regular pentagon are in the golden ratio to its sides.
Several of the source problems concern diagonals of a regular polygon meeting inside it. The K9 complete graph can be drawn as a regular nonagon with all 36 edges connected, 9 of which are sides and 27 diagonals; when all the diagonals of a convex nonagon are drawn, they intersect each other at p points inside the nonagon and q points on the boundary, and Euler's formula V - E + F = 2 is the standard tool for turning such intersection counts into counts of the regions formed. A frequently quoted probability question runs: two diagonals of a regular nonagon (a 9-sided polygon) are chosen at random - what is the probability that their intersection lies inside the nonagon? There are 27 diagonals in a convex nonagon, so two of them can be chosen in C(27, 2) = 351 ways; for each diagonal there are 5 other diagonals sharing one of its endpoints and 5 sharing the other, i.e. 10 ways for a given diagonal to share an endpoint with another, giving 27·10/2 = 135 endpoint-sharing pairs, and the pairs that do cross in the interior correspond to choices of four vertices. Related items from the same pages: the length of the shortest diagonal of a regular nonagon is X and the length of its longest diagonal is Y (compare the identity a + b = c above); chord and central angle; Circle Inscribed in a Semicircle, 45 Degrees Angle (Proposed Problem 310, high school/college/SAT level); Question 618 by Abdulkadir Altintas, touching diagonal, diameter, equilateral triangle, geometric probability and the harmonic mean; two circles are centered at intersection points of diagonals of a regular heptagon - how many diagonals can each circle be tangent to?; and a proof exercise on the nonagon: a regular nonagon and a regular triangle ABO are given, L is the midpoint (centre) of the nonagon, N is the midpoint of the arc AI of its circumcircle and M is the midpoint of the radius LN - prove that JM is parallel to OB. For vocabulary: a polygon is a simple closed curve consisting of the union of segments that intersect each other only at their endpoints, a decagon is a ten-sided polygon, opposite sides of a quadrilateral are sides that do not share a vertex, and in an n-gon you can draw n - 3 diagonals from each vertex.
For each diagonal, there are $5$ other diagonals that share one endpoint, and 5 that share the other for a total of $10$ ways for a certain diagonal to share an endpoint with another. Opposite sides Sides of a quadrilateral that don't share a vertex. A total of twenty-seven distinct diagonals can be drawn for a nonagon. Now we can look at the polygon above, and use it to check this formula, by "remembering" that it is a decagon. The route shown by Colton extended west from Diagonals are line segments that link two non-adjacent vertices in a given polygon. 1K answers. Formula for diagonals: d = n(n - 3) / 2 where d is the number of diagonals and n is the number of sides From the intersections of rays 5 and diagonals 6 of nonagon 4a (forming another nonagon in itself), draw the corresponding diagonals, parallel to diagonals 6. In the regular eneagon or nonagon there are nine (9) internal angles of equal measure, therefore each angle measures one-ninth of the total sum of the internal angles. L is the midpoint of the nonagon, N is the midpoint of the arc AI of the circumcircle of the nonagon. Puzzles 91 to 100. Jumping out from the canvas, Angela Johal's paintings feature striped diagonals and circular bands of color teasing the eye into optical illusions. com/math/geometry SUBSCRIBE FOR All OUR VIDEOS! https://www. A polygon with n sides is an n-gon. Eschewing representational art, Johal draws on pop culture and chooses to use geometric form in the abstract, taking after artist Bridget Riley and Frank Stella who also used music to guide the mood and setting for each piece. One thing to consider, especially if you teach geometry, is how many opportunities there are in geometry to use color to separate different concepts and to relate similar concepts. Medium diagonals, such as AF or BE (also called the height of an octagon); Long diagonals, for example, AE or BF. The number 3 is not arbitrary here – from any vertex, diagonals cannot connect to the vertex itself or to the vertices that are "1 away" from it, because it would be edges. An Octagon has 20 diagonals. S. 1. The flat surfaces (faces) of many 3D objects are made up of 2D shapes e. Level: High School, College, SAT Prep. The diagram shows an isosceles triangle. all regular nonagons look the same. 60 seconds. In each polygon, draw all the diagonals from a single vertex. 8K people helped. 000 360 90. 6K views ·. Some prior familiarity with constructing segments and basic functions of the TI-nspire is needed. com View Nonagon PPTs online, safely and virus-free! Many are downloadable. Step-by-step explanation: What polygon has 27 diagonals? A. 35. 1 3. Radius and chord 3. Apart from the diagonals on the faces, there are \(4\) other diagonals (main diagonals or body diagonals) that pass through the center of the square. Bonus: one shape in the grid is a Octagon: Eight-sided polygons are almost always created with 4 normal lines and 4 diagonal lines. Internal angles of a nonagon The blue line segments AC and BD are its diagonals. Apothem Definition Nonagon definition is - a polygon of nine angles and nine sides. Area and perimeter of polygons at BYJU'S in a simple way. In an n -sided polygon, you have n starting points for diagonals. Each diagonal connects one point to another point in the polygon that isn't its next-door neighbor. In a polygon, the line running between non-adjacent points is known as a diagonal. 
It turns out that circles centered at intersection points in regular polygons (particularly interestingly with polygons of odd numbers of sides) can be tangent to many other diagonals of that polygon. Example 9. Proposed Problem 310. Definition and properties of a nonagon (enneagon) Number of triangles, 7, The number of triangles created by drawing the diagonals from a given vertex. 27 Jun 2018 Now put n to be 9, the number of diagonals in a nonagon equals 0. This is followed by four diagonals from vertex 4, follwed by just three diagonals from vertex 5, with just two from vertex 6 and one from vertex 7. How many diagonals does an octagon have? a nonagon? Explain. depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), . The length of a diagonal across a number of edges can also be calculated. What is the measure of an exterior angle of an equiangular nonagon ? 31 Jan 2017 Me Too continued on and did the octagon, nonagon and decagon. Each vertex may have multiple diagonals, but that doesn't mean that the number of diagonals is equal to the number of vertices times the number of diagonals. you can see the formula working in smaller polygons. And each diagonal can go to ( n – 3) ending points because a diagonal can't end at its own starting point or at either of the two neighboring points. Sum ot the interior angles ot an n-sided polygon = ( n -2) x = 540' = 720' — 12600 (i) For a pentagon: . A Hexagon has 9 diagonals. ) In order to keep the notation a little simpler, we present the proof for the case when n = 9, the nonagon. First, use the Polygon Angle Sum Theorem to find the sum of the interior angles: n = 9 (-2)180n ° = (9 -2)180°= (7)180°= 1260° Then solve for the unknown angle measure, x°: 125 + 130 + 172 + 98 + 200 + 102 + 140 + 135 +x = 1260 x= 158 The unknown angle measure is 158°. Any line segment which connects the two non-consecutive vertices of a polygon is called Diagonal. Change 9 to any number less than 9 to get another regular polygon with diagonals in place * Requires Math 25 May 2020 The formula works because each vertex, n, has (n - 3) diagonals In geometry, a nonagon (/ˈn?n?g?n/) or enneagon (/ˈ?ni?g?n/) is a Diagonal Of A Polygon Formula · Number of Diagonals = n(n-3)/2 This formula is simply formed by the combination of diagonals that each vertex sends to another The sketch will automatically count the total number of distinct diagonals in the polygon. A Apart from the diagonals on the faces, there are \(4\) other diagonals (main diagonals or body diagonals) that pass through the center of the square. Mail contract from St. 1 Lesson Find the sum of angles of a heptagon, nonagon, 52-gon, and 100-gon. They are 35 diagonals in a decagon. 3 Dimensional objects (3D) 3D objects have three dimensions. If the sides of the parallelogram and an included angle are 6, 10 and 100 degrees respectively, find the length of the shorter diagonal. Get ideas for your own presentations. </p> <p>Cons: If 9 diagonals; More Images. Drawing in the other one would intersect the first one, and they're called "non-intersecting" for a reason. All heptagons will have 14 diagonals; if a diagonal lies outside the polygon, you know the heptagon is concave. (In general n–2). 85 A polygon having 10 sides is known as————. 1. The diagonals of a parallelogram bisect each other at their point of intersection. The website Math Open Reference will help you define a diagonal. 
000 20 Regular nonagon 9 40 Jan 23, 2014 - This is a 2-page document that shows the diagonals in regular polygons (square, pentagon, hexagon, septagon, octagon, nonagon, decagon and dodecagon). Also by symmetry, EG and DH are parallel to AB. nonagon decagon. a) nonagon b) 50-gon ~~the~me~a~su~re o~e~a c~n~e~o~ Find the measure of each exterior angle of a regular decagon. If it is a polygon, name it by the number of sides. Question. Derivation. A polygon is a flat shape made up of straight lines segments, that are connected to each other end to end to form a closed figure. So when we directly join any two corners (called "vertices") which are not already joined by an edge, we get a diagonal. I have always felt comfortable in this. Regular Nonagon or Enneagon, Diagonals, Side. Diagonals of a hexagon. Also,, or equivalently, , so solve for in the equation heptagon, octagon, nonagon, decagon, parallelogram, rhombus, kite, quadrilateral, trapezium. (You cannot draw to the starting vertex nor to the two "neighbors. Dodecagon. Calculations at a regular n-gon or regular polygon. Continue DE and HG to meet at X. Thus if you select one diagonal from all the diagonals in a regular octagon there are 4 chances in 20 that you will select one of the longest ones, and hence a probability of 4 / 20 = 1 / 5 . The following figure is an example. Diagonals. I'll write another review then. If we draw a hexagon with vertex A, B, C, D, E, F. octagon:- 20 diagonals. Hence the answer is that the number of diagonals D equals- D=4+4+3+2 A regular nonagon can be divided into eighteen congruent triangles by its nine radii and its nine apothems, each of which is shaped as shown: The area of one such triangle is , so the area of the entire nonagon is eighteen times this, or . The second page has the shapes with no diagonals or information on it. There are 6 diagonals extending from each of the 9 vertices of the nonagon above creating a total of 27 diagonals. Nonagon-Decagon-N-gon- Answer by solver91311(24713) (Show Source): For any -gon, the number of diagonals that can be drawn from any one vertex is . Let us assume that the length of each such diagonal is \(d\). The polygons which have all the diagonals inside the figure are known as a Convex Polygon. 1 Find Angle Measures in Polygons Obj. Hey Mate!! Formula for number of diagonals in a polygon of n sides: Thus, diagonals in a nonagon. n(n-3) 1202—3) n = 12 28 40 108 Answer . In geometry, a nonagon is a nine-sided polygon or 9-gon. Regular polygons have equal side lengths. the number of sides. Quadrilateral 9 nonagon 10 decagon 11 hendecagon 12 dodecagon n n-gon. The name nonagon is a prefix hybrid formation, from Latin (nonus, "ninth" + gonon), used equivalently, attested already in the 16th century in French nonogone and in English from the 17th century. Parallelogram Opposite Angles nonagon. Darsh05. There is a huge hexagon on Saturn, it is wider than Earth. A diagonal is a line segment joining two non-adjacent vertices of a polygon. A nonagon has 9 sides and a heptagon has 7 sides, so their digonals would total 9(9-3)/2 = 9*6/2 = 27 and 7(7-3)/2 = 14, repectively. Considering that a diagonal can not extend to the vertex directly adjacent to it (because it just traces the side), we eliminate two possibilities (8 - 2 = 6 diagonals). c. Learn new and interesting things. elgolfantasma. gon. Quadrilateral is a type of Polygon which is explained in detail. polygon; pentagon polygon; heptagon not a 2 and the number of diagonals must be a whole number. 
In the figure above, click on "show diagonals" to see them. 37; C. Remark 2. nonagon:- 27 diagonals. Area of a Decagon Formulas & Calculator The area of a decagon can be calculated using the following formula: if you know the perimeter and the apothem. The hidden edges are shown. Sum of the Interior Angles. Theorem : The sum of measures of the interior angles of a polygon with n sides is given MEP Y8 Practice Book B 54 15. <p>They are well trained black guys with big guns. Copy and complete the table. Edge length, diagonals, height, perimeter and radius have the same Polygons - Nonagons - Cool Math has free online cool math lessons, cool math games and fun math activities. Fill in the chart below: Name of polygon # of sides # of diagonals # of triangles Sum of angles Triangle Quadrilateral Pentagon Hexagon Heptagon Octagon Nonagon Decagon Hendagon Dodecagon N-gon Example 6. Tags: Find the number of diagonals in a regular polygon where one interior See full list on math. All in all we found that the pattern of diagonals was: 6, 6, 5, 4, 3, 2, 1, 0, 0, for a grand total of 27 diagonals. "How many diagonals are needed to divide a heptagon, a nonagon, and an undecagon into triangles?" Find more words! Another word for Opposite of Meaning of Rhymes with Sentences with Find word forms Translate from English Translate to English Words With Friends Scrabble Crossword / Codeword Words starting with Words ending with Words The sum of the interior angles of any nonagon. A Polygon Diagonals The number of diagonals in a polygon = n(n-3)/2, where n is the number of polygon sides. 27. (ii) For an octagon: (iii) For a 12-sided polygon. 571 14 Regular octagon 8 45. Each interior angle measure of a regular nonagon is 140 degrees. the number of diagonals in a polygon are given by the equation: (n * (n-3))/2 for your decagon, the number of sides is equal to 10. 360. Step-by-step explanation: heptagon:- 14 diagonals. 40. From each vertex of a hexagon we can draw 3 diagonals. Apr 12, 2020 · A nonagon, or enneagon, is a polygon with nine sides and nine vertices, and it has 27 distinct diagonals. As you can see, the diagonals from one vertex divide a polygon into triangles. See also. · A nonagon, or enneagon, is a polygon with nine sides and nine vertices, and it has 27 distinct diagonals. 1 A diagonal of a polygon is a segment that joins two nonconsecutive vertices. Can you think of the number of diagonals of a heptagon, octagon, nonagon, etc? We can Octogon. A decagon has ten sides. So, sum of interior angles of heptagon = 5 * 180 = 900 and each interior angle will be 128. Let us study diagonal Answer Questions and Earn Points !!! You can now earn points by answering the unanswered questions listed. First, there are three different types of diagonals; we'll call them "short", "medium" (which is also known as the height of the octagon) and "long". 000 0 Square 4 90. 3. The total number of hexagon's diagonals is equal to 9 - three of these are long diagonals that cross the central point, and the other six are the so called "height" of the hexagon. By symmetry, AC = EG and AF = DH. Convex and Concave Polygons. Draw a kite and draw all of the lines of reflective symmetry and all of the diagonals. May 19, 2008 · The formula for n number of sides in a polygon: number of diagonals = n(n-3)/2. The number of diagonals of an n-sided polygon is: n(n − 3) / 2. 
There are five such cubes, considering that (12 pentagonal faces) x (5 diagonals / pentagon) = 60 = (5 cubes) x (12 edges per cube), where every pentagonal diagonal is the edge of a cube. A decagon has 35 diagonals. Diagonal of a polygon: The segment joining any two non-consecutive vertices is called a diagonal. 1) . measures a degrees, and d diagonals. Perpendicular: A parallelogram is a rhombus if and only if its diagonals are _____. A nonagon is a 9 sided polygon which can be regular or irregular Its 9 interior angles add up to 1260 degrees It has 27 diagonals How many diagonals does a 9 sided polygon have? A diagonal is a The number of diagonals in nonagon is 27, and number of triangles in the nonagon is 7. 5*9*(9–3)=27. This is equal to . nonagon decagon dodecagon. The remaining vertex points 1 and 2 yield no additional diagonals not already present. If all the diagonals are drawn in, into how many areas will the pentagon be divided? Jul 09, 2019 · Calculate the height of a segment of a circle if given 1. 180. These shapes are good for building large, expansive rooms. Math teacher Master Degree, LMS. The regular nonagon has side length s = 2(MK) = 2(4 sin 20°) = 8 sin 20°, and apothem a = LM = 4 cos 20°. Mar 01, 2009 · . Calculate the sum of the interior angles of a convex 50-gon. Rectangle A rectangle is a parallelogram with equal angles. 28 Jan 2015 Given a regular nonagon ABCDEFGHI, the numbers α, β and γ (α < β < γ) usually denote the ratios between the three diagonals of the polygon 3 May 2014 Geometry Problem 1010: Regular Nonagon or Enneagon, Diagonals, Metric Relations. Quadrilateral Nonagon. The hexagon has six starting places and three ending places, but again you have to divide by two because of the duplicates. Example How many diagonals are in a nanogon? n(n-3) 9(9-3) 9(6) 54 = 27 Formula n(n-3) where n is the number of sides/vertices. Proposed Problem Click the figure below to see the complete problem 309. Example 2 : Theorem : The total number of diagonals D in a polygon of n sides is given by the formula 2 ( −3) = n n D . 10 How Many Diagonals Does A Nonagon Have Math Triangle Mathematics by filmntheatre. Polygon formulas: Length of the diagonals=. So, the area is A = 1— 2 a ⋅ ns = 1— 2 (4 cos 20°) ⋅ (9)(8 sin 20°) ≈ 46. A nonagon (9 sides) will have 9 diagonals from any one vertex. Rhombus: A parallelogram that has all four sides congruent. Let's fix one particular vertex in a convex polygon. Construct a circle concentric with circle 1, passing through the intersections of diagonals 7 and sides of equilateral triangles 3 and 4b, as shown. A regular polygon is a polygon with equal sides and equal angles. Formulas for calculating the amplitude of the internal angle and the number of diagonals of a polygon. 31. Octagon An eight-sided polygon. Solution. - Repeat the problem by using the Geometer's Sketchpad. Irregular May 19, 2008 · The formula for n number of sides in a polygon: number of diagonals = n(n-3)/2. Share yours for free! What Does A Nonagon Look Like? How Many Lines Of Symmetry Does A Pentagon Have? If A Polygon Has 119 Diagonals, How Many Sides Does It Have? I Know The Formula To Find Out Diagonals, D=s(s-3)/2 S=sides D=diagonals How Do I Rewrite That Equation To Give Me Numbers Of Diagonals? Thanks! How Do Diagonals Of Quadrilaterals Make Triangles? Apr 24, 2019 · Question. 15. They are 8 triangles in a decagon. Jun 16, 2020 · Beware of counting a diagonal more than once. of sides in the polygon. 
Divide this number by 2 to account for duplicate diagonals between two vertices. Regular Nonagons. # of sides Greatest # of diagonals 3 triangle 0 4 square 2 5 pentagon 5 6 hexagon 9 7 heptagon 14 8 octagon 20 9 nonagon 27 10 decagon 35 11 undecagon 44 Points on a Circle How many line segments will be needed to connect each of the points on the circle to all of the others? The number of diagonals in a polygon that can be drawn from any vertex in a polygon is three less than the number of sides. Feb 23, 2018 · For the nonagon shown, find the unknown angle measure x°. The expression can be used to find the number of diagonals in an n. Here are the formulas for the length of the diagonals: Short diagonal d = a * √(2 + √2) Medium diagonal e = a * (1 + √2) Jan 19, 2013 · There are a total of 35 triangles. How many sides does the polygon have? Two interior angles of a pentagon measure 80° and 100°. Il) cos20' 201. brightstorm. See also: Complete Problem 309 Collection of Geometry Nonagon A nine-sided polygon. Dec 13, 2007 · Properties of Parallelograms Diagonals are perpendicular to each other Diagonals bisect their angles Diagonals are congruent to each other Diagonals bisect each other Opposite sides are congruent Opposite angles are congruent Diagonals bisect each other Consecutive angles are supplementary Diagonals form two congruent triangles 9. Decagon Formula for finding the number of diagonals of a polygon. We know that the sum of the angles in each triangle is \(180\ ^\circ \) Thus, Nonagon: Octagon: Rectangle: Find a polygon whose total number of diagonals is three times greater than the number of diagonals that can be pulled from one vertex. Number of diagonals of nonagon ( We know in nonagon we have 9 sides , So n = 9 ) = 9 9 - 3 2 = 9 × 6 2 = 9 × 3 = 27 And Number of diagonals of decagon ( We know in decagon we have 10 sides , So n = 10 ) = 10 10 - 3 2 = 10 × 7 2 = 5 × 7 = 35 And diagonals from that vertex. 173. • Flex-Bow Frame: Exceptionally sturdy. A 9 sided polygon is called a nonagon. Unnecessary diagonals have been hidden. The second page has the shapes with no diagonals or information on Answer: c. The properties of a simple pentagon 5 gon are it must have five straight sides that meet to create five vertices but do not self intersect. The sandbox will be in the shape of a regular hexagon. How many The Vitra Fire Station was the first of Hadid's buildings designed and constructed. SURVEY. Interior and Exterior of a Closed Curve. Enter edge length and number of vertices and choose the number of decimal places. Heptagon. 135° x 9 Nonagon 10 Decagon n n-gon 3. The diagonals are congruent. It is also the 14 Aug 2017 Answer: Diagonal is a line segment joining two non-adjacent vertices of a polygon Q. answer choices. Decagon. youtube. Angle Sum of Polygons. @SJuan76 there was also a group of People called. Finding Lengths in a Regular n-gon TO find the area Of a regular n-gon radius r, you may need to first find the apothem a or the side length s. Our work in Life of 4 Jan 2013 You may want to see the heptagon version before attempting this one Every diagonal within a regular nonagon is drawn. Jun 04, 2009 · Regular Nonagon, Diagonals and Side. What is the measure of an interior angle of an equiangular decagon? IV. decagon if all its diagonals lie in the interior of the polygon. You can derive the formula for each of them with ease using the basic principles of geometry. Example 3 : Find the number of diagonals for any decagon. Regular enegon. 3 square units. 
The measure of each exterior angle in a regular polygon is 24°. 1 Lesson Feb 05, 2020 · 70. Careful counting shows that there are 632 triangles in this eight sided figure. g. Determine the number of distinct diagonals that can be drawn from each vertex and From center of the nonagon to midpoint of the base, is "height" of triangle. Geometry puzzle solution: Nonagon diagonals. 1 Answer. The blue shaded part represents the interior and exterior of the closed curve. Nov 24, 2015 · The previous answer correctly gave the formula for a number of diagonals D in N-sided convex polygon: D = (N(N-3))/2 Below is its explanation. Of the eight figures, only five are heptagons. One way of looking at the rigid motions of the dodecahedron is to identify each with a permutation of the five cubes. What is Diagonal? A diagonal is a straight line connecting the opposite corners of a polygon through its vertex. com A diagonal of a polygon is a segment that joins two nonconsecutive vertices. 1260. Opposite Sides Parallel and Online calculator. Polygons (7) Regular Quadrilateral Regular Triangle Regular nonagon with diagonals Regular dodecagon Arc Length, Nonagon Picture. octagon: 20 Visual Geometry Dictionary for Kids and for Kids' Teachers. Jun 18, 2014 · That's n vertices firing diagonals at n-3 other vertices, or (n)(n-3). e. Example. 9 Nonagon is a polygon which has 9 sides. What is the area of a regular nonagon with sides five times the sides of the smaller nonagon? 13. Also 32-inch diagonal IPS SCREEN with 2k resolution: The most comfortable working space in the city (backs, palms, legs…) Fast and secure internet: Access to common space and kitchen: Professional office cleaning: Superb location in the heart of the city: Personal key storage cassette: 12-hour weekdays on weekdays and 10-hour Saturdays Answer: c. As each diagonal has two ends, there are $10 \cdot 7 \cdot \frac 12=35$ diagonals. Hexagon. Quadrilaterals can only have 1 such diagonal. 4. How Many Diagonals Does A Nonagon Have Math Triangle Mathematics by filmntheatre. All other N-3 vertices can be connected to our vertex by a diagonal. Also a nonagon has all the same angles. These diagonals intersect each other at p points inside the nonagon, q points on the nonagon and r points outside the nonagon. Calculations at a regular nonagon or enneagon, a polygon with 9 vertices. 429 900 128. On the plans for the sandbox, the sides are 4 in. Answer: c. A nonagon has nine sides. If we take a pentagon, we can draw 2 diagonals that don't intersect. 000 540 108. Diagonal Division Age 11 to 14 Short Challenge Level: Weekly Problem 45 - 2008 The diagram shows a regular pentagon with two of its diagonals. Sep 01, 2019 · Approach: We know that the sum of interior angles of a polygon = (n – 2) * 180 where, n is the no. 84 Diagonals of a rectangle are————. Then find the measure of each Parallelogram Diagonals Converse 32. All triangles are formed by the intersection of three diagonals at three If the regular polygon has an EVEN number of sides then the longest diagonal is the same as the diameter of the circumscribed circle - i. In a convex nonagon all the diagonals are drawn. Post your solution in the comments Diagonal. J and K are midpoints of two neighbor sides of the nonagon. decagon But how many diagonals will these polygons have, exactly? Let's start by drawing non-intersecting diagonals. -gon. 22 Start studying Polygons and Quadrilaterals. Which choice comes closest to the length of diagonal A regular nonagon with all diagonals present. 
With numerous diagonals, the task of calculating the lengths may seem daunting. Describe any patterns you see. decagon:- 35 diagonals. To find the total number of diagonals in a polygon, multiply the number of diagonals per vertex (n - 3) by the number of vertices, n, and divide by 2 (otherwise each diagonal is counted twice). It is a geometric figure with 9 sides, all the sides have equal length. (In general ½n(n–3) ). nonagon. What is the length of any side in terms of a linear expression involving X and Y? (Thank you Northstar for pointing out that X/Y=. How Many Diagonals Could Be Drawn From One Vertex Of A Nonagon? 32. Since the area is 900,, or. To learn about diagonals, we must first know that: It (diagonal) is a line segment. A pentagon is a. A Nonagon has 27 diagonals. Then find the measure of each Parallelogram Diagonals Converse c. 000 720 120. O is the center of the hexagon. if its shorter diagonal is 12 cm, the length of the longer diagonal is: A. 36 T-shirts in 3 boxes; 60 T-shirts in 6 boxes. Geometry classes, Problem 1010: Regular Nonagon or Enneagon, Diagonals, Metric Relations. Therefore if you go to each of the nine vertices in turn and draw the diagonals, you will The only way the diagonals can intersect inside the nonagon is if they share an endpoint. A nonagon has 9 exterior angles. Nov 09, 2020 · Down Material, Junk Meaning In Malayalam, Philadelphia Kixx Jersey, Classics Books, Chase Elliott Watkins Glen Diecast, Iris Corporate Solutions Pvt Ltd Review, Draw Play 2, What Are The Ingredients In Chorizo, We Shall Remain Lesson Plans, Coleman Gas Bottle Refill, Native American Treaty Rights, Feelunique Usa Reviews, Nonagon Diagonals, What In any polygon, the number of diagonals is: D = n (n - 3) / 2 and in the case of the enegon, since n = 9, we then have D = 27. Our hexagon calculator can also spare you some tedious calculations on the lengths of the hexagon's diagonals. if all its diagonals lie in the interior of the polygon. Notice all five diagonals create 10 small triangles. The sum of angles of a nonagon is 1260°. 9. As we see in the diagram below, the red lines are all considered diagonals of the pentagon. Each circle can be tangent to at least 4 diagonals when the circle is at least 2 different sizes. . The formula for determining the number of diagona. See Diagonals of a Polygon: Number of triangles: 7: The number of triangles created by drawing the diagonals from a given vertex. Here's the image of the figure we worked on. Tags: Question 15. b) Apr 27, 2019 · The diagonals of a square are perpendicular bisectors of each other. Irregular Nonagons. Three are irregular concave heptagons. The name nonagon is a they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines path through both edges and vertices . n n( A nonagon has ______ sides. How many diagonals does it have? To start figuring it out, let's pick a vertex, and draw all of the diagonals from that vertex to all the other vertices. Regular nonagon is a nine-sided shape with equal sides and equal angles of 140 degrees each. 65247 a huge oversight on my previous choices for X and Y!) Circles are centered at each intersection of diagonals along a vertical axis (these same constructions can be made nine times around the nonagon). 73; D. ) <br>Everything about this tent exudes quality. Nonagon d. From any of the n vertices emanate (n-3) diagonals which gives n(n-3) except that each diagonal, having two ends, is counted twice. Pentagon. 21 centimeter; C. 
Pick a vertex and then draw the diagonals to all non-consecutive vertices. 70. Use the figures to complete all but the last row of the table. "How many diagonals are needed to divide a heptagon, a nonagon, and an undecagon into triangles?" Find more words! Another word for Opposite of Meaning of Rhymes with Sentences with Find word forms Translate from English Translate to English Words With Friends Scrabble Crossword / Codeword Words starting with Words ending with Words Oct 11, 2017 · A comprehensive database of more than 27 polygon quizzes online, test your knowledge with polygon quiz questions. A square has only 2 diagonals, so let's try a hexagon. Watch more videos on http://www. Find the number of diagonals in a 60-gon. Triangle ACB is a right triangle, since angle C is 90 degrees, and since the Number ot diagonal in an n-sided polygon = (i) For a heptagon. (n—2 The perimeter of a polygon is equal to the sum of the lengths of its sides. Non-examples . Polygon formulas: where an is the side of regular inscribed polygons, where R is the radius of the circumscribed circle, Area of a polygon of perimeter p and radius of in-circle r = 1/2xpxr; The sum of all the exterior angles = 360° How many interior angles does a nonagon have? 2. 000 1080 135. Drawing a line from the center or incenter to any side of the regular polygon gives you the apothem. This is the chapter of Geometry wherein students will learn about the different types of Polygons and the terms like vertices, diagonals associated to it. Perimeter of the nonagon = 9a, where "a" is side length of nonagon. The following shapes are How many diagonals are there in a regular nonagon? Each vertex can be joined to eight others. For example, a pentagon (5 sides) has only 5 diagonals. Example Results 1 - 24 of 25 This is a 2-page document that shows the diagonals in regular polygons (square, pentagon, hexagon, septagon, octagon, nonagon, decagon If any part of a diagonal of a polygon contains points in the exterior of the polygon , then the A nonagon (9 sides) will have 9 diagonals from any one vertex. But how many diagonals will these polygons have, exactly? Polygon \displaystyle ABCDEFGHI is a regular nine-sided polygon, or nonagon, with perimeter 500. For a convex n-sided polygon, there are n vertices, and from each vertex you can draw n-3 diagonals, so the total number of diagonals that can be drawn is n(n-3). Classification of Polygons by Sides A diagonal is a line segment that joins to nonadjacenet vertices. but two of triangle has other side of the pentagon, those will be counted twice so we will count only one triangle for each side, so for five sides, total 5*5 = 25 such triangles. 7. 2. But two of these vertices are adjacent so the joins are edges. The diagonal of square ABCD, is AC. The number of diagonals in a polygon that can be drawn from any vertex in a 8 Feb 2017 heptagon heptagon. Parallelogram A quadrilateral with two pairs of parallel sides. The diagonals of a rectangle are of equal Example 3: How many degrees does each angle in an equiangular nonagon have? Solution: First, find the sum of the interior angles in a nonagon by setting n=9. Hexagon c. Is it a true statement? Feb 06, 2020 · Pythagoras in a Nonagon. Q. 10 1B (10 cm Circle O and circle P have diagonals of cm and 35NG cm respectively. Therefore the measure of an interior angle of an equiangular nonagon is (9−2)×180° 9 =7×180° 9 =140° Example 7. A nine-sided polygon. 
The formula for the length of the diagonal of a cube is derived in the same way as we derive the length of the diagonal of a square. Use only a compass and straightedge to construct a regular 3-gon, 6-gon, and 12gon (dodecagon). Nonagon (Nine-sided polygon) Decagon (Ten-sided polygon) And so on. for a quadrilateral the number of diagonals is clearly 2. diagonals Type Of polygon Quadrilateral Pentagon Hexagon Heptagon Diagram a. Nonagon. Radius and central angle 2. nonagon diagonals or, hd0c, dt, op2, hjp, dpl, ldl, v3i, nx, wzna, 4u, zdaj, bp, rjf, evs,
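The counting formulas above can be checked with a short Python sketch; the function names are ours, chosen only for illustration:

```python
def num_diagonals(n: int) -> int:
    """Number of diagonals of a convex n-gon: each of the n vertices connects
    to n - 3 non-adjacent vertices, and each diagonal is counted twice,
    once from each end."""
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return n * (n - 3) // 2


def interior_angle_sum_deg(n: int) -> float:
    """Sum of interior angles in degrees: the diagonals from one vertex split
    the polygon into n - 2 triangles of 180 degrees each."""
    return (n - 2) * 180.0


for name, n in [("pentagon", 5), ("hexagon", 6), ("octagon", 8),
                ("nonagon", 9), ("decagon", 10)]:
    print(f"{name}: {num_diagonals(n)} diagonals, "
          f"interior angles sum to {interior_angle_sum_deg(n):.0f} deg")
# e.g. nonagon: 27 diagonals, interior angles sum to 1260 deg
```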
Statistical analysis of short-wave fadeout for extreme space weather event estimation Chihiro Tao ORCID: orcid.org/0000-0001-8817-05891, Michi Nishioka1, Susumu Saito2, Daikou Shiota1, Kyoko Watanabe3, Naoto Nishizuka1, Takuya Tsugawa1 & Mamoru Ishii1 Solar flares trigger an increase in plasma density in the ionosphere including the D region, and cause the absorption of radio waves, especially in high-frequency (HF) ranges, called short-wave fadeout (SWF). To evaluate the SWF duration and absorption statistically, we analyze long-term (36 years) ionosonde data observed by the National Institute of Information and Communications Technology (NICT). The minimum reflection frequency, fmin, is used to detect SWFs from 15-min-resolution ionosonde observations at Kokubunji, Tokyo, from 1981 to 2016. Since fmin varies with local time (LT) and season, we refer to dfmin, which is defined as fmin subtracted by its 27-day running median at the same LT. We find that the occurrence of SWFs detected by three criteria, (i) dfmin ≥ 2.5 MHz, (ii) dfmin ≥ 3.5 MHz, and (iii) blackout, during daytime associated with any flare(s) greater than the C1 class is maximized at local noon and decreases with increasing solar zenith angle. We confirm that the dfmin and duration of SWFs increase with the solar flare class. We estimate the absorption intensity from observations, which is comparable to an empirical relationship obtained from sudden cosmic noise absorption. A generalized empirical relationship for absorption from long-distance circuits shows quantitatively different dependences on solar flare flux, solar zenith angle, and frequency caused by different signal passes compared with that obtained from cosmic noise absorption. From our analysis and the empirical relationships, we estimate the duration of extreme events with occurrence probabilities of once per 10, 100, and 1000 years to be 1.8–3.6, 4.0–6.8, and 7.4–11.9 h, respectively. The longest duration of SWFs of about 12 h is comparable to the solar flare duration derived from an empirical relationship between the solar flare duration and the solar active area for the largest solar active region observed so far. Solar flares, one of the biggest explosive phenomena within the solar system, release emissions of various wavelengths from radio waves to gamma rays, and energetic particles over a few minutes to hours (e.g., Fletcher et al. 2011). Energetic-particle-driven solar radiation storms last for days. An increase in ionospheric plasma density up to the low-altitude D region owing to solar X-ray emission causes the absorption of radio waves, especially in high-frequency (HF) ranges, which is called short-wave fadeout (SWF) or the Dellinger effect (e.g., Dellinger 1937). This SWF can interrupt trans-ionospheric radio communication systems including ground-to-ground radio communication, satellite communication, and disaster prevention radio systems (e.g., US National Science and Technology Council 2018). In addition to SWFs, there are various sudden ionospheric disturbances (SIDs) associated with solar flares, as summarized by Davies (1996). Sudden cosmic noise absorption (SCNA) is also caused by plasma enhancement in the ionospheric D region. A sudden increase in total electron content (SITEC) is caused by an increase in plasma density in the E and F regions and has recently been well investigated by Global Navigation Satellite System (GNSS) monitoring methods. 
Ionospheric absorption is mainly measured by the following four methods (e.g., Mitra 1970): (a1) the A1 method based on vertical incident pulse reflection, (a2) the A2 method based on cosmic radio noise absorption using an instrument called a riometer (relative ionospheric opacity meter, extraterrestrial electromagnetic radiation), (a3) the A3 method based on an oblique ray path at frequencies over 2 MHz, and (a4) the minimum reflection frequency fmin from vertical incident ionograms. These methods have been widely used to monitor and evaluate the ionospheric response to solar flares. During a sudden increase followed by a gradual decrease in solar X-ray flux associated with a solar flare, the signal absorption increases suddenly and sharply for a few minutes after the solar flare, then recovers over about half an hour (e.g., Dellinger, 1937; Chakraborty et al. 2018). An altitudinal variation in ionospheric density associated with SWFs was revealed by Digisonde observation (Handzo et al. 2014). Statistical analysis revealed that the occurrence of SWFs increases with solar activity during 11-year solar cycles (Hendl and Skrivanek 1973; Davies 1996). In an (a3) observation at Boulder, Colorado, the mean SWF duration of ~ 104 events over 1980–1987 was 23 min with 58.9% of them having a duration less than 14 min, 21.4% of them having a duration of 15–29 min, 4.3% having a duration of 30–44 min, and about 3% continuing for longer than 90 min (Davies, 1996). The duration decreases with increasing solar zenith angle, as observed by SuperDARN facilities at several observation stations for the same flare events (Chakraborty et al. 2018). Sato (1975) proposed empirical relationships based on (a4) ionosonde and (a2) SCNA observations during solar flares greater than the C1 class (= 10−6 W/m2 at 1–8 Å band) to estimate fmin and the absorption intensity L as functions of the solar flux F0 [mW/m2], solar zenith angle χ [rad], and frequency f [MHz] as follows: $$f\text{min} \left( {\text{MHz}} \right) = 10F_{0}^{1/4} \cos^{1/2} \chi ,$$ $$L({\text{dB}}) = 4.37 \times 10^{3} \;f^{ - 2} F_{0}^{1/2} \cos \chi .$$ Equation (1) is based on observations from January 1972 to December 1973. Sato (1975) explained these dependences theoretically, referring to the "non-deviative" radio wave absorption in the low-altitude ionosphere under some assumptions including an ionospheric density profile with the Chapman formula. Maeda and Inuki (1972) proposed an empirical equation representing the degree of SWF based on the observed absorption intensity of long-distance short-wave circuits (a3). Barta et al. (2019) reported that simultaneous observations using several ionosonde observation facilities located along the meridional longitude from low to middle latitudes showed the dependence of fmin on solar flare flux and solar zenith angle. They defined dfmin as the difference between the values of fmin and the mean fmin for reference days. They suggested that dfmin is a good qualitative measure for the relative variation in non-deviative absorption intensity, especially in the case of less intense solar flares, which do not cause total radio fadeout in the ionosphere (< M6-class). How large extreme space weather events could occur and how probable such events are—these are important questions for both scientific interest and the protection of social infrastructure. 
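For reference, Eqs. (1) and (2) can be evaluated with a short Python helper; this is a minimal sketch, not code from Sato (1975), the function names are ours, and the flux must be given in mW/m² (an X1-class flare, 10⁻⁴ W/m², corresponds to 0.1 mW/m²):

```python
import math


def fmin_sato1975(f0_mw_m2: float, chi_deg: float) -> float:
    """Eq. (1): fmin (MHz) = 10 * F0^(1/4) * cos^(1/2)(chi), with F0 the
    1-8 Angstrom solar X-ray flux in mW/m^2 and chi the solar zenith angle."""
    cos_chi = math.cos(math.radians(chi_deg))
    if cos_chi <= 0.0:
        return float("nan")  # Sun below the horizon: relation not applicable
    return 10.0 * f0_mw_m2 ** 0.25 * cos_chi ** 0.5


def absorption_sato1975(freq_mhz: float, f0_mw_m2: float, chi_deg: float) -> float:
    """Eq. (2): L (dB) = 4.37e3 * f^-2 * F0^(1/2) * cos(chi)."""
    cos_chi = max(math.cos(math.radians(chi_deg)), 0.0)
    return 4.37e3 * freq_mhz ** -2 * f0_mw_m2 ** 0.5 * cos_chi


# Example: X1-class flare (0.1 mW/m^2) with the Sun at the zenith
print(fmin_sato1975(0.1, 0.0))             # ~5.6 MHz
print(absorption_sato1975(6.6, 0.1, 0.0))  # ~32 dB one-way at 6.6 MHz
```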
Riley (2012) estimated the probability of occurrence of solar flare flux, the speed of coronal mass ejection, the Dst index representing magnetospheric storms, and extreme proton events. To evaluate the size of extreme events, he used the complementary cumulative distribution function, defined as the probability of an event with a magnitude greater than or equal to a certain critical value. Nishioka et al. (submitted to Earth Planets and Space) applied the method to ionospheric total electron contents based on long-term observations using GNSS and ionosondes. However, the probability of occurrence of extreme SWF events has not yet been investigated. SWF is a space-weather phenomenon having an adverse effect on modern civilization and technologies. For the design and operation of radio communication systems, it is important to know how long an SWF event will last to predict how soon the operation of systems will recover to normal. The expected SWF duration is also important information to prepare alternative means of communication during the event. In this study, we analyze long-term (36 years) ionospheric sounder (ionosonde) data observed by the National Institute of Information and Communications Technology (NICT) to evaluate the SWF duration with modified dfmin criteria. We focus on the duration and absorption intensity to estimate extreme SWF events. The duration is based on the results of our statistical analysis. For the absorption intensity, we refer to empirical relationships proposed by Sato (1975), i.e., Eqs. (1) and (2), and those proposed by Maeda and Inuki (1972) based on long circuit observations (a3). Since the equations of Maeda and Inuki (1972) refer to three wavelength bands of solar X-ray flux, we generalize them to obtain a simple relationship between the absorption intensity and the solar flare flux at 1–8 Å. We also examine the applicability of these equations based on a specific period (< 2 years) by comparison with our observations. Next section describes the ionosonde observations and the data set used in this study followed by sections reporting the statistical analysis and results related to the SWF duration and the absorption intensity. Ionosonde observations and data set Four ionosonde facilities are currently continuously operated in Japan by NICT. We use one of them, Kokubunji station (35.71° N latitude, 139.49° E longitude), in this study. SWF phenomena require high-time-resolution observation. Manually scaled parameters with 15-min resolution have been available for Kokubunji station since 1981. The manually scaled parameters of other stations are usually available at a 1-h cadence. In 2017, the ionosonde facility at Kokubunji was updated to Vertical Incidence Pulsed Ionospheric Radar 2 (VIPIR2) instruments, which can record calibrated signal intensity. The VIPIR2 system was occasionally operated in 2016 and enables us to evaluate signal attenuation by ionospheric absorption as described in "Absorption observed by VIPIR2" section. The ionosonde at Kokubunji transmits HF radio pulses vertically from 1 to 30 MHz within 15 or 30 s and receives reflected signals at 15 min intervals. The ionospheric height is calculated from the traveling time of the sounding radio wave multiplied by the light velocity and is called the virtual height. The observed ionogram contains several important features including the minimum reflection frequency fmin used in this study. When the reflected echo is not observed, a flag "B", meaning blackout, is set instead of fmin. 
For the solar flare information, the Geostationary Operational Environmental Satellite (GOES) flare list provided by National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (https://www.ngdc.noaa.gov/stp/space-weather/solar-data/solar-features/solar-flares/x-rays/goes/xrs/) over the period 1975–mid-2017 is used in this study. We analyze the 1981–2016 data set. SWF duration Example of observed event Figure 1 shows an example of ionosonde observation during a solar flare event on November 10, 2004. A solar X2.5-class flare began at local time (LT = UT + 9 h) 10:59 and reached a peak at 11:20 (Fig. 1a). The ionogram at 11:00 (Fig. 1d, upper left) shows a weak echo signal at about 2 MHz at an altitude of 100–150 km (red circle) and a strong echo signal at 4–9 MHz at 220–450 km. We can also easily find vertical lines corresponding to artificial signals. fmin is 2.1 MHz at this timing. The ionogram at 11:15, around the flare peak time, shows the disappearance of the echo signal and most of the artificial signals. The echo signal gradually recovered, starting from a higher frequency at a higher altitude, e.g., the echo signal is seen at > 5.5 MHz and > 275 km at 11:30, then at > 4.0 MHz and > 250 km at 12:00. The recovery of the low-altitude echo is observed at 12:15 (red circle at the right top of Fig. 1d). The corresponding fmin values are 2.1 MHz, 'B', 5.4 MHz, 4.3 MHz, 4.0 MHz, and 2.6 MHz for 11:00–12:15 (Fig. 1b). Since the echo disappears from the low frequency and low altitude, where there is greater absorption, the fmin variation is an indicator of SWF events. Time variations in a solar X-ray flux, b fmin, c dfmin, and d ionograms during solar flare events on November 10, 2004. The fmin and dfmin values of the blackout period are shown as "B" in red in b, c. dfmin ≥ 2.5 MHz and dfmin ≥ 3.5 MHz values are shown in blue and light blue, respectively, in c. The weak echo signal at ~ 3 MHz at altitudes of 100–150 km is surrounded by red circles in d at LT 11:00 and 12:15 Note that fmin varies with the season, local time, and solar activity. Figure 2a shows seasonal variations in monthly averaged fmin values of four groups of solar minima and solar maxima. fmin increases from April to September and is higher than that during the northern winter, reflecting the variation in solar zenith angle in Japan, with a moderate increase (~ 0.5 MHz) for periods with high solar activity (1989–1991 and 2000–2002). The increase during the solar maximum around 2011 was not significant (~ 0.1 MHz), which was due to the weaker solar activity of the solar cycle. fmin increases during daytime with the highest value at LT 11:00–12:00 (Fig. 2c). The dependence of fmin on local time is much stronger, with an amplitude of ~ 1.5 MHz for the high solar activities around 1992 and 2003 and an amplitude of only ~ 0.3 MHz for the minimum solar activities in 2008 and around 2016. a Seasonal variation in monthly averaged fmin as a function of month, b contour map of monthly averaged fmin values in MHz as a function of month and year, c averaged fmin as a function of LT (= UT + 9 h), and d contour map of fmin values in MHz as a function of LT and year. 
Vertical lines in a, c show the standard deviation, ± 1σ, and line colors distinguish the solar minima and two groups of solar maxima, as labeled in a.

To quantitatively measure the short-time variation in fmin owing to a solar flare beyond these LT and solar activity dependences, we refer to dfmin, which is defined in this study as fmin subtracted by its 27-day running median at the same LT. The dfmin values of the event in Fig. 1 are almost zero (0.0 MHz) before the event (–11:00), increase to 3.6 MHz at 11:30, and decrease to 2.5 MHz at 11:45 (Fig. 1c).

fmin and dfmin

Firstly, we examine the maximum fmin and dfmin values within 1 h of the occurrence of a solar flare at or above the C1 class (10⁻⁶ W/m²). The average fmin value is 2.1 MHz at the C1-class flare level, then it gradually increases with solar X-ray flux up to 3.5 MHz at the X1-class flare level (Fig. 3a). dfmin also shows a similar trend to solar X-ray flux, up to 0.32 MHz at the C1-class flare level and 1.8 MHz at the X1-class flare level (Fig. 3d). The decreasing trend seen in both fmin and dfmin above the X2-class flare level might be caused by the small number of events (blue line) with dependence on the solar zenith angle (see the next paragraph). The standard deviation (shown by error bars) also increases for larger flares.

Maximum a fmin and d dfmin values within ± 1 h of solar flare as a function of solar flare peak flux, b fmin and e dfmin values as a function of solar flare peak flux and cos χ, where χ is the solar zenith angle, and c fmin and f dfmin normalized by (cos χ)^0.5 as a function of solar flare peak flux. The total number of events for this analysis is shown by a histogram with blue lines using the right y-axis scale in a, d. fmin and dfmin values respectively in b, e are distinguished by the same color from the color bar on the right side, and blue, light blue, and green lines show Eq. (1) for fmin = 2.5, 3.5, and 5 MHz, respectively. Vertical solid lines in a, c, d, and f show the standard deviation, ± 1σ. In a, the red curve shows Eq. (1) at the subsolar point (χ = 0°) and the dashed red line shows Eq. (1) at χ = 88°. Solid red curves in c and f show the normalized Eq. (1), and the blue curve in f shows the best-fitted function represented by Eq. (3)

The solid and dashed red curves in Fig. 3a are fmin values derived from the empirical Eq. (1) (Sato 1975) for the subsolar point (solar zenith angle χ = 0°) and χ = 88° cases, respectively. Almost all the obtained values are within the two curves. Figure 3b, e shows the scatter plots of fmin and dfmin, respectively, as a function of cos χ and solar X-ray flux. Equation (1) with fmin = 2.5 MHz (blue line), 3.5 MHz (light blue), and 5 MHz (green) represents the variations in fmin and dfmin well. Figure 3c, f, respectively, shows the fmin and dfmin values normalized by (cos χ)^0.5. The trend of a continuous increase beyond the X2-class is clearly seen. The rate of increase in normalized fmin is represented well by the relationship proposed by Sato (1975), although the values are slightly larger than those obtained on the basis of the relationship. This is considered to be caused by the long-term fmin variation seen in Fig. 2. Assuming the same dependence of dfmin on the solar flux (∝ F0^(1/4)) and solar zenith angle (∝ cos^(1/2) χ), because they are explained by theoretical analysis (Sato 1975), the χ² fitting provides the following relationship:

$${\text{d}}f{\text{min}}/\cos^{1/2} \chi = 8.7F_{0}^{1/4} - 1.35,$$

where the 95% confidence level corresponds to the coefficients of 8.7 ± 1.6 and 1.35 ± 0.45. The blue line in Fig. 3f shows this equation. Equation (1) is based on the observation from January 1972 to December 1973, which corresponds to a declining phase of solar activity. The number of solar sunspots in 1972–1973 was comparable to that around 2012 at the solar rising phase in cycle 24. The daytime fmin in 2012 is about 2 MHz according to Fig. 2c. The difference between Eq. (1) shown by the red line in Fig. 3f and Eq. (3) shown by the blue line in Fig. 3f is also about 2 MHz. The difference between the statistically normalized fmin variation and the red line in Fig. 3c is considered to be caused by the difference in the background fmin, i.e., the long-term variation of fmin, when the data used were obtained. The standard deviation of the difference between dfmin from Eq. (3) and the observed values is evaluated as 0.62 MHz.

Event selection for duration analysis

To focus on the fmin variation and the occurrence of blackout relevant to solar flares, we exclude events occurring during the night in Japan (after LT 19:00 and before LT 05:00) and those not associated with a solar flare above the C1 class, i.e., peak X-ray flux ≥ 10⁻⁶ W/m², within 1 h of the occurrence of the solar flare. In addition to the solar flare list mentioned in the "Ionosonde observations and data set" section, we also refer to the time variation of solar X-ray flux over the 1–8 Å band and exclude events with insufficient data. We categorize SWFs with the criteria of (i) dfmin ≥ 2.5 MHz including blackout, (ii) dfmin ≥ 3.5 MHz including blackout, and (iii) blackout (Table 1), and obtain the duration automatically (a schematic sketch of this detection and duration counting is given below). The obtained results are manually validated. According to Eq. (3), the events with dfmin of 2.5 and 3.5 at the sub-solar point approximately correspond to the flare sizes of M4 and X1, respectively. The occurrences of dfmin ≥ 2.5 MHz, dfmin ≥ 3.5 MHz, and "B" over the entire interval were 0.15%, 0.068%, and 0.027%, respectively.

Table 1 Event detection criteria for SWF duration analysis

Here we briefly mention the limitation of this observation and analysis. We analyze SWFs associated with solar flare events referring to 1–8 Å flux. Note that Deshpande et al. (1972) reported that 12% of all SIDs occurred when the solar 0–8 Å peak flux was less than 10⁻⁶ W/m², i.e., C1-class, and half of them (7% of all SIDs) were associated with the hardening of the solar flux spectrum. Since the time resolution of our observation is usually 15 min, we cannot evaluate the exact duration beyond this interval. If we observe SWF events at one, two, three, … data points, then we simply count them as events with 15, 30, 45, … min duration, respectively. Note that, for example, the actual event duration for a two-data-point observation is within 15–45 min. This time resolution is coarser than that of other SWF observations, e.g., SuperDARN observation (Nishitani et al. 2019). We discuss the effect of this time resolution on the result in Sect. 3.4.

Results for event duration

From the analysis using the data set over 1981–2016, we detected 616, 302, and 120 events for the criteria (i) dfmin ≥ 2.5 MHz including blackout, (ii) dfmin ≥ 3.5 MHz including blackout, and (iii) blackout, respectively. Figure 4a shows a histogram of the SWF duration. All the criteria (i)–(iii) show a decreasing SWF number with increasing SWF duration.
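To make the event selection and duration counting described above concrete, here is a minimal Python sketch (not the authors' code); the variable names and the encoding of blackout samples as np.inf are our assumptions:

```python
import numpy as np
import pandas as pd


def compute_dfmin(fmin: pd.Series) -> pd.Series:
    """dfmin = fmin minus its 27-day running median at the same local time.
    `fmin` is a 15-min-resolution series on a local-time DatetimeIndex;
    blackout samples ('B') are assumed to be encoded as np.inf so that they
    always exceed the dfmin thresholds."""
    same_lt = fmin.groupby([fmin.index.hour, fmin.index.minute])
    # Each LT bin has one sample per day, so a 27-sample centered window
    # approximates the 27-day running median at the same LT.
    median_27d = same_lt.transform(
        lambda s: s.rolling(27, center=True, min_periods=1).median())
    return fmin - median_27d


def swf_durations(dfmin: pd.Series, threshold_mhz: float = 2.5,
                  step_min: int = 15) -> list:
    """Durations in minutes of runs of consecutive samples with
    dfmin >= threshold (criterion (i) for 2.5 MHz, (ii) for 3.5 MHz);
    a run of k samples is counted as k * 15 min, as in the text."""
    above = (dfmin >= threshold_mhz).to_numpy()
    durations, run = [], 0
    for flag in above:
        if flag:
            run += 1
        elif run:
            durations.append(run * step_min)
            run = 0
    if run:
        durations.append(run * step_min)
    return durations
```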
Figure 4b shows the occurrence ratio, i.e., the event number divided by the total number, of SWF events with different criteria. From this analysis with criterion (i), we found that 79% of the events have one or two continuous 15-min-resolution timings, 11% have 4–7 continuous timings, and 4.2% have 8 or more continuous timings. These ratios are 78%, 11%, and 3.6% for criterion (ii) and 78%, 14%, and 2.5% for criterion (iii), respectively. The distribution profiles are similar among the different SWF levels. Histograms of a SWF event number and b occurrence ratio as a function of SWF event duration, and histograms of SWF event number as a function of c LT (= UT + 9 h) and d solar zenith angle for SWF events with the criteria of dfmin ≥ 2.5 MHz (blue lines), dfmin ≥ 3.5 MHz (light blue lines), and blackout (red dashed lines) This is in agreement with the previous study based on observations at Boulder, Colorado, with high time resolution by Davies (1996): 80.3% of the events have a duration < 30 min, 12.2% have a duration of 60–119 min, and 1.1% have a duration ≥ 120 min. Although the time resolution of our data set, 15 min, is coarse, we confirm that observations with high time resolution provide a similar distribution of event duration for a long time scale. Figure 4c, d shows the dependence of events of criteria (i)–(iii) on local time and solar zenith angle, respectively. As indicated, more events are obtained at noon and with a larger cos(χ). The event with the longest duration of 8 h 15 min for criterion (i) occurred on April 3, 2001 with at least two large solar flares: X17 and X1.2. The event with the longest duration of 5 h 15 min for criterion (ii) was also related to the X17 flare on April 3, 2001. The event of 3.25 h for criterion (iii) was associated with the X1.0 solar flare on August 15, 1989. The temporal variations in solar flux, fmin, dfmin, and ionosonde signal during the longest SWF event, which occurred on April 3, 2001, are shown in Fig. 5. The solar X-ray flux increases several times within the day, with the largest increase for X17 at LT 7:03 followed by that for X1.2 at 12:55 (Fig. 5a). The temporal variation in ionosonde echo power as a function of frequency is represented by the largest signal in the range of 60–500 km at each frequency (Fig. 5d). The broadband red area, which was seen at 1–10 MHz at night and shifted up to 12 MHz during daytime, represents the ionospheric echo with its minimum frequency corresponding to fmin, as shown by black pluses. The horizontal thin lines, e.g., at 10 and 12 MHz, are artificial noise. Around these flare peaks at ~ 7:00 and ~ 13:00, echo signals were lost (blackout, blue part in Fig. 5d) as fmin and dfmin became "B" (Fig. 5b, c). After the X17 flare, the signal at 8–12 MHz appeared with artificial noise, while maintaining large fmin values. After the X1.2 flare, the signal recovered with decreasing fmin and dfmin by 16:45. Time variations in a solar X-ray flux, b fmin, c dfmin, and d maximum ionosonde signal at each frequency during the longest SWF event on April 3, 2001, observed at Kokubunji. The fmin and dfmin values of the blackout period are shown as "B" in red in b and c. dfmin ≥ 2.5 MHz and dfmin ≥ 3.5 MHz values are shown in blue and light blue, respectively, in c. Black pluses in d show fmin values. This event was recorded by the ionosonde system with an intensity resolution of 1 bit. 
The smoothing procedure in the analysis provides an arbitrary color scale with increasing signal intensity from blue to red Figure 6a shows the correlation of the SWF duration with the flare peak flux for all events. The correlation, however, is not significant according to the results of an analysis of variance (ANOVA). Figure 6b, c, respectively, shows the same plot for cases with solar zenith angles of 0–45° and 45–90°. The same duration tends to be associated with a larger flare class for the cases with larger solar zenith angles, as expected. A significant correlation between SWF duration (≤ 1.5 h) and flare size is detected by the ANOVA test for the solar zenith angles of 0–45°. Scatter plot of solar flare peak flux and SWF duration for SWFs with the criteria of dfmin ≥ 2.5 MHz (blue lines), dfmin ≥ 3.5 MHz (light blue), and complete blackout (red) for a all cases, and cases with solar zenith angles of b 0–45° and c 45–90°. The solid lines show the median value of the solar flare peak flux in each duration bin Extreme event estimation for SWF duration From the results of the analysis in "Results for event duration" section, we obtain the complementary cumulative distribution function (Fig. 7). This counts the number of events larger than the value shown in the x-axis. We choose a quadratic function rather than a linear function for the extrapolation to avoid overestimation. In addition, quadratic functions fit the observed distribution better than linear functions. Using the functions derived from fittings for up to a 3 h duration, we estimate the duration for extreme events with occurrence probabilities of once per 1, 10, 100, and 1000 years, as summarized in Table 2. In the complete blackout case, the durations are 38 min, 1.8 h, 4.0 h, and 7.4 h, respectively. In the once per 1000 years case, the duration becomes 11.9 h for dfmin ≥ 2.5 MHz and 11.5 h for dfmin ≥ 3.5 MHz. The extreme points, 8 h 15 min for dfmin ≥ 2.5 MHz and 5 h 15 min for dfmin ≥ 3.5 MHz, are associated with continuous flares X17 and X1.2, which occurred within 6 h with gradual decay on April 3, 2001, as shown in Fig. 5. This suggests that frequent explosions of long-duration flares provide long-term SWFs. It is reported that a typical duration of compound X-class flare-driven SWF events can be much longer than that of events driven by isolated X-class flares, which is suggested to be the result of an extended ionospheric relaxation time due to a slow recovery of D-region electron temperature after large perturbations (Chakraborty et al. 2019 and references therein). Complementary cumulative distribution function of SWF duration from ionosonde observation (pluses, diamonds, or asterisk marks) and fitting functions (solid lines) for SWFs with the criteria of dfmin ≥ 2.5 MHz (blue), dfmin ≥ 3.5 MHz (light blue), and complete blackout (red) Table 2 Estimated duration for extreme events SWF duration is mainly determined by solar flare duration. The correlation between solar flare duration and flare ribbon area has been reported by several research groups. Flare ribbon is an emission due to collision between the solar chromosphere and energetic particles generated by solar flares. 
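The quadratic extrapolation of the complementary cumulative distribution function used above for the Table 2 estimates can be sketched in Python as follows; the synthetic sample durations and the recurrence-rate normalization (taken here as the expected number of events over the 36-year record) are our illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np


def empirical_ccdf(durations_h):
    """Number of events with duration >= x, evaluated at each observed
    duration x (sorted ascending)."""
    x = np.sort(np.asarray(durations_h, dtype=float))
    n_ge = len(x) - np.arange(len(x))
    return x, n_ge


def fit_log_ccdf_quadratic(x, n_ge, fit_max_h=3.0):
    """Quadratic fit of log10 N(>= x) versus x, using only durations up to
    fit_max_h (the paper fits up to 3 h and extrapolates beyond)."""
    mask = x <= fit_max_h
    return np.polyfit(x[mask], np.log10(n_ge[mask]), deg=2)


def duration_once_per(coeff, once_per_years, record_years=36.0, x_max_h=15.0):
    """Duration at which the fitted CCDF falls to the count expected for an
    event occurring once per `once_per_years` years over the record length
    (illustrative normalization, assumed here rather than taken from the paper)."""
    target = np.log10(record_years / once_per_years)
    xs = np.linspace(0.0, x_max_h, 1501)
    ys = np.polyval(coeff, xs)
    crossed = np.where(ys <= target)[0]
    return xs[crossed[0]] if crossed.size else np.nan


# Synthetic example: 120 blackout-like events quantized to 15-min steps
rng = np.random.default_rng(1)
durations = np.round(rng.exponential(0.35, size=120) / 0.25) * 0.25 + 0.25
x, n_ge = empirical_ccdf(durations)
coeff = fit_log_ccdf_quadratic(x, n_ge)
print(duration_once_per(coeff, once_per_years=100))  # extrapolated duration in hours
```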
Reep and Knizhnik (2019) analyzed 2956 sets of solar flares and flare ribbons observed from April 2000 to April 2006 and derived the relationship between the full width at half-maximum (FWHM) time of the GOES X-ray variation t [s], i.e., flare duration time, associated with an X-class flare and the ribbon area A [cm²] observed at 160 nm wavelength as $$t = 1.7 \times 10^{-25} A^{1.44}.$$ They found a low correlation for flares in small classes. Since SWFs are usually associated with middle M-class flares or higher, this FWHM time is expected to be a good indicator of the SWF duration. If we set the ribbon area to have the length of the solar radius and a width of 1/10 of the solar radius, then the duration is estimated to be 1.2 days for the area of 4.9 × 10²⁰ cm². This area is on the same order as the maximum size of solar spots in observational records of about 6000 MSH = 3.6 × 10²⁰ cm² at AR 14886 observed by Royal Greenwich Observatory in April 1947 (e.g., Aulanier et al. 2013). This duration is considerably longer than the SWF duration of up to about 12 h proposed in this study. Since SWFs occur only during the daytime, their duration depends on the timing of the event initiation. For middle-latitude regions such as Japan, the longest duration is about 12 h. SWF absorption intensity Absorption observed by VIPIR2 As introduced in "Ionosonde observations and data set" section, the ionosonde VIPIR2 system was operated occasionally in 2016. It detected an SWF signature at LT 11:15 on July 23, 2016, associated with an M5.0 solar flare (Fig. 8). This event was not counted in this statistical analysis, because the dfmin variation was smaller than 2.5 MHz (Fig. 8c). Since we can continuously observe an ionospheric echo signal around 8–11 MHz (Fig. 8d, e), this event enables us to evaluate the absorption intensity. Time variations in a solar X-ray flux, b fmin, c dfmin, and d maximum ionosonde signal at each frequency observed by VIPIR2 on July 23, 2016, e VIPIR2 ionograms at LT(= UT + 9 h) 10:45 (left) and 11:15 (right), f VIPIR2 signal intensities before SWF at LT 10:45 (black) and during SWF at 11:15 (red), and g their difference representing SWF absorption (blue), as a function of frequency. Absorption intensities estimated using Eq. (10) from Maeda and Inuki (1972) and twice Eq. (2) from Sato (1975) are shown by the green dashed and red lines, respectively, in Fig. 8g. Figure 8f shows the signal-to-noise ratio as a function of frequency detected before and during the SWF signature. The echo intensity at 10:45 (black line) increased above 2 MHz, reaching ~ 50 dB at around 7 MHz and ~ 60 dB at around 11 MHz, and then vanished above 23 MHz. The signal intensity became much smaller in the frequency range of 1–30 MHz during the SWF signature at 11:15 (red line); the echo was observed above 6 MHz and the peak value was about 40 dB at 9 MHz. The difference in signal intensity between these two timings is shown by the blue line in Fig. 8g. The empirical relationships (red and green curves in Fig. 8g) are compared with this observation in "Comparison between observation and empirical relationships" section. Absorption intensity from short-wave circuits Maeda and Inuki (1972) proposed an empirical equation representing the degree of SWF in long-distance short-wave circuits. 
The index, called Magnitude M (dimensionless quantity), was developed to represent the scale of SWFs as a function of the drop-out (i.e., absorption) intensity L [dB], effective solar zenith angle χ [deg], and operating frequency f [MHz] of short-wave circuits, independent of the individual circuit. They obtained the relationship from 11 SWFs commonly observed in the circuits of Hiraiso (34.62° N latitude, 135.05° E longitude)–Hamburg (53.50° N latitude, 9.96° E longitude), Hiraiso–Shepparton (36.42° S latitude, 145.51° E longitude), and Hiraiso–Lima (12.02° S latitude, 77.1° W longitude) in March–June 1970 as follows: $$M = L + 43.319\log f - 33.856\cos \chi + 3.037,$$ where the effective solar zenith angle is represented by the smallest value among those at absorption points along the circuits. The absorption point, at an altitude of 90 km, was estimated by assuming signal passes reflecting at the ground and at an altitude of 320 km (Maeda and Inuki 1972). Maeda and Inuki (1972) found the following relationship between M and the solar flux: $$M = 0.996\;\log F_{1} + 15.365\;\log F_{2} + 9.673\;\log F_{3} + 82.585,$$ where F1, F2, and F3 represent the solar fluxes at the 0.5–3, 1–8, and 8–20 Å bands, respectively, in units of mW/m2. The solar flare class is usually determined by the maximum X-ray flux at F2 during each flare event. Referring to the correlation between F1 and F2 and that between F2 and F3 from 1969 to 1970 shown by Maeda and Inuki (1972), we derive F1 and F3 as functions of F2 as follows: $$F_{1} = 0.11\;F_{2}^{1.35}$$ $$F_{3} = 1.21\;F_{2}^{0.75}.$$ Using Eqs. (6)–(8), we obtain the relationship $$M = 23.914\;\log F_{2}\,(\text{mW}/\text{m}^{2}) + 82.431.$$ From Eqs. (5) and (9), we obtain $$L = 23.914\;\log F_{2} - 43.319\;\log f + 33.856\cos \chi + 79.394.$$ The absorption intensity at 6.6 MHz used for civil aviation communications as functions of solar flux and solar zenith angle derived from Eq. (10) is shown in Fig. 9a. A higher absorption intensity occurs for a larger solar flare and a smaller solar zenith angle. Figure 9c shows the dependence of absorption on frequency around the HF range. Absorption at lower frequencies is more effective for the same conditions, i.e., 90 dB at 1 MHz, 55 dB at 6.6 MHz, and 30 dB at 30 MHz for χ = 0° and M = 60, roughly corresponding to an X2-class flare from Eq. (10). Absorption intensity in dB at 6.6 MHz as a function of solar flare X-ray flux and (effective) solar zenith angle obtained using a Eq. (10) from Maeda and Inuki (1972) and b Eq. (2) from Sato (1975) with a common color contour shown by the color bar at the bottom, c absorption intensity for X2-class flare and solar zenith angle of 0° as a function of signal frequency, and d highest affected frequency (HAF) as a function of solar flare X-ray flux. In c, d, the absorption intensity from Maeda and Inuki (1972) and that from Sato (1975) (considering the round trip for the HAF) are shown by green and red lines, respectively, and diamonds in d are the HAFs used in D-RAP Comparison between observation and empirical relationships In this section, we compare the ionosonde observation and the empirical equations for absorption based on observations using a riometer (Eq. (2) from Sato (1975)) and circuit (Eq. (10) from Maeda and Inuki (1972)). 
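Equation (10) is easy to evaluate numerically. The sketch below is an illustration rather than the authors' code: it computes L at the 6.6 MHz aviation frequency and zero zenith angle for an X2 flare and for the once-per-1/10/100/1000-year flare classes quoted later in the paper (X5.0, X15, X44, X101 from Gopalswamy 2018). The conversion from flare class to F2 uses the standard GOES definition X1 = 10⁻⁴ W/m² = 0.1 mW/m²; the function names are ours.

```python
import numpy as np

def absorption_maeda_inuki(F2_mW_m2, f_MHz, chi_deg):
    """Absorption L [dB] from Eq. (10), derived above from Maeda and Inuki (1972)."""
    return (23.914 * np.log10(F2_mW_m2)
            - 43.319 * np.log10(f_MHz)
            + 33.856 * np.cos(np.radians(chi_deg))
            + 79.394)

def flare_class_to_F2(x_class):
    """Peak 1-8 A flux of an 'Xn' flare in mW/m^2 (X1 = 0.1 mW/m^2)."""
    return 0.1 * x_class

for label, xc in [("X2", 2.0), ("X5", 5.0), ("X15", 15.0), ("X44", 44.0), ("X101", 101.0)]:
    L = absorption_maeda_inuki(flare_class_to_F2(xc), f_MHz=6.6, chi_deg=0.0)
    print(f"{label:>5}: L = {L:5.1f} dB at 6.6 MHz, chi = 0 deg")
# X2 gives ~61 dB; X5, X15, X44 and X101 give roughly 71, 82, 93 and 102 dB,
# close to the once-per-1/10/100/1000-year oblique-path values quoted later (Table 3).
```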
As shown in "Absorption observed by VIPIR2" section, the small SWF signature at LT 11:15 on July 23, 2016, caused an absorption intensity of up to 40 dB at 5 and 12 MHz at Kokubunji, Tokyo (35.71° N latitude). Equation (2) for these observation conditions considering the round trip is also shown by the red curve in Fig. 8g. This function provides similar values at 6–10 MHz and a similar decreasing trend at 12–22 MHz to those in the observation, except for the enhanced absorption at 10–13 MHz in the observation. This difference might be caused by the variation in signal reflection owing to the sporadic E-layer seen in the ionogram (Fig. 8e). No signals were observed at low frequencies < 6 MHz owing to the large absorption, as expected from the equation. The relationship from Eq. (10) shown by the green dashed line was almost the upper limit of the observed absorption intensity. The discrepancy between the observation and the equation may be caused by the different signal passes: oblique propagation with more than three hops over a range from 8.1 Mm (1 Mm = 10⁶ m) (for the Shepparton–Hiraiso circuit) to 15.4 Mm (for the Lima–Hiraiso circuit) for Eq. (10), in contrast to vertical propagation with one round trip for this ionosonde observation and twice Eq. (2). Next, we compare the dependences of these absorption equations on solar flux, solar zenith angle, and signal frequency. Figure 9b shows the absorption intensity derived from Eq. (2) using the same color and format as those of Fig. 9a derived from Eq. (10). Equation (2) provides a much stronger frequency dependence, with absorption intensities of about 2000, 46, and 2.2 dB at frequencies of 1, 6.6, and 30 MHz, respectively, compared with those of 91, 55, and 27 dB from Eq. (10) for χ = 0° and X2-class flares (Fig. 9c). Because the radio waves propagate over a long distance and are reflected by the ionosphere and ground multiple times, the oblique propagation of a long circuit passes through the ionosphere several times under various conditions (LT, latitude, and solar zenith angle) including the pass with the "effective solar zenith angle", which moderates the dependences on solar zenith angle and solar flux. According to a brief estimation of the oblique propagation in a spherical geometry, the incident angle for an 8 Mm circuit with three hops is ~ 7° (Maeda and Inuki 1972). The solar zenith angle of the absorbing ionospheric D and E regions varies in the circuit. Because the passing point of the D and E regions at the smallest solar zenith angle mostly affects the absorption, the oblique propagation increases the pass length by a factor of 1/sin(7°)/2 ≈ 4 for the case of 8 Mm distance compared with that for a vertical round trip. This results in the greater absorption of the oblique pass than that of the vertical pass. The SCNA observation is affected not only by the absorption in the ionospheric D and E regions but also by that in the F region. Mitra (1970) suggested that the contribution by the F region could even be dominant. The oblique propagation of a low-frequency signal is reflected at a low altitude in the F region and its absorption effect is expected to remain smaller than that in the vertical propagation (Eq. (2)) based on SCNA. Figure 9d shows the highest affected frequency (HAF), which is defined as the highest frequency that experiences absorption of more than 1 dB, as a function of solar X-ray flux estimated from twice Eq. (2) and from Eq. 
(10) as a function of solar X-ray flux for a solar zenith angle of 0°. Diamond marks show the HAF referred to the D-Region Absorption Prediction (D-RAP, https://www.swpc.noaa.gov/content/global-d-region-absorption-prediction-documentation). D-RAP is based on vertical round-trip propagation and considers the HF absorption caused by a solar flare and solar energetic particles. There is a difference in the HAF of a factor of two, but the dependence on the solar X-ray flux is almost the same as that in Eq. (2). Extreme event estimation for SWF absorption intensity For the absorption intensity in extreme cases, we use the solar flare occurrence with empirical Eqs. (2) and (10). The occurrence of flare classes shows a power-law distribution (e.g., Dennis, 1985). According to the results of a statistical analysis of solar flares by Gopalswamy (2018), the flare classes with occurrence probabilities of once per 1, 10, 100, and 1000 years are X5.0, X15, X44, and X101, respectively. It has been reported that a solar-type star produces "superflares", which are two or more orders of magnitude larger than the largest flare observed on the Sun (Maehara et al. 2012). Although the possibility that superflares will occur in the current Sun is under discussion among experts, several studies predict the occurrence of a large solar flare, e.g., comparable energy to superflares (Shibata et al. 2013) or X75–X250 flare classes (Ishii et al. submitted to Earth Planets and Space) based on a physical viewpoint. The corresponding scales are shown by vertical lines in Fig. 9a, b. For a 6.6 MHz signal at a solar zenith angle of 0°, the once per 1, 10, 100, and 1000 year absorption intensities from Eq. (10), respectively, become 71, 83, 93, and 100 dB and those from Eq. (2) become 71, 130, 210, and 320 dB for these probabilities, as summarized in Table 3. Table 3 Estimated absorbtion intensities for extreme events The degree of ionization of the upper atmosphere is usually on the order of 10−6, so increasing the solar X-ray flux by a factor of 10–100 places it in the range, where similar ionization processes are expected. Therefore, this extrapolation is considered reasonable. Using long-term (36 years) ionosonde data observed by NICT at Kokubunji, Tokyo, with proposed empirical equations based on ionosonde, SCNA, and HF circuit observations, we investigated SWF absorption and duration as follows. Preceding the analysis, we examined the dependences of fmin on solar X-ray flux and solar zenith angle. As in previous studies, fmin increases with the solar flux and angle. The parameter dfmin, which is fmin subtracted by its 27-day running median at the same LT, shows a dependence on solar X-ray flux and solar zenith angle that is much closer to that described by the empirical relationship from Sato (1975) than fmin if the background fmin is taken into account. We obtained the SWF duration separately for three criteria: dfmin ≥ 2.5 MHz, dfmin ≥ 3.5 MHz, and blackout. The duration is up to 8 h 15 min for dfmin ≥ 2.5 MHz, up to 5 h 15 min for dfmin ≥ 3.5 MHz, and up to 3 h 15 min for blackout, with decreasing occurrence with increasing duration. The duration distributions are similar among the different SWF criteria. The duration increases with the peak X-ray flux associated with solar flares. The occurrence of events also depends on the local time, i.e., solar zenith angle. 
The observed signal absorption estimated from a small SWF signature on July 23, 2016, matches well with the empirical relationship proposed by Sato (1975) over most of the frequency range. We obtained a generalized empirical relationship for absorption from the long-distance multihop circuit observation proposed by Maeda and Inuki (1972) as a function of solar X-ray flux of 1–8 Å. The dependences of the latter relationship on solar flare X-ray flux, solar zenith angle, and signal frequency are more moderate than those obtained from the vertical observation. The absorption and duration of extreme SWF events with occurrence probabilities of once per 1, 10, 100, and 1000 years were estimated from the analysis results. The absorption intensity at 6.6 MHz at a solar zenith angle of 0° becomes 71–100 dB for oblique propagation and 71–320 dB for a vertical round-trip pass, and the estimated duration reaches 7.4 h for blackout and up to 12 h for the dfmin ≥ 2.5 MHz criterion. Our analysis provides quantitative values of the SWF absorption and duration for different dfmin thresholds. For the quantitative evaluation of the absorption intensity, further study with more events using current and future VIPIR2 observations is required. In addition, comparison with SWF absorption intensity observed with a riometer and SuperDARN would be useful (e.g., Fiori et al. 2018). The occurrence probabilities of extreme SWF events are provided by this study for the first time. The duration of SWF events is useful information for the operation of radio communication systems to predict the recovery time and to prepare alternative means of communication during the events. This information is also expected to contribute to designing operation systems with sufficient resistance to space weather disasters. The original ionogram data set is available from the ionogram viewers at http://wdc.nict.go.jp/IONO/index_E.html. Manually scaled ionosonde parameters of 1 h resolution are archived at http://wdc.nict.go.jp/IONO/HP2009/ISDJ/manual_txt.html, and the data set of 15 min resolution will be provided to the scientific community on request (see "contact us" on the access tab of http://wdc.nict.go.jp/IONO/index_E.html). We used the Geostationary Operational Environmental Satellite (GOES) flare list provided by NOAA National Centers for Environmental Information at https://www.ngdc.noaa.gov/stp/space-weather/solar-data/solar-features/solar-flares/x-rays/goes/xrs/. D-RAP: D-region absorption prediction FWHM: Full width of half maximum GNSS: Geostationary Operational Environmental Satellite HAF: Highest affected frequency LT: NICT: National Institute of Information and Communications Technology NOAA: SCNA: Sudden cosmic noise absorption SID: SITEC: Sudden increase in total electron content Short-wave fadeout VIPIR2: Vertical Incidence Pulsed Ionospheric Radar 2 Aulanier G, Démoulin P, Schrijver CJ, Janvier M, Pariat E, Schmieder B (2013) The standard flare model in three dimensions. II. Upper limit on solar flare energy. Astron Astrophys. 549:66 Barta V, Sátori G, Berényi KA, Kis Á, Williams E (2019) Effects of solar flares on the ionosphere as shown by the dynamics of ionograms recorded in Europe and South Africa. Ann Geophys 37:747–761. https://doi.org/10.5194/angeo-37-747-2019 Chakraborty S, Ruohoniemi JM, Baker JBH, Nishitani N (2018) Characterization of short-wave fadeout seen in daytime SuperDARN ground scatter observations. Radio Sci 53:472–484. 
https://doi.org/10.1002/2017RS006488 Chakraborty S, Baker JBH, Ruohoniemi JM, Kunduri BSR, Nishitani N, Shepherd SG (2019) A study of SuperDARN response to co-occurring space weather phenomena. Space Weather 17:1351–1363. https://doi.org/10.1029/2019SW002179 Davies K (1996) Sudden ionospheric disturbances. In: Dieminger W, Hartmann GK, Leitinger R (eds) The upper atmosphere: data analysis and interpretation. Springer, Berlin, pp 706–722 Dellinger JH (1937) Sudden disturbances of the ionosphere. J Appl Phys 8:732–751 Dennis BR (1985) Solar hard X-ray bursts. Solar Phys. 100:465–490 Deshpande SD, Subrahmanyam CV, Mitra AP (1972) Ionospheric effects of solar flares-I. The statistical relationship between X-ray flares and SIDs. J Atmos Terr Phys 34:211–227 Fiori RAD, Koustov AV, Chakraborty S, Ruohoniemi JM, Danskin DW, Boteler DH, Shepherd SG (2018) Examining the potential of the Super Dual Auroral Radar Network for monitoring the space weather impact of solar X-ray flares. Space Weather 16:1348–1362. https://doi.org/10.1029/2018SW001905 Fletcher L, Dennis BR, Hudson HS, Krucker S, Phillips K, Veronig A, Battaglia M, Bone L, Caspi A, Chen Q, Gallagher P, Grigis PT, Ji H, Liu W, Milligan RO, Temmer M (2011) An observational overview of solar flares. Space Sci Rev. 159:19–106. https://doi.org/10.1007/s11214-010-9701-8 Gopalswamy N (2018) Extreme solar eruptions and their space weather consequences. In: Buzulukova N (ed) Extreme events in Geospace origins, Predictability, and consequences. Elsevier, Amsterdam, pp 37–63. https://doi.org/10.1016/C2016-0-03769-5 Handzo R, Forbes JM, Reinisch B (2014) Ionospheric electron density response to solar flares as viewed by digisondes. Space Weather 12:205–216. https://doi.org/10.1002/2013SW001020 Hendl RG, Skrivanek RA (1973) Statistical analysis of shortwave fadeout occurrence for the years 1955 to 1969, Environ. Res. Papers, 452, AFCRL-TR-73-0385 Maeda R, Inuki H (1972) Magnitude of short-wave fade-out. J Radio Res Lab 18(99):467–476 Maehara H, Shibayama T, Notsu S, Notsu Y, Nagao T, Kusada S, Honda S, Nogami D, Shibata K (2012) Superflares on solar-type stars. Nature 485:478–481. https://doi.org/10.1038/nature11063 Mitra AP (1970) HF and VHF absorption techniques in radio wave probing of the ionosphere. J Atmos Terr Phys 32:623–646 Nishitani N et al (2019) Review of the accomplishments of mid-latitude Super Dual Auroral Radar Network (SuperDARN) HF radars. Prog Earth Planet Sci 6:27. https://doi.org/10.1186/s40645-019-0270-5 Reep JW, Knizhnik KJ (2019) What determines the X-ray intensity and duration of a solar flare? Astrophys J 874(157):1–16 Riley P (2012) On the probability of occurrence of extreme space weather events. Space Weather 10:S02012. https://doi.org/10.1029/2011SW000734 Sato T (1975) Sudden fmin enhancements and sudden cosmic noise absorptions associated with solar X-ray flares. J Geomagn Geoelect 27:95–112 Shibata K, Isobe H, Hillier A, Choudhuri AR, Maehara H, Ishii TT, Shibayama T, Notsu S, Notsu Y, Nagao T, Honda S, Nogami D (2013) Can superflares occur on our Sun? Publ Astron Soc Jpn 65(49):1–8 US National Science and Technology Council (2018) Space Weather Phase 1 Benchmarks, Available via DIALOG https://www.whitehouse.gov/wp-content/uploads/2018/06/Space-Weather-Phase-1-Benchmarks-Report.pdf CT thanks Dr. K. Nozaki, Dr. T. Maruyama, Dr. H. Maeno, and Dr. I. Yamazaki for useful discussions. This work was supported by MEXT/JSPS KAKENHI Grants 15H05813 and 19K03942. 
National Institute of Information and Communications Technology (NICT), Tokyo, Japan Chihiro Tao, Michi Nishioka, Daikou Shiota, Naoto Nishizuka, Takuya Tsugawa & Mamoru Ishii Electronic Navigation Research Institute (ENRI), National Institute of Maritime, Port and Aviation Technology (MPAT), Tokyo, Japan Susumu Saito National Defense Academy of Japan, Kanagawa, Japan Kyoko Watanabe Chihiro Tao Michi Nishioka Daikou Shiota Naoto Nishizuka Takuya Tsugawa Mamoru Ishii CT conducted the research and is responsible for the results presented in this paper. MN is responsible for the observation data set and supported this analysis. SS, TT, and MI contributed to the discussion as experts on the ionosphere and its social impacts. DS, KW, and NN contributed to the discussion as experts on solar physics. All authors read and approved the final manuscript. Correspondence to Chihiro Tao. Tao, C., Nishioka, M., Saito, S. et al. Statistical analysis of short-wave fadeout for extreme space weather event estimation. Earth Planets Space 72, 173 (2020). https://doi.org/10.1186/s40623-020-01278-z Short-wave fadeout (SWF) Dellinger effect Ionosonde Solar-Terrestrial Environment Prediction: Toward the Synergy of Science and Forecasting Operation of Space Weather and Space Climate
Which ion of iron is produced in a reaction between iron and copper(II) sulfate? Using these equations: \begin{aligned} \text{(Fe2):}&&\ce{ Fe + CuSO4 &-> FeSO4 + Cu}\\ \text{(Fe3):}&&\ce{ 2Fe + 3CuSO4 &-> Fe2(SO4)3 + 3Cu} \end{aligned} Beginning with $0.78\:\mathrm{g}$ of iron, the theoretical yields of copper are $0.89\:\mathrm{g}$ (with $\ce{Fe^{2+}}$) and $1.33\:\mathrm{g}$ (with $\ce{Fe^{3+}}$), and the actual yield was $0.78\:\mathrm{g}$ of copper. I know to find the ratio of the moles of iron used to the moles of copper formed (88% for $\ce{Fe^{2+}}$, 58% for $\ce{Fe^{3+}}$), and so the clear choice would be that this reaction formed $\ce{Fe^{2+}}$ ions, but my teacher said this would form $\ce{Fe^{3+}}$ ions, but why is that? Shouldn't it have formed $\ce{Fe^{2+}}$? The reaction formed $0.012\:\mathrm{mol}$ of copper, compared to the $0.014\:\mathrm{mol}$ in the theoretical yield, and the other $0.002\:\mathrm{mol}$ could have just been lost somewhere along the reaction process. Judging from the standard potentials of the reactions involved, equation (Fe2) should be correct: chemistry.stackexchange.com/questions/13569/… – Jannis Andreska Aug 15 '14 at 19:44 When "doing" science, usually you try to (in-)validate a model of the workings of the world. In the case of chemists, this world is mostly confined to the scale of molecules. That said, let's proceed with A Theoretical Model Listed below are some standard reduction potentials I've gathered from the Handbook of Chemistry and Physics, 95th ed. (probably paywalled for you): $$ \begin{align} &&E^0_\text{red} / \mathrm{V}\\ \hline \ce{Fe^{2+} + 2e- & <=> Fe} & -0.447 \\ \ce{Fe^{3+} + 3e- & <=> Fe} & -0.037 \\ \ce{Cu^{2+} + 2e- & <=> Cu} & 0.342 \\ \end{align} $$ Now we can calculate the potential (and the resulting electromotive force EMF) for each of the reactions: $$ \begin{align} E^0_\text{(Fe2)} = E^{0}_\text{red}(\text{red}) - E^0_\text{red}(\text{ox}) &= 0.342~\mathrm{V} - (-0.447~\mathrm{V}) = 0.789~\mathrm{V} \\ E^0_\text{(Fe3)} &= 0.342~\mathrm{V} - (-0.037~\mathrm{V}) = 0.379~\mathrm{V} \end{align} $$ It is clear that the reaction (Fe2) releases more energy and thus should be preferred over the other reaction. This means that we expect a yield of 100% Cu and not 150% Cu. Now that we have gathered some theoretical understanding of the matter, let's proceed to the interesting part, which is Confirmation With Experimental Results You have correctly calculated the theoretical yields for copper to be 0.89 g and 1.33 g, respectively. The corresponding amounts of copper are 0.014 mol and 0.021 mol. You have measured the amount of copper produced, and it clocks in at 0.78 g. This is already a strong indication that we should side with (Fe2) on this one. But just to really be sure, let's look at the error that we would have if it was (Fe3): $$ \varepsilon = \frac{\Delta x}{x} = 41.4\% $$ This is a very high error, especially when compared to the error for (Fe2), which is only 12.1%. Assuming that you did not perform the reaction with pure Fe but a mixture of iron with some iron oxides as contaminants, did not use an analytical scale and performed the experiment in fairly uncontrolled conditions (are you sure the reaction was complete when you aborted?), this error is not as bad as it seems. 
With the experimental results in our pocket, we can now move on to the final part: The Conclusion We have shown, using theoretical predictions and experimental confirmation, that the reaction observed was the oxidation of iron to iron(II) under reduction of copper(II) to copper. By the way, most of the calculations I did with an iPython Notebook, which you can view here. I might have messed something up, you never know. – tschoppi
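Since the answer above mentions that its arithmetic was done in an IPython notebook (the link is not reproduced here), a self-contained version of the same bookkeeping is sketched below; the molar masses are the usual rounded values and the printed numbers match those quoted in the answer.

```python
# Stoichiometric check: which iron ion is consistent with the measured copper yield?
M_Fe, M_Cu = 55.85, 63.55          # molar masses in g/mol (rounded)
m_Fe, m_Cu_obs = 0.78, 0.78        # g of iron used, g of copper recovered

n_Fe = m_Fe / M_Fe                 # ~0.0140 mol Fe
yields = {
    "Fe2+ (Fe + CuSO4 -> FeSO4 + Cu)":        n_Fe * 1.0 * M_Cu,   # 1 Cu per Fe
    "Fe3+ (2Fe + 3CuSO4 -> Fe2(SO4)3 + 3Cu)": n_Fe * 1.5 * M_Cu,   # 1.5 Cu per Fe
}
for label, m_theory in yields.items():
    err = (m_theory - m_Cu_obs) / m_theory
    print(f"{label}: theoretical {m_theory:.2f} g, relative error {err:.1%}")
# -> 0.89 g (about 12% error) for Fe2+ versus 1.33 g (about 41% error) for Fe3+,
#    so the measured yield points to iron(II), as argued above.
```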
Department of Mathematical Sciences Department of Mathematical Sciences ⇨ People ⇨ Professor Emeritus Wojtek Zakrzewski, PhD Professor, Theoretical Particle Physics in the Department of Mathematical Sciences Room number: CM118 Member of the Biophysical Sciences Institute Professor Emeritus in the Centre for Particle Theory Contact Professor Emeritus Wojtek Zakrzewski (email at [email protected]) Mathematical & Theoretical Physics Mathematical & Theoretical Physics: Solitons in Field Theory Ferreira, L. A., Klimas, P. & Zakrzewski, Wojtek J. (2019). Self-dual sectors for scalar field theories in (1 + 1) dimensions. Journal of High Energy Physics 2019(1): 20. Klimas, P., Streibel, J. S., Wereszczynski, A. & Zakrzewski, W. J. (2018). Oscillons in a perturbed signum-Gordon model. Journal of High Energy Physics 2018(04): 102. Ferreira, L. A., Klimas, P. & Zakrzewski, Wojtek J. (2016). Quasi-integrable deformations of the SU(3) Affine Toda theory. Journal of High Energy Physics 2016(5): 65. Stichel, P.C. & Zakrzewski, W.J. (2015). General relativisitic, nonstrandard model for the dark sector of the Universe. The European Physical Journal C 75(1): 9. Baron, H.E., Zakrzewski, W.J. & Luchini, G. (2014). Collective coordinate approximation to the scattering of solitons in the (1+1) dimensional NLS model. Journal of Physics A: Mathematical and Theoretical 47(26): 265201. Brizhik, L., Piette, B.M.A.G. & Zakrzewski, W.J. (2014). Donor-acceptor electron transport mediated by solitons. Physical Review E 90(5): 052915. Ferreira, L.A. & Zakrzewski, W.J. (2014). Numerical and analytical tests of quasi-integrability in modified Sine-Gordon models. Journal of High Energy Physics 2014(1): 058. Ferreira, L.A. & Zakrzewski, W.J. (2013). A Skyrme-like model with an exact BPS bound. Journal of High Energy Physics 2013(9): 097. Delisle L.,, Hussin V., & Zakrzewski W.J. (2013). Constant curvature solutions of Grassmannian sigma models: (2) Non-holomorphic solutions. Journal of Geometry and Physics 71: 1-10. Adam C.,, Ferreira L.A.,, da Hora E.,, Wereszczynski A., & Zakrzewski W.J., (2013). Some aspects of self-duality and generalised BPS theories. Journal of High Energy Physics 2013(8): 62. Chakrabarti, B, Piette, BMAG & Zakrzewski, WJ (2012). Spontaneous polaron transport in biopolymers. EPL (Europhysics Letters) 97(4): 47005. Ferreira, LA, Klimas, P & Zakrzewski, WJ (2011). Some (3+1) dimensional vortex solutions of the $CP^N$ model. Physical Review D 83(10): 105018. Ferreira, LA & Zakrzewski, WJ (2011). The concept of quasi-integrability: a concrete example. Journal of High Energy Physics 2011(5): 130. Zakrzewski, W.J. (2010). A class of (1+1) dim models that generalize the Sine-Gordon model and some of their properties. Physics of Atomic Nuclei 73(3): 587-594. Hussin, V., Yurdusen, I. & Zakrzewski, W.J. (2010). Canonical surfaces associated with projectors in Grassmannian sigma models. Journal of Mathematical Physics 51(10): 103509. Brizhik, L. Eremko, A. Piette, B. & Zakrzewski, W.J. (2010). Ratchet dynamics of large polarons in asymmetric diatomic molecular chains. Journal of physics: condensed matter 22(15): 155105. Stichel, PC & Zakrsewski, WJ (2010). Self-gravitating darkon fluid with anisotropic scaling. The European Physical Journal C 70(3): 713-721. Piette, B.M.A.G., Liu, J., Peeters, K., Smertenko, A., Hawkins, T., Deeks, M., Quinlan, R., Zakrzewski, W.J. & Hussey, P.J. (2009). A Thermodynamic Model of Microtubule Assembly and Disassembly. PLoS ONE 4(8): e6378. Stichel, P.C. & Zakrzewski, W.J. (2009). 
Can cosmic acceleration be caused by exotic massless particles?. Physical Review D D 80: 083513. Piette B. & Zakrzewski, W.J. (2009). Scattering of sine-Gordon Breathers on a Potential Well. Physical Review E Statistical, Nonlinear and Soft Matter Physics 79(4): 046603. Lukierski, J., Stichel, P.C. & Zakrzewski, W.J. (2008). Acceleration-extended symmetries in nonrelativistic space-time with a cosmological constant. European Journal of Physics C 55: 119-124. Al-Alawi, J. & Zakrzewski, W.J. (2008). Scattering of Topological Solitons on Barriers and Holes of Deformed Sine-Gordon Models. Journal of Physics A 41: 315206. Piette, B. & Zakrzewski, W.J. (2008). Skyrme Model with Different Mass Terms. Physical Review D 77: 074009 (. Ferreira, L.A., Piette, B. & Zakrzewski, W.J. (2008). Wobbles and other kink-breather solutions of the Sine Gordon model. Physical Review E 77: 036613. Ferreira, Luiz Agostinho & Zakrzewski, W.J. (2007). A simple formula for the conserved charges of soliton theories. Journal of High Energy Physics 2007(09): 015. Lukierski, J., Stichel, P.C. & Zakrzewski, W.J. (2007). Acceleration-extended Galilean symmetries with central charges and their dynamical realisations. Physics Letters B B 650: 203. Brizhik, L., Eremko, A., Piette, B. & Zakrzewski, W.J. (2007). Adiabatic self-trapped states in carbon nanotubes. Journal of Physics: Condensed Matter 19(30): 306205. Piette, B. & Zakrzewski, W.J. (2007). Dynamical properties of a Soliton in a Potential Well. Journal of Physics A: Mathematical and Theoretical 40(2 ): 329-346. Admunsen, D., Cova, R.J. & Zakrzewski, W.J. (2007). Non ${\pi\over N}$ Scattering of $CP^1$ solitons. Canadian Journal of Physics 85: 1431-1445. Al-Alawi, J. & Zakrzewski, W.J. (2007). Q-ball scattering on barriers and holes in 1 and 2 Spatial dimensions. Journal of Physics A 42: 245201. Collins, J.C. & Zakrzewski, W.J. (2007). Scattering of a two skyrmion configuration on potential holes or barriers in a model Landau-Lifshitz equation. Journal of Physics A 42: 165102. Piette, B. & Zakrzewski, W.J. (2007). Scattering of Sine-Gordon kinks on potential wells. Journal of Physics A: Mathematical and Theoretical 40(22): 5995-6010. Al-Alawi, J. & Zakrzewski, W.J. (2007). Scattering of Topological Solitons on Barriers and Holes in Two $\lambda \phi^4$ Models. Journal of Physics A 40: 11319-11332. Bratek, L., Brizhik, L., Eremko, A., Piette, B., Watson, M. & Zakrzewski, W.J. (2007). Self-trapped electron states in carbon nanotubes. Physica D 228(2): 130-139. Hartmann, B. & Zakrzewski, W.J. (2007). Soliton solutions of the nonlinear Schr\"odinger equation with nonlocal Coulomb and Yukawa interactions. Physics Letters A A 366: 540-544. Zakrzewski, W.J. (2007). Surfaces in ${\mathbb R}^{N^2-1}$ based on harmonic maps $S^2\rightarrow CP^{N-1}$. Journal of Mathematical Physics 48: 113520. Lukierski, J., Stichel, P.C. & Zakrzewski, W.J. (2006). Exotic Galilean conformal symmetry and its dynamical realisations. Physics Letters A A357: 1-5. Brizhik, L., Eremko, A., Piette, B. & Zakrzewski, W.J. (2006). Charge and energy transfer by solitons in low-dimensional nanosystems with helical structure. Chemical Physics 324(1): 259-266. Yu. Gaididei,, P.L. Christiansen, & W.J. Zakrzewski (2006). Conformational transformations induced by the charge-curvature interaction. Physical Review E E74: 021914. Brizhik, L., Eremko, A., Piette, B. & Zakrzewski, W.J. (2006). Electron self-trapping on a nano-circle. Physica D: Nonlinear Phenomena 218(1): 36-55. 
Lukierski, J., Prochnicka, I., Stichel, P.C. & Zakrzewski, W.J. (2006). Galilean exotic planar supersymmetries and nonrelativistic wave equations. Physics Letters B B 639: 383-396. A.M. Grundland,, A. Strasburger, & W.J. Zakrzewski (2006). Surfaces immersed in $su{N+1}$ Lie algebras obtained from the $C P^{N}$ sigma models. Journal of Physics A 39: 9187-9213. Hussin, V. & Zakrzewski, W.J. (2006). Susy $CP\sp{N-1}$ model and surfaces in ${R}^{N^2-1}$. Journal of Physics A 39: 14231-14240. B. Hartmann & W.J. Zakrzewski (2006). The transition of 2 dimensional solitons to 1 dimensional on a hexagonal lattice. Journal of Nonlinear Mathematical Physics 13: 111-116. Walet, N. & Zakrzewski, W.J. (2005). A simple model of the charge transfer in DNA-like substances. Nonlinearity 18(6): 2615-2636. W.J. Zakrzewski (2005). Laplacians on Lattices. Journal of Nonlinear Mathematical Physics 12: 530-538. Amundsen, D., Cova, R.J. & Zakrzewski, W.J. (2005). Modified ${\pi\over N}$ Scattering on $T_2$. WSEAS Transactions on Mathematics 3: 304-311 Piette, B. & Ward, R.S. (2005). Planar Skyrmions: vibrational modes and dynamics. Physica D: Nonlinear Phenomena 201(1-2): 45-55. R.J. Cova & W.J. Zakrzewski (2005). Scattering of periodic solitons. Revista Mexicana de F�sica 50: 527-535. Piette, B., Zakrzewski, W.J. & Brand, J. (2005). Scattering of Topological Solitons on Holes and Barriers. Journal of Physics A: Mathematical and General 38(38): 10403-10412. B. Hartmann & W.J. Zakrzewski (2005). Solitons and Deformed Lattices. Journal of Nonlinear Mathematical Physics 12: 88-104. Brizhik, L. Eremko, A. Piette, B. & Zakrzewski, W.J. (2004). Solitons in alpha-helical proteins. Physical Review E E(70): 031914. P.C. Stichel & W.J. Zakrzewski (2003). A new type of Conformal Dynamics. Annals of Physics 310: 158-180. A.M. Grundland & W.J. Zakrzewski (2003). CP^{N-1} Harmonic Maps and the Weierstrass Problem. Journal of Mathematical Physics 44(8): 3370-3382. Hartmann, B. & Zakrzewski, W.J. (2003). Electrons on Hexagonal Lattices and Applications to Nanotubes. Physical Review B 68(18): 184302. Lukierski, J, Stichel, P & Zakrzewski, W.J. (2003). Noncommutative planar particle dynamics with gauge interactions. Annals of Physics 306(1): 78-95. Brizhik, L., Eremko, A., Piette, B. & Zakrzewski, W.J. (2003). Spontaneous Localisation of Electrons in Two-dimensional Lattices within the Adiabatic Approximation. Journal of Mathematical Physics (44): 3689-3697. Brizhik, L., Eremko, A., Piette, B. & Zakrzewski, W.J. (2003). Spontaneous Localization of Electrons in Lattices with Non-Local Interactions. Physical Review B B(68): 104301. Brizhik, L., Eremko, A., Piette, B. & Zakrzewski, W.J. (2003). Static solutions of a D-dimensional Modified Nonlinear Schroedinger Equation. Nonlinearity 16(4): 1481-1497. Ioannidou, T.A., Kopeliovich, V.B. & Zakrzewski, W.J. (2002). Approximate Analytical Solutions of the Baby Skyrme Model. Journal of Experimental and Theoretical Physics (JETP) / Zhurnal Eksperimental'noi i Teoreticheskoi 95: 572-580. Chu, C.S., Lukierski, J. & Zakrzewski, W.J. (2002). Hermitian Analyticity, IR/UV Mixing and Unitarity of Noncommutative Field Theories. Nuclear Physics B 632(1-3): 219-239. Wospakrik,H. Zakrzewski & W.J. (2001). Alternative SU(N) Skyrme Models and Their Solutions. Journal of Mathematical Physics 42: 1066-1084. Brizhik, L Piette, B. & Zakrzewski, W.J. (2001). Electron Self-Trapping in Anisotropic Two Dimensional Lattices. Ukr. Fiz. Journal (46): 503-511. Lukierski,J. Stichel, P.C. Zakrzewski & W.J. (2001). 
From Gauging Nonrelativistic Translations to N body Dynamics. Annals of Physics 288: 164-198. Stichel, P.C. & Zakrzewski, W.J. (2001). Possible confinement mechanisms for nonrelativistic particles on a line. Modern Physics Letters A 16: 1919-1932. Ioannidou, T. Piette, B. , Sutcliffe, P. & Zakrzewski, W.J. (2001). Skyrmions and Rational Maps. Nonlinearity (14): C1-C5. Brizhik, L., Eremko, A. & Piette, B.M.A.G. (2001). Solitonic Electron States in a Discrete Two-Dimensional Lattice. Physica D: Nonlinear Phenomena 2802: 1-20. Brizhik, L., Eremko, A., Piette, B.M.A.G. & Zakrzewski, W.J. (2001). Spontaneously Localized Electron States in a Discrete Anisotropic Two-Dimensional Lattice. Physica D: Nonlinear Phenomena D(159): 71-90. Eslami,P. Sarbishei, M. Zakrzewski & W.J. (2000). Baby Skyrme Models for a Class of Potentials. Nonlinearity 13: 1867-1881. Zakrzewski, W., Piette, B.M.A.G. & Kudryavtsev, A. (2000). Interactions of Skyrmions with Domain Walls. Physical Review D 61: 025016. Kopeliovich, V.B., Stern, B.E. & Zakrzewski, W.J. (2000). Skyrmions from SU(3) Harmonic Maps and Their Quantization. Physics Letters B 492(1-2): 39-46. Lukierski, J., Stichel P.C. & Zakrzewski, W.J. (2000). Translational Chern–Simons action and new planar particle dynamics. Physics Letters B 484(3-4): 315-322. Zakrzewski, W.J., Lukierski, J. & Stichel, P.C. (1999). (2 + 1)-Dimensional Models with a Chern-Simons-Like Term and Noncommutative Geometry. Reports on Mathematical Physics 215-229. Zakrzewski, W. & Dziarmaga, J. (1999). Diffusion of overdamped classical solitons. Physics Letters A 251: 193-198. Zakrzewski, W.J. & Kopeliovich, V.B. (1999). Flavoured Multiskyrmions,. JETP Letters 721-726. Zakrzewski, W.J. & Dziarmaga, J. (1999). Noise driven diffusion of solitons. Physics Letters A 193-198. Ioannidou, T., Piette, B.M.A.G. & Zakrzewski, W.J. (1999). Spherically Symmetric Solutions of the SU(N) Skyrme Models. Journal of Mathematical Physics 40: 6223-6233. Ioannidou, T., Piette, B.M.A.G. & Zakrzewski, W.J. (1999). SU(N) Skyrmions and Harmonic Maps. Journal of Mathematical Physics (40): 6353-6365. Zakrzewski, W.J. & Ioannidou, T. (1998). A note on Ward's chiral model. Physics Letters A 242: 233-238. Burzlaff, J. & Zakrzewski, W.J. (1998). CP2 soliton scattering: the collective coordinate approximation. Nonlinearity 11(5): 1311-1320. Zakrzewski, W.J. & Ioannidou, T. (1998). Lagrangian formulation of the general modified chiral model. Physics Letters A 303-306. Piette, B.M.A.G. & Zakrzewski, W.J. (1998). Localized solutions in a 2 dimensional landau-lifshitz model,. Physica D: Nonlinear Phenomena D(119): 314-326. Piette, B.M.A.G., Kudryavtsev, A. & Zakrzewski, W.J. (1998). Mesons, baryons and waves in the baby skyrmion model,. European Physical Journal C: Particles and Fields C(1): 333-341. Piette, B.M.A.G. & Zakrzewski, W.J. (1998). Metastable stationary solutions of the radial d dimensional sine-Gordon model. Nonlinearity (11): 1103-1110. Piette, B.M.A.G. & Zakrzewski, W.J. (1998). Numerical integration of (2+1) dimensional PDEs for valued functions,. Journal of Computational Physics (145): 359-381. Piette, B.M.A.G., Kudryavtsev, A. & Zakrzewski, W.J. (1998). Skyrmions and domain walls in (2+1) dimensions. Nonlinearity (11): 783-795. Piette, B.M.A.G., Bogolubsky, I.L. & Zakrzewski, W.J. (1998). Solitons of a general gauged S^2 model with a mass term,. Physics Letters B B(432): 151-158. Zakrzewski, W.J., Thomova, Z. & Winternitz, P. (1998). Solutions of (2+1) dimensional spin systems. 
Journal of Mathematical Physics 39: 3927-3944. Zakrzewski, W.J. & Ioannidou, T. (1998). Solutions of the modified chiral model in (2+1) dimensions. Journal of Mathematical Physics 39: 2693-2701. Zakrzewski, W.J. & Dziarmaga, J. (1998). What happens with an initially kicked soliton?. Physics Letters A 242: 227-232. Zakrzewski, W.J, Lukierski, J. & Stichel, P. (1997). A Galilean invariant (2+1) dimensional model with a Chern-Simmons like term. Annals of Physics 260: 224-249. Zakrzewski, W.J, Rutenberg, A. & Zapotocky, M. (1997). Dynamical multiscaling in quenched skyrme systems. Europhysics Letters 39: 49-54. Leznov, A.N., Piette, B.M.A.G. & Zakrzewski, W.J (1997). On the integrability of pure skyrme models in 2 dimensions. Journal of Mathematical Physics (38): 3007-3011. Zakrzewski, W.J. & Cova, R.J (1997). Soliton scattering in the model on a torus. Nonlinearity 10: 1305-1317. Zakrzewski, W.J. & Burzlaff, J. (1996). CP2 Soliton Scattering: Simulations and Mathematical Underpinning. Nonlinearity 9: 1317-1324. Zakrzewski, W.J., Belov, N.A. & Leznov, A.N. (1996). Discrete Symmetry and its Use to Find Mulitsoliton Solutions of the Equations of Anisotropic Heisenberg Ferromagnets. Journal of Nonlinear Mathematical Physics 3: 319-329. Zakrzewski, W.J. & Papanicolaou, N. (1996). Dynamics of Magnetic Bubbles in a Skyrme Model. Physics Letters A 210: 328-336. Zakrzewski, W.J., Belov, N.A. & Leznov, A.N. (1996). Generalization of the Toda Chain System to the Elliptic Curve Case. Letters in Mathematical Physics 36: 27-34. Zakrzewski, W.J., Grundland, A.M. & Winternitz, P. (1996). On the Solutions of the CP1 Model in (2+1) Dimensions. Journal of Mathematical Physics 37: 1501-1520. Piette, B.M.A.G. & Zakrzewski, W.J. (1996). Shrinking of Solitons in the 2+1 Dimensional S2 Sigma Model. Nonlinearity (9): 897-910. Zakrzewski, W.J. (1995). A modified discrete Sine-Gordon model. Nonlinearity 8: 517-540. Zakrzewski, W.J., Lukierski, J. & Ruegg, H. (1995). Classical and Quantum Mechanics of Free -relativistic Particles. Annals of Physics 243: 90-116. Zakrzewski, W.J., Leznov, A.N. & Mukhtarov, M.A. (1995). Discrete Symmetry and Solution of the Principal Chiral Field Problem. Turkish Journal of Physics 19: 416-420. Piette, B.M.A.G., Schroers, B.J. & Zakrzewski, W.J. (1995). Dynamics of Baby Skyrmions. Nuclear Physics B B(439): 205-238. Zakrzewski, W.J. & Papanicolaou (1995). Dynamics of Interacting Magnetic Vortices in a Model Landau-Lifshitz Equation. Physica D: Nonlinear Phenomena 80: 225-245. Zakrzewski, W.J. & M. Zapotocky (1995). Kinetics of phase ordering with topological textures. Physical Review E 51: 189-191. Piette, B.M.A.G., Schroers, B.J. & Zakrzewski, W.J. (1995). Multisolitons in a two-dimensional Skyrme Model. Zeitschrift f�r Physik C Particles and Fields C(65): 165-174. Piette, B.M.A.G., Tchrakian, D.H. & Zakrzewski, W.J. (1994). Chern-Simons solitons in a model. Physics Letters B B339: 95-100. Zakrzewski, W.J., Below, N.A. & Leznov, A.N. (1994). On the solutions of the anisotropic Heisenberg equation. Journal of Physics A: Mathematical and Theoretical 27(16): 5607-5621. Piette, B.M.A.G. & Zakrzewski, W.J. (1994). Some aspects of the scattering of skyrmions in (2+1) dimensions. Nonlinearity (7): 231-244. Authored book (Published). JHEP. Chapter in book Sutcliffe, P. & Zakrzewski, W.J. (2001). Skyrmions from Harmonic Maps. In Integrable hierarchies and modern physical theories. Aratyn, Henrik & Sorin, Alexander S. Dordrecht: Kluwer Academic Publishers. 215-241. Piette, B.M.A.G. 
& Zakrzewski, W.J. (1996). Some Aspects of Soliton Unwindings. In From Field Theory to Quantum Groups: birthday volume dedicated to Jerzy Lukierski. Jancewicz, Bernard & Sobczyk, Jan Singapore: World Scientific. 275-285. Zakrzewski, W.J., Lukierski, J. & Ruegg, H. (1995). Classical and Quantum Mechanics of Free κ-Relativistic Systems. In Quantum groups formalism and applications XXX Karpacz Winter School of Theoretical Physics, Karpacz, Poland, 14-26 February 1994. Lukierski, J., Popowicz, Z. & Sobczyk, J. Warsaw: Polish Scientific Publishers. 539-554. Brizhik, L., Eremko, A. Piette, B. & Zakrzewski, W.J. (2010), Ratchet effect of Davydov's solitons in nonlinear low-dimensional nanosystems, International Journal of Quantum Chemistry 110: Molecular Self-Organization in Micro-, Nano-, and Macro-Dimensions: From Molecules to Water, to Nanoparticles, DNA and Proteins". Kiev, Wiley, Kiev, 25-37. Brizhik, L., Eremko, A., Piette, B. & Zakrzewski, W.J. (2009), Davydov's solitons in zigzag carbon nanotubes, International Journal of Quantum Chemistry 110: Molecular Self-Organization in Micro-, Nano-, and Macro-Dimensions: From Molecules to Water, to Nanoparticles, DNA and Proteins. Wiley, 11-24. Brizhik, L., Eremko, A., Ferreira, L.A., Piette, B. & Zakrzewski, W.J. (2009), Some Properties of Solitons, in Russo, Nino, Antonchenko, V. IA. & Kryachko, Eugene S. eds, NATO Science for Peace and Security Series A: Chemistry and Biology SelfOrganization of Molecular Systems: NATO Advanced Research Workshop on Molecular Self-Organization. Kiev, Springer, Dordrecht, 103-121. Ferreira, L.A., Piette, B. & Zakrzewski, W.J. (2008), Dynamics of the topological structures in inhomogeneous media, Journal of Physics: Conference Series 128: The 5th International Symposium on Quantum Theory and Symmetries. Valladolid, Spain, IOP Publishing, 012027. Zakrzewski, W.J. & Cova, R.J. (1995), Skyrmions in (2+1) Dimensions, in Barut, A. O., Feranchuk, I. D., Shnir, Ya. M. & Tomil'chik, L. M. eds, International Workshop on Quantum Systems: New Trends and Methods. Minsk, World Scientific, Singapore, 84-88. Piette, B.M.A.G. & Zakrzewski, W.J. (1994), General Structures in (2+1) Dimensional Models, in Spatschek, K.H. & Mertens, F.G. eds, Nonlinear Coherent Structures in Physics and Biology Plenum Press, 283-286. Brizhik, L. Eremko, A. Piette, B. & Zakrzewski, W. (2008). Effects of Periodic electromagnetic Field on Charge Transport in Macromolecules. Frohlich Symposium, Biophysical Aspects of Cancer: Electromagnetic Mechanisms. Piette, B. & Zakrzewski, W.J. (2008). Scattering of sine-Gordon Kinks and Breathers on a Finite Width Well. Dynamic Systems and Applications, Atlanta, Georgia, USA. Piette, B. & Zakrzewski, W.J. (2008). Some Aspects of Dynamics of Topological Solitons. 22nd Max Born Symposium,, Wroclaw. Kopeliovich, V.B., Piette, B. & Zakrzewski, W.J. (2006). Mass Terms in the Skyrme Model. Quark 2006, St' Petersbourg, Russia. Piette, B.M.A.G. & Zakrzewski, W.J. (2000). Nontopological structures in the baby-Skyrme model. Solitons, Properties, Dynamics, Interactions and Applications,, Springer. Piette, B.M.A.G. & Zakrzewski, W.J. (1999). Skyrmions and Domain Walls. Properties, Dynamics, Interactions and Applications,, Springer. Ioannidou, T., Piette, B.M.A.G. & Zakrzewski, W.J. (1999). SU(N) Skyrmions and two dimensional CPN Rational Maps. New symmetries and integrable models, Karpacz. Ioannidou, T., Piette, B.M.A.G. & Zakrzewski, W.J. (1999). Three Dimensional Skyrmions and Harmonic Maps. Halifax. Piette, B.M.A.G. 
& Zakrzewski, W.J (1997). Soliton-like structures in two dimensions and their properties. Piette, B.M.A.G. & Zakrzewski, W.J. (1995). Scattering of extended structures in (2+1) dimensional models,. World Scientific. Newspaper/Magazine Article Delisle, L., Hussin, V. & Zakrzewski, W.J. (2013). Constant curvature solutions of Grassmannian sigma models: (1) Holomorphic solutions. Journal of Geometry and Physics 66: 24-36. Stichel, P.C. & Zakrzewski, W.J. (2013). Nonstandard approach to gravity for the dark sector of the Universe. Entropy 15(2): 559-605. Ferreira, L.A., Luchini, G. & Zakrzewski, W.J. (2013). The concept of quasi-integrability. AIP Conference Proceedings 1562: 43. Adam, C., Sanchez-Guillen, J., Wereszczynski, A. & Zakrzewski, W.J. (2013). Topological duality between vortices and planar skyrmions in BPS theories with APD symmetries. P D 87: 027703. Ferreira, L.A. & Zakrzewski, W.J. (2012). Attempts to define quasi-integrability. IJGMMP 6: 1261004. Stichel, P.C. & Zakrzewski, W.J. (2012). Darkon fluid - a model for the dark sector of the Universe? IJGMMP 9: 1261014. Ferreira, L.A., Luchini, G & Zakrzewski, W.J. (2012). The concept of quasi-integrability for modified non-linear Schr\"odinger models. JHEP 09: 103. Ferreira, L.A., Klimas, P. & Zakrzewski, W.J. (2011). Properties of some (3+1) dimensional vortex solutions of the $CP^N$ model. Phys. Rev D 84: 085022. Ferreira, L.J. & Zakrzewski, W.J. (2011). Some comments on quasi-integrability. Reports Math. Physics 67: 197. Ferreira, L.A., Klimas, P. & Zakrzewski, W.J. (2011). Some properties of (3+1) dimensional vortex solutions in the extended $CP^N$ Skyrme Faddeev model. JHEP 1112: 098. Al-Alawi, J.H. & Zakrzewski, W.J. (2009). Q-ball scattering on barriers and holes in 1 and 2 Spatial dimensions. Journal of Physics A 42: 245201. Brizhik, L.S. Eremko, A.A. Piette, B.M.A.G. & Zakrzewski, W.J. (2008). Ratchet behaviour of polarons in molecular chains. Journal of Physics: Condensed Matter 20(25): 255242. Show all publications For current UG students For current PG students Staff Gallery Applied & Computational Mathematicians (Analysis & Partial Differential Equations) Applied & Computational Mathematicians (Magneto-hydrodynamics) Biomathematicians Mathematical & Theoretical Particle Physicists Probabilists Pure Mathematicians (Algebra & Number Theory) Pure Mathematicians (Geometry) Pure Mathematicians (Topology) Statisticians Academic Recruitment Updated: 10th April 2019
Search Results: 1 - 10 of 181 matches for " Fiori " Asymmetric Variate Generation via a Parameterless Dual Neural Learning Algorithm Simone Fiori Computational Intelligence and Neuroscience , 2008, DOI: 10.1155/2008/426080 Abstract: In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable non-linear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The aim of the present manuscript is to improve the above-mentioned random number generation method by changing the learning principle, while retaining the efficient LUT-based implementation. The new method proposed here proves easier to implement and relaxes some previous limitations. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling Computational Intelligence and Neuroscience , 2007, DOI: 10.1155/2007/71859 Abstract: Bivariate statistical modeling from incomplete data is a useful statistical tool that allows to discover the model underlying two data sets when the data in the two sets do not correspond in size nor in ordering. Such situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. El problema del objeto del contrato en la tradición civil Roberto Fiori Revista de Derecho Privado , 2007, A Note on a Riemann-Hurwitz Type Theorem Andrew Fiori Mathematics , 2015, Abstract: We prove an analogue of the Riemann-Hurwitz theorem for computing Euler characteristics of pullbacks of coherent sheaves through finite maps of smooth projective varieties, subject only to the condition that the irreducible components of the branch and ramification locus have simple normal crossings. Classification of Certain Subgroups of G2 Abstract: We give a concrete characterization of the rational conjugacy classes of maximal tori in groups of type G2, focusing on the case of number fields and p-adic fields. In the same context we characterize the rational conjugacy classes of A2 subgroups of G2. Having obtained the concrete characterization, we then relate it to the more abstract characterization which can be given in terms of Galois cohomology. We note that these results on A2 subgroups were simultaneously and independently developed in the work of Hooda whereas the results on tori were simultaneously and independently developed in the work of Beli-Gille-Lee. On The $j$-Invariants of CM-Elliptic Curves Defined Over $\mathbb{Z}_p$ Abstract: We characterize the possible reductions of $j$-invariants of elliptic curves which admit complex multiplication by an order $\mathcal{O}$ where the curve itself is defined over $\mathbb{Z}_p$. 
In particular, we show that the distribution of these $j$-invariants depends on which primes divide the discriminant and conductor of the order. Least-Squares on the Real Symplectic Group Computer Science , 2010, Abstract: The present paper discusses the problem of least-squares over the real symplectic group of matrices Sp(2n,R)$. The least-squares problem may be extended from flat spaces to curved spaces by the notion of geodesic distance. The resulting non-linear minimization problem on manifold may be tackled by means of a gradient-descent algorithm tailored to the geometry of the space at hand. In turn, gradient steepest descent on manifold may be implemented through a geodesic-based stepping method. As the space Sp(2n,R) is a non-compact Lie group, it is convenient to endow it with a pseudo-Riemannian geometry. Indeed, a pseudo-Riemannian metric allows the computation of geodesic arcs and geodesic distances in closed form on Sp(2n,R). Visualization of Manifold-Valued Elements by Multidimensional Scaling Statistics , 2010, Abstract: The present contribution suggests the use of a multidimensional scaling (MDS) algorithm as a visualization tool for manifold-valued elements. A visualization tool of this kind is useful in signal processing and machine learning whenever learning/adaptation algorithms insist on high-dimensional parameter manifolds. Forma, valor e renda na arquitetura contemporanea Arantes, Pedro Fiori; ARS (S?o Paulo) , 2010, DOI: 10.1590/S1678-53202010000200007 Abstract: contemporary architecture is dangerously enmeshed with the entertainment industry and the field of advertising. this meshing has pushed architectural form to the limits of materiality. architecture today searches for maximum informational rent, a process typical of global product branding; through this process, established building and production principles are subverted by a play of volumes and effects beyond any rule or limitation. relying on digital design technologies and the reorganization of the building site, this new fetishism of form, analogous to the autonomization of power and abstract wealth in contemporary capitalism, defines the new condition of cutting-edge architecture. Sobre o poder global Fiori, José Luís; Novos Estudos - CEBRAP , 2005, DOI: 10.1590/S0101-33002005000300005 Abstract: the article discusses the possibilities of "global governance". taking the notions of global "hegemony" and "governance" as a guideline, the author examines the constitution and expansion of hegemonic powers since the 17th century. further on, he puts into question the notion of "cosmopolitanism" proposed by kant and discusses the actual possibility of a global government.
Non-stationary ETAS to model earthquake occurrences affected by episodic aseismic transients Sasi Kattamanchi ORCID: orcid.org/0000-0003-1249-74401,2, Ram Krishna Tiwari1,2 & Durbha Sai Ramesh3 We present a non-stationary epidemic-type aftershock sequence (ETAS) model in which the usual assumption of stationary background rate is relaxed. Such a model could be used for modeling seismic sequences affected by aseismic transients such as fluid/magma intrusion, slow slip earthquakes (SSEs), etc. The non-stationary background rate is expressed as a linear combination of B-splines, and a method is proposed that allows for simultaneous estimation of background rate as well as other ETAS model parameters. We also present an extension to this non-stationary ETAS model where an adaptive roughness penalty function is used and consequently provides better estimates of rapidly varying background rate functions. The performance of the proposed methods is demonstrated on synthetic catalogs and an application to detect earthquake swarms (possibly associated with SSEs) in Hikurangi margin (North Island, New Zealand) is presented. Earthquakes are generated by a complex system—the Earth. Lithospheric plates gliding past one another causes stress buildup in the crustal rocks which is released in short, sudden bursts causing earthquakes. While this plate motion is the major source of earthquake activity, several other factors like the heterogeneity of the earth's crust, the local stress conditions in the seismogenic area, fluid content, the disposition of existing faults and their stress histories, etc., also determine the size, location and time of occurrence of an earthquake. Since it is difficult to represent such a complex system physically, stochastic models are increasingly being used to model earthquake occurrences (Console et al. 2010; Helmstetter and Sornette 2002; Kagan and Knopoff 1981; Marsan and Lengline 2008; Ogata 1988, 1999). Well-established empirical laws like the Gutenberg-Richter law and the Omori's law facilitate the formulation of stochastic models of earthquake occurrences such as the epidemic-type aftershock sequence (ETAS) model (Console 2003; Helmstetter 2003; Ogata 1988, 1998). The ETAS model (Ogata 1988, 1998) is currently the most popular model to describe seismicity in a region and to test various hypotheses related to it. It is based on the premise that every earthquake has a magnitude-dependent ability to trigger aftershocks. The model considers that the earthquake sequence comprises aftershocks (events triggered by other earthquakes) and background earthquakes (events that occur independent of other earthquakes). The aftershocks are triggered because of internal stress adjustments in the seismogenic system initiated by the occurrence of an earthquake, while the background earthquakes are generally caused by forces related to plate tectonics. In addition, aseismic transient forces related to fluid/magma intrusion, slow slip etc., can also trigger (swarms of) earthquakes. Because such earthquakes do not follow typical mainshock–aftershock patterns, they are included in the model as background events. To analyze earthquake sequences that include such swarms which are short-lived, the usual assumption of stationary background activity needs to be relaxed. This leads to an ETAS model with non-stationary background rate, referred to as non-stationary ETAS model in this study. 
The traditional epidemic-type aftershock sequence (ETAS) model (Ogata 1988, 1998) assumes the background activity to be stationary and hence a single (constant) parameter (\(\mu _0\)) is enough to represent it. On the contrary, a non-stationary background rate warrants representation by a continuous function \(\mu (t)\) which needs to be estimated from the observed data. Some previous studies (Hainzl and Ogata 2005; Lombardi et al. 2006, 2010; Marsan et al. 2013a; Reverso et al. 2015) have presented analysis of earthquake occurrences (sequences) using ETAS models with non-stationary background rate. Initial studies such as Hainzl and Ogata (2005) and Lombardi et al. (2006, 2010) approximated the non-stationary background rate by fitting stationary ETAS model to data in moving windows. Later, Marsan et al. (2013a) adapted the iterative procedure of Zhuang et al. (2002) to the case of non-stationary ETAS model. However, these methods do not always produce a smooth solution and often result in an estimate of \(\mu (t)\) that is wiggly, indicating overfit to the data. Marsan et al. (2013) described a procedure based on ETAS model to detect earthquake swarms triggered by aseismic transients. This was further extended in Reverso et al. (2015). However, the results from this method are sensitive to the choice of width of regular grid used to represent background rate. In a different approach, Llenos and McGuire (2011) estimated the background rate by modeling the aftershock activity using ETAS, and subtracting this from the total observed seismicity rate. An assumption of zero background rate is made in estimating the aftershock activity by fitting ETAS model to the complete sequence of earthquakes, and thus the estimate of \(\mu (t)\) obtained by this method could be biased. Recently, Kumazawa and Ogata (2014) adapted the hierarchical modeling approach of Ogata (2004, 2011) to non-stationary ETAS model. In this approach, the background rate function \(\mu (t)\) is expressed as a piece-wise linear function made up of linear pieces between each pair of consecutive earthquake occurrence times. A two-step iterative procedure involving penalized maximum likelihood estimation (penalized MLE) and Type-II maximum likelihood method (Type-II MLE) was proposed for estimation of the piece-wise linear \(\mu (t)\). In their method, however, only \(\mu (t)\) is estimated while employing a priori ETAS model parameters. In their study, the ETAS model parameters were fixed to those obtained by fitting a stationary ETAS model to the earthquake catalog corresponding to a wider region around the study area. Often, such pre-determination of model parameters may not be feasible because the seismicity is localized (e.g., reservoir associated seismicity) or the model parameters are spatially inhomogeneous. Aside, owing to the strong dependence between \(\mu (t)\) and other ETAS model parameters (Harte 2013; Marsan et al. 2013a), input of erroneous model parameters to the algorithm would result in biased estimates of the background rate. Therefore, a method that can simultaneously estimate both, the model parameters and the background rate, becomes desirable. Our study attempts to address this issue. The method proposed in this study is largely based on Kumazawa and Ogata (2014). We express the non-stationary background rate function \(\mu (t)\) as a spline function. 
Instead of using all earthquake occurrence times as knots to represent this spline function, we express it as a linear combination of finite number of basis splines (B-splines), akin to the P-splines method of Eilers and Marx (1996). Additionally, we employ the L-curve procedure (Frasso and Eilers 2015) to choose the optimal smoothing parameter in place of Type-II MLE. The penalized MLE method is then used to simultaneously invert for both the background rate \(\mu (t)\) and other ETAS model parameters related to aftershock activity. Such simultaneous estimation of all unknowns in the model produces more reliable results. Further, except for the method of Kumazawa and Ogata (2014), none of the above cited methods employ an explicit roughness penalty function to constrain the wiggliness of the estimated \(\mu (t)\), and therefore, could lead to estimates that overfit the data. Even the method of Kumazawa and Ogata (2014) uses a global smoothness parameter to weigh the roughness penalty function in the expression for penalized log-likelihood. Thus, these methods are less effective to estimate the background rate function in situations where significant non-uniformity in the roughness of \(\mu (t)\) exists. This situation often arises while analyzing earthquake sequences of long duration that are affected by occasional aseismic transients. To model such sequences, we therefore extend the method of Kumazawa and Ogata (2014) by representing the smoothness parameter as yet another spline function. We explore the performance of these proposed methods on synthetic catalogs simulated using two different types of background rate functions. Further, an application of these methods to earthquake catalog from Gisborne area near Hikurangi subduction zone (New Zealand) is presented. The estimated background rate exhibits a few peaks indicating brief periods of increased seismicity. Examination of continuous global positioning system (cGPS) data recorded at nearby stations suggests that these peaks coincide with periods of reversals in recorded displacements that indicate slow slip. The ETAS model The epidemic-type aftershock sequence (ETAS) model was introduced by Ogata (1988) and, since then, is widely used in the analysis of earthquake sequences. It is a point process model which assumes that the earthquake sequence is made up of aftershocks and background events. Aftershocks include those events that are triggered by other earthquakes while background events are those that occur independent of other earthquakes. So, according to the model, the sequence of observed earthquakes is generated by a non-homogeneous Poisson process with rate \(\lambda (t)\) expressed as $$\begin{aligned} \lambda (t) = \mu (t) + \nu (t) \end{aligned}$$ where \(\mu (t)\) and \(\nu (t)\) are the rates of background and aftershock events, respectively. Aftershock rate can be expressed in parametric form using well-known empirical characteristics of aftershocks—(a) the rate of aftershock activity triggered by an earthquake decays according to the Omori-Utsu law (Utsu et al. 1995) and (b) the total number of aftershocks triggered by an earthquake depends exponentially on its magnitude. Thus, the rate of aftershock activity triggered by an earthquake i occurring at time \(t_i\) with magnitude \(M_i\) is $$\begin{aligned} \xi _i(t) = K e^{\alpha (M_i-M_0)} (t-t_i+c)^{-p} \end{aligned}$$ where \(\{K, \alpha , c, p\}\) are unknown parameters and \(M_0\) is the cutoff magnitude of the catalog. 
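As a concrete illustration of the aftershock kernel in Eq. (2), the following minimal Python sketch evaluates \(\xi_i(t)\) for a single triggering event; the function name and the example parameter values are illustrative only and are not taken from the authors' code.

```python
import numpy as np

def aftershock_kernel(t, t_i, M_i, K, alpha, c, p, M0):
    """Omori-Utsu rate xi_i(t) = K * exp(alpha*(M_i - M0)) * (t - t_i + c)**(-p), Eq. (2);
    the contribution is zero before the triggering event occurs (t <= t_i)."""
    t = np.asarray(t, dtype=float)
    dt = np.clip(t - t_i, 0.0, None)                 # avoid negative bases below
    rate = K * np.exp(alpha * (M_i - M0)) * (dt + c) ** (-p)
    return np.where(t > t_i, rate, 0.0)

# Example: aftershock rate of a magnitude 5.0 event that occurred at t_i = 10 days,
# evaluated shortly after and long after the event (parameter values are illustrative).
print(aftershock_kernel(np.array([10.1, 11.0, 40.0]), t_i=10.0, M_i=5.0,
                        K=0.008, alpha=2.0, c=0.01, p=1.1, M0=2.0))
```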
Note that only the earthquakes with magnitudes above the cutoff magnitude are considered for analysis using the ETAS model. The total aftershock rate at any time t can thus be obtained as $$\begin{aligned} \nu (t) = \sum _{i:t_i<t} K e^{\alpha (M_i-M_0)} (t-t_i+c)^{-p} \end{aligned}$$ Background rate is generally considered to be constant \(\mu (t)=\mu _0\) in time. This is because, for the short time spans of observed catalogs, the effect of long-term plate tectonic forces, which are primarily responsible for background activity, can be considered to be constant. However, when aseismic transient forcings associated with slow slip earthquakes or fluid intrusion, etc., are believed to affect the observed earthquake sequence, a time-varying background rate function \(\mu (t)\) is required for better modeling of data. A non-stationary ETAS model with such a time-varying background rate is considered in this study. We assume that the other model parameters are stationary over the observed time period. Unlike aftershock rate, the background rate function \(\mu (t)\) cannot be expressed in a general parametric form. Various authors have employed different ways to represent \(\mu (t)\) and adopted different procedures for inversion/estimation of the unknowns as enunciated in the earlier section. For example, Kumazawa and Ogata (2014) represented the background rate as piece-wise linear function and performed estimation using an iterative procedure involving penalized maximum likelihood and Type-II maximum likelihood method. On the other hand, Hainzl et al. (2013) and Marsan et al. (2013a) used a different iterative procedure where they (a) begin with a homogeneous ETAS model and estimate its model parameters; (b) compute the probabilities of each event to be a background event (given by \(\mu (t_i)/\lambda (t_i)\)); (c) estimate the background rate by smoothing these probabilities; (d) use this estimated \(\mu (t)\) as the new background rate and re-estimate the other model parameters. The steps (b)–(d) are repeated till convergence to obtain final estimates. In the present study, we express the background rate \(\mu (t)\) as a spline function. Unlike Kumazawa and Ogata (2014) where all the earthquake occurrence times are used as internal knots of the spline function, we employ only a fixed number M of B-spline basis functions as in P-splines methodology (Eilers and Marx 1996). That is, \(\mu (t)\) is represented by $$\begin{aligned} \mu (t) = \sum _{i=1} ^M {\phi _iB_i(t,d,\kappa _t)} \end{aligned}$$ where M is the total number of B-splines, \(\phi _i\) are spline coefficients, \(B_i(t,d,\kappa _t), i=1,2,3,{\ldots }M,\) are the B-spline basis functions of degree d computed over the knot vector \(\kappa _t\). B-spline basis functions for any given knot vector and spline degree can be computed using the deBoor's algorithm (De Boor 1978). The degree and knot vector of B-splines determine the flexibility of the spline function in fitting the observations. A spline function of degree d would have continuous derivatives of up to \(d-1\). So, the larger the degree of spline function, the higher would be the smoothness. On the other hand, the knot vector determines the local flexibility of the spline function. The closer the knots, the more flexible would be the spline function in fitting the observations. In this study, we employ quantile spaced knots (Ruppert et al. 2003). That is, knots are chosen such that any two consecutive knots contain the equal number of events. 
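A minimal sketch of the B-spline representation of \(\mu(t)\) with quantile-spaced knots, Eq. (4), is given below; it relies on scipy.interpolate.BSpline, and the helper names and example values are assumptions rather than the authors' code.

```python
import numpy as np
from scipy.interpolate import BSpline

def quantile_knots(event_times, n_basis, degree, S, T):
    """Knot vector with interior knots at quantiles of the occurrence times, so that
    consecutive knots contain roughly equal numbers of events; boundary knots repeated."""
    n_interior = n_basis - degree - 1
    interior = np.quantile(event_times, np.linspace(0, 1, n_interior + 2)[1:-1])
    return np.r_[[S] * (degree + 1), interior, [T] * (degree + 1)]

def bspline_design(t_eval, knots, degree):
    """Matrix B with B[j, i] = B_i(t_j, d, kappa_t); columns are the basis functions."""
    n_basis = len(knots) - degree - 1
    B = np.empty((len(t_eval), n_basis))
    for i in range(n_basis):
        coef = np.zeros(n_basis)
        coef[i] = 1.0
        B[:, i] = BSpline(knots, coef, degree, extrapolate=False)(t_eval)
    return np.nan_to_num(B)                      # outside the knot span -> 0

# Example: M = 100 linear (degree 1) B-splines on [0, 500] days for a hypothetical catalog
rng = np.random.default_rng(0)
event_times = np.sort(rng.uniform(0, 500, 1000))
knots = quantile_knots(event_times, n_basis=100, degree=1, S=0.0, T=500.0)
B = bspline_design(event_times, knots, degree=1)   # shape (n_events, 100)
mu_at_events = B @ np.ones(100)                    # mu(t_j) for phi_i = 1
```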
This kind of knot placement is better suited for modeling any sudden surges in background activity compared to uniform knot placement. Thus, the non-stationary ETAS model can be described as a non-homogeneous Poisson process with rate $$\begin{aligned} \lambda (t) = \sum _{i=1} ^M {\phi _iB_i(t,d,\kappa _t)} + \sum _{i:t_i<t} K e^{\alpha (M_i-M_0)} (t-t_i+c)^{-p} \end{aligned}$$ The unknowns in this model are \(\Phi = \{\phi _i, i=1,2,3,{\ldots },M\}\) corresponding to background activity, and other model parameters \(\Theta =\{K, \alpha , c, p\}\) related to aftershock activity. Estimating these unknowns and plugging them in the above expression would give us the non-stationary ETAS model that best describes the observed earthquake sequence. In the simpler case of homogeneous ETAS model, where \(\mu (t)=\mu _0\), the unknowns are only \(\{\mu _0, K, \alpha , c, p\}\). They can be estimated using maximum likelihood estimation (MLE) method by maximizing the log-likelihood function \(\log {L}\) (Ogata 1998) $$\begin{aligned} \log {L} = \sum _{\{i:S<t_i<T\}} \log {\lambda (t_i)} -\int _S^T{\lambda (t)}\hbox {d}t \end{aligned}$$ where [S, T] is the time domain containing the observations. However, simple maximum likelihood method does not provide good estimates in case of a non-stationary ETAS model. The MLE estimates would be such that the background rate overfits the data. To avoid this, a roughness penalty has to be applied to constrain the wiggliness of the estimated \(\mu (t)\). Thus, the model unknowns \(\Phi\) and \(\Theta\) are estimated by penalized maximum likelihood estimation (penalized MLE) method. Estimated \(\hat{\Phi }\) and \(\hat{\Theta }\) are those that maximize the penalized log-likelihood objective (Kumazawa and Ogata 2014) $$\begin{aligned} R(\Phi , \Theta , \tau ) = \log {L(\Phi ,\Theta )}-\tau \times Q(\Phi ) \end{aligned}$$ where \(\tau\) is the regularization or smoothing parameter and \(Q(\Phi )\) is the roughness penalty function. The roughness penalty is generally considered in the form of integrated squared mth-order derivative of the desired function $$\begin{aligned} Q(\Phi ) = \int _S ^T[{\mu ^{(m)}(t)}]^2\hbox {d}t \end{aligned}$$ When \(\mu (t)\) is expressed in terms of B-spline basis, the penalty function can be conveniently expressed as $$\begin{aligned} Q(\Phi ) = \Phi ^{\prime }P\Phi \end{aligned}$$ where the penalty matrix P is a symmetric matrix with elements $$\begin{aligned} P_{ij} = \int _S ^T {B_i^{(m)}(t,d,\kappa _t)B_j^{(m)}(t,d,\kappa _t)}\hbox {d}t \end{aligned}$$ The smoothing parameter \(\tau\) in Eq. (7) controls the relative contribution of goodness-of-fit criterion (here log-likelihood) and the roughness penalty function in determining the values of estimated parameters. Large \(\tau\) values in a penalized MLE lead to over smoothed estimates of \(\mu (t)\), while a small τ results in a \(\mu (t)\) which is under smoothed. It is therefore important to employ a \(\tau\) that provides optimal smoothing. Choosing optimal smoothness parameter Consider the penalized log-likelihood objective function given in Eq. (7). The penalty function is used to regularize the inversion and the smoothness parameter \(\tau\) plays the role of a regularization parameter. The purpose of the penalty function is to provide additional constraints to the inversion to impart stability. As the penalty function depends only on \(\Phi\), it thus provides constraints for the parameters in \(\Phi\) alone. 
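The roughness penalty of Eqs. (8)-(10) can be assembled numerically as in the following sketch, which uses simple trapezoidal quadrature; the names are illustrative and this is not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

def penalty_matrix(knots, degree, m, S, T, n_grid=2000):
    """P_ij = int_S^T B_i^(m)(t) B_j^(m)(t) dt, Eq. (10), by trapezoidal quadrature."""
    n_basis = len(knots) - degree - 1
    tg = np.linspace(S, T, n_grid)
    D = np.empty((n_grid, n_basis))
    for i in range(n_basis):
        coef = np.zeros(n_basis)
        coef[i] = 1.0
        D[:, i] = BSpline(knots, coef, degree, extrapolate=False).derivative(m)(tg)
    D = np.nan_to_num(D)
    w = np.full(n_grid, tg[1] - tg[0])           # trapezoidal weights on a uniform grid
    w[0] *= 0.5
    w[-1] *= 0.5
    return (D * w[:, None]).T @ D

def roughness(phi, P):
    """Q(Phi) = Phi' P Phi, Eq. (9)."""
    return float(phi @ P @ phi)
```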
Depending on the choice of the order m of roughness penalty, the penalty matrix P could be rank deficient. For a ridge penalty (\(m=0\)), the penalty matrix would be full rank (\(r= M\)); for a penalty on first-order derivatives (\(m=1\)), the penalty matrix would have a rank \(M-1\), and so on. In general, for a penalty on the mth-order derivative, the penalty matrix would be rank deficient by m and would have a rank of \(r=M-m\). This rank deficiency implies that the penalty matrix cannot simultaneously constrain all the parameters \(\phi _i\) in \(\Phi\); only \(r = M-m\) of them would be constrained. In brief, of the \(M+4\) number of unknown parameters (\(\Phi\) of dimension M and \(\Theta\) of dimension 4), there exist \(M-m\) constrained parameters, say \(\Phi _f=\{\phi _{i}, i=1,2,3,{\ldots }M-m\}\), and \(4+m\) unconstrained parameters \(\Theta \cup \Phi _d\) where \(\Phi _{d}=\{\phi _{i}, i=M-m+1,{\ldots },M\}\). Apart from these, the optimal smoothing parameter \(\tau\) also has to be estimated. Kumazawa et al. (2016) assumed a priori \(\Theta\) and were hence left only with constrained parameters \(\Phi _f\) and the other unknowns \(\eta = \{\Phi _d, \tau \}\). They employed the penalized MLE step to estimate \(\Phi _f\) and the Type-II MLE to estimate \(\eta\). Type-II maximum likelihood estimation Consider the expression for penalized log-likelihood given in Eq. (7). From the Bayesian perspective, applying a roughness penalty to the log-likelihood function is equivalent to putting a prior on the variables. To understand this better, let us exponentiate both sides of Eq. (7) $$\begin{aligned} e^{R(\Phi , \Theta , \tau )} = e^{\log {L(\Phi ,\Theta )}}\times e^{-\tau \times Q(\Phi ,\tau )}. \end{aligned}$$ In the above equation, \(e^{-\tau \times Q(\Phi ,\tau )}\) corresponds to prior, \(e^{\log {L(\Phi ,\Theta )}}\) is the likelihood and \(e^{R(\Phi , \Theta , \tau )}\) is proportional to the posterior. Note that estimating the parameters that maximizes the penalized log-likelihood objective is same as finding the mode of the above posterior. Thus, penalized maximum likelihood estimation is equivalent to maximum a posteriori (MAP) estimation. The prior in expression (11) is improper because of the mentioned rank deficiency of the penalty matrix. However, the penalty matrix has rank \(M-m\) and thus the prior is proper on the \(M-m\) parameters \(\Phi _f\). So, we can normalize this part of the prior as \(e^{-\tau \times Q(\Phi _f)}/\int {e^{-\tau \times Q(\Phi _f)}}\) to obtain a prior, which is a proper probability density function. Using this, the posterior \(T(\Phi _f, \eta )\) can be written as $$\begin{aligned} T(\Phi _f, \eta ) = L(\Phi ,\Theta )\times \frac{ e^{-Q'(\Phi _f, \tau )}}{\int {e^{-Q'(\Phi _f, \tau )}}} \end{aligned}$$ where \(Q'(\Phi _f,\tau )= \tau \times Q(\Phi _f)\) is the roughness penalty scaled by the smoothing parameter \(\tau\). Note that there is no prior on the parameters \(\eta\). They are treated as hyperparameters and are estimated by maximizing the posterior that is marginalized over \(\Phi _{f}\) as $$\begin{aligned} {\Lambda (\eta )} = {\int {T(\Phi _f, \eta )}\hbox {d}\Phi _f} \end{aligned}$$ This procedure of estimating hyperparameters by maximizing the marginalized likelihood is called the Type-II maximum likelihood procedure (Kumazawa and Ogata 2014; Ogata 2011). This procedure is also known as empirical Bayes (Bishop 2006). 
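For concreteness, the following is a compact, unoptimized sketch of the penalized log-likelihood objective of Eqs. (6)-(7), which could be minimized with a generic optimizer to obtain the MAP/penalized-MLE estimates. It assumes the basis matrix B, the basis integrals int_B and the penalty matrix P from the sketches above, and it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_penalized_loglik(params, times, mags, B_events, int_B, P, tau, M0, T):
    """-(log L - tau * Phi' P Phi), Eqs. (6)-(7); O(n^2) aftershock sum, fine for small catalogs."""
    M = B_events.shape[1]
    phi = params[:M]
    K, alpha, c, p = params[M:]
    mu_ev = B_events @ phi                                   # mu(t_j)
    dt = times[:, None] - times[None, :]                     # dt[j, i] = t_j - t_i
    trig = dt > 0
    kern = K * np.exp(alpha * (mags[None, :] - M0)) * (np.where(trig, dt, 0.0) + c) ** (-p)
    nu_ev = np.where(trig, kern, 0.0).sum(axis=1)            # nu(t_j), Eq. (3)
    lam_ev = mu_ev + nu_ev
    if np.any(lam_ev <= 0.0):
        return np.inf                                        # keep the intensity positive
    int_mu = int_B @ phi                                     # integral of mu over [S, T]
    int_nu = np.sum(K * np.exp(alpha * (mags - M0)) *
                    ((T - times + c) ** (1 - p) - c ** (1 - p)) / (1 - p))
    loglik = np.sum(np.log(lam_ev)) - (int_mu + int_nu)
    return -(loglik - tau * phi @ P @ phi)

# A minimal call (x0 stacks initial phi and [K, alpha, c, p]; bounds keep parameters positive):
# res = minimize(neg_penalized_loglik, x0,
#                args=(times, mags, B, int_B, P, tau, M0, T),
#                method="L-BFGS-B", bounds=bounds)
```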
The Akaike Bayesian Information Criterion (ABIC) is equal to \(-\,2 \times \max \log {\Lambda (\eta )} + \dim (\eta )\). Thus, estimation using Type-II MLE is equivalent to choosing a model that minimizes the Akaike Bayesian Information Criterion (ABIC). Computing the marginalized posterior involves integration over \(\Phi _f=\{\phi _{i}, i=1,2,3,{\ldots }M-m\}\) which has large dimensionality. This integration is difficult to compute in practice; so, Laplace approximation is used to approximate the posterior by a Gaussian distribution and thereby simplify this integration. Using Laplace approximation, the logarithm of marginal likelihood can be written as (Ogata 2011) $$\begin{aligned} log{\Lambda (\eta )} = R\left( \hat{\Phi }_f|\eta \right) - \frac{1}{2} \log {\det {H_{R}\left( \hat{\Phi }_f|\eta \right) }} + \frac{1}{2} \log {\det {H_{Q'}\left( \hat{\Phi }_f|\eta \right) }} \end{aligned}$$ where \(H_{R}(\hat{\Phi }_f|\eta )\) and \(H_{Q'}(\hat{\Phi }_f|\eta )\) denote the Hessians of penalized log-likelihood and roughness penalty, respectively, evaluated at \(\hat{\Phi }_f\) corresponding to the peak of penalized log-likelihood function. These \(\hat{\Phi }_f\) are generally unknown; in fact, it is our aim to estimate these. However, both \(\Phi _f\) and \(\eta\) can be estimated by an iterative procedure consisting of two steps: (a) given \(\eta\), \(\Phi _f\) is estimated by penalized MLE and (b) using this estimate \(\hat{\Phi }_f\), \(\eta\) is estimated using Type-II MLE by maximizing the approximated marginal likelihood objective given in Eq. (14). \(\Phi _f\) and \(\eta\) are re-estimated in each iteration, until convergence. This iterative procedure was adopted by Kumazawa and Ogata (2014) to estimate \(\Phi _f\) and \(\eta = \{\Phi _d, \tau \}\) given a priori known \(\Theta\). Even in the case when \(\Theta\) are unknown, this iterative procedure can be used to estimate \(\Phi _f\) (step (a)) and redefined \(\eta = \{\Phi _d, \Theta , \tau \}\) (step (b)) as was done by Ogata (2011) in the context of spatially inhomogeneous ETAS. However, in practice, the results obtained using such an approach remain unsatisfactory, because the strongly dependent parameters \(\Phi\) and \(\Theta\) are estimated in two different steps. Thus, we propose to use the L-curve procedure of Frasso and Eilers (2015) to choose the optimal \(\tau\) and then use penalized MLE to simultaneously estimate all model unknowns \(\Theta\) and \(\Phi\). L-curve method It is well known that a small smoothing parameter \(\tau\) results in a \(\mu (t)\) which is wiggly (under smoothed), while a large \(\tau\) gives an estimate of \(\mu (t)\) that is over smoothed. Therefore, a good strategy is to perform penalized MLE over a range of \(\tau\), from small to large values, and then pick the optimal \(\tau\). When the roughness penalty values corresponding to the estimates of \(\Phi\) and \(\Theta\) at each \(\tau\) are plotted against the respective negative log-likelihood values, an approximate L-shaped curve is obtained. The vertical part of the curve is associated with \(\tau\) values that provide under smoothed solutions and the horizontal part with over smoothed solutions. Hence, the \(\tau\) value corresponding to the corner of the L-curve can be chosen as the optimal smoothness parameter \(\hat{\tau }\) (Frasso and Eilers 2015). The estimates from penalized MLE corresponding to this \(\hat{\tau }\) are the optimal estimates of \(\hat{\Phi }\) and \(\hat{\Theta }\). 
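A simple sketch of the L-curve step follows; the corner is located here with a maximum-distance-from-chord heuristic in log-log coordinates, which is one common choice and only approximates the corner criterion of Frasso and Eilers (2015). Names and the commented usage are illustrative.

```python
import numpy as np

def l_curve_corner(neg_loglik_vals, penalty_vals):
    """Index of the L-curve corner: the point farthest from the straight line (chord)
    joining the two endpoints of the curve, computed in log10-log10 coordinates.
    Assumes both inputs are positive."""
    x = np.log10(np.asarray(neg_loglik_vals, dtype=float))
    y = np.log10(np.asarray(penalty_vals, dtype=float))
    p0 = np.array([x[0], y[0]])
    chord = np.array([x[-1], y[-1]]) - p0
    chord /= np.linalg.norm(chord)
    rel = np.column_stack([x, y]) - p0
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])   # perpendicular distance
    return int(np.argmax(dist))

# taus = 10.0 ** np.arange(-4, 8.5, 0.5)                 # log-spaced grid, as in the paper
# fits = [fit_penalized_mle(tau) for tau in taus]        # hypothetical: returns (negL, Q, phi, theta)
# i_opt = l_curve_corner([f[0] for f in fits], [f[1] for f in fits])
# tau_opt = taus[i_opt]
```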
This procedure is similar to the L-curve method for choosing the optimal regularization parameter in ill-posed inverse problems (Hansen 1999; Sen and Stoffa 2013). Adaptive roughness penalty function The above-described combination of penalized MLE and L-curve method may work well to effectively describe earthquake occurrences in most cases. However, in situations where the background rate function has significant non-uniform roughness over its time domain, use of a global smoothness parameter as indicated in the proposed method could be insufficient to model the earthquake sequence effectively. Quantile knots can take care of such variable smoothness to some extent. But, this is not always sufficient. To obtain better results in such cases, an adaptive penalty function is needed that allows local variations in roughness. In this study, such an adaptive penalty function is obtained by expressing the smoothness parameter as another spline function (Baladandayuthapani et al. 2005) defined over \(M_\tau (<< M)\) B-spline functions as $$\begin{aligned} \tau (t) = \sum _{i=1} ^{M_\tau } {\tau _i}B_i(t,\kappa _\tau ,d_\tau ) \end{aligned}$$ where \(\tau _i\), \(d_\tau\) and \(\kappa _\tau\) are the spline coefficients, degree and sub-knots corresponding to the smoothness parameter function \(\tau (t)\). A small subset of background rate knots (\(\kappa _t\)) are chosen as sub-knots (\(\kappa _\tau\)) for \(\tau\). Thus, the penalized log-likelihood function in Eq. (7) would now be $$\begin{aligned} R(\Phi , \Theta , \tau ) = \log {L(\Phi ,\Theta )}-\int _S ^T\tau (t)[{\mu ^{(m)}(t)}]^2\hbox {d}t \end{aligned}$$ where \(\tau (t)\) is given by Eq. (15). Since there are now more than one smoothness parameters (the spline coefficients \(\tau _i\)), the L-curve methodology cannot be applied. However, the iterative procedure involving penalized MLE and Type-II MLE procedure (Kumazawa and Ogata 2014) described previously can be used to estimate the spline coefficients \(\{\tau _i, i=1,2,{\ldots },M_\tau \}\) by including all of them in the hyper parameters vector \(\eta\). Note that this method involving Type-II MLE gives reliable results only in the case when \(\Phi\) is estimated given known \(\Theta\). Since model parameters are not always known beforehand, estimates are first obtained using the penalized MLE and L-curve approach described above. As we shall see in the section on synthetic tests, these are reasonably close to the true values. Thus, for estimation using the adaptive penalty approach, we first estimate (a) \(\hat{\Theta }_L\), \(\hat{\Phi }_L\) and \(\hat{\tau }_L\) using penalized MLE and L-curve approach. (b) Set \(\Theta =\hat{\Theta }_L\) (c) with \(\Phi = \hat{\Phi }_L\) and \(\tau _i = \hat{\tau }_L\), estimate \(\hat{\tau }(i)\) and \(\hat{\Phi }_A\) using the iterative procedure involving Type-II MLE (d) on convergence of the iterative procedure, use the newly estimated \(\hat{\tau }(t)\) and perform a final penalized MLE step to re-estimate both \(\Phi\) and \(\Theta\). In the case of a rapidly varying background rate function, as demonstrated in the following section, this adaptive procedure provides estimates of background rate that are much better than those obtained using a global smoothness parameter. Synthetic tests In this study, the purpose of testing is to learn if the combination of penalized MLE and the L-curve procedure is capable of determining all the model unknowns with reasonable accuracy. 
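Before turning to the tests, the adaptive penalty of Eqs. (15)-(16) from the preceding subsection can be sketched as follows for a piece-wise constant \(\tau(t)\); the construction and names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.interpolate import BSpline

def adaptive_penalty_matrix(knots, degree, m, tau_breaks, tau_coeffs, S, T, n_grid=2000):
    """P_ij = int_S^T tau(t) B_i^(m)(t) B_j^(m)(t) dt, cf. Eq. (16), with tau(t) piece-wise
    constant on the intervals defined by tau_breaks (len(tau_breaks) = len(tau_coeffs) + 1)."""
    n_basis = len(knots) - degree - 1
    tg = np.linspace(S, T, n_grid)
    tau_t = np.zeros(n_grid)
    for k, (lo, hi) in enumerate(zip(tau_breaks[:-1], tau_breaks[1:])):
        tau_t[(tg >= lo) & (tg < hi)] = tau_coeffs[k]
    tau_t[tg >= tau_breaks[-1]] = tau_coeffs[-1]          # right boundary point
    D = np.empty((n_grid, n_basis))
    for i in range(n_basis):
        coef = np.zeros(n_basis)
        coef[i] = 1.0
        D[:, i] = BSpline(knots, coef, degree, extrapolate=False).derivative(m)(tg)
    D = np.nan_to_num(D)
    w = np.full(n_grid, tg[1] - tg[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return (D * (w * tau_t)[:, None]).T @ D
```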
In addition, it is necessary to check whether all the model unknowns, \(\mu (t)\) and \(\Theta\), are identifiable. That is, can all the unknowns be determined uniquely from a given combination of model and data? Non-identifiability would arise, say, if a certain amount of background activity can be described as aftershock activity and vice versa. It is therefore imperative to determine the existence of non-identifiability and assess its impact on the results. In order to achieve this, we additionally perform estimation where we invert only for \(\mu (t)\) (or equivalently \(\Phi\)) while constraining \(\Theta\) to their true values. A large discrepancy between the background rates estimated by the proposed simultaneous inversion scheme and those obtained by constraining \(\Theta\) is a reasonable indicator of non-identifiability. In such a case, the model parameters \(\Theta\) and \(\Phi\) cannot both be determined from the data and further constraints become necessary to resolve the model parameters. Additionally, results from the proposed methods are compared with those obtained from the Type-II MLE approach described above. The various approaches tested using synthetic catalogs are listed in Table 1 for ease of reference.

Table 1 List of different types of models considered in the current study

Synthetic catalogs are simulated using the non-stationary ETAS model with (a) a Gaussian-type background rate and (b) an Omori's law-type background rate (Kumazawa and Ogata 2013; Marsan et al. 2013a). Both background rate functions have an expected value of 500 over the time period of simulation. For each of these background rate functions, and with a typical set of model parameters \(\Theta _{\mathrm{true}}=\{K=0.008\,\hbox {events/day}, \alpha =2,\, c=0.01\,\hbox {day},\, p=1.1 \}\), 100 synthetic datasets are simulated over a time period of [0, 500] days. The synthetic catalogs are simulated as a branching process (Zhuang and Touati 2015); a minimal sketch of this simulation procedure is given at the end of this paragraph. First, the background events are simulated from the given non-stationary \(\mu (t)\) function by the thinning method. Then, for each background event, aftershock sequences are simulated. The time-sorted sequence of all these events together forms a simulated catalog. Magnitudes of the events are generated to follow the Gutenberg-Richter law with a b-value of one, with values between 2 and 8, using the inverse transform method (Felzer et al. 2002; Zhuang and Touati 2015). The number of earthquakes in each synthetic catalog simulated using the Gaussian-type background rate function ranges from 664 to 3597, with \(93\%\) of the catalogs containing fewer than 1200 events. The number of earthquakes in the catalogs simulated using the Omori's law-type background rate ranges from 651 to 5142, with \(92\%\) of the catalogs containing fewer than 1200 events. The number of background events in the catalogs ranges between [453, 556] and [427, 554] for the Gaussian-type and Omori's law-type synthetic datasets, respectively. For all the non-stationary ETAS models tested in this study, we employ 100 linear B-splines to represent the background rate and a roughness penalty on the first-order derivatives \((m=1)\). For the adaptive penalty method, we express the smoothness parameter \(\tau\) as a piece-wise constant spline function on 10 sub-knots. Wherever necessary, L-curves are computed using a range of log-spaced smoothing parameters \(\{10^{-4}, 10^{-3.5},{\ldots },10^{7.5}, 10^{8}\}\).
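A minimal sketch of such a branching-process simulation (thinning for the background events, Omori-Utsu aftershock cascades, and Gutenberg-Richter magnitudes by inverse transform) follows; the function names and the commented example background rate are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def gr_magnitudes(n, b=1.0, m_min=2.0, m_max=8.0):
    """Gutenberg-Richter magnitudes (b-value b) on [m_min, m_max] by inverse transform."""
    u = rng.uniform(size=n)
    beta = b * np.log(10.0)
    return m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta

def simulate_background(mu_func, T, mu_max):
    """Thinning: non-homogeneous Poisson process with rate mu(t) <= mu_max on [0, T]."""
    n_cand = rng.poisson(mu_max * T)
    t_cand = np.sort(rng.uniform(0, T, n_cand))
    keep = rng.uniform(size=n_cand) < mu_func(t_cand) / mu_max
    return t_cand[keep]

def simulate_aftershocks(t_parent, m_parent, T, K, alpha, c, p, M0):
    """Direct aftershocks of one parent; times drawn from the Omori-Utsu density by inversion."""
    n_bar = K * np.exp(alpha * (m_parent - M0)) * \
            ((T - t_parent + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    n = rng.poisson(n_bar)
    u = rng.uniform(size=n)
    dt = (u * (T - t_parent + c) ** (1 - p) + (1 - u) * c ** (1 - p)) ** (1 / (1 - p)) - c
    return t_parent + dt

def simulate_catalog(mu_func, mu_max, T, K, alpha, c, p, M0=2.0):
    """Branching simulation: background events first, then cascades of aftershocks."""
    times = list(simulate_background(mu_func, T, mu_max))
    mags = list(gr_magnitudes(len(times), m_min=M0))
    queue = list(zip(times, mags))
    while queue:
        t0, m0 = queue.pop()
        for t_new in simulate_aftershocks(t0, m0, T, K, alpha, c, p, M0):
            m_new = gr_magnitudes(1, m_min=M0)[0]
            times.append(t_new)
            mags.append(m_new)
            queue.append((t_new, m_new))
    order = np.argsort(times)
    return np.array(times)[order], np.array(mags)[order]

# Example with an illustrative Gaussian-shaped background rate (expected count ~500 on [0, 500] days):
# mu = lambda t: 500 * np.exp(-0.5 * ((t - 250) / 60) ** 2) / (60 * np.sqrt(2 * np.pi))
# times, mags = simulate_catalog(mu, mu_max=3.4, T=500, K=0.008, alpha=2.0, c=0.01, p=1.1)
```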
Since L-curve procedure involves performing penalized MLE for a large number of \(\tau\) values, we did not estimate L-curves for all the 100 synthetic catalogs as that would be computationally expensive. Because all of 100 catalogs (of each type) are simulated using the same background rate function, optimal \(\tau\) estimated for one typical catalog would be approximately valid for all of them. Thus, we estimated optimal \(\tau\) by computing L-curve only for one catalog of each type (see Figs. 1a, 2a) and used this \(\tau\) value as the approximate optimal \(\tau\) for all the 100 synthetic catalogs of the corresponding type of background rate function. We provide this optimal \(\tau\) value as the initial \(\tau\) value even for the models employing Type-II MLE (NS_TII_MP and NS_TII). Note that, for all the algorithms, we provide \(\Theta =\Theta _{true}\) and \(\phi _i = 1 \forall i\) as the initial values to the algorithm except for the model with adaptive penalty. For the adaptive penalty model, we provide the final estimates of \(\Phi\), \(\Theta\) and \(\tau\) from NS_L_MP model as the initial values. Results for the synthetic datasets simulated with a Gaussian background rate function. The L-curves computed for one typical synthetic dataset along with the point corresponding to the chosen optimal smoothness parameter are shown in (a). The estimated background rate function for all 100 synthetic catalogs for each type of model listed in Table 1 is presented in (b-f). The true background rate function and the 10, 90 percentile bounds of estimates for each model are also shown Results for the synthetic datasets simulated with a Omori's law-type background rate function. The L-curves computed for one typical synthetic dataset along with the point corresponding to the chosen optimal smoothness parameter are shown in (a). The estimated background rate function for all 100 synthetic catalogs for each type of model listed in Table 1 is presented in (b–f). The true background rate function and the 10, 90 percentile bounds of estimates for each model are also shown Figure 1b–f shows the background rates for all the 100 Gaussian-type synthetic catalogs, estimated using the non-stationary ETAS models listed in Table 1. It can be seen that for the simple case of Gaussian background rate, all the models are capable of providing good estimates of \(\mu (t)\). Background rates estimated with models using a priori known model parameters (see Fig. 1c, e) and the models that allow simultaneous estimation along with \(\mu (t)\) (see Fig. 1b, d) are nearly the same. This indicates that no significant non-identifiability exists and thus simultaneous estimation of both the model parameters and the background rate function \(\mu (t)\) is feasible. Note that a few of the estimates obtained via adaptive penalty method (see Fig. 1f) show small block like distortions. This is caused by the usage of piece-wise constant \(\tau\) function; over smoothing seems to occur in some windows. So, the adaptive penalty method is not very helpful in the case where the background rate is not rapidly varying (like the Gaussian background rate function). Box plots of estimated model parameters for the Gaussian synthetic datasets. The box plots of model parameters estimated for the 100 synthetic catalogs using each of the models NS_L_MP, NS_TII_MP and NS_Adapt are shown. 
The true values of the respective parameters are plotted as horizontal lines.

Box plots of estimated model parameters for the Omori's law-type synthetic datasets. The box plots of model parameters estimated for the 100 synthetic catalogs using each of the models NS_L_MP, NS_TII_MP and NS_Adapt are shown. The true values of the respective parameters are plotted as horizontal lines.

In contrast, the results in the case of the Omori's law-type background rate function indicate that the adaptive penalty method provides considerably better estimates of the background rate. Estimates obtained using the method employing the L-curve approach with a global \(\tau\) exhibit undesired wiggles (see Fig. 2b, c), whereas the estimates employing the Type-II MLE approach seem to over smooth (see Fig. 2d, e), especially near the region with the sudden jump. Using an adaptive penalty, however, seems to damp the undesired wiggles (see Fig. 2f) and provide smoother estimates even while preserving the jump. So, the adaptive penalty method is very helpful in this case of the Omori's law-type background rate function, whose roughness is much more rapidly varying than that of the Gaussian background rate function.

Table 2 Estimates of the ETAS model aftershock parameters (\(\hat{\Theta }\)) for the earthquake occurrences in the Gisborne region for the period 2012/01–2015/05

Box plots of the estimated model parameters \(\Theta\) using models NS_L_MP, NS_TII_MP and NS_Adapt are shown in Figs. 3 and 4, respectively, for catalogs simulated using Gaussian-type and Omori's law-type background rates. Overall, both the proposed models NS_L_MP and NS_Adapt provide considerably better estimates than the model NS_TII_MP that employs Type-II MLE. Between these, the NS_Adapt model provides slightly better estimates, as is evident from the closeness of the median values of the estimates to the true values and the lower interquartile ranges of the estimates. The difference is particularly prominent for the Omori's law-type synthetic catalogs. Observe that the estimates of parameter p obtained with the model NS_TII_MP are close to the true value and have a lower interquartile range as well, even while the estimates of the other model parameters are poor. This is possibly due to the inability of the Type-II MLE step to explore values away from the initial value (= true value) provided.

Background rates of each kind of synthetic catalog estimated using different numbers of splines M and \(M_{\tau }\). The estimated background rate functions for all possible combinations of \(M=\{50,100,150,200,250,300\}\) and \(M_{\tau }=\{10,20,30,40,50,60\}\) such that \(M_{\tau }<M\), estimated using the models NS_L_MP and NS_Adapt for a typical Gaussian synthetic catalog, are shown in (a, c), respectively. These plots for a typical Omori's law-type synthetic catalog are presented in (b, d). The true background rate functions (dotted line) are also shown.

Seismicity map of North Island, New Zealand and the target study area. Epicenters of earthquakes with magnitudes \(M\ge 2.5\) in the North Island (New Zealand) region with depths shallower than 65 km for the period 2012/01 to 2017/05, selected from the GeoNet catalog. The contours (in mm/year) show the cumulative slip of all detected SSEs on the Hikurangi subduction interface from 2002 until 2012, taken from Wallace et al. (2012). Only the earthquakes with epicenters in the spatial window shown are considered for analysis in this study.
Also displayed in the figure are the locations of the cGPS stations whose data are examined in the present study to investigate the association of slow slip earthquakes with increased seismicity.

The results reported above were computed using \(M=100\) B-splines to represent the background rate function \(\mu (t)\) and \(M_{\tau } = 10\) B-splines for the smoothness parameter \(\tau (t)\). It is well known that the estimates obtained using the P-splines methodology are nearly independent of the number of splines used, provided they are sufficiently large in number to produce an overfitting estimate in the absence of any roughness penalty (e.g., Baladandayuthapani et al. 2005; Ruppert 2002). To test whether this remains valid for the methods described in the current study, we analyze a typical synthetic catalog of each type, using all possible combinations of \(M=\{ 50,100,150,200,250,300\}\) and \(M_{\tau }=\{10,20,30,40,50,60\}\) such that \(M_{\tau } < M\). The estimates of the background rate function \(\mu (t)\) thus obtained for each type of synthetic catalog using the proposed models NS_L_MP and NS_Adapt are shown in Fig. 5. It can be seen that all these combinations of M and \(M_{\tau }\) yield similar estimates (see Fig. 5; Additional file 1: Tables S1, S2, S3 and S4 in the supporting information). Thus, these results suggest that the estimates obtained from the proposed methods are not very sensitive to the choice of the number of splines used to represent the background rate and the smoothness parameter. Although the use of a large number of splines M and \(M_{\tau }\) may produce better results, the attendant computational cost increases steeply, especially with \(M_{\tau }\). Therefore, the choice of M and \(M_{\tau }\) must be guided by prior knowledge/expectation of the underlying background rate function \(\mu (t)\). In any case, it would be prudent to examine the estimates using a few other values of M and \(M_{\tau }\) to confirm their stability. In the current study, we chose a quantile-based knot vector for the spline representation of \(\mu (t)\), using all the earthquakes in the catalog. Therefore, more knots are inadvertently chosen even in places where the background rate is low but the aftershock activity is high. This can, in some cases, cause a small part of the aftershock activity to be seen as background activity in the estimated model, owing to the greater flexibility accorded by closely spaced knots. This is the cause of the occasional spurious bumps seen in the estimated background rates of the synthetic catalogs. We observed that this issue chiefly arises when the analyzed catalogs contain large aftershock sequences and the background activity is such that a low smoothness parameter is needed. Using a low smoothness parameter \(\tau\) further adds to the flexibility allowed by closely spaced knots and can therefore cause local overfitting, which results in spurious bumps. Adaptive penalties can alleviate this problem to a major extent, but we speculate that devising a better knot selection strategy would further improve the results.

Application to New Zealand data

We examine the GeoNet earthquake catalog of the northern Hikurangi margin (New Zealand) and the cGPS data available for the region. The main objective is to demonstrate that the non-stationary ETAS model developed in this study is capable of identifying the transient increases in seismicity associated with slow slip earthquakes.
Slow slip earthquakes (SSEs) have been observed at many subduction zones around the world. They have been found to be accompanied by tremor in most cases, and together these phenomena have been called Episodic Tremor and Slip (Rogers and Dragert 2003; Schwartz and Rokosky 2007). Some slow slip earthquakes have also been found to trigger earthquake swarms (Delahaye et al. 2009; Hirose et al. 2014). On the other hand, some large earthquakes are also believed to affect slow slip (Wallace et al. 2014; Zigone et al. 2012). It is necessary to understand such interactions between normal earthquakes and slow slip earthquakes in order to better appraise the seismic hazard potential of such regions. At the Hikurangi margin, the Pacific plate is obliquely subducting beneath the Australian plate. Slow slip here occurs at shallow depth, similar to the Boso Peninsula, Japan. At both these subduction zones, slow slip earthquakes have been found to trigger earthquake swarms. Previous studies have identified earthquake swarms triggered by a few slow slip earthquakes at the Hikurangi margin, e.g., the Gisborne 2004 SSE (Delahaye et al. 2009), the Cape Turnagain 2011 SSE (Wallace et al. 2012) and the Puketiti 2010 SSE (Todd and Schwartz 2016). Here, we analyze the observed earthquakes near Gisborne (see Fig. 6), where multiple SSEs have been documented, using the non-stationary ETAS model. From the estimated background rate, we examine whether any anomalous seismicity is associated with slow slip earthquakes in the region.

Results for the GeoNet catalog analyzed using the non-stationary ETAS model. The magnitudes of earthquakes in the catalog examined in the study are plotted versus time in (a). The computed L-curve, along with the point corresponding to the chosen smoothness parameter, is shown in (b). The background rate functions estimated using the models NS_L_MP and NS_Adapt are presented in (c, d), respectively, along with the error bounds computed using the Hessian matrix of \(\Phi\) at the solution.

Association of slow slip earthquakes and the peaks in the estimated background rate. The east component of the displacement recorded by the LEYL, WAHU, MAHI, MAKO and ANAU cGPS stations is shown in (a–e) and compared with the background rate (f) estimated using the non-stationary ETAS model with adaptive roughness penalty, using \(M=150\) and \(M_{\tau }=30\). Note that the peaks in the estimated background rate correspond well with the reversals in the displacement that indicate slow slip earthquakes. Also shown in (g) are the estimates obtained using different values of M and \(M_{\tau }\), similar to Fig. 5.

The location algorithm and the reported magnitudes in the GeoNet catalog changed at the beginning of 2012 (www.geonet.org.nz/data/supplementary/earthquake_location). Thus, for the analysis in this study we consider only the earthquake catalog from January 2012 until May 2017. The magnitude of completeness for the events in the target spatial window and time period is estimated to be M 1.9 using the maximum curvature method (Wiemer 2001); a small sketch of this catalog selection step is given below. Thus, the non-stationary ETAS model is applied to 5007 events with magnitudes \(M \ge 1.9\) (see Fig. 7a) and with depths less than 65 km, whose epicenters are confined to the spatial window shown in Fig. 6. Given the large number of events in the catalog and the frequent occurrence of SSEs that could affect the background seismicity in the region, we chose to use (\(M=\)) 150 linear B-splines to express the background rate function \(\mu (t)\).
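A minimal sketch of the maximum curvature estimate of the completeness magnitude and the subsequent catalog filtering is given here; the variable names and the commented filter are illustrative, not the authors' code.

```python
import numpy as np

def mc_maxc(mags, bin_width=0.1, correction=0.0):
    """Magnitude of completeness by the maximum curvature method: the lower edge of the
    modal bin of the binned frequency-magnitude distribution (an optional +0.2
    correction is often added in practice)."""
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)] + correction

# Hypothetical use on a GeoNet-style catalog already restricted to the spatial window:
# keep = (mags >= mc_maxc(mags)) & (depths < 65.0) & (dates >= np.datetime64("2012-01-01"))
# times, mags = times[keep], mags[keep]
```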
A roughness penalty in terms of first-order derivative \((m=1)\) is used. For modeling with adaptive penalty, the smoothness parameter \(\tau (t)\) is expressed as a piece-wise constant spline made up of 30 B-splines. Spatial locations of earthquakes associated with peaks in the estimated background rate. Epicenters of earthquakes corresponding to each seismicity peak identified in Fig. 8e are plotted. These scatter plots are visually enhanced by adding color based on smoothed histogram as described in Eilers and Goeman (2004). Also plotted, where available, are the slip regions of the associated slow slip earthquakes. Distinct spatial zones where prominently identifiable seismicity clusters seem to occur are shown as colored rectangles (red, blue and green) The computed L-curve is shown in Fig. 7b, where the point corresponding to the chosen optimal smoothness parameters is marked. Estimated background rate function \(\hat{\mu }(t)\), corresponding to this optimal \({\hat{\tau}}\), is wiggly (see Fig. 7c) and thus the adaptive penalty approach as described above is applied. The background rate (shown in Fig. 7d) thus estimated looks reasonably better than the one estimated without adaptive penalty. The estimated model parameters \(\Theta\) corresponding to both the non-stationary ETAS models, with and without adaptive penalty, along with the standard errors are presented in Table 2. Note that the standard errors are computed as square root of diagonal elements of covariance matrix. The covariance matrix is estimated by taking inverse of the Hessian matrix at the solution. For goodness-of-fit tests, see supplementary material. To see if slow slip earthquakes are associated with increased seismicity that manifests as peaks in the estimated background activity, we look at cGPS data recorded at the nearby cGPS stations LEYL, WAHU, MAHI, MAKO and ANAU (see Fig. 8). Slip caused by slow earthquakes is recorded by continuous GPS (cGPS) stations as reversals in the direction of slip over time periods ranging from a few days to years. Comparing the estimated background rate function with the east component of cGPS data recorded at the said cGPS stations, it is apparent that the peaks in estimated background rate are associated with deviations in cGPS east-west displacement time series indicative of slow slip earthquakes (see Fig. 8). To check the sensitivity of the estimated background rate (shown in Fig. 8f) on the number of splines employed, we repeat the analysis described for the synthetic catalogs in the previous section, using different combinations of M and \(M_{\tau }\). The estimates of \(\mu (t)\) thus obtained using the non-stationary ETAS model with adaptive penalty are shown in Fig. 8g and the corresponding model parameters in Table S5. These suggest that the considered combinations of M and \(M_{\tau }\) produced similar estimates and reconfirm the presence of peaks in the background rate observed earlier (see Fig. 8). It is pertinent to note that not all SSEs are associated with peaks in the background rate. For example, SSE in October 2014 do not seem to produce any pronounced increase in seismic activity, although there seems to be a faint peak in \(\mu (t)\) (Fig. 8g) associated with this event. Lack of such recognizable increase in seismicity associated with this particular SSE was also observed by Todd and Schwartz (2016). They explain that as the SSE occurred far away from the shore, the associated increase in seismicity, if any, was not recorded by the onshore network. 
Employing a better catalog with a lower magnitude cutoff could possibly help identify more earthquake swarms associated with SSEs. We plot the epicenters of the earthquakes that occurred during the time windows (shaded regions) corresponding to each of the peaks shown in Fig. 8. These scatter plots of earthquake locations (shown in Fig. 9) are visually enhanced by adding color to the individual plots based on a smoothed histogram, as described in Eilers and Goeman (2004). Such visual enhancement leads to better identification of spatial clusters of earthquakes. In each of the plots (Fig. 9) corresponding to peaks 2, 3, 4, 5, 7, 9, 11 and 12, distinct spatial clustering of earthquakes is visible. Approximate regions of slow slip available for a few SSEs (Koulali et al. 2017; Wallace and Eberhart-Phillips 2013; Wallace et al. 2017) are shown in the corresponding plots. It can be noticed that the seismicity clusters related to peaks 5 and 11 are located near the down dip edges of the corresponding slip patches (Fig. 9e, k). However, the earthquake cluster in Fig. 9k seems to be located within the slip region, while that in Fig. 9e lies even outside the slip region. While the former lends support to the hypothesis that slow slip triggers earthquakes on locked asperities within the slip zone, the latter suggests triggering by increased static stress. Examination of the subplots in Fig. 9 suggests that the distinct spatial clusters of earthquakes associated with SSEs occupy three narrow spatial zones marked by colored rectangles (R, G and B). Spatial zone R (Fig. 9) coincides with a region where slow slip earthquakes have been detected in earlier studies (e.g., Koulali et al. 2017; see also contours in Fig. 6). Hence, it is possible that the earthquake swarms in this region are mainly caused by failure of locked asperities within the slip region. In contrast, earthquake clusters in spatial zone G are located inland, while the slow slip occurrence region is situated almost exclusively offshore (Todd and Schwartz 2016). Thus, the earthquake swarms falling in zone G, if triggered by SSEs, must have been caused by the associated increase in static stress. Previous studies on SSEs near Gisborne, that is, spatial zone B, found that the associated swarms of earthquakes occur close to the down dip edge of the slip area (Bartlow et al. 2014; Delahaye et al. 2009). Thus, these swarms in spatial zone B are most likely triggered by a static stress increase (Delahaye et al. 2009), akin to zone G. Irrespective of the triggering mechanisms, it is important to understand whether these spatial zones are particularly susceptible to SSE-associated seismicity. Detailed modeling of slow slip events and estimation of the associated Coulomb stress changes might shed more light in this direction. In addition, if epicentral locations are also modeled along with the earthquake occurrence times, using a non-stationary spatio-temporal ETAS model, spatial clusters of earthquakes associated with SSEs could be better identified.

In this study, we propose a P-splines-based non-stationary ETAS model and an estimation procedure involving (a) penalized maximum likelihood estimation and (b) the L-curve method for choosing the optimal smoothness parameter. This procedure allows for simultaneous estimation of both the background rate function and the other ETAS model parameters. Such a non-stationary ETAS model is useful in modeling earthquake sequences affected by time-varying processes such as fluid/magma intrusion.
For example, modeling the earthquake sequence associated with a particular swarm would help in understanding the time evolution of that swarm and could provide useful insights into the underlying causative process. We also present a non-stationary ETAS model that employs adaptive roughness penalty function. Such a model provides superior results when the background rate has significant non-uniform smoothness over its domain. This adaptive penalty method is particularly useful when analyzing long duration earthquake catalogs belonging to regions affected by occasional aseismic transients. The performance of both the proposed methods was demonstrated on synthetic datasets. An application to data from Hikurangi margin (New Zealand) is presented where the observed earthquake sequence near Gisborne is analyzed to find instances of increased background rate (earthquake swarms). These episodes of increased seismicity are then compared with cGPS data to better understand their association with slow slip earthquakes. The non-stationary ETAS model and the estimation procedures described in this study allow us to model earthquake activity affected by transient aseismic processes and thus allow us to obtain meaningful insights into these processes. ETAS: epidemic-type aftershock sequence SSE: slow slip earthquakes MLE: maximum a posteriori estimation ABIC: Akaike Bayesian Information Criterion cGPS: continuous global positioning system Baladandayuthapani V, Mallick BK, Carroll RJ (2005) Spatially adaptive Bayesian penalized regression splines (P-splines). J Comput Graph Stat 14(2):378–394. http://www.tandfonline.com/doi/abs/10.1198/106186005X47345 Bartlow NM, Wallace LM, Beavan RJ, Bannister S, Segall P (2014) Time-dependent modeling of slow slip events and associated seismicity and tremor at the Hikurangi subduction zone, New Zealand. J Geophys Res Solid Earth 119(1):734–753. http://onlinelibrary.wiley.com/doi/10.1002/2013JB010609/abstract Bishop CM (2006) Pattern recognition and machine learning. Springer. http://cds.cern.ch/record/998831/files/9780387310732_TOC.pdf Console R (2003) Refining earthquake clustering models. J Geophys Res 108(B10). http://doi.wiley.com/10.1029/2002JB002130 Console R, Jackson DD, Kagan YY (2010) Using the ETAS model for catalog declustering and seismic background assessment. Pure Appl Geophys 167(6–7):819–830. http://link.springer.com/10.1007/s00024-010-0065-5 De Boor C (1978) A practical guide to splines, vol 27. Springer, New York. https://www.researchgate.net/profile/Carl_De_Boor/publication/200744645_A_Practical_Guide_to_Spline/links/02e7e51700ff609454000000.pdf Delahaye E, Townend J, Reyners M, Rogers G (2009) Microseismicity but no tremor accompanying slow slip in the Hikurangi subduction zone, New Zealand. Earth Planet Sci Lett 277(1–2):21–28. http://linkinghub.elsevier.com/retrieve/pii/S0012821X08006419 Eilers PHC, Goeman JJ (2004) Enhancing scatterplots with smoothed densities. Bioinformatics 20(5):623–628. https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btg454 Eilers PHC, Marx BD (1996) Flexible smoothing with B-splines and penalties. Stat Sci 11(2):89–121. http://projecteuclid.org/euclid.ss/1038425655 Felzer KR, Becker TW, Abercrombie RE, Ekström G, Rice JR (2002) Triggering of the 1999 \({\mathit{M}}_{w}\) 7.1 Hector Mine earthquake by aftershocks of the 1992 \({\mathit{M}}_{w}\) 7.3 Landers earthquake: TRIGGERING OF THE HECTOR MINE EARTHQUAKE. J Geophys Res Solid Earth 107(B9):ESE 6-1–ESE 6-13. 
http://doi.wiley.com/10.1029/2001JB000911 Frasso G, Eilers PH (2015) L- and V-curves for optimal smoothing. Stat Model 15(1):91–111. http://smj.sagepub.com/cgi/doi/10.1177/1471082X14549288 Hainzl S, Ogata Y (2005) Detecting fluid signals in seismicity data through statistical earthquake modeling. J Geophys Res 110(B5). http://doi.wiley.com/10.1029/2004JB003247 Hainzl S, Zakharova O, Marsan D (2013) Impact of aseismic transients on the estimation of aftershock productivity parameters. Bull Seismol Soc Am 103(3):1723–1732. http://www.bssaonline.org/content/103/3/1723 Hansen PC (1999) The L-curve and its use in the numerical treatment of inverse problems. IMM, Department of Mathematical Modelling, Technical University of Denmark. http://www.sintef.no/globalassets/upload/ikt/9011/simoslo/vskoler/2005/notes/lcurve.pdf Harte DS (2013) Bias in fitting the ETAS model: a case study based on New Zealand seismicity. Geophys J Int 192(1):390–412. http://gji.oxfordjournals.org/cgi/doi/10.1093/gji/ggs026 Helmstetter A (2003) Is earthquake triggering driven by small earthquakes? Phys Rev Lett 91(5):058,501. http://link.aps.org/doi/10.1103/PhysRevLett.91.058501 Helmstetter A, Sornette D (2002) Subcritical and supercritical regimes in epidemic models of earthquake aftershocks. J Geophys Res Solid Earth 107(B10):ESE 10-1–ESE 10-21. http://onlinelibrary.wiley.com/doi/10.1029/2001JB001580/abstract Hirose H, Matsuzawa T, Kimura T, Kimura H (2014) The Boso slow slip events in 2007 and 2011 as a driving process for the accompanying earthquake swarm. Geophys Res Lett 41(8):2778–2785. http://onlinelibrary.wiley.com/doi/10.1002/2014GL059791/abstract Kagan YY, Knopoff L (1981) Stochastic synthesis of earthquake catalogs. J Geophys Res Solid Earth 86(B4):2853–2862. http://onlinelibrary.wiley.com/doi/10.1029/JB086iB04p02853/abstract Koulali A, McClusky S, Wallace L, Allgeyer S, Tregoning P, D'Anastasio E, Benavente R (2017) Slow slip events and the 2016 te araroa mw 7.1 earthquake interaction: Northern hikurangi subduction, New Zealand. Geophys Res Lett 44(16):8336–8344. https://doi.org/10.1002/2017GL074776, 2017GL074776 Kumazawa T, Ogata Y (2013) Quantitative description of induced seismic activity before and after the 2011 Tohoku-Oki earthquake by nonstationary ETAS models: NONSTATIONARY ETAS MODEL. J Geophys Res Solid Earth 118(12):6165–6182. http://doi.wiley.com/10.1002/2013JB010259 Kumazawa T, Ogata Y (2014) Nonstationary ETAS models for nonstandard earthquakes. Ann Appl Stat 8(3):1825–1852. http://projecteuclid.org/euclid.aoas/1414091236 Kumazawa T, Ogata Y, Kimura K, Maeda K, Kobayashi A (2016) Background rates of swarm earthquakes that are synchronized with volumetric strain changes. Earth Planet Sci Lett 442:51–60. http://linkinghub.elsevier.com/retrieve/pii/S0012821X16300747 Llenos AL, McGuire JJ (2011) Detecting aseismic strain transients from seismicity data. J Geophys Res Solid Earth 116(B6):B06,305. http://onlinelibrary.wiley.com/doi/10.1029/2010JB007537/abstract Lombardi AM, Marzocchi W, Selva J (2006) Exploring the evolution of a volcanic seismic swarm: the case of the 2000 Izu Islands swarm. Geophys Res Lett 33(7):L07,310. http://onlinelibrary.wiley.com/doi/10.1029/2005GL025157/abstract Lombardi AM, Cocco M, Marzocchi W (2010) On the increase of background seismicity rate during the 1997–1998 Umbria-Marche, Central Italy, sequence: apparent variation or fluid-driven triggering? Bull Seismol Soc Am 100(3):1138–1152. 
http://www.bssaonline.org/cgi/doi/10.1785/0120090077 Marsan D, Lengline O (2008) Extending earthquakes' reach through cascading. Science 319(5866):1076–1079. http://www.sciencemag.org/cgi/doi/10.1126/science.1148783 Marsan D, Prono E, Helmstetter A (2013a) Monitoring aseismic forcing in fault zones using earthquake time series. Bull Seismol Soc Am 103(1):169–179. http://bssa.geoscienceworld.org/content/103/1/169 Marsan D, Reverso T, Helmstetter A, Enescu B (2013b) Slow slip and aseismic deformation episodes associated with the subducting Pacific plate offshore Japan, revealed by changes in seismicity: ASEISMIC TRANSIENTS IN SUBDUCTION ZONE. J Geophys Res Solid Earth 118(9):4900–4909. http://doi.wiley.com/10.1002/jgrb.50323 Ogata Y (1988) Statistical models for earthquake occurrences and residual analysis for point processes. J Am Stat Assoc 83(401):9–27 Ogata Y (1998) Space-time point-process models for earthquake occurrences. Ann Inst Stat Math 50(2):379–402. http://link.springer.com/article/10.1023/A:1003403601725 Ogata Y (1999) Seismicity analysis through point-process modeling: a review. Pure Appl Geophys 155(2–4):471–507. http://link.springer.com/article/10.1007/s000240050275 Ogata Y (2004) Space-time model for regional seismicity and detection of crustal stress changes. J Geophys Res Solid Earth 109(B3):B03,308. http://onlinelibrary.wiley.com/doi/10.1029/2003JB002621/abstract Ogata (2011) Significant improvements of the space-time ETAS model for forecasting of accurate baseline seismicity. Earth Planets Space 63(3):217–229. https://doi.org/10.5047/eps.2010.09.001, http://link.springer.com/article/10.5047/eps.2010.09.001 Reverso T, Marsan D, Helmstetter A (2015) Detection and characterization of transient forcing episodes affecting earthquake activity in the Aleutian Arc system. Earth Planet Sci Lett 412:25–34. http://www.sciencedirect.com/science/article/pii/S0012821X14007651 Rogers G, Dragert H (2003) Episodic tremor and slip on the Cascadia subduction zone: the chatter of silent slip. Science 300(5627):1942–1943. http://science.sciencemag.org/content/300/5627/1942 Ruppert D (2002) Selecting the number of knots for penalized splines. J Comput Graph Stat 11(4):735–757 Ruppert D, Wand MP, Carroll RJ (2003) Semiparametric regression. Cambridge University Press, google-Books-ID: Y4uEvXFP2voC Schwartz SY, Rokosky JM (2007) Slow slip events and seismic tremor at circum-Pacific subduction zones. Rev Geophys 45(3):RG3004. http://onlinelibrary.wiley.com/doi/10.1029/2006RG000208/abstract Sen MK, Stoffa PL (2013) Global optimization methods in geophysical inversion. Cambridge University Press. https://books.google.co.in/books?hl=en&lr=&id=FVshAwAAQBAJ&oi=fnd&pg=PR9&dq=Global+Optimization+Methods+in+Geophysical+Inversion&ots=w7e8EJH0lh&sig=CYrivXgLzgjooeSiOpe8LZQN5zA Todd EK, Schwartz SY (2016) Tectonic tremor along the northern Hikurangi Margin, New Zealand, between 2010 and 2015. J Geophys Res Solid Earth 121(12):2016JB013,480. http://onlinelibrary.wiley.com/doi/10.1002/2016JB013480/abstract Utsu T, Ogata Y, Matsu'ura RS (1995) The centenary of the Omori formula for a decay law of aftershock activity. J Phys Earth 43(1):1–33. http://www-solid.eps.s.u-tokyo.ac.jp/~hassei/2011/papers/utsu_et_al1995.pdf Wallace LM, Eberhart-Phillips D (2013) Newly observed, deep slow slip events at the central Hikurangi margin, New Zealand: implications for downdip variability of slow slip and tremor, and relationship to seismic structure. Geophys Res Lett 40(20):2013GL057,682. 
SK developed the code, performed analysis, and drafted the manuscript. DSR participated in the design of the study. All authors participated in discussions and equally contributed to revising an earlier draft of the manuscript. All authors read and approved the final manuscript. We acknowledge the New Zealand GeoNet project and its sponsors EQC, GNS Science and LINZ for providing the data used in this study. Dr. S. Das Sharma is thanked for numerous discussions during this work. Shri Appala Raju is gratefully acknowledged for his technical help. We thank the two anonymous reviewers for their constructive comments. This research was supported by funding from the Council of Scientific and Industrial Research, New Delhi, India, through the Shyama Prasad Mukherjee Fellowship (SPM-31/023(0178)/2013-EMR-I). The data used in the study can be obtained from the GeoNet Web site www.geonet.org.nz.
Author affiliations: Academy of Scientific and Innovative Research (AcSIR), CSIR-National Geophysical Research Institute (CSIR-NGRI) Campus, Uppal Road, Hyderabad, 500007, India (Sasi Kattamanchi, Ram Krishna Tiwari); CSIR-National Geophysical Research Institute, Uppal Road, Hyderabad, 500007, India; Indian Institute of Geomagnetism, New Panvel, Mumbai, 410218, India (Durbha Sai Ramesh). Correspondence to Sasi Kattamanchi.
Additional file 1: Goodness-of-fit tests for the modeled GeoNet catalog; Tables S1–S5 and Figure S1.
Kattamanchi, S., Tiwari, R.K. & Ramesh, D.S.
Non-stationary ETAS to model earthquake occurrences affected by episodic aseismic transients. Earth Planets Space 69, 157 (2017). doi:10.1186/s40623-017-0741-0. Keywords: Non-stationary ETAS; Slow slip; Aseismic transients
Article Info. Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지), Pages 796-806. Asian Australasian Association of Animal Production Societies (아세아태평양축산학회).
Milk Yield, Composition, and Fatty Acid Profile in Dairy Cows Fed a High-concentrate Diet Blended with Oil Mixtures Rich in Polyunsaturated Fatty Acids
Thanh, Lam Phuoc (School of Animal Production Technology, Institute of Agricultural Technology, Suranaree University of Technology); Suksombat, Wisitiporn (School of Animal Production Technology, Institute of Agricultural Technology, Suranaree University of Technology)
Received: 2014.10.16; Accepted: 2015.01.09; Published: 2015.06.01
https://doi.org/10.5713/ajas.14.0810
To evaluate the effects of feeding linseed oil and/or sunflower oil mixed with fish oil on milk yield, milk composition and fatty acid (FA) profiles of dairy cows fed a high-concentrate diet, 24 crossbred primiparous lactating dairy cows in early lactation were assigned to a completely randomized design experiment. All cows were fed a high-concentrate basal diet and 0.38 kg dry matter (DM) of molasses per day. Treatments were composed of a basal diet without oil supplement (Control), or diets of (DM basis) 3% linseed and fish oils (1:1, w/w, LSO-FO), or 3% sunflower and fish oils (1:1, w/w, SFO-FO), or 3% mixture (1:1:1, w/w) of linseed, sunflower, and fish oils (MIX-O). The animals fed SFO-FO had a 13.12% decrease in total dry matter intake compared with the control diet (p<0.05). No significant change was detected for milk yield; however, the animals fed the diet supplemented with SFO-FO showed a depressed milk fat yield and concentration, by 35.42% and 27.20%, respectively, compared to those fed the control diet (p<0.05). Milk c9, t11-conjugated linoleic acid (CLA) proportion increased by 198.11% in the LSO-FO group relative to the control group (p<0.01). Milk C18:3n-3 (ALA) proportion was enhanced by 227.27% when supplementing with LSO-FO relative to the control group (p<0.01). The proportions of milk docosahexaenoic acid (DHA) were significantly increased (p<0.01) in the cows fed LSO-FO (0.38%) and MIX-O (0.23%) compared to the control group (0.01%). Dietary inclusion of LSO-FO mainly increased milk c9, t11-CLA, ALA, DHA, and n-3 polyunsaturated fatty acids (PUFA), whereas feeding MIX-O improved preformed FA and unsaturated fatty acids (UFA). While the lowest n-6/n-3 ratio was found in the LSO-FO, the decrease in atherogenicity index (AI) and thrombogenicity index (TI) seemed to be more pronounced in the MIX-O. Therefore, to maximize milk c9, t11-CLA, ALA, DHA, and n-3 PUFA and to minimize the milk n-6/n-3 ratio, AI and TI, an ideal supplement would appear to be either LSO-FO or MIX-O.
Keywords: Linseed Oil; Sunflower Oil; Fish Oil; Milk Yield; Milk Fatty Acids; Dairy Cows
Supported by: Suranaree University of Technology
Has a discrete/quantum theory of probability based on the Cournot-Borel principle or something been developed? [closed]

In 1930, Émile Borel, the father of measure theory together with his student Lebesgue and a world-class expert in probability theory, published a short note, Sur les probabilités universellement négligeables (On universally negligible probabilities), in Comptes rendus hebdomadaires des séances de l'Académie des Sciences, 190, pp. 537–40. Here it is: http://gallica.bnf.fr/ark:/12148/bpt6k3143v.f539 According to the question "Have some works by Émile Borel ever been translated from French to English or another foreign language?" and to the best of my knowledge, this note has never been translated into any other language. I would be happy to translate it entirely upon request despite my poor English.

Borel is concerned with Cournot's principle. As the bridge, the connection between the mathematical theory of probability and the real world of experience, Borel considers Cournot's principle to be the most important and fundamental principle of probability theory: he used to call it the fundamental law of randomness or the unique law of randomness. Hence, Borel seeks a quantitative version of Cournot's principle. He starts like this:

We know that, in the applications of the calculus of probability, when the probability becomes extremely close to unity, it can and must be practically confounded with certainty. Carnot's principle, the irreversibility of many phenomena, are well-known examples in which the theoretical probability equals practical certainty. However, we may not have, at least to the best of my knowledge, sufficiently specified from which limits a probability becomes universally negligible, that is negligible in the widest limits of time and space that we can humanly conceive, negligible in our whole universe.

and concludes, by a purely physical reasoning (emphases by Borel):

The conclusion that must be drawn is that the probabilities that can be expressed by a number smaller than $10^{-1000}$ are not only negligible in the common practice of life, but universally negligible, that is they must be treated as rigorously equal to zero in every question regarding our Universe. The fact that they are not effectively null may be of interest for the metaphysicist; for the scientist they are null and the phenomena to which they relate are absolutely impossible.

This Cournot-Borel principle
$$\left\{ \begin{array}{ll} p \in \left[0, 10^{-1000}\right] & \Rightarrow p = 0 \qquad \text{Borel-supracosmic probabilities}\\ p \in \left[1 - 10^{-1000}, 1\right] & \Rightarrow p = 1 \qquad \text{Borel-supercosmic probabilities} \end{array} \right.$$
implies that there are only discrete probability measures/distributions in every probabilistic question regarding our universe. Indeed, consider for instance a cumulative distribution function $F(x):\mathbb{R} \to [0,1]$.
Suppose $F(x)$ is left-continuous at some point $x_0$, that is, $F(x)$ is continuous at $x_0$ since it is right-continuous by definition:
$$\forall \varepsilon > 0\ \exists \eta > 0,\ \forall x,\ x_0 - \eta < x < x_0 \Rightarrow \left| F(x) - F(x_0) \right| = F(x_0) - F(x) = \text{Prob}\left( y \in \left[ x, x_0 \right] \right) = \mu\left( \left[ x, x_0 \right] \right) < \varepsilon$$
In particular, by the Cournot-Borel principle,
$$\forall\, 10^{-1000} > \varepsilon > 0\ \exists \eta > 0,\ \forall x,\ x_0 - \eta < x < x_0 \Rightarrow \mu\left( \left[ x, x_0 \right] \right) = 0$$
Hence, either $F(x)$ is constant or it is discontinuous: $F(x)$ is nothing but a discrete cumulative distribution function or cumulative mass function. Hence, following Borel, at least two different mathematical theories of probability would coexist: the mathematical, metaphysical, continuous one that relies heavily on measure theory, and the scientific, physical, discrete one where measure theory is almost irrelevant. This Borel-Cournot discrete theory of probability is not necessarily inconsistent nor trivial because continuous r.v.s have discrete probability measures. By construction and definition, it constitutes another potential answer or proposal to Hilbert's sixth problem or program. We can also talk about a quantum theory of (classical and quantum?) probability (not the theory of quantum probability) with Borel probabilistic quanta $b = 10^{-1000}$, analogous to the energy quanta in QM. Of course, this value should be updated according to our modern knowledge about the Universe. Has something like this theory ever been developed? I would be happy with non-probabilistic answers too, i.e. mathematical theories formalizing such a concept of finite resolution and indistinguishability. I found such a theory many years ago but I don't remember!

pr.probability measure-theory mp.mathematical-physics ho.history-overview
Fabrice Pautot
closed as primarily opinion-based by Gro-Tsen, js21, Andrés E. Caicedo, Stefan Kohl, Gerald Edgar Oct 4 '17 at 23:39

It's Carnot, isn't it? – Yemon Choi Oct 4 '17 at 15:13
@YemonChoi. No, Cournot principle in probability theory and Carnot principle in thermodynamics. – Fabrice Pautot Oct 4 '17 at 15:15
I'm not an expert in history or measure theory, but I don't see why we should take it seriously. In my view, modern measure theory perfectly resolves the issue raised by Borel in these writings, by allowing sets with probability measure zero that are nonetheless "possible" (in the support). If one is concerned with practical physics rather than theoretical mathematics, there are many phenomena which are actually discrete but are modeled with continuous mathematics (e.g. a volume of liquid is actually a finite collection of molecules); why not probability as well? – usul Oct 4 '17 at 16:01
Why do you call a published paper of Borel a "confidential note"? Because it is published in French? – Alexandre Eremenko Oct 4 '17 at 17:26
With respect to an English text by Borel on this topic, see chapter 3 "Negligible probabilities and the probabilities of everyday life" in the Dover edition "Probabilities and Life" (1962).
– Carlo Beenakker Oct 4 '17 at 18:19

This question belongs more to philosophy than to mathematics, so it might be out of scope of this site. But the general answer is that mathematical models are only an approximation to reality, and we choose those approximations which are convenient. For example, we can sometimes discover that space/time is not really continuous but consists of some discrete objects. But this will not make calculus based on the concept of real number useless or obsolete. One can argue without end whether real numbers really correspond to something "real". Nevertheless they are useful in physics and engineering. (See, for example, N. J. Wildberger, Real fish, real numbers, real jobs, The Mathematical Intelligencer, Volume 21, Issue 2, pp 4–7.) Same with probability. Continuous distributions historically arise as approximations to discrete distributions (normal distribution is an approximation of the binomial distribution via the de Moivre-Laplace theorem). But the normal distribution is much nicer from the mathematical point of view and therefore it must be used, even when the "real" distribution under consideration is binomial. In other words, the accepted axioms of Probability are chosen (from the several proposed systems) because of their mathematical convenience, rather than because they better approximate the "real world" from the philosophical point of view. (Of the systems of mathematical foundations of probability which were competing with Kolmogorov's axioms, I can mention those proposed by S. Bernstein, R. von Mises and H. Steinhaus. And philosophic considerations played a secondary role in the choice).

Alexandre Eremenko

We apply probability theory to mathematical phenomena. In this case small probabilities occur. For example R. Brent, van de Lune, and I compute the probability of $|\arg(\zeta(\sigma+it))|>\pi/2$ for $\sigma=1.165$ as $1.279\dots\times10^{-283}$, and this really happens for some $t$. We can obtain smaller probabilities for other $\sigma>1$. – juan Oct 4 '17 at 17:52
@alexandre eremenko. Sorry if you don't find this thread mathematical enough. But there are tags for historical and philosophical aspects of maths in MO, so I hope discussing "philosophical" works by well-known mathematicians is not too much out of scope. – Fabrice Pautot Oct 4 '17 at 18:32
@alexandre eremenko. The problem is precisely that Borel's philosophical considerations seem to have some drastic mathematical consequences! Do you agree that this "Cournot-Borel principle" implies that there are only discrete probability measures in every question regarding our universe??? – Fabrice Pautot Oct 4 '17 at 18:35
@alexandre eremenko. My question nevertheless pertains to Hilbert's sixth problem. Discrete or not discrete, that is the question. Is quantifying Cournot principle not a kind of mathematical physics? I propose to cross-post the question on PO if MO doesn't mind. – Fabrice Pautot Oct 4 '17 at 21:25
@alexandre eremenko. Some years ago, I bought an original copy of Borel's Le hasard (Randomness), second edition, 1947. Believe it or not, the book was brand new, I cut the sheets by myself! I tried to find his complete works in 4 volumes but no way. Not exactly best sellers! – Fabrice Pautot Oct 4 '17 at 22:48
Neural Development
Timing the spinal cord development with neural progenitor cells losing their proliferative capacity: a theoretical analysis
Manon Azaïs, Eric Agius, Stéphane Blanco, Angie Molina, Fabienne Pituello, Jean-Marc Tregan, Anaïs Vallet and Jacques Gautrais
Neural Development 2019, 14:7
In the developing neural tube in chicken and mammals, neural stem cells proliferate and differentiate according to a stereotyped spatiotemporal pattern. Several actors have been identified in the control of this process, from tissue-scale morphogen patterning to intrinsic determinants in neural progenitor cells. In a previous study (Bonnet et al., eLife 7, 2018), we have shown that the CDC25B phosphatase promotes the transition from proliferation to differentiation by stimulating neurogenic divisions, suggesting that it acts as a maturating factor for neural progenitors. In that previous study, we set up a mathematical model linking fixed progenitor modes of division to the dynamics of progenitors and differentiated populations. Here, we extend this model over time to propose a complete dynamical picture of this process. We start from the standard paradigm that progenitors are homogeneous and can perform any type of division (proliferative divisions yielding two progenitors, asymmetric neurogenic divisions yielding one progenitor and one neuron, and terminal symmetric divisions yielding two neurons). We calibrate this model using data published by Saade et al. (Cell Reports 4, 2013) about modes of division and population dynamics of progenitors/neurons at different developmental stages. Next, we explore the scenarios in which the progenitor population is actually split into two different pools, one of which is composed of cells that have lost the capacity to perform proliferative divisions. The scenario in which asymmetric neurogenic division would induce such a loss of proliferative capacity appears very relevant.
Keywords: Neural progenitors; Proliferative capacity
How can a small number of apparently initially homogeneous neural stem cells (NSCs) give rise to the tremendous diversity of differentiated neurons and glia found in the adult central nervous system (CNS)? The long-standing paradigm just claims: by proliferating first, and then restricting the kind of cells a progenitor can produce given its situation in time and space. How the progenitors' fate progression occurs in different contexts is still under scrutiny. In Drosophila, NSCs are multi-potent and divide asymmetrically to generate different types of progenies in a stereotypical manner. The study of the mechanisms by which a single NSC can generate a wide repertoire of neural fates in this system is progressing fast [1]. In particular, several studies have highlighted the deterministic role of a series of sequentially expressed transcription factors in the temporal specification of Drosophila NSCs [2], albeit further studies substantiated that they are possibly under the control of some extrinsic (especially nutritional) factors [3]. In the mammalian cerebral cortex, the diversity of neural progenies has been linked to different types of cortical progenitors [4]. Besides expressing specific transcription factors, a set of criteria allows classifying the various types of cortical progenitors, including the apical or basal location of mitosis, their cell polarity and morphological features, and their proliferative capacity [5].
In the developing spinal cord, morphogen gradients have been identified that induce neural progenitor cells to express specific combinations of transcription factors and thereby adopt different identities based on their position along the dorsoventral axis [6–8]. This spatial patterning system ensures that different types of neurons are generated in an adequate stereotypical spatial order. The molecular players that control this spatial specification and their mode of action have been characterized [7]. However, little is known yet about how temporal differentiation of neural progenitor cells is orchestrated, namely what controls the timing of their transition from proliferation to differentiation at a given location [9]. Unlike cortical progenitors, spinal progenitors appear as a homogeneous population. They all divide apically and display the same morphology: an elongated shape with cytoplasmic connections to both the apical and basal surfaces. Spinal neural progenitors perform three modes of cell division: proliferative division that generates two progenitors (PP), asymmetric neurogenic division giving rise to a progenitor and a neuron (PN), and terminal neurogenic division producing two neurons (NN). The temporality of the transitions among the three modes of division (hereafter MoD) is critical in the control of the temporality of differentiation. Interestingly, we identified a G2/M cell cycle regulator, the CDC25B phosphatase whose expression correlates temporally and spatially remarkably well with areas where neurogenesis occurs [9, 10]. Moreover, CDC25B induces the conversion of proliferating neural progenitor cells into differentiating neurons by promoting sequentially neurogenic divisions, PN and NN [11]. We thus propose that CDC25B acts as a maturating factor that progressively restricts the mode of division of neural progenitor cells. Following our previous study on the maturing role of CDC25B in the control of neurogenesis [11], our question here is to examine whether this maturation can be due to an accumulating number of progenitors losing their proliferative capacity. From that point of view, we note that ventral neural progenitors in the neural tube have been already shown to display a fate switch, transiting from early motoneurons production to late oligodendroglial production, under the control of Shh induction [12]. Here, we consider the possibility that a similar kind of switch operates sooner in the same population and sustains the transition from pure proliferative divisions to neurogenic divisions. To examine this hypothesis, we start from the model of MoD transition we have proposed in our previous paper about the instrumental role that CDC25B plays in the progression from proliferative to neurogenic divisions [11]. In the spirit of Lander et al. [13], modeling is used here as a way to gain clarity in the face of intricacy. To this end, we have first extended our model presented in [11]. This model considered MoD as stationary over the 24 hours of our experiment. We now consider their change over time in order to extend this model over the full dynamics of ventral spinal cord motoneurons production. This extension over time uses the data published by Marty's team [14] who measured the two essential components of this system at different times of spinal cord development: MoD on the one hand, and dynamics of Progenitors / Neurons (P/N) populations on the other hand. 
From the modeling point of view, we point out the importance of being unequivocal about what the experimentally measured entities are in this system, and about what conceptual entities we are thinking with. Namely, we propose below a first model which is based on the observable entities only (MoD and P/N evolutions). We use this model to make the link between these observable entities and to check how the experimentally measured evolution of the modes of division can explain the evolution of the cellular populations of progenitors (P-cells) and neurons (N-cells). Next, we explore the idea that the temporality of the transitions among the three modes of division is based on a loss of proliferative capacity in some progenitors. To implement this hypothesis, we have to define two non-observable kinds of progenitors, one of which is unable to perform proliferative divisions. We identify three scenarios compatible with this hypothesis. In order to check the structural consequences of each scenario, we reconstruct for each of them what the progression of their MoD should be if we take as a constraint that they must match the observable ones, and concurrently produce the correct evolution of P/N cells. In the end, we advocate that one scenario is of great relevance: the hypothesis that asymmetric neurogenic division would induce the loss of proliferative capacity in the self-renewed progenitor. We offer a speculative additional component to the model so that robustness against small perturbations is secured. We discuss our findings compared to the model proposed by Marty's team to explain their own data [14]. We finally suggest that lineage tracing may now be the best experimental avenue to go further in the understanding of how the progression from proliferative to neurogenic divisions is timed.
Minimal Model for the Dynamics with three Modes of Division
We start from the model with fixed MoD proportions we designed in Bonnet et al. [11], incorporating here the possibility for the MoD to evolve with time. We consider a population of cells at time t, some of which are proliferating progenitors P(t), and others are differentiated neurons N(t). The dividing progenitors can undergo three kinds of division, yielding:
symmetric proliferative divisions ending with two progenitors (pp-divisions)
asymmetric self-renewing divisions ending with one progenitor and one neuron (pn-divisions)
symmetric consumptive neurogenic divisions ending with two neurons (nn-divisions)
Let us denote:
η the rate at which P-cells undergo divisions (in fraction of the P-pool per unit time)
αpp(t) the fraction of dividing cells undergoing pp-divisions
αpn(t) the fraction of dividing cells undergoing pn-divisions
αnn(t) the fraction of dividing cells undergoing nn-divisions
The fractions of pp-, pn- and nn-divisions can evolve with time, under the constraint that αpp(t)+αpn(t)+αnn(t)=1. The time derivative \(\dot {P}(t)\) of pool P(t) (resp. \(\dot {N}(t)\)) is then given by the balance equation at time t, reading:
$$ \left\{ \begin{aligned} \dot{P}(t) &= -\eta(t) P(t) &+2\alpha_{pp}(t)\eta(t) P(t) +1\alpha_{pn}(t)\eta(t) P(t)\\ \dot{N}(t) &= &+ 2\alpha_{nn}(t)\eta(t) P(t) + 1\alpha_{pn}(t)\eta(t) P(t) \end{aligned} \right. $$
where in the first equation:
−η(t)P(t) quantifies the rate at which P-cells disappear from the pool P(t) because they divide.
The quantity of disappearing P-cells between t and t+dt is then η(t)P(t)dt.
αpp(t)η(t)P(t) quantifies the fraction of this quantity that undergoes a pp-division; it doubles to yield 2 P and adds up to the pool P(t) (hence the factor 2).
αpn(t)η(t)P(t) quantifies the fraction of this quantity that undergoes a pn-division; it doubles to yield 1 P and 1 N, so only half (the P part) adds up to the pool P(t) (hence the factor 1).
Correspondingly, in the second equation:
αnn(t)η(t)P(t) quantifies the fraction of this quantity that undergoes an nn-division; it doubles to yield 2 N and adds up to the pool N(t) (hence the factor 2).
αpn(t)η(t)P(t) is the fraction of this quantity that undergoes a pn-division; it doubles to yield 1 P and 1 N and only half (the N part) adds up to the pool N(t) (hence the factor 1).
System (1) is a textbook continuous-time representation of population dynamics. It is a very good approximation of the evolution of progenitors and neurons, considering that division events are instantaneous (M-phase is very short compared to the cell cycle duration) and occur uniformly in time (asynchronously) [11]. Since αpp+αpn+αnn=1, system (1) can be rewritten:
$$ \left\{ \begin{array}{ll} \dot{P}(t) &= (\alpha_{pp}(t) - \alpha_{nn}(t)) \eta P(t)\\ \dot{N}(t) &= \left(1-(\alpha_{pp}(t) - \alpha_{nn}(t))\right) \eta P(t) \end{array} \right. $$
so that the general form of the solution for the evolution of the pools is given by:
$$ \left\{ \begin{aligned} P(t) &= P(0)\exp\left[{\int_{0}^{t}(\alpha_{pp}(\tau) - \alpha_{nn}(\tau))\eta(\tau) d\tau}\right] \\ N(t) &= N(0) + \int_{0}^{t} \left(1-(\alpha_{pp}(\tau) - \alpha_{nn}(\tau))\right) \eta(\tau) P(\tau) d\tau \end{aligned} \right. $$
Starting from an initial configuration P(0)=1, N(0)=0 at time t0 and considering a steady rate η(t)=η, the system evolution will only be driven by the two functions αpp(t) and αnn(t).
Calibration from data for the embryonic spinal cord
In the embryonic spinal cord, pp-divisions are largely dominant at the beginning of the process so that proliferation increases the pool of progenitors for a while, but their proportion decreases with time so that the process ends with terminal neurogenic divisions. Estimations of MoD were collected by Saade et al. [14] at discrete times (Fig. 1a), as well as the corresponding evolutions of the pools of progenitors and neurons (Fig. 1b). We used these MoD data to calibrate the two continuous time functions αpp(t) and αnn(t), with αpn(t) being constrained to be their complement to 1 (Fig. 1a, Additional file 1).
Fig. 1 PN Model for the dynamics of Modes of Division (MoD) and evolution of the cell populations (P, N) in the developing ventral spinal cord. a MoD measured by [14] (square dots) and [11] (circles, bars are 95% CI). Black: pp-divisions, red: nn-divisions, blue: pn-divisions. Curves report the fitted continuous time functions. b Evolutions of the pools of progenitors (black) and neurons (red) from [14]. Circle points indicate estimates of P/N proportion from [11], scaled to the total amount of cells. Black and red lines report the numerical solution of system (3) using the MoD shown in a). Green line reports the analytical solution for the P-pool (Eq. 6). c CDC25B Gain-of-Function promotes neurogenic divisions so that the transition from proliferation to differentiation is shifted 8 hours sooner (thick lines) than the CTL profiles (thin lines). d Predicted evolution of the pools of progenitors (black) and neurons (red) under GoF (thick lines) compared to CTL (thin lines).
The dots report the proportion of progenitors / neurons measured in Bonnet et al. in GoF condition [11], scaled to the total amount predicted at their respective times. e CDC25B- ΔCDK Gain-of-Function have a differential effect upon neurogenic divisions: pp-divisions are shifted 2 hours sooner and nn-divisions are shifted 4 hours later. As a consequence, the complementary PN profile is enhanced (compared to the CTL) and lasts longer. f The dynamics of the two pools is very close to the CTL dynamics and match with the measured proportions given in Bonnet et al. [11] From a minimalistic approach, we constrain the shape of the two functions with a minimal set of parameters. The pp-divisions display an evolution from αpp(t0)=1 down to αpp(t→∞)=0. This transition will be characterized by a characteristic time τpp, with αpp(τpp)=0.5, and a characteristic scale σpp indicating the sharpness of transition. A standard form for this is: $$ \alpha_{pp} (t) = \frac{1}{2} \left[1 - \text{tanh} \left(\frac{t - \tau_{pp}}{\sigma_{pp}} \right)\right] $$ Least-square error estimation of the two parameters yields: τpp=67.0 hpf and σpp=8.0 hpf. The adjusted profile fits the data rather well (sq. error = 0.007). We fit the same kind of tanh profile for the evolution of nn-divisions from αnn(t0)=0 to αnn(t→∞), following: $$ \alpha_{nn} (t) = \frac{1}{2} \alpha_{nn}(t \rightarrow \infty) \left[1 + \text{tanh} \left(\frac{t - \tau_{nn}}{\sigma_{nn}} \right)\right] $$ We lack the data to fit exactly the plateau value and we set the reasonable value αnn(t→∞)=0.8. Least-square error estimation of the two parameters yields: τnn=79.3 hpf and σnn=14.5 hpf (sq. error = 0.03). With these profiles for αpp(t) and αnn(t), the evolution of the P-pool evolves according to (details in Methods Eq. 18): $$ {\begin{aligned} \frac{P(t)}{P(0)} =\exp\left[ \frac{\eta}{2} \right.& \left(\left[ t - \sigma_{pp}\ln \left(\frac{\cosh((t-\tau_{pp})/\sigma_{pp})}{\cosh(-\tau_{pp}/\sigma_{pp})}\right) \right]\right.\\ &\left.\left.- \alpha_{nn,\infty} \left[ t + \sigma_{nn}\ln \left(\frac{\cosh((t-\tau_{nn})/\sigma_{nn})}{\cosh(-\tau_{nn}/\sigma_{nn})}\right) \right] \right) \right] \end{aligned}} $$ Setting P(0)=1,N(0)=0 at time t0=44 hpf and η=1/12 hours [11, 15], this system yields a good account of the evolution of P,N pools as measured by Saade et al. [14] (Fig. 1b, original data were rescaled to correspond to the number of cells per progenitor originally present). At the beginning, the large bias towards pp-divisions amplifies the pool of progenitors up to a maximum value: Pmax[CTL]=5 per initial progenitor at around tmaxp[CTL]=72 hpf. Then, the production of neurons raises mainly due to pn-divisions, until nn-divisions become dominant over pn-divisions (at around 82-83 hpf). The pool of progenitors depletes to zero while nn-divisions increase the pool of neurons up to a plateau value of N(t→∞)[CTL]=17.6 neurons per progenitor initially present. We note that this evolution, and especially N(t→∞) is highly sensitive to the chosen initial condition (t0,P(t0)). This point is addressed below. Incorporating CDC25B experiments Bonnet et al. [11] have performed a series of experimental manipulations of the expression of CDC25B phosphatase in this biological system. 
Their experimental measures are the proportions of progenitors / neurons, and a corresponding measure of the modes of division, depending on the experimental conditions : Control (CTL), Gain of Function (CDC25B GoF) using the wild-type form of CDC25B, and Gain of Function using a CDC25B modified to be unable to interact with its known substrates CDKs (CDC25B ΔCDK GoF). Modes of division were measured by Bonnet et al. [11] at stage HH17, and fit well with the MoDs measured by Saade et al. [14] at time 72 hpf (Fig. 1-a, circle dots). However, to make the correspondence between P/N fractions reported in Bonnet et al. [11] and the P/N evolution measured in Saade et al. [14], we had to consider that the former correspond respectively to times 60 hpf and 84 hpf on the time scale in Saade et al. [14] (i.e. 12 h before and after 72 hpf, keeping the correct interval of 24h in between). To check the power of this simple model, we now explore the hypothesis that CDC25B GoF has only an effect upon the schedule of MoD transitions. We expect that GoF should trigger differentiation sooner in time, and indeed, the measured MoD in the GoF experiment can be fitted by shifting the three time profiles 8 hours sooner (Fig. 1-c). Interestingly, at time 72 hpf, this strongly affects αpp and αnn but leaves αpn unchanged. The corresponding evolutions of the pools P/N are strongly affected, since the progenitors lack time to proliferate, reaching now a maximum of Pmax[GoF]=2.6 per initial progenitor at around tmaxp[GoF]=64 hpf (Fig. 1-d). As a consequence, the pool of neurons increases sooner, but reaches a plateau value nearly half of that of the CTL condition, N(t→∞)[GoF]=9.2 neurons per initial progenitor. The proportions P/N measured by Bonnet et al. [11] fit well with this picture. The case of CDC25B ΔCDK GoF yields a different prediction. Here, the pp-divisions had to be advanced by 2 hours while the nn-divisions had to be delayed by 4 hours to correspond to the ones measured by Bonnet et al. [11] (Fig. 1e). As a result, the main effect of CDC25B ΔCDK GoF is to greatly promote pn-divisions, so they appear sooner and reach a higher proportion. This suggests that CDC25B ΔCDK GoF promotes self-renewing neurogenic pn-divisions, but fails to promote the transition from pn-divisions to nn-divisions as does CDC25B GoF. Here again, the predicted dynamics of the two pools fit well the proportions P/N measured by Bonnet et al. [11] (Fig. 1f). Remarkably, since the pn-divisions are neutral to the balance proliferation / differentiation, these dynamics are almost identical to the CTL case. We note that the effect of CDC25B- ΔCDK could not be detected by measuring only the P/N pools evolution. Altogether, the model given by the system 1 (PN model) expresses the dynamics at the population scale, yielding the evolution of the two kinds of cells: the pool of progenitors, and the pool of neurons. Being formulated at the population scale, the variables and the parameters represent averages over a large ensemble of cells. In the biological system, those averages can correspond to numerous scenarios at the cell level. Nonetheless, the model dynamics produced by Eqs. 3, 4, 5 should be taken as a point of reference because any scenario at the cell scale should reproduce these dynamics at the population scale. In that sense, PN model should be regarded as a way to describe a strong constraint over the set of possible cell-scale scenarios and a guide to narrow the research of mechanistic explanations. 
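As a concrete illustration, the population dynamics described above, and the representation of the two gain-of-function conditions as pure time shifts of the MoD profiles, can be reproduced in a few lines of code. The sketch below is written in Python for this text only (it is not the authors' code, and the function and variable names are ours); it integrates system (2) with the tanh profiles of Eqs. (4) and (5), using the parameter values fitted above (τpp = 67.0 hpf, σpp = 8.0 hpf, τnn = 79.3 hpf, σnn = 14.5 hpf, αnn(∞) = 0.8, η = 1/12 per hour, P = 1 and N = 0 at t0 = 44 hpf) and the time shifts quoted for the GoF conditions.

```python
# Illustrative sketch of the PN model (system 2) with the fitted tanh MoD profiles.
import numpy as np
from scipy.integrate import solve_ivp

ETA = 1.0 / 12.0                     # division rate per hour (12 h cell cycle)

def alpha_pp(t, shift=0.0):
    # Eq. (4); a positive shift advances the transition (it happens sooner)
    return 0.5 * (1.0 - np.tanh((t + shift - 67.0) / 8.0))

def alpha_nn(t, shift=0.0):
    # Eq. (5) with plateau value 0.8
    return 0.4 * (1.0 + np.tanh((t + shift - 79.3) / 14.5))

def simulate(shift_pp=0.0, shift_nn=0.0, t0=44.0, t_end=140.0):
    """Integrate system (2) from P = 1, N = 0 at t0.
    alpha_pn is the complement to 1 and does not enter system (2)."""
    def rhs(t, y):
        P, N = y
        bal = alpha_pp(t, shift_pp) - alpha_nn(t, shift_nn)
        return [bal * ETA * P, (1.0 - bal) * ETA * P]
    return solve_ivp(rhs, (t0, t_end), [1.0, 0.0], dense_output=True, max_step=0.1)

t = np.linspace(44.0, 140.0, 500)
for name, spp, snn in [("CTL", 0.0, 0.0),
                       ("CDC25B GoF", 8.0, 8.0),          # all profiles 8 h sooner
                       ("CDC25B dCDK GoF", 2.0, -4.0)]:   # pp 2 h sooner, nn 4 h later
    sol = simulate(spp, snn)
    P, N = sol.sol(t)
    # CTL should recover the progenitor peak of about 5 near 72 hpf and a neuron
    # output approaching the ~17.6 plateau reported above.
    print("%-16s  P peak %4.1f at %5.1f hpf   final N %5.1f"
          % (name, P.max(), t[P.argmax()], N[-1]))
```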
In the next section, we will use it as such in order to explore three scenarios incorporating a loss of proliferative capacity at the cell scale as a means to time the progression from proliferative to purely neurogenic divisions.
Models with loss of proliferative capacity
The PN model is compatible with the simplest interpretation at the cell level: that each dividing cell is liable to stochastically produce the three possible MoD, in proportion to what is measured at the population scale. Since the data show that the progenitors' MoD display an irreversible vanishing of pp-divisions with time, we now explore alternative models in which we explicitly introduce a loss of proliferative capacity at the cell scale, so that more and more dividing progenitors cannot perform proliferative divisions. This loss of proliferative capacity at the cell scale implies that the pool of progenitors is actually composed of different kinds of dividing cells. Let's consider the case with only two kinds of dividing cells, denoted G and A, where only cells of type G are able to perform proliferative divisions (G→(G,G)). A-cells would be produced by non-proliferative MoD of G-cells when they stochastically adopt the alternative MoD, producing daughter cells with no proliferative capacity. The total pool of dividing cells (the progenitors in model (1)) becomes P(t)=G(t)+A(t). The loss of proliferative capacity in cells of type A implies that they cannot give birth to a cell of type G nor perform proliferative divisions (A→(A,A)). Hence, they can only undergo asymmetric self-renewing neurogenic division A→(A,N) or symmetric consumptive neurogenic division A→(N,N). The only choice left then is to define the pair of cells produced by non-proliferative MoD of G-cells. The only four possibilities are:
G→(G,A): asymmetric non-neurogenic division. One cell keeps proliferative capacity (keeps type G) and one cell loses it (becomes type A).
G→(A,A): symmetric non-neurogenic division. The two daughter cells lose proliferative capacity but keep self-renewing capacity (both become type A).
G→(A,N): asymmetric neurogenic division. Both cells lose proliferative capacity, with one cell keeping self-renewing capacity (becomes type A) while the other cell will become a neuron.
G→(N,N): symmetric neurogenic division. The two cells will become neurons, with no proliferative nor self-renewing capacity.
Using the nomenclature established in [5], the types and effects of those MoD are summarized in Table 1.
Table 1 Description of the MoD in the three models with loss of proliferative capacity
MoD | Type of division | Neurogenic | Present in model
G→(G,G) | Symmetric proliferative | No (proliferative) | GGA, GAA, GAN
G→(G,A) | Asymmetric self-renewing | No | GGA
G→(A,A) | Symmetric consumptive | No | GAA
G→(A,N) | Asymmetric consumptive | Yes | GAN
G→(N,N) | Symmetric consumptive | Yes | none
A→(A,N) | Asymmetric self-renewing | Yes | GGA, GAA, GAN
A→(N,N) | Symmetric consumptive | Yes | GGA, GAA, GAN
In symmetric divisions, the two daughter cells display the same identity. In asymmetric divisions, the two daughter cells have different identities. In self-renewing divisions, one of the daughter cells has the same identity as the mother cell. In consumptive divisions, the two daughter cells differ in identity from the mother cell. In neurogenic divisions, at least one daughter cell is a neuron.
We note that the fourth possibility would correspond to the PN model (since no cell of type A would even be produced), but with such parameters that no asymmetric division would appear at all. We discard it in the spinal cord context since asymmetric divisions are observed.
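For bookkeeping, the division repertoires of the two progenitor types in the three retained scenarios can be written down explicitly. The small sketch below (Python; the encoding and names are ours, purely illustrative) lists, for each scenario and each dividing cell type, the possible pairs of daughter identities, and shows how one division changes the pools; these increments are the coefficients that appear in the balance equations of the corresponding models examined next.

```python
# Bookkeeping sketch: daughter pairs allowed in each scenario of Table 1.
# 'G' = progenitor with proliferative capacity, 'A' = progenitor without it, 'N' = neuron.
DIVISION_REPERTOIRE = {
    "GGA": {"G": [("G", "G"), ("G", "A")],   # proliferative / asymmetric non-neurogenic
            "A": [("A", "N"), ("N", "N")]},
    "GAA": {"G": [("G", "G"), ("A", "A")],   # proliferative / symmetric non-neurogenic
            "A": [("A", "N"), ("N", "N")]},
    "GAN": {"G": [("G", "G"), ("A", "N")],   # proliferative / asymmetric neurogenic
            "A": [("A", "N"), ("N", "N")]},
}

def pool_increments(mother, daughters):
    """Net change of the (G, A, N) pools caused by one division: the mother leaves
    its pool and each daughter joins hers."""
    delta = {"G": 0, "A": 0, "N": 0}
    delta[mother] -= 1
    for d in daughters:
        delta[d] += 1
    return delta

# Example: an asymmetric neurogenic division of a G-cell in the GAN scenario
print(pool_increments("G", ("A", "N")))   # {'G': -1, 'A': 1, 'N': 1}
```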
We examine below the three other scenarios, naming them after the specific non-proliferative MoD of the G-cells: GGA-model, GAA-model and GAN-model. Structural flaw of GGA model Under the GGA model, the MoD are : G→(G,G) and G→(G,A) for the G-cells. They can then perform either proliferative divisions or self-renewing divisions. As a consequence, this model cannot structurally account for the decreasing of the P-pool after 73 hpf. Even if their MoDs evolve from proliferative in the beginning to self-renewing in the end, the early proliferation would lead to a given amount of G-cells that could not decrease later and the G-pool would stabilize. When stabilized, it would continuously produce A-cells at a constant rate by self-renewing division. Since these A-cells would in turn differentiate into neurons, that would produce a population of neurons growing to infinite: the structure of the model would trap the dynamics in a perpetual regime of permanent production of neurons. This model is then to be rejected because of its structure. Incidentally, we note that this rejection based on the structure of the model is an indication that not any model with loss of proliferative capacity could fit the observed dynamics. Predictions of GAN model Writing explicitly the balance of evolution, the dynamics of GAN model obeys: $$ \left\{ \begin{aligned} \dot{G}(t) & = \eta \, \left[ -G(t) + 2 \alpha_{GGG}(t)G(t) \right]\\ \\ \dot{A}(t) & = \eta \, \left[-A(t) + \alpha_{GAN}(t)G(t) + \alpha_{AAN}(t)A(t) \right]\\ \\ \dot{N}(t) & = \eta \, \left[ \alpha_{GAN}(t)G(t) + 2 \alpha_{ANN}(t)A(t) \,+\, \alpha_{AAN}(t)A(t) \right] \\ \\ & \alpha_{GGG}(t) + \alpha_{GAN}(t)= 1 \; ; \; \alpha_{AAN}(t)+ \alpha_{ANN}(t) = 1 \end{aligned} \right. $$ Let's denote γG(t)=αGAN(t) and γA(t)=αANN(t). Using the fourth line of system (7), system (7) simplifies to (omitting time dependencies for clarity): $$ \left\{ \begin{aligned} \dot{G} & = \eta \, \left(1-2\gamma_{G}\right) G\\ \dot{A} & = \eta \, \left(\gamma_{G}G -\gamma_{A}A \right)\\ \dot{N} & = \eta \, \left(\gamma_{G}G + (1+\gamma_{A})A \right) \\ \end{aligned} \right. $$ showing that the evolution is fully determined by γG(t) and γA(t). To calibrate these two time-continuous functions, we will use the evolutions of MoD in the PN model for the three conditions (CTL, CDC25B GoF, CDC25B ΔCDK GoF). For this, we establish the correspondence between GAN model variables and PN model variables : $$ \left\{ \begin{aligned} P(t) &= G(t) + A(t) \\ \alpha_{pp}(t) &= (1-\gamma_{G}(t))\frac{G(t)}{G(t) + A(t)}\\ \\ \alpha_{pn}(t) &= \gamma_{G}(t)\frac{G(t)}{G(t) + A(t)} + (1-\gamma_{A}(t))\frac{A(t)}{G(t) + A(t)}\\ \\ \alpha_{nn}(t) &= \gamma_{A}(t) \, \frac{A(t)}{G(t) + A(t)} \\ \\ \end{aligned} \right. $$ To establish this correspondence, we have considered that the observable α∙∙(t) functions express the proportions of each MoD among a total number of divisions. They can be regarded as a probability that a given division is of a given kind of MoD. Hence, to reconstruct a given observable MoD, we have to multiply the probability that the corresponding kind of progenitor would adopt this MoD by the proportion of this kind of progenitors among the total number of progenitors. For instance, the probability observing a pp-division, αpp(t) (the observable proportion of proliferative divisions), is the probability that a given progenitor is of type G (namely G(t)/(G(t)+A(t))) times the probability that this progenitor performs an G→(G,G) division (αGGG(t)=1−γG(t)). 
We proceed this way for the three kinds of observable MoD, considering that the observed asymmetric divisions αpn aggregate the asymmetric divisions G→(A,N) by the G pool and the asymmetric divisions A→(A,N) by the A pool. As shown in Methods, the analytical inversion of the evolution of γG can be matched very well by a tanh ansatz, so we used the same function for the evolution of γA. To calibrate γG(t) and γA(t), we used the continuous time functions fitting the MoD in the PN model to fit the two parameters of this ansatz by a least-square error procedure (full details are given in Methods "GAN calibration"). The fitted parameters are reported in Table 2, and the corresponding predictions for the evolutions of the cell populations are given in Fig. 2.
Fig. 2 GAN Model. The fitted evolutions of MoD of G-cells (γG) and A-cells (γA) (left column) and their respective predictions for the evolutions of populations (right column) are reported for the three experimental conditions. In the two GoF conditions, the thin lines report the CTL condition for eye-comparison. Under the CTL condition, the evolutions of the two MoD are very similar (a). Under GoF of the wild-type CDC25B, both evolutions are shifted sooner in time by the same delay (8 h, c). Under GoF of the mutated form of CDC25B, only the evolution of A-cells MoD is affected, being delayed by 11 hours (e). In the three cases, the fitted MoD predict evolutions of progenitors (P=G+A) and neurons (N) in accordance with the data (b, d, f).
Table 2 Parameters found for the GAN model (in hpf): the transition times τG and τA and the transition scales σG and σA of the fitted tanh profiles, for the CTL, CDC25B GoF and CDC25B ΔCDK GoF conditions.
In the CTL case, we found a remarkable convergence of the MoD evolutions for G-cells and A-cells, and we recover a perfect prediction for the evolution of the P(t) and N(t) populations. The typical time of MoD progression is 68 hpf for the G-cells and 65.5 hpf for the A-cells, and their progression rates are practically identical. In the beginning, the G-pool is mainly proliferating, while G→(G,G) is dominant over G→(A,N), for about 20 hours (Fig. 2a, green, γG(t)<0.5 before 68 hpf). This yields a growth of the G pool up to a peak at 4.5 G-cells (per initial G-cell) at 68 hpf (Fig. 2b, green). They represent 88% of P-cells at that time. After that peak, G-cells slowly decrease while populating A and N cells through G→(A,N) divisions. From that time, A-cells are produced up to a peak from which terminal neurogenic divisions A→(N,N) become dominant, so the A-pool decreases and neural production ends, with about 20 neurons per initial progenitor. We note that the MoD of A-cells are already very skewed in favor of A→(N,N) at the time they begin to be produced by G→(A,N) divisions (Fig. 2a, blue, γA(t)>0.65 after 68 hpf). Hence, most A-cells are consumed by terminal divisions as soon as they are produced. Seeing this, we checked an even simpler scenario with only three modes of division: G→(G,G), G→(A,N), A→(N,N), so that a progenitor issued from an asymmetric division (A-cells) would always differentiate into two neurons at the next cycle. This yields practically the same results (Additional file 2: Figure S1). In the CDC25B GoF case, the 8-hour advanced evolution of the MoD in the PN model directly translates into an equivalent and parallel 8-hour advanced evolution for the G and A MoD, which is not surprising given the calibration method. Contrastingly, the evolution of these MoD differs in the case of CDC25B ΔCDK GoF. As expected, the slightly advanced αpp profile barely affects the progression of the G-cells MoD.
However, the 4-hour delayed αnn profile translates into a threefold larger delay for the A-cells MoD, which is shifted 11 hours later than in the CTL condition (76.4 hpf vs 65.4 hpf). As a consequence, the A→(A,N) MoD becomes operative, since it still accounts for about half of the A-cell divisions when A-cells reach their peak. In the end, the production of neurons is very similar to the CTL value. Overall, this structure for introducing a type of cells with no remaining proliferative capacity appears perfectly compatible with the available data. Under this model, the evolutions of the MoD have two striking features: they show a monotone progression, and they are very similar to each other, opening the possibility that they could be under the control of the same regulatory process (see below).

Predictions of GAA model

The dynamics of this model obey: $$ \left\{ \begin{aligned} \dot{G}(t) &= \eta \, \left[ -G(t) + 2 \alpha_{GGG}(t) G(t) \right]\\ \\ \dot{A}(t) &= \eta \, \left[ -A(t) + 2 \alpha_{GAA}(t) G(t) + \alpha_{AAN}(t)A(t) \right]\\ \\ \dot{N}(t) &= \eta \, \left[ 2 \alpha_{ANN}(t)A(t) + \alpha_{AAN}(t)A(t) \right]\\ \\ &\alpha_{GGG}(t) + \alpha_{GAA}(t)= 1 \; ; \; \alpha_{AAN}(t)+ \alpha_{ANN}(t) = 1 \end{aligned} \right. $$ Denoting γG(t)=αGAA(t) and γA(t)=αANN(t), system (10) simplifies to: $$ \left\{ \begin{aligned} \dot{G} & = \eta \, \left(1-2\gamma_{G}\right) G\\ \dot{A} & = \eta \, \left(2\gamma_{G}G -\gamma_{A}A \right)\\ \dot{N} & = \eta \, (1+\gamma_{A})A \\ \end{aligned} \right. $$ The correspondences between the GAA scenario variables and the variables in the PN model are: $$ \left\{ \begin{aligned} \alpha_{pp}(t) &= (1-\gamma_{G}(t)) \, \frac{G(t)}{G(t) + A(t)} + \gamma_{G}(t)\, \frac{G(t)}{G(t) + A(t)} \\ \\ \alpha_{pn}(t) &= (1-\gamma_{A}(t)) \, \frac{A(t)}{G(t)+A(t)} \\ \\ \alpha_{nn}(t) &= \gamma_{A}(t) \, \frac{A(t)}{G(t) + A(t)} \\ \\ P(t) &= G(t) + A(t) \\ \end{aligned} \right. $$ We used the MoD fitted in the PN model to calibrate the two MoD functions γG(t) and γA(t) in the same way as we did for the GAN model (full details in Methods "GAA calibration" section). The fitted parameters are given in Table 3 and the predicted evolutions are given in Fig. 3.

Fig. 3 GAA model. Same conventions as Fig. 2: evolutions of the MoD under the three experimental conditions (a, c, e) and corresponding predictions for progenitors and neurons (b, d, f). Under the GAA model, the evolutions of the two MoD are very different in the CTL condition: G-cells switch to the G→(A,A) MoD early in the process while A-cells keep dividing by self-renewing divisions A→(A,N) for a long time to compensate for the lack of proliferation. Under GoF, both transitions are shifted earlier in time (by 8 hours). Under GoF of mutated CDC25B, only the evolution of the A-cells MoD is affected, being delayed by 8 hours. In the three cases, the fitted MoD predict evolutions of progenitors (P=G+A) and neurons (N) in accordance with the data.

Table 3 Parameters found for the GAA model (in hpf).

Under the CTL condition, we observe an abrupt and early switch of the G-cells MoD, from a dominant G→(G,G) MoD before 60 hpf to a dominant G→(A,A) MoD after 60 hpf (Fig. 3a, green). As a consequence, the P-pool is made of only G-cells up to that time (Fig. 3b, black and green curves). After that proliferative burst, G-cells mainly differentiate into A-cells, and the latter become dominant in the system (Fig. 3b, blue curve). By contrast, the MoD of A-cells evolves smoothly (Fig. 3a, blue) and the characteristic time of their switch is as late as 79 hpf.
This leaves time for A-cells to produce neurons by self-renewing divisions A→(A,N) and to compensate for the early arrest of proliferative divisions by G-cells. After 79 hpf, A-cells engage more and more in terminal differentiation until their extinction. The evolutions of the P=G+A and N pools produced by these calibrated MoD match the measured ones very well (Fig. 3b, black and red curves). In the CDC25B GoF condition, the 8-hour advance of the MoD in the PN model is directly reflected in the MoD of the G-cells (Fig. 3c). This is expected given the calibration procedure, and it is also true for the progression of the MoD of the A-cells, although their slopes are further smoothed. This results in P/N evolutions under the GoF condition that match the profiles under the PN model (Fig. 3d). In the CDC25B ΔCDK GoF condition, the switch of the MoD of the G-cells happens slightly sooner than in the CTL condition (Fig. 3e, green), so that the total number of A-cells produced by G→(A,A) is a bit lower (and hence so is the number of P=G+A cells). On the contrary, the switch of the MoD of the A-cells is delayed by about 5 hours (Fig. 3e, blue). This is consistent with the observation that pn-divisions in the PN model are favored under the CDC25B ΔCDK GoF condition, where they operate for a longer time than in the CTL condition. In the end, A-cells are fewer but self-renew for longer and yield the same number of neurons as in the CTL condition. Overall, this structure for introducing a type of cells with no remaining proliferative capacity also appears compatible with the available data for the P/N evolutions. We note, however, that the MoD profiles obtained by analytical inversion do not fit the MoD fitted to the ansatz (details in Methods "GAA calibration" section).

Models comparison

Since the three models PN, GAA and GAN can be fitted to correctly predict the evolutions of the P/N populations, they can only be discriminated by their capacity to reflect the measured evolutions of the observable MoD, namely to account for both the MoD and the P/N evolutions at the same time. Importantly, we note that the three models do not differ in degrees of freedom, since they all have four parameters (two parameters per tanh function), so differences are only attributable to differences in their structures. In Fig. 4, we report the reconstruction of the observable MoD from the hidden MoD in the GAN and GAA models, along with the MoD directly fitted at the PN level. Visual inspection is sufficient to prefer the GAN model over the GAA model.

Fig. 4 Compatibility of models PN, GAN and GAA regarding the MoD. The fitted MoD in the PN model are reported for visual comparison (a, b, c, same data as in Fig. 1a, c, e). Observable MoD reconstructed from the evolutions of the G/A MoD under the GAN (d, e, f) and GAA (g, h, i) models, for the three experimental conditions. The GAN model perfectly matches the observed MoD. The GAA model is to be rejected.

The GAN and PN models, however, differ only slightly. We note a difference at the beginning of the process, where nn-divisions rise later in the GAN model than in the PN model and seem more adequate. This difference is due to the fact that, in the GAN model, nn-divisions are A→(N,N) divisions weighted by the population of A-cells, so they cannot appear before the A-pool has increased. In the PN model, they can happen earlier through nn-divisions of the P-cells that are present from the beginning.
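To make the reconstruction used in Fig. 4 concrete, the following sketch (in R, with hypothetical inputs) applies the correspondence of system (9) to the hidden GAN quantities; it can be fed, for instance, with the G, A trajectories and γ values produced by the Euler sketch given after system (8).

## Reconstruct the observable MoD from the hidden GAN model (system (9)).
## G, A: vectors of cell counts over time; gG, gA: the matching values of
## gamma_G(t) and gamma_A(t). All inputs are assumed to come from a GAN simulation.
reconstruct_observable_mod <- function(G, A, gG, gA) {
  P <- G + A
  data.frame(alpha_pp = (1 - gG) * G / P,
             alpha_pn = gG * G / P + (1 - gA) * A / P,
             alpha_nn = gA * A / P)
}
## e.g., with the objects from the previous sketch:
## mod <- reconstruct_observable_mod(G, A, gamma_G(times), gamma_A(times))

Because the reconstructed αnn is γA weighted by the A-cell fraction, it can only rise once A-cells have accumulated, which makes explicit the late rise of nn-divisions discussed above.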
Overall, if the temporality of the transition among the three modes of division is to be controlled by a loss of proliferative capacity in more and more progenitors, then the structure of the GAN model should be retained as the best scenario.

Securing robustness against initial conditions and perturbations

In the calibration of the PN model, we mentioned that the obtained dynamics were highly sensitive to the chosen initial condition (t0, P(t0)). This is also true for the GAN model. In terms of dynamical systems theory, the PN and GAN models are non-autonomous linear systems of ODEs, because we have considered so far that the evolutions of the MoD were decoupled from the evolutions of the cell populations (MoD were taken as inputs, cell production as output), as if the MoD were controlled by an external process insensitive to the current amount of cells. In linear models, the final number of produced neurons must be proportional to P(t0), hence the sensitivity. For the sake of completeness of our modeling proposal, we now speculate about a formal refinement that could secure robustness against initial conditions or perturbations. To secure robustness, we have to introduce some feedback control so that the state of the system (the current amount of P/N or G/A/N cells) directly affects the MoD (see e.g. [13]). For instance, the current amount of P-cells could favor the progression to neurogenic divisions, so that the accumulation of P-cells by initial proliferation would finally promote more and more nn-divisions. The current amount of N-cells could likewise favor neurogenic divisions, so that the few N-cells present in the beginning would permit pp-divisions (proliferation) while the later accumulation of N-cells would progressively dampen proliferation. We have systematically explored every possible combination [16], and we present here the one that appeared the most consistent with the data: the one in which the MoD evolutions are controlled by the total amount of cells. In the terminology of dynamical systems, the PN model with feedback (hereafter denoted the PN+fb model) becomes autonomous and non-linear, following: $$ \left\{ \begin{array}{ll} \dot{P}(t) &= (\alpha_{pp}(P,N) - \alpha_{nn}(P,N)) \eta P(t)\\ \dot{N}(t) &= \left(1-(\alpha_{pp}(P,N) - \alpha_{nn}(P,N))\right) \eta P(t) \end{array} \right. $$ To establish the form of this feedback control, we plot the MoD as a function of the total amount of cells all along the process in the PN model (Fig. 5a, black curves). This suggests, here again, using a tanh ansatz, and the control takes the form: $$ \left\{ \begin{aligned} \alpha_{pp}(P,N) &= \frac{1}{2} \left[1 - \tanh \left(\frac{P+N - \kappa_{pp}}{s_{pp}} \right)\right] \\ \alpha_{nn}(P,N) & = \frac{1}{2} \left[1 + \tanh \left(\frac{P+N - \kappa_{nn}}{s_{nn}} \right)\right] \\ \end{aligned} \right. $$

Fig. 5 Models PN and GAN with feedback control. Parametric plots of the MoD and the total amount of cells for the PN model (a) and the GAN model (d) (black curves). Feedback control functions with a tanh shape were fitted for the two MoD in the PN model (red curves), and only one for the GAN model (fitting γG, red curve). The corresponding predictions are given for the evolution of the MoD (b and e), which now result from the dynamics, and for the cell populations (c and f). In the GAN model, using only one feedback control for the two MoD recovers the observed data perfectly.

The fitted functions are reported in Fig. 5a (red curves), with κpp=6.4, κnn=13.2, spp=3 and snn=13.2. Using system (13) with (14), we recover the dynamics of the MoD and of the P/N populations (Fig. 5b and c).
Importantly, the MoD are now controlled by the evolution of the P/N cells and are no longer the result of a direct fitting. Likewise, the GAN model would become: $$ \left\{ \begin{aligned} \dot{G} & = \eta \, \left(1-2\gamma_{G}(G+A+N)\right) G\\ \dot{A} & = \eta \, \left(\gamma_{G}(G+A+N)G -\gamma_{A}(G+A+N)A \right)\\ \dot{N} & = \eta \, \left(\gamma_{G}(G+A+N)G + (1+\gamma_{A}(G+A+N))A \right) \\ \end{aligned} \right. $$ However, plotting the MoD as a function of the total amount of cells (Fig. 5d, black curves) suggests that both MoD could be driven by one and the same feedback. Denoting γ≡γG(=γA), GAN+fb finally reads: $$ \left\{ \begin{aligned} \dot{G} & = \eta \, G \,\left(1-2\gamma(G+A+N)\right)\\ \dot{A} & = \eta \, (G - A)\, \gamma(G+A+N) \\ \dot{N} & = \eta \, \left[A + (G + A)\,\gamma(G+A+N) \right] \\ \end{aligned} \right. $$ $$ \gamma(G,A,N) = \frac{1}{2} \left[1 + \tanh \left(\frac{G+A+N - \kappa_{gan}}{s_{gan}} \right)\right] $$ The fitted function is reported in Fig. 5d (red curve), with κgan=6.9 and sgan=3.5. Here again, the MoD are now controlled by the evolution of the G/A/N cells and are no longer the result of a direct fitting. Using system (16) with (17), we recover the dynamics of the MoD and of the G/A/N populations (Fig. 5e and f). By introducing this feedback control, the dynamics would gain robustness against (reasonable) perturbations and converge to the same amount of neurons (three illustrations are given in Additional file 3: Figure S2). In the end, the GAN model appears quite relevant, as it makes it possible to account robustly for the whole process with only two parameters, κgan and sgan (in addition to η), and it matches the data very well.

Our question was to test whether the progression from proliferation to neurogenic divisions can be explained by a loss of proliferative capacity in an increasing proportion of progenitors. To this end, we first established a general restriction-free model with progenitors able to perform any kind of division (PN model). Fitting the evolution of its MoD (PP, PN, NN) from data published by Saade et al. [14], we found smooth MoD time-profiles that can account for the evolution of the P and N pools reported in [14]. We consider that this general model reflects Sox2 progenitor and HuC/D neuron immunostaining, together with the biomarkers which allow proliferative and neurogenic divisions to be distinguished [14]. We take it as a benchmark to constrain refined scenarios with heterogeneous progenitors. We note that its general structure is also compatible with a broad description of the progenitor/neuron evolution in the neocortex [17, 18]. It should hold as well for other neural tube zones, such as the dorsal area where CDC25B is expressed at the peak of neuronal production [10, 11]. We characterized the behavior of this model under the CDC25B GoF experiments carried out by some of us [11], and this gives support to the hypothesis that the action of this phosphatase could be to advance the MoD progression, acting there as a maturation factor. Next, we explored three model structures embedding a loss of proliferative capacity in progenitors, introducing two different progenitor populations with the structural constraint that one of them cannot perform proliferative divisions. For the three models compatible with this constraint, we have derived the corresponding system of evolution equations. One model (GGA) has been discarded because it could not structurally account for the observed evolutions.
For the two other models (GAN, GAA), we have established the correspondence between the evolution of their MoD and the evolution of the MoD observed in the benchmark PN model. This correspondence was used to calibrate their parameters and compute their predictions. Of these two models, only the GAN model appeared to be structurally compatible with the observed MoD and P/N evolutions at the same time. In this model, the MoD of G- and A-cells evolve at a common pace in the CTL condition, opening the possibility that both are under the control of the same regulators. CDC25B GoF accelerates them in the same way, while CDC25B ΔCDK GoF only delays the MoD of A-cells. We note that our modeling proposition displays an important difference with the model proposed by Saade et al. themselves [14] (see also [19]): we do not detect a strong switch of MoD at the population level. Their basic model incorporates an all-or-nothing switch at time t∗≃80 hpf, with only proliferative divisions (pp) before t∗ and only neurogenic divisions (pn or nn) after t∗. This is equivalent to a loss of proliferative capacity that would apply to all progenitors at once, at time t∗. Translated in terms of the GAN model, all G-cells would instantly become A-cells at time t∗, whatever their phase in the cell cycle. They next extend this model to allow smoother transitions, division asynchrony, an accelerating cell cycle and a de novo incorporation of new progenitors under the induction of Shh. Even with this smoother model, their fitting yields a sharp extinction of pp-divisions at 73 hpf (from 60% to 0% within one hour). It is difficult to determine how this finding is constrained by the initial choice in their basic model, but this predicted evolution of the MoD appears at odds with their experimental observations of the MoD, and it can predict a meaningful evolution of the P/N populations only thanks to the ad hoc additional source that compensates for the early and sharp extinction of proliferative pp-divisions. We observe that our model does not incorporate a source of progenitors, so the structures of the models are different. We also note that the fitting procedures were not the same. Saade et al. fit the 13 free parameters of their extended model using an error minimization algorithm with respect to the experimental data [14] (Extended Experimental Procedures — Mathematical Modeling). As we understand this procedure, they fit the MoD profiles and the source intensity so that the predicted dynamics of the P/N populations matched the observed evolution as closely as possible. We proceeded differently: we minimized the error between the modeled MoD evolution and the observed MoD evolution, and only then did we check how the predicted P/N evolutions match the observed ones. As a consequence of our procedure, the MoD profiles in the PN and in the GAN models are by construction as close as possible to the observed MoD, and we have no freely adjustable parameters. Importantly, both procedures have to set an initial condition (i.e. an absolute time 0 at which the initial pool of progenitors is fixed), and since proliferative processes are exponential, the evolutions of the P/N populations are highly sensitive to that choice. We can guess that a small change (by plus or minus two hours) of that "time 0" in the Saade et al. model would have a strong effect upon the required intensity of their additional source. Our first versions of the PN and GAN models are sensitive to the choice of "time 0" to the same extent.
This sensitivity to initial conditions and timing is due to the fact that these models consider the evolution of the MoD as decoupled from the evolution of the populations. The crucial point here is that the relative error on the experimental data is highest at early times, because there are few progenitors then, and the developmental stage is only determinable with an uncertainty of the same order (plus or minus two hours). To gain robustness against the indetermination of "time 0", we have incorporated a hypothetical feedback process so that the evolution of the MoD could be regulated by the state of the system at any time. In these second versions (PN+fb, GAN+fb), there is no longer any need to specify an absolute time scale for the evolution of the MoD, since it is paced by the evolution of the cell population. This opens new questions about the regulation of CDC25B by upstream signaling, since the maturation factor should itself be under the control of a regulator sensitive to the local amount of cells in the system (e.g. its local extension). Finally, we advocate that our GAN models (GAN or GAN+fb) do incorporate a switch mechanism, but it is specified at the cell level: the switch operates when a daughter of a G-cell loses its proliferative capacity and becomes an A-cell. Considering that an A-cell loses its proliferative capacity during the M-phase of its G-parent, it would only display its new division modes (AAN, ANN) at its next M-phase. At the population scale, this cell-based MoD switching would then require at least one cell-cycle length to become fully apparent. This is the order of time we observe in the GAN models, where the MoD progression at the population level happens over one cell-cycle length (12 hours). Under the hypothesis of asynchronous divisions, the smooth progression of the MoD in the GAN models at the population scale is then compatible with an abrupt signaling event at the cell scale. From a modeling standpoint (where modeling is used as a way to gain clarity in the face of intricacy [13]), the GAN models display several interesting features compared to the PN model. First, the CDC25B GoF effect is the same for both models: it hastens the MoD progression to neurogenic divisions. Secondly, the CDC25B ΔCDK GoF effect can be interpreted straightforwardly in the GAN model: the phosphatase unable to interact with its CDK substrate just delays the progression of the A-cells MoD (it maintains A-cells in self-renewing mode for a longer time). By contrast, the CDC25B ΔCDK GoF effect appears compound in the PN model, so it would call for a convoluted explanation of the differential effect upon advanced pp-divisions and delayed nn-divisions. Thirdly, the GAN model can be considered simpler to interpret from a mechanistic point of view, since both types of progenitors display the same monotone evolution of their MoD. Introducing feedback control to secure some robustness, we showed that the MoD could be under the control of the same signal accumulating monotonically over time and directly reflecting the system size. With this feedback control, GAN+fb can account for the whole dynamics with only three parameters: η, which basically represents the unit time of the dynamics, and the two parameters of the feedback control: κgan determines the critical size of the total population above which neurogenic divisions become dominant, and sgan determines how sharp the feedback is.
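A minimal sketch of this three-parameter GAN+fb dynamics (system (16) with the feedback (17)) is given below in R; κgan = 6.9 and sgan = 3.5 are the fitted values quoted above, while η, the integration span and the perturbed initial pools are assumptions made only for illustration.

eta      <- log(2) / 12                                            # assumed unit time of the dynamics
gamma_fb <- function(total) 0.5 * (1 + tanh((total - 6.9) / 3.5))  # feedback (17), fitted values

simulate_gan_fb <- function(G0, hours = 120, dt = 0.01) {          # explicit Euler integration of (16)
  G <- G0; A <- 0; N <- 0
  for (i in seq_len(round(hours / dt))) {
    g  <- gamma_fb(G + A + N)
    dG <- eta * (1 - 2 * g) * G
    dA <- eta * (G - A) * g
    dN <- eta * (A + (G + A) * g)
    G <- G + dt * dG; A <- A + dt * dA; N <- N + dt * dN
  }
  c(G = G, A = A, N = N)
}

## Perturbing the initial pool of G-cells: with the feedback, the final neuron
## counts stay close to each other (cf. Additional file 3), whereas without
## feedback they would scale in proportion to the initial pool.
simulate_gan_fb(G0 = 0.5)
simulate_gan_fb(G0 = 1)
simulate_gan_fb(G0 = 2)

The same scheme applies to PN+fb, using the two feedback functions of system (14) instead of the single γ.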
In contrast, the PN model would call for a specific explanation of the non-monotone evolution of pn-divisions, as well as an explanation of the complicated progression among the MoD (five parameters in PN+fb). Still, the lack of clear discrimination between the PN and GAN models is interesting, because it shows that the two biological hypotheses (one kind of progenitors able to perform the three kinds of division, versus two kinds of progenitors with a loss of proliferative capacity in one of them) can produce predictions compatible with both the MoD and the population evolutions. Since these measures are averages over the population, this calls for alternative experimental strategies to further support the plausibility of the GAN model. Actually, the two models yield very different predictions if we consider the distributions of progenitor/neuron contents issued from a single initial progenitor (distribution of progenitors/neurons within clones; see Methods "PN and GAN models predictions for clones contents." section for an illustration). So, the most appealing alternative would be to collect data at the cell scale, either by performing lineage tracing or by collecting data about clone contents.

Solving P(t) from Eq. 3

$$ \begin{aligned} P(t) &= P(0)\exp \left[ \eta \int_{0}^{t}\left(\alpha_{pp}(\tau) - \alpha_{nn}(\tau)\right) d\tau \right] \\ &= P(0)\exp \left[ \eta \int_{0}^{t} \left(\frac{1}{2} \left[1 - \tanh \left(\frac{\tau - \tau_{pp}}{\sigma_{pp}} \right)\right] - \frac{1}{2} \alpha_{nn,\infty} \left[1 + \tanh \left(\frac{\tau - \tau_{nn}}{\sigma_{nn}} \right)\right] \right) d\tau \right] \\ &= P(0)\exp \left[ \eta \left(\frac{1}{2} t - \frac{\sigma_{pp}}{2}\ln\left[\frac{\cosh((t-\tau_{pp})/\sigma_{pp})}{\cosh(-\tau_{pp}/\sigma_{pp})}\right] - \frac{\alpha_{nn,\infty}}{2} t - \frac{\alpha_{nn,\infty}\sigma_{nn}}{2}\ln\left[\frac{\cosh((t-\tau_{nn})/\sigma_{nn})}{\cosh(-\tau_{nn}/\sigma_{nn})} \right] \right) \right] \end{aligned} $$ hence: $$ \frac{P(t)}{P(0)} = \exp\left[ \frac{\eta}{2} \left( \left[ t - \sigma_{pp}\ln\left(\frac{\cosh((t-\tau_{pp})/\sigma_{pp})}{\cosh(-\tau_{pp}/\sigma_{pp})}\right) \right] - \alpha_{nn,\infty} \left[ t + \sigma_{nn}\ln\left(\frac{\cosh((t-\tau_{nn})/\sigma_{nn})}{\cosh(-\tau_{nn}/\sigma_{nn})}\right) \right] \right) \right] $$

GAN calibration

Estimating γG(t) from αGGG(t)

Under the GAN model, we have at any time the structural correspondence between the two models: $$ \alpha_{GGG}(t)G(t) = \alpha_{pp}(t)P(t) $$ Setting G(0)=1, we have an explicit solution for G(t) depending on αGGG(t) only: $$ G(t) = G(0)\exp\left[\eta \int_{0}^{t} (2\alpha_{GGG}(\tau)-1)d\tau\right] $$ so we have: $$ \alpha_{GGG}(t)G(0)\exp\left[\eta \int_{0}^{t} (2\alpha_{GGG}(\tau)-1)d\tau\right] = \alpha_{pp}(t)P(t) $$ We seek a direct expression for αGGG(t), even though αGGG(t) appears twice, once inside an integral term.
The left-hand-side (lhs) term can be rewritten: $$ \begin{aligned} & \alpha_{GGG}(t)G(0)\exp\left[\eta \int_{0}^{t} (2\alpha_{GGG}(\tau)-1)d\tau\right] \\ = &\alpha_{GGG}(t)G(0)\exp\left[2 \eta \int_{0}^{t} \alpha_{GGG}(\tau)d\tau \right] \exp\left[ -\eta t\right] \end{aligned} $$ Plugging into Eq. 22 and grouping the αGGG terms on the left side, we have: $$ \alpha_{GGG}(t)\exp\!\left[2 \eta \int_{0}^{t} \alpha_{GGG}(\tau)d\tau \!\right] \,=\, \frac{1}{G(0)}\alpha_{pp}(t)P(t)\exp(\eta t) $$ The lhs can be read as a time-derivative: $$ {\begin{aligned} \frac{d}{dt}\!\left(\frac{1}{2\eta}\exp\!\left[2 \eta \int_{0}^{t} \!\alpha_{GGG}(\tau)d\tau \!\right]\right) \,=\, \frac{1}{G(0)}\alpha_{pp}(t)P(t)\exp(\eta t) \end{aligned}} $$ Integrating both sides over [0..t]: $$ {\begin{aligned} \int_{0}^{t}dt' \frac{d}{dt'}\left(\frac{1}{2\eta}\exp\left[2 \eta \int_{0}^{t'} \alpha_{GGG}(\tau)d\tau \right]\right) = \int_{0}^{t}d\tau\frac{1}{G(0)}\alpha_{pp}(\tau)P(\tau)\exp(\eta \tau) \end{aligned}} $$ Solving the lhs integral: $$ {\begin{aligned} \frac{1}{2\eta}\exp\left(2 \eta \int_{0}^{t} \alpha_{GGG}(\tau)d\tau \right) -\frac{1}{2\eta} = \int_{0}^{t}d\tau\frac{1}{G(0)}\alpha_{pp}(\tau)P(\tau)\exp(\eta \tau) \end{aligned}} $$ Rearranging terms and taking the ln of both sides: $$ {\begin{aligned} \int_{0}^{t} \alpha_{GGG}(\tau)d\tau \,=\, \frac{1}{2\eta} \ln\left(1+ \frac{2\eta}{G(0)}\int_{0}^{t}d\tau\alpha_{pp}(\tau)P(\tau)\exp(\eta \tau) \right) \end{aligned}} $$ Taking the time derivative of both sides: $$ \begin{aligned} \alpha_{GGG}(t) & = \frac{d}{dt}\left(\frac{1}{2\eta} \ln\left(1+ \frac{2\eta}{G(0)}\int_{0}^{t}d\tau\alpha_{pp}(\tau)P(\tau)\exp(\eta \tau) \right) \right)\\ & = \frac{1}{2\eta} \frac{d}{dt}\ln\left(1+ \frac{2\eta}{G(0)}\int_{0}^{t}d\tau\alpha_{pp}(\tau)P(\tau)\exp(\eta \tau) \right) \end{aligned} $$ Solving the derivative in the rhs: $$ \alpha_{GGG}(t) = \frac{1}{2\eta} \frac{ \frac{2\eta}{G(0)} \alpha_{pp}(t)P(t)\exp(\eta t) }{1+ \frac{2\eta}{G(0)}\int_{0}^{t}d\tau\alpha_{pp}(\tau)P(\tau)\exp(\eta \tau)} $$ which simplifies to: $$ \alpha_{GGG}(t) = \frac{ \alpha_{pp}(t)P(t)\exp(\eta t) }{G(0)+ 2\eta\int_{0}^{t}d\tau\alpha_{pp}(\tau)P(\tau)\exp(\eta \tau)} $$ so we can estimate γG(t)=1−αGGG(t) from: $$ {\gamma}_{G}(t) = 1 - \frac{ {\alpha}_{pp}(t){P}(t)\exp(\eta t) }{G(0)+ 2\eta\int_{0}^{t}d\tau{\alpha}_{pp}(\tau){P}(\tau)\exp(\eta \tau)} $$ using the evolution of αpp(t) (Eq. 4) and P(t) (Eq. 6) obtained in the three experimental conditions. The results are given in Additional file 4: Figure S3 (green curves). We note that calibrating γG(t) by this method only yields a raw, unparameterized temporal series. The obtained results, however, strongly suggest a hyperbolic tangent shape (tanh) as an ansatz for this evolution, following: $$ \gamma'_{G}(t, \tau_{G}, \sigma_{G}) = \frac{1}{2} \left[1 + \tanh \left(\frac{t - \tau_{G}}{\sigma_{G}} \right)\right] $$ To parametrize γ′G, we seek the pair \((\tau ^{*}_{G}, \sigma ^{*}_{G})\) that minimizes the error between the evolution predicted by system (7) and the observed evolutions in the PN model (1). Using Eq. 22, we then seek to minimize the error function: $$ E(\tau_{G},\sigma_{G}) = \int_{0}^{T}dt \left( \hat{\alpha}_{pp}(t)\hat{P}(t) - \left(1-\gamma^{\prime}_{G}(t, \tau_{G}, \sigma_{G})\right)G(0)\exp\left[\eta \int_{0}^{t} (1-2\gamma^{\prime}_{G}(\tau, \tau_{G}, \sigma_{G}))d\tau\right] \right)^{2} $$ We used Nelder-Mead optimization (function 'optim' of the R software) on time-discretized series with Δt=0.01 hour and T=96 h. The tanh ansatz appears to match the analytically derived time series perfectly (Additional file 4: Figure S3, black curves), so we parametrize γG with the corresponding parameters \(\tau ^{*}_{G}\) and \(\sigma ^{*}_{G}\): $$ \gamma_{G}(t) = \frac{1}{2} \left[1 + \tanh \left(\frac{t - \tau^{*}_{G}}{\sigma^{*}_{G}} \right)\right] $$ The fitted values for \(\left (\tau ^{*}_{G}, \sigma ^{*}_{G}\right)\) under the three experimental conditions are given in Table 2.

Estimating γA

For the evolution of the A population, the structural correspondence is: $$ \begin{aligned} \gamma_{A}(t) \, \frac{A(t)}{G(t) + A(t)} = \alpha_{nn}(t) \end{aligned} $$ Here, A(t) is governed by: $$ \begin{aligned} \dot{A}(t) &= \eta \, \left[ \gamma_{G}(t) G(t) - \gamma_{A}(t) A(t)\right] \end{aligned} $$ so we cannot obtain an explicit solution for A(t) as a function of γA(t). Hence, we proceed with the ansatz method, testing a tanh shape for γA(t), following: $$ \gamma_{A} (t,\tau_{A},\sigma_{A}) = \frac{1}{2} \left[1 + \tanh \left(\frac{t - \tau_{A}}{\sigma_{A}} \right)\right] $$ Since γG(t) and G(t) are known from the section above, we can then use Eq. 37 to numerically solve the evolution of the A population, once γA(t,τA,σA) is given. We denote this numerical solution A(t,τA,σA). We then seek the pair \((\tau ^{*}_{A},\sigma ^{*}_{A})\) that minimizes the squared error: $$ {\begin{aligned} E(\tau_{A},\sigma_{A}) \,=\, \int_{0}^{T}dt \ \left({\alpha}_{nn}(t) - \gamma_{A}(t,\tau_{A},\sigma_{A})\frac{A(t,\tau_{A},\sigma_{A})}{G(t) + A(t,\tau_{A},\sigma_{A})} \right)^{2} \end{aligned}} $$ using Nelder-Mead optimization over time-discretized series (with Δt=0.01 hour, T=96 h). The fitted values for \((\tau ^{*}_{A}, \sigma ^{*}_{A})\) under the three experimental conditions are given in Table 2. The tanh ansatz appears to be highly relevant, since the predicted evolutions of the P, N populations are in good accordance with the observed ones (Fig. 2).

GAA calibration

Under the GAA model, we have the structural correspondence between the two models: $$ \begin{aligned} \alpha_{pp}(t) & = (1-\gamma_{G}(t)) \, \frac{G(t)}{G(t) + A(t)} + \gamma_{G}(t)\, \frac{G(t)}{G(t) + A(t)}\\ & = \frac{G(t)}{G(t) + A(t)} \end{aligned} $$ Using P(t)=G(t)+A(t), we obtain: $$ \begin{aligned} G(t) & = \alpha_{pp}(t) \left(G(t) + A(t) \right) = \alpha_{pp}(t) P(t) \end{aligned} $$ Setting G(0)=1, we have an explicit solution for G(t) depending on αGGG(t) only: $$ \begin{aligned} G(t) = G(0)\exp\left[ \eta\int_{0}^{t}(2\alpha_{GGG}(\tau)-1) d\tau \right] \end{aligned} $$ so that, using Eq. 41: $$ G(0)\exp\left[\eta \int_{0}^{t} (2\alpha_{GGG}(\tau)-1)d\tau\right] = \alpha_{pp}(t)P(t) $$ We seek a direct expression for αGGG(t). From Eq. 43, we have: $$ \int_{0}^{t} \alpha_{GGG}(\tau)d\tau = \frac{1}{2\eta}\ln\left[ \frac{1}{G(0)} \alpha_{pp}(t)P(t)\exp(\eta t) \right] $$ Taking the time derivative of both sides, we obtain: $$ \alpha_{GGG}(t) = \frac{1}{2\eta}\left[ \frac{\dot{\alpha}_{pp}(t)P(t)+\alpha_{pp}(t)\dot{P}(t)}{\alpha_{pp}(t)P(t)} + \eta \right] $$ so we can estimate γG(t)=1−αGGG(t) from this expression, using the evolution of αpp(t) (Eq. 4) and P(t) (Eq. 6) obtained in the three experimental conditions, and numerical derivatives for \(\dot {\alpha }_{pp}(t)\) and \(\dot {P}(t)\).
The results are given in Additional file 5: Figure S4 (green curves). It appeared that the estimated functions γG(t) violate the constraint of belonging to the interval [0..1]. This is a sign that this model cannot simultaneously be adjusted to the MoD of the PN model and predict correct evolutions of the P, N populations. Notwithstanding, we proceeded with the ansatz method in order to examine which γG(t) would yield correct predictions for the P, N populations. Setting a tanh shape for it, $$ \gamma_{G}(t, \tau_{G}, \sigma_{G}) = \frac{1}{2} \left[1 + \tanh \left(\frac{t - \tau_{G}}{\sigma_{G}} \right)\right] $$ we then seek the pair \(\left (\tau ^{*}_{G},\sigma ^{*}_{G}\right)\) that minimizes the error of prediction upon \(\hat {\alpha }_{pp}(t) \hat {P}(t)\), given by: $$ {\begin{aligned} E(\tau_{G},\sigma_{G}) = \int_{0}^{T}dt \left(\hat{\alpha}_{pp}(t)\hat{P}(t) - \exp \left[\int_{0}^{t}d\tau \ \eta \left(1-2 \gamma_{G}(\tau, \tau_{G}, \sigma_{G}) \right) \right] \right)^{2} \end{aligned}} $$ We used Nelder-Mead optimization over time-discretized series (with Δt=0.01 hour, T=96 h). To estimate γA for the GAA model, we proceeded in the same way as for the GAN model, except that we used: $$ \begin{aligned} \dot{A}(t) &= \eta \, \left[ 2 \gamma_{G}(t) G(t) - \gamma_{A}(t) A(t)\right] \end{aligned} $$ The fitted values for \(\left (\tau ^{*}_{A}, \sigma ^{*}_{A}\right)\) under the three experimental conditions are given in Table 3. The tanh ansatz can then be adjusted to produce predicted evolutions of the P, N populations in accordance with the observed ones (Fig. 3).

PN and GAN models predictions for clones contents.

We illustrate here that even if the PN and GAN models yield the same predictions regarding the averaged populations of progenitors and neurons they produce, they nevertheless differ in their predictions for the distributions of progenitor/neuron contents issued from a single initial progenitor (distribution of progenitors/neurons within clones). Due to the stochastic nature of the MoD embedded in the model, each initial progenitor should indeed produce a stochastic tree of descent. Clone contents are then defined here by the pairs (number of progenitors, number of neurons) obtained after a number C of cell cycles. For instance, if an initial P-cell undergoes a first division of pn MoD, it will produce one neuron of generation 1 and one progenitor of generation 1. If the latter undergoes an nn-division, it will produce two neurons of generation 2, so in the end the content of the clone after two cell cycles will be (0,3). Another initial P-cell could undergo a first pp-division producing two progenitors of generation 1; if one of them undergoes a pp-division and the other one undergoes an nn-division, this will end in a (2,2) clone content at generation 2. We can then compute the statistical distribution of these contents by repeatedly sampling the stochastic production of trees of descent. To build an illustration of the above process in a simple manner, we consider here MoD that are fixed in time, and we compute the distribution of clone contents produced by G-cells after two cell cycles (we used 10⁶ stochastic samples under each model). To make the predictions comparable, we fix the MoD under both models, including feedback control, at the values they have at the same time point. We chose that time point as 68 hpf, i.e. the time at which the MoD of G-cells becomes predominantly neurogenic in the GAN model (the conclusions are independent of that choice).
Hence, the MoD values we used are: αpp=0.4975 and αnn=0.135 in the PN model, and γG=γA=0.446 in the GAN model. Importantly for the comparison, both models expectedly predict similar average numbers of P/N-cells after two generations (P2=1.86 and N2=1.51 with the PN model, and P2=1.96 and N2=1.59 with the GAN model; in both cases, the observed proportion of progenitors is 55.2% at generation 2). The expected clone contents are reported in Table 4 for the PN model and in Table 5 for the GAN model (in these tables, empty cells are unreachable contents). The different clone contents would not appear with the same probabilities under the two scenarios. For instance, clones made of (P,N)=(0,3) should appear in 5% of clones under the PN model whereas they should appear in 20% of clones under the GAN model. Even more discriminative, the content (P,N)=(2,1), which is the most expected under the PN model, should not appear at all under the GAN model (at generation 2 of a G-cell). Further theoretical work is needed to build completely usable predictions to be compared with experimental data, taking into account asynchronous divisions, time mixing of the G/A populations in the GAN model, and MoD evolving with time or by feedback control.

Table 4 PN predictions for the expected fraction of each clone content (P,N) at generation 2

Table 5 GAN predictions for clone contents at generation 2

Abbreviations: BMP: Bone morphogenetic protein; CDC25B: Cell division cycle 25 B phosphatase; CDK: Cyclin-dependent kinases; CNS: Central nervous system; CTL: Control; MoD: Mode of division; NSC: Neural stem cells; Shh: Sonic hedgehog

MA warmly thanks Leah Edelstein-Keshet for the 3-month stay in her lab at the onset of this project. Work in FP's laboratory is supported by the Centre National de la Recherche Scientifique, Université P. Sabatier, Ministère de L'Enseignement Supérieur et de la Recherche (MESR), the Fondation pour la Recherche sur le Cancer (ARC; PJA 20131200138) and the Fédération pour la Recherche sur le Cerveau (FRC; CBD_14-V5-14_FRC). Work in JG's laboratory is supported by the Centre National de la Recherche Scientifique, Université P. Sabatier, Ministère de L'Enseignement Supérieur et de la Recherche (MESR), and the Agence Nationale de la Recherche (ANR-15-CE13-0010-01). Manon Azaïs is a recipient of MESR studentships. Angie Molina is a recipient of IDEX UNITI and Fondation ARC. The funding entities had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The datasets supporting the conclusions of this article are included within the article (and its additional files). MA, EA, JG, AM, FP and AV contributed to the conceptualization. MA, SB, JG, JMT and AV contributed to the formal developments. MA, EA, JG, AM and FP wrote the paper. All authors read and approved the final manuscript. The authors declare that no competing interests exist. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Additional file 1 All data and codes used to generate the figures are contained in the R script DataAndCode.R. (R 72 kb)

Additional file 2 Simplified GAN model. Same legend as Fig. 2. The simplified version of the GAN model is one in which an A-cell only performs A→(N,N) divisions, so γA(t) is forced to the value 1 at any time. This simplified version yields predictions which are practically identical to the GAN predictions, except for a slight difference in the early rise of nn-divisions, and an incorrect prediction for the MoD under the GoF of mutated CDC25B experiment (i). (PDF 22 kb)

Additional file 3 In all cases, the models with feedback control converge to about the same final amount of neurons. (PDF 21 kb)

Additional file 4 Analytical and least-square fitted γG(t) for the GAN model. The predicted evolutions of γG(t) obtained by analytical inversion are reported in green. The fitted tanh ansatz are reported in black and overlap perfectly. (PDF 199 kb)

Additional file 5 Analytical and least-square fitted γG(t) for the GAA model. Same conventions as in Additional file 4: Figure S3. In the GAA model, the analytical inversion of γG(t) yields an evolution that violates the constraint of belonging to the interval [0..1] (green curves). The fitted tanh ansatz are reported in black. (PDF 187 kb)

Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse; CNRS, UPS, Toulouse, France

Centre de Biologie du Développement (CBD), Centre de Biologie Intégrative (CBI), Université de Toulouse; CNRS, UPS, Toulouse, France

LaPlaCE, Université de Toulouse; CNRS, UPS, Toulouse, France

Doe CQ. Temporal patterning in the Drosophila CNS. Annu Rev Cell Dev Biol. 2017; 33:219–40. https://doi.org/10.1146/annurev-cellbio-111315-125210.

Kang KH, Reichert H. Control of neural stem cell self-renewal and differentiation in Drosophila. Cell Tissue Res. 2015; 359(1):33–45. https://doi.org/10.1007/s00441-014-1914-9.

Syed MH, Mark B, Doe CQ. Playing well with others: extrinsic cues regulate neural progenitor temporal identity to generate neuronal diversity. Trends Genet. 2017; 33(12):933–42. https://doi.org/10.1016/j.tig.2017.08.005.

Molyneaux BJ, Arlotta P, Menezes JRL, Macklis JD. Neuronal subtype specification in the cerebral cortex. Nat Rev Neurosci. 2007; 8(6):427–37. https://doi.org/10.1038/nrn2151.

Taverna E, Götz M, Huttner WB. The cell biology of neurogenesis: toward an understanding of the development and evolution of the neocortex. Annu Rev Cell Dev Biol. 2014; 30:465–502. https://doi.org/10.1146/annurev-cellbio-101011-155801.

Kicheva A, Bollenbach T, Ribeiro A, Valle HP, Lovell-Badge R, Episkopou V, Briscoe J. Coordination of progenitor specification and growth in mouse and chick spinal cord. Science. 2014; 345(6204):1254927. https://doi.org/10.1126/science.1254927.

Kicheva A, Briscoe J. Developmental pattern formation in phases. Trends Cell Biol. 2015; 25(10):579–91. https://doi.org/10.1016/j.tcb.2015.07.006.

Zagorski M, Tabata Y, Brandenberg N, Lutolf MP, Tkačik G, Bollenbach T, Briscoe J, Kicheva A. Decoding of position in the developing neural tube from antiparallel morphogen gradients. Science. 2017; 356(6345):1379–83. https://doi.org/10.1126/science.aam5887.

Agius E, Bel-Vialar S, Bonnet F, Pituello F. Cell cycle and cell fate in the developing nervous system: the role of CDC25B phosphatase. Cell Tissue Res. 2015; 359(1):201–13. https://doi.org/10.1007/s00441-014-1998-2.
Peco E, Escude T, Agius E, Sabado V, Medevielle F, Ducommun B, Pituello F. The CDC25B phosphatase shortens the G2 phase of neural progenitors and promotes efficient neuron production. Development. 2012; 139(6):1095–104. https://doi.org/10.1242/dev.068569.

Bonnet F, Molina A, Roussat M, Azais M, Vialar S, Gautrais J, Pituello F, Agius E. Neurogenic decisions require a cell cycle independent function of the CDC25B phosphatase. eLife. 2018; 7. https://doi.org/10.7554/eLife.32937.

Danesin C, Soula C. Moving the Shh source over time: what impact on neural cell diversification in the developing spinal cord? J Dev Biol. 2017; 5(2):4. https://doi.org/10.3390/jdb5020004.

Lander AD, Gokoffski KK, Wan FYM, Nie Q, Calof AL. Cell lineages and the logic of proliferative control. PLoS Biol. 2009; 7(1):15. https://doi.org/10.1371/journal.pbio.1000015.

Saade M, Gutiérrez-Vallejo I, LeDréau G, Rabadán MA, Miguez DG, Buceta J, Martí E. Sonic hedgehog signaling switches the mode of division in the developing nervous system. Cell Rep. 2013; 4(3):492–503. https://doi.org/10.1016/j.celrep.2013.06.038.

Molina A, Pituello F. Playing with the cell cycle to build the spinal cord. Dev Biol. 2016; 432(1):14–23. https://doi.org/10.1016/j.ydbio.2016.12.022.

Azaïs M. Embryogenèse de la moelle épinière: de la dynamique collective observable à une proposition de modèle comportemental à l'échelle cellulaire. PhD thesis (in French), Université de Toulouse. 2018. https://doi.org/10.13140/RG.2.2.25828.83848.

Takahashi T, Nowakowski RS, Caviness VS. The leaving or Q fraction of the murine cerebral proliferative epithelium: a general model of neocortical neuronogenesis. J Neurosci. 1996; 16(19):6183–96. https://doi.org/10.1523/JNEUROSCI.16-19-06183.1996.

Nowakowski RS, Caviness VS, Takahashi T, Hayes NL. Population dynamics during cell proliferation and neuronogenesis in the developing murine neocortex. Results Probl Cell Differ. 2002; 39:1–25. https://doi.org/10.1007/978-3-540-46006-0_1.

Míguez DG. A branching process to characterize the dynamics of stem cell differentiation. Sci Rep. 2015; 5(1):13265. https://doi.org/10.1038/srep13265.
In the new model, every dipole in the MWA tile (4 × 4 bow-tie dipoles) is simulated separately, taking into account all mutual coupling, ground screen, and soil effects, and therefore accounts for the different properties of the individual dipoles within a tile. We have applied the FEE beam model to GLEAM observations at 200–231 MHz and used false Stokes parameter leakage as a metric to compare the models. We have determined that the FEE model reduced the magnitude and declination-dependent behaviour of false polarisation in Stokes Q and V while retaining low levels of false polarisation in Stokes U. Spectral-Line Observations Using a Phased Array Feed on the Parkes Telescope T.N. Reynolds, L. Staveley-Smith, J. Rhee, T. Westmeier, A. P. Chippendale, X. Deng, R. D. Ekers, M. Kramer We present first results from pilot observations using a phased array feed (PAF) mounted on the Parkes 64-m radio telescope. The observations presented here cover a frequency range from 1 150 to 1 480 MHz and are used to show the ability of PAFs to suppress standing wave problems by a factor of ~10, which afflict normal feeds. We also compare our results with previous HIPASS observations and with previous H i images of the Large Magellanic Cloud. Drift scan observations of the GAMA G23 field resulted in direct H i detections at z = 0.0043 and z = 0.0055 of HIPASS galaxies J2242-30 and J2309-30. Our new measurements generally agree with archival data in spectral shape and flux density, with small differences being due to differing beam patterns. We also detect signal in the stacked H i data of 1 094 individually undetected galaxies in the GAMA G23 field in the redshift range 0.05 ⩽ z ⩽ 0.075. Finally, we use the low standing wave ripple and wide bandwidth of the PAF to set a 3σ upper limit to any positronium recombination line emission from the Galactic Centre of <0.09 K, corresponding to a recombination rate of <3.0 × 1045 s−1. A High-Resolution Foreground Model for the MWA EoR1 Field: Model and Implications for EoR Power Spectrum Analysis P. Procopio, R. B. Wayth, J. Line, C. M. Trott, H. T. Intema, D. A. Mitchell, B. Pindor, J. Riding, S. J. Tingay, M. E. Bell, J. R. Callingham, K. S. Dwarakanath, Bi-Qing For, B. M. Gaensler, P. J. Hancock, L. Hindson, N. Hurley-Walker, M. Johnston-Hollitt, A. D. Kapińska, E. Lenc, B. McKinley, J. Morgan, A. Offringa, L. Staveley-Smith, Chen Wu, Q. Zheng Published online by Cambridge University Press: 10 August 2017, e033 The current generation of experiments aiming to detect the neutral hydrogen signal from the Epoch of Reionisation (EoR) is likely to be limited by systematic effects associated with removing foreground sources from target fields. In this paper, we develop a model for the compact foreground sources in one of the target fields of the MWA's EoR key science experiment: the 'EoR1' field. The model is based on both the MWA's GLEAM survey and GMRT 150 MHz data from the TGSS survey, the latter providing higher angular resolution and better astrometric accuracy for compact sources than is available from the MWA alone. The model contains 5 049 sources, some of which have complicated morphology in MWA data, Fornax A being the most complex. The higher resolution data show that 13% of sources that appear point-like to the MWA have complicated morphology such as double and quad structure, with a typical separation of 33 arcsec. 
We derive an analytic expression for the error introduced into the EoR two-dimensional power spectrum due to peeling close double sources as single point sources and show that for the measured source properties, the error in the power spectrum is confined to high k⊥ modes that do not affect the overall result for the large-scale cosmological signal of interest. The brightest 10 mis-modelled sources in the field contribute 90% of the power bias in the data, suggesting that it is most critical to improve the models of the brightest sources. With this hybrid model, we reprocess data from the EoR1 field and show a maximum of 8% improved calibration accuracy and a factor of two reduction in residual power in k-space from peeling these sources. Implications for future EoR experiments including the SKA are discussed in relation to the improvements obtained. Low-Frequency Spectral Energy Distributions of Radio Pulsars Detected with the Murchison Widefield Array Tara Murphy, David L. Kaplan, Martin E. Bell, J. R. Callingham, Steve Croft, Simon Johnston, Dougal Dobie, Andrew Zic, Jake Hughes, Christene Lynch, Paul Hancock, Natasha Hurley-Walker, Emil Lenc, K. S. Dwarakanath, B.-Q. For, B. M. Gaensler, L. Hindson, M. Johnston-Hollitt, A. D. Kapińska, B. McKinley, J. Morgan, A. R. Offringa, P. Procopio, L. Staveley-Smith, R. Wayth, C. Wu, Q. Zheng We present low-frequency spectral energy distributions of 60 known radio pulsars observed with the Murchison Widefield Array telescope. We searched the GaLactic and Extragalactic All-sky Murchison Widefield Array survey images for 200-MHz continuum radio emission at the position of all pulsars in the Australia Telescope National Facility (ATNF) pulsar catalogue. For the 60 confirmed detections, we have measured flux densities in 20 × 8 MHz bands between 72 and 231 MHz. We compare our results to existing measurements and show that the Murchison Widefield Array flux densities are in good agreement. A Southern-Sky Total Intensity Source Catalogue at 2.3 GHz from S-Band Polarisation All-Sky Survey Data B. W. Meyers, N. Hurley-Walker, P. J. Hancock, T. M. O. Franzen, E. Carretti, L. Staveley-Smith, B. M. Gaensler, M. Haverkorn, S. Poppi The S-band Polarisation All-Sky Survey has observed the entire southern sky using the 64-m Parkes radio telescope at 2.3 GHz with an effective bandwidth of 184 MHz. The surveyed sky area covers all declinations δ ⩽ 0°. To analyse compact sources, the survey data have been re-processed to produce a set of 107 Stokes I maps with 10.75 arcmin resolution and the large scale emission contribution filtered out. In this paper, we use these Stokes I images to create a total intensity southern-sky extragalactic source catalogue at 2.3 GHz. The source catalogue contains 23 389 sources and covers a sky area of 16 600 deg2, excluding the Galactic plane for latitudes |b| < 10°. Approximately, 8% of catalogued sources are resolved. S-band Polarisation All-Sky Survey source positions are typically accurate to within 35 arcsec. At a flux density of 225 mJy, the S-band Polarisation All-Sky Survey source catalogue is more than 95% complete, and ~ 94% of S-band Polarisation All-Sky Survey sources brighter than 500 mJy beam−1 have a counterpart at lower frequencies. The Radio Remnant of Supernova 1987A − A Broader View G. Zanardo, L. Staveley-Smith, C. -Y. Ng, R. Indebetouw, M. Matsuura, B. M. Gaensler, A. K. 
Tzioumis Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S331 / February 2017 Published online by Cambridge University Press: 17 October 2017, pp. 274-283 Supernova remnants (SNRs) are powerful particle accelerators. As a supernova (SN) blast wave propagates through the circumstellar medium (CSM), electrons and protons scatter across the shock and gain energy by entrapment in the magnetic field. The accelerated particles generate further magnetic field fluctuations and local amplification, leading to cosmic ray production. The wealth of data from Supernova 1987A is providing a template of the SN-CSM interaction, and an important guide to the radio detection and identification of core-collapse SNe based on their spectral properties. Thirty years after the explosion, radio observations of SNR 1987A span from 70 MHz to 700 GHz. We review extensive observing campaigns with the Australia Telescope Compact Array (ATCA) and the Atacama Large Millimeter/submillimeter Array (ALMA), and follow-ups with other radio telescopes. Observations across the radio spectrum indicate rapid changes in the remnant morphology, while current ATCA and ALMA observations show that the SNR has entered a new evolutionary phase. ALMA observations of Molecules in Supernova 1987A M. Matsuura, R. Indebetouw, S. Woosley, V. Bujarrabal, F. J. Abellán, R. McCray, J. Kamenetzky, C. Fransson, M. J. Barlow, H. L. Gomez, P. Cigan, I De Looze, J. Spyromilio, L. Staveley-Smith, G. Zanardo, P. Roche, J. Larsson, S. Viti, J. Th. van Loon, J. C. Wheeler, M. Baes, R. Chevalier, P. Lundqvist, J. M. Marcaide, E. Dwek, M. Meixner, C.-Y. Ng, G. Sonneborn, J. Yates Supernova (SN) 1987A has provided a unique opportunity to study how SN ejecta evolve in 30 years time scale. We report our ALMA spectral observations of SN 1987A, taken in 2014, 2015 and 2016, with detections of CO, 28SiO, HCO+ and SO, with weaker lines of 29SiO. We find a dip in the SiO line profiles, suggesting that the ejecta morphology is likely elongated. The difference of the CO and SiO line profiles is consistent with hydrodynamic simulations, which show that Rayleigh-Taylor instabilities causes mixing of gas, with heavier elements much more disturbed, making more elongated structure. Using 28SiO and its isotopologues, Si isotope ratios were estimated for the first time in SN 1987A. The estimated ratios appear to be consistent with theoretical predictions of inefficient formation of neutron rich atoms at lower metallicity, such as observed in the Large Magellanic Cloud (about half a solar metallicity). The deduced large HCO+ mass and small SiS mass, which are inconsistent to the predictions of chemical model, might be explained by some mixing of elements immediately after the explosion. The mixing might have made some hydrogen from the envelope to sink into carbon and oxygen-rich zone during early days after the explosion, enabling the formation of a substantial mass of HCO+. Oxygen atoms may penetrate into silicon and sulphur zone, suppressing formation of SiS. Our ALMA observations open up a new window to investigate chemistry, dynamics and explosive-nucleosynthesis in supernovae. Ionospheric Modelling using GPS to Calibrate the MWA. II: Regional Ionospheric Modelling using GPS and GLONASS to Estimate Ionospheric Gradients B. S. Arora, J. Morgan, S. M. Ord, S. J. Tingay, M. Bell, J. R. Callingham, K. S. Dwarakanath, B.-Q. For, P. Hancock, L. Hindson, N. Hurley-Walker, M. Johnston-Hollitt, A. D. Kapińska, E. Lenc, B. McKinley, A. R. 
Offringa, P. Procopio, L. Staveley-Smith, R. B. Wayth, C. Wu, Q. Zheng Published online by Cambridge University Press: 13 July 2016, e031 We estimate spatial gradients in the ionosphere using the Global Positioning System and GLONASS (Russian global navigation system) observations, utilising data from multiple Global Positioning System stations in the vicinity of Murchison Radio-astronomy Observatory. In previous work, the ionosphere was characterised using a single-station to model the ionosphere as a single layer of fixed height and this was compared with ionospheric data derived from radio astronomy observations obtained from the Murchison Widefield Array. Having made improvements to our data quality (via cycle slip detection and repair) and incorporating data from the GLONASS system, we now present a multi-station approach. These two developments significantly improve our modelling of the ionosphere. We also explore the effects of a variable-height model. We conclude that modelling the small-scale features in the ionosphere that have been observed with the MWA will require a much denser network of Global Navigation Satellite System stations than is currently available at the Murchison Radio-astronomy Observatory. A Large-Scale, Low-Frequency Murchison Widefield Array Survey of Galactic H ii Regions between 260 < l < 340 L. Hindson, M. Johnston-Hollitt, N. Hurley-Walker, J. R. Callingham, H. Su, J. Morgan, M. Bell, G. Bernardi, J. D. Bowman, F. Briggs, R. J. Cappallo, A. A. Deshpande, K. S. Dwarakanath, B.-Q For, B. M. Gaensler, L. J. Greenhill, P. Hancock, B. J. Hazelton, A. D. Kapińska, D. L. Kaplan, E. Lenc, C. J. Lonsdale, B. Mckinley, S. R. McWhirter, D. A. Mitchell, M. F. Morales, E. Morgan, D. Oberoi, A. Offringa, S. M. Ord, P. Procopio, T. Prabu, N. Udaya Shankar, K. S. Srivani, L. Staveley-Smith, R. Subrahmanyan, S. J. Tingay, R. B. Wayth, R. L. Webster, A. Williams, C. L. Williams, C. Wu, Q. Zheng Published online by Cambridge University Press: 17 May 2016, e020 We have compiled a catalogue of H ii regions detected with the Murchison Widefield Array between 72 and 231 MHz. The multiple frequency bands provided by the Murchison Widefield Array allow us identify the characteristic spectrum generated by the thermal Bremsstrahlung process in H ii regions. We detect 306 H ii regions between 260° < l < 340° and report on the positions, sizes, peak, integrated flux density, and spectral indices of these H ii regions. By identifying the point at which H ii regions transition from the optically thin to thick regime, we derive the physical properties including the electron density, ionised gas mass, and ionising photon flux, towards 61 H ii regions. This catalogue of H ii regions represents the most extensive and uniform low frequency survey of H ii regions in the Galaxy to date. HI Supergiant Shells in the Large Magellanic Cloud S. Kim, L. Staveley-Smith, R. J. Sault, M. J. Kesteven, D. McConnell, M. A. Dopita, M. Bessell Journal: Publications of the Astronomical Society of Australia / Volume 15 / Issue 1 / 1998 The recently completed HI mosaic survey of the Large Magellanic Cloud (Kim et al. 1997) reveals complex structure in the interstellar medium, including filaments, arcs, holes and shells. We have catalogued giant and supergiant HI shells and searched for correlations with Hα emission, using a new image taken with a camera lens mounted on the 16-inch telescope at Siding Spring Observatory.
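Several of the abstracts above quote spectral indices defined through $S \propto \nu^{\alpha}$. A two-point estimate of $\alpha$ from flux densities measured at two frequencies is just a ratio of logarithms; the following minimal Python sketch uses made-up flux values rather than numbers from any of the listed catalogues:

```python
import numpy as np

def spectral_index(S1, nu1, S2, nu2):
    """Two-point spectral index alpha, defined by S proportional to nu**alpha."""
    return np.log(S1 / S2) / np.log(nu1 / nu2)

# Made-up flux densities (Jy) at 200 MHz and 1.4 GHz, for illustration only
print(spectral_index(1.0, 200e6, 0.25, 1400e6))   # about -0.71, a fairly typical AGN value
```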
March 2020, 12(1): 107-140. doi: 10.3934/jgm.2020006

Siran Li 1,2, Jiahong Wu 3 and Kun Zhao 4,*

1. Department of Mathematics, Rice University, MS 136 P.O. Box 1892, Houston, Texas, 77251, USA
2. Department of Mathematics, McGill University, Burnside Hall, 805 Sherbrooke Street West, Montreal, Quebec, H3A 0B9, Canada
3. Department of Mathematics, Oklahoma State University, 401 Mathematical Sciences, Stillwater, Oklahoma, 74078, USA
4. Department of Mathematics, Tulane University, 6823 Saint Charles Avenue, New Orleans, LA 70118, USA
* Corresponding author: Kun Zhao

Received May 2019 Revised November 2019 Published January 2020

In this paper we study the non-degenerate and partially degenerate Boussinesq equations on a closed surface $ \Sigma $. When $ \Sigma $ has intrinsic curvature of finite Lipschitz norm, we prove the existence of global strong solutions to the Cauchy problem of the Boussinesq equations with full or partial dissipation. The issues of uniqueness and singular limits (vanishing viscosity/vanishing thermal diffusivity) are also addressed. In addition, we establish a breakdown criterion for the strong solutions in the case of zero viscosity and zero thermal diffusivity. These appear to be among the first results for Boussinesq systems on Riemannian manifolds.

Keywords: Boussinesq Equations, Strong Solution, Closed Surfaces, Well-posedness, Breakdown Criteria, Vanishing Viscosity Limit, Vanishing Diffusivity Limit.

Mathematics Subject Classification: Primary: 35Q35, 58J90; Secondary: 76D03.

Citation: Siran Li, Jiahong Wu, Kun Zhao. On the degenerate Boussinesq equations on surfaces. Journal of Geometric Mechanics, 2020, 12 (1) : 107-140. doi: 10.3934/jgm.2020006
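For orientation, in the flat Euclidean setting the Boussinesq system referred to in the abstract takes the familiar form

$$
\begin{aligned}
\partial_t u + (u\cdot\nabla)u + \nabla p &= \nu \Delta u + \theta e_2,\\
\partial_t \theta + u\cdot\nabla \theta &= \kappa \Delta \theta,\\
\nabla\cdot u &= 0,
\end{aligned}
$$

where $u$ is the velocity field, $p$ the pressure, $\theta$ the temperature (or buoyancy), and $\nu,\kappa \ge 0$ the viscosity and thermal diffusivity; "partial dissipation" typically refers to one of the two dissipative terms being absent. The paper studies this system with the plane replaced by a closed surface $\Sigma$; the flat-space display above is included only as a reference point and is not quoted from the article.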
John Wayland Bales

BA, MA in mathematics from University of Texas/Austin. PhD in mathematics from Auburn University. Retired full professor of mathematics. Taught thirty-nine years at Tuskegee University. Worked four years at Applied Research Laboratories at UT/Austin as a Research Associate. Also taught at Auburn University, Louisiana State University and Southern Union Community College. Took sabbaticals at Battelle Memorial Institute and Johns Hopkins Applied Physics Laboratory. Had research fellowships with Georgia State University, NASA/Langley Research Center, NASA/Huntsville Research Center, US Army Corps of Engineers Waterways Experiment Station, US Army Aeromedical Research Laboratory, US Army Missile Command, US Army Ballistic Missile Command, and Oak Ridge National Laboratory High-Performance Computing. Mathematical descent: E. H. Moore > David Birkhoff > Hyman Ettlinger > Ben Fitzpatrick, Jr.

Auburn, AL, United States
jwbales.us

Top questions:
- How much mass can colliding black holes lose as gravitational waves?
- How to install USA International keyboard option on Ubuntu Gnome 16.10
- How is y'all'dn't've pronounced
- Why does $3^{16} \times 7^{-6}$ become $\frac{3^{16}} {7^{6}}$?
- How to integrate $|x| \cdot x$
- How many whole pieces can be taken out in this way? (Infinite chocolate bar problem)
- Should I be worried about a security breach if Google Maps incorrectly shows that I drove over 650 miles from home?
Bonferroni, Carlo Emilio

{| class="wikitable"
!Copyright notice <!-- don't remove! -->
|-
| This article ''Carlo Emilio Bonferroni'' was adapted from an original article by M. E. Dewey and E. Seneta, which appeared in ''StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies''. The original article ([<nowiki>http://statprob.com/encyclopedia/CarloEmilioBONFERRONI.html</nowiki> StatProb Source], Local Files: [[Media:CarloEmilioBONFERRONI.pdf|pdf]] | [[Media:CarloEmilioBONFERRONI.tex|tex]]) is copyrighted by the author(s), the article has been donated to ''Encyclopedia of Mathematics'', and its further issues are under ''Creative Commons Attribution Share-Alike License''. All pages from StatProb are contained in the [[:Category:Statprob|Category StatProb]].
|-
|}

<!-- \documentclass[12pt]{article} \begin{document} \noindent -->
'''Carlo Emilio BONFERRONI''' b. 28 January 1892 - d. 18 August 1960

<!-- \noindent -->
'''Summary.''' His name is attached to the Bonferroni Inequalities which facilitate the treatment of statistical dependence. Improvements to the inequalities have generated a large literature.

Carlo Emilio Bonferroni was born in 1892 at Bergamo, a university town in northern Italy. He studied conducting and piano at the conservatory in Torino (Turin), and then studied for the degree of ''laurea'' in mathematics in Torino under Peano and Segre. He spent a year broadening his education in Wien (Vienna) at the University, and in Z&uuml;rich at the Eidgen&ouml;ssicher Technischen Hochschule. During the 1914-1918 war he served as an officer in the engineers. He became ''incaricato'' (assistant professor) at the Turin Polytechnic, and then in 1923 took up the chair of financial mathematics at the Economics Institute in Bari, where he was also Rector for 7 years. In 1933 he transferred to Firenze (Florence) where he held his chair until his death in 1960. He was Dean of his Faculty for five years. He received honours from within his own country, but the only one from outside was from the Hungarian Statistical Society. The obituary of him by Pagni lists his works under three main headings: actuarial mathematics (16 articles, 1 book); probability and statistical mathematics (30, 1); analysis, geometry and rational mechanics (13, 0).

His name is familiar in the statistical world through his 1936 paper in which the Bonferroni Inequalities first appear. If $p_i$ is the probability of having characteristic $i$, $p_{ij}$ the probability of having $i$ and $j$, and so on, then he introduces the notation $S_0 = 1, \quad S_1 = \sum p_i, \quad S_2 = \sum p_{ij}, \quad S_3 = \sum p_{ijh},$... Then, writing $P_r$ for the probability of exactly $r$ events,

\[ P_0 \leq 1 , P_0 \geq 1 - S_1, P_0 \leq 1 - S_1 + S_2, P_0 \geq 1 - S_1 + S_2 - S_3 \]

<!-- \noindent -->
and so on. The first of these inequalities is due to Boole (q.v.) in 1854. It is the highlighting of Boole's Inequality by Francesco Paolo Cantelli (1875-1966) as a tool for treating statistical dependence at the International Congress of Mathematicians held at Bologna in 1928, which Bonferroni attended, that may have led Bonferroni to produce the elegant pattern of the other members of the sequence. Attribution to Boole is in fact made on pp. 4 and 25 of Bonferroni's paper, in which the inequalities are justified in "symbolic" fashion. Similar ideas using $S_k$'s were pursued by K&aacute;roly Jordan (q.v.) and Henri Poincar&eacute;.
The inequalities achieved their popularity through a book of Maurice Fr&eacute;chet (q.v.), a frequent correspondent of Cantelli's, of 1940; and William Feller's celebrated ''An Introduction to Probability Theory and its Applications'', Vol. 1, first published in 1950, which was probably the source for most English-speaking readers. It is interesting to note that he cites only the monograph by Fr&eacute;chet. The inequalities have given rise to a large literature. The first two have been used in particular in simultaneous statistical inference. The method known as Bonferroni adjustment usually relies only on Boole's Inequality.

Bonferroni's 1936 article uses the classical definition of probability, in terms of a finite sample space of equally likely events, usually attributed to Laplace (q.v.). His notion of probability was not, however, confined to this. In his inaugural address for the academic year (1924-25) published in 1927 he clearly states:

<blockquote> A weight is determined directly by a balance. And a probability, how is that determined? What is, so to say, the probability balance? It is the study of frequencies which gives rise to a specific probability (p. 32) </blockquote>

<!-- \noindent -->
Then he moves on to consider long run frequency in more detail. On p. 35 he specifically denies that subjective probability is amenable to mathematical analysis. After these papers he moved away from writing on the foundations of probability. A reason for this change of direction could have been the appearance of the work of von Mises (q.v.), now often regarded as a landmark in the development of the frequentist view.

Bonferroni also worked in a number of other statistical areas. A competitor to the well-known Gini (q.v.) index of concentration, Bonferroni's concentration index is designed to measure income inequality. Let $x_{(i)}$ be the observed $i$th order statistic in a sample of size $n$, so that $x_{(i-1)} \le x_{(i)} \quad(i=2, \dots, n)$. Define $m_i, i=1,2,...,n$, as the sample partial means, so that $m=m_n$ is the ordinary sample mean, by:

\[ m_i = \frac{1}{i}\sum_{j=1}^{i}x_{(j)}, i=1,2,...n. \]

<!-- \noindent -->
Then Bonferroni's index $B_n$ is

\[ B_n = 1 - \frac{1}{n-1}\sum_{i=1}^{n-1}\frac{m_i}{m} \]

<!-- \noindent -->
He was also interested in "algebraic" means which he treats extensively in his textbook, ''Elementi di Statistica Generale''. The algebraic mean $M_p$ of order $p$ is $\sqrt[p]{\frac{x_1^p + \dots + x_n^p} {n}}$. He published on the properties of a generalisation of this, $M_{p+q}$, defined as $\sqrt[p+q] {\frac{x_1^p x_2^q + x_1^q x_2^p + \dots} {n(n-1)}}$ and similarly for higher orders.

One reason for his current lack of recognition may be the fact that his books were never properly disseminated. Apart from ''Elementi di analisi matematica'', and that only in its last edition of 1957, and a smaller research monograph, ''Sulla correlazione e sulla connessione'' (1942), they probably do not exist in typeset versions. One of the volumes, ''Elementi di Statistica Generale'', was reprinted in facsimile after his death at the instigation of the Faculty of Economics of University of Firenze; bound with it is a memoir by de Finetti. The reason his books were never properly typeset is that he believed that books were too expensive for students to buy, and so to hold costs down he handwrote his teaching material, and had the books printed from that version. (This pattern is reasserting itself for the same reasons with the aid of electronic publishing.)
They run to hundreds of pages, neat and almost correction free. His articles have a clear explanatory nature. He was someone with a genuine interest in communicating his ideas to his audience.

<!-- \noindent -->
====References====
{|
|-
|valign="top"|{{Ref|1}}||valign="top"| Benedetti, C. (1982). Carlo Emilio Bonferroni (1892-1960). ''Metron'', '''40''', N.3-4, 1-36.
|-
|valign="top"|{{Ref|2}}||valign="top"| Bonferroni, C. E. (1936). Teoria statistica delle classi e calcolo delle probabilit&agrave;. ''Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze'', '''8''', 1-62.
|-
|valign="top"|{{Ref|3}}||valign="top"| Bonferroni, C. E. (1927). Teoria e probabilit&agrave;. In ''Annuario del R Istituto Superiore di Scienze Economiche e Commerciali di Bari per L'anno Accademico 1925-1926'', pp. 15-46.
|-
|valign="top"|{{Ref|4}}||valign="top"| Bonferroni, C. E. (1941). ''Elementi di Statistica Generale''. Universit&agrave; Bocconi, Milano (First edition 1927-28).
|-
|valign="top"|{{Ref|5}}||valign="top"| Galambos, J. and Simonelli, I. (1996). ''Bonferroni-type Inequalities with Applications''. Springer-Verlag, New York.
|-
|valign="top"|{{Ref|6}}||valign="top"| Pagni, P. (1960). Carlo Emilio Bonferroni. ''Bollettino dell'Unione Matematica Italiana'', '''15''', 570-574.
|-
|}

<!-- \end{document} -->
<references />

Reprinted with permission from Christopher Charles Heyde and Eugene William Seneta (Editors), Statisticians of the Centuries, Springer-Verlag Inc., New York, USA.

[[Category:Statprob]]
[[Category:Biographical]]

Bonferroni, Carlo Emilio. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bonferroni,_Carlo_Emilio&oldid=39179
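The two formulas above are easy to put to work numerically. The following small Python sketch is not part of the StatProb article; it computes Bonferroni's concentration index $B_n$ for a toy income sample and evaluates the successive Bonferroni bounds $1,\; 1-S_1,\; 1-S_1+S_2,\dots$ for $P_0$. To keep the $S_k$ computable in closed form, the events are taken to be independent; that is an illustrative assumption only, since the inequalities themselves require no independence.

```python
import itertools
import numpy as np

def bonferroni_index(x):
    """Bonferroni concentration index B_n for a sample x (income inequality measure)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    partial_means = np.cumsum(x) / np.arange(1, n + 1)   # m_1, ..., m_n
    m = partial_means[-1]                                 # ordinary sample mean
    return 1.0 - partial_means[:-1].sum() / ((n - 1) * m)

def bonferroni_bounds_P0(p):
    """Successive bounds 1, 1-S1, 1-S1+S2, ... for P(no events),
    with S_k computed exactly under an assumed independence of the events."""
    p = np.asarray(p, dtype=float)
    S = [1.0]
    for k in range(1, len(p) + 1):
        S.append(sum(np.prod(p[list(c)]) for c in itertools.combinations(range(len(p)), k)))
    return np.cumsum([(-1) ** k * S[k] for k in range(len(S))])

if __name__ == "__main__":
    incomes = [12, 15, 20, 22, 30, 45, 80]
    print("Bonferroni index B_n:", round(bonferroni_index(incomes), 4))

    probs = [0.1, 0.2, 0.3]
    print("Bonferroni bounds for P0:", bonferroni_bounds_P0(probs))
    print("Exact P0 (independent events):", np.prod(1 - np.array(probs)))
```

The printed bounds alternate above and below the exact value and, by inclusion-exclusion, the final one coincides with it.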
\$\$\$"); newstr = newstr.replace(/<\/mathblock>/g, "\$\$\$ "); newstr = newstr.replace(//g, "\$\$"); newstr = newstr.replace(/<\/mathinline>/g, "\$\$"); newstr = newstr.replace(/\<\;/g, "<"); newstr = newstr.replace(/\>\;/g, ">"); console.log(newstr); previewID.innerHTML = convert(newstr); //if (preview=="MD-Pseudocode" && document.querySelector("#MD-Pseudocode > ol")) document.querySelector("#MD-Pseudocode > ol").className = "code"; MathJax.Hub.Queue(["Typeset",MathJax.Hub,previewID]); } } loginStateChange=0; PICUPCodeTemplateStateChange=0; PICUPCompletedCodeStateChange=0; PICUPDataFileStateChange=0; PICUPAdditionalResourceStateChange=0; hs.preserveContent = false; hs.graphicsDir = '/services/js/highslide/graphics/'; hs.showCredits = false; hs.outlineType = 'custom'; hs.allowSizeReduction = false; hs.dimmingOpacity = 0.33; hs.registerOverlay({ html: ' ', position: 'top right', useOnHtml: true, fade: 2 // fading the semi-transparent overlay looks bad in IE }); hs.Expander.prototype.onBeforeClose = function (sender) { if(loginStateChange!=0) { location.reload(true); return false; } else if(PICUPCodeTemplateStateChange!=0) { location.href='/PICUP/exercises/exercise.cfm?I=323&S=6'; return false; } else if(PICUPCompletedCodeStateChange!=0) { location.href='/PICUP/exercises/exercise.cfm?I=323&S=6'; return false; } else if(PICUPDataFileStateChange!=0) { location.href='/PICUP/exercises/exercise.cfm?I=323&S=6'; return false; } else if(PICUPAdditionalResourceStateChange!=0) { location.href='/PICUP/exercises/exercise.cfm?I=323&S=6'; return false; } return true; } var _gaq = _gaq || []; function grabInnerText(ITextElement) { if (document.body.innerText) return document.getElementById(ITextElement).innerText; else return document.getElementById(ITextElement).innerHTML.replace(/\<br\>/gi,"\n").replace(/(<([^>]+)>)/gi, ""); } function FlipPICUPCode(id, lang, t) { if(t=='code template') hideshow('PICUPCodeTSet'+id); else hideshow('PICUPCodeSet'+id); if (document.getElementById('VCS'+id).innerHTML=='View '+lang+' '+t) { document.getElementById('VCS'+id).innerHTML='Hide '+lang+' '+t; show('VCSh'+id); } else { document.getElementById('VCS'+id).innerHTML='View '+lang+' '+t; hide('VCSh'+id); } } MathJax.Hub.Config({ showProcessingMessages: false, messageStyle: "none", TeX: { equationNumbers: {autoNumber: "all"} }, extensions: ["tex2jax.js"], jax: ["input/TeX", "output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'] ], displayMath: [ ['$$','$$'] ], processEscapes: true }, SVG: { linebreaks: { automatic: true } }, "HTML-CSS": { availableFonts: ["TeX"] } }); /* MathJax.Hub.Config({ TeX: { noErrors: { disabled: true } } }); */ Partnership for Integration of Computation into Undergraduate Physics Exercise Sets Faculty Commons About PICUP Exercise Sets » Visualizing Effects of a Gravitational Wave with a Ring of Test Masses Visualizing Effects of a Gravitational Wave with a Ring of Test Masses Developed by Deva O'Neil and Parker Cline - Published June 27, 2018 This ES allows students to explore different polarizations of gravitational waves and simulate the effect on a ring of test masses. Intuition for vector fields is developed as well. The student compares the response of a ring of test masses to plus-polarized waves, cross-polarized waves, and circularly-polarized waves. After getting a feel for polarizations, we put the simulation in an astrophysical context. For the source, we use a system revolving around a center of mass (we start with the Alpha Centauri binary system as a model). 
To gain insight into what effect different parameters have on the wave, students modify the simulation to compare a compact binary to the Alpha Centauri system. These simulations ignore the effect that radiating gravitational waves has on the source itself. Those interested in modeling this "back-reaction" are encouraged to consult Robert Hilborn's Exercise Sets, which focus on the evolution of the waveform and the modeling of the source, rather than the effect the gravitational wave would have on a detector or test mass.

Topics: Waves & Optics, Mathematical/Numerical Methods, and Astronomy/Astrophysics

Available Implementation: Glowscript

Learning objectives:
- The student will simulate and interpret a vector field. The student will calculate divergence and identify, visually and mathematically, a divergence-free vector field (Exercises 1 and 2).
- Students will simulate a ring of test masses to determine how the gravitational wave determines the motion of the test masses. Wave parameters such as amplitude and frequency are analyzed in terms of their effect on the motion (Exercise 3).
- The student will relate the physical behavior of the system to the choice of initial conditions for coupled ODEs. Using calculus, the initial conditions for the test masses that result in pure oscillatory motion will be derived (Exercises 3 and 4).
- The student will construct a circularly-polarized wave from plus and cross polarizations, and will determine the effect of right- and left-circularly polarized waves on a ring of test masses (Exercise 4).
- The student will develop intuition for the properties of an astrophysical system as a source of gravitational waves. The student will identify parameters that affect detectability and then compare gravitational wave amplitudes for ordinary binary stars (Exercise 4) to compact binaries (Exercise 5).

These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use (e.g., Excel, Python, MATLAB, etc.).

**Exercise 1: Force Fields**

A force field, like any field, associates a value with each point in space. Since force is a vector, the force field due to the gravitational wave can be represented with an arrow (to indicate both magnitude and direction) at every point in space. In the programs that follow, we will ignore the z direction and analyze force fields in the x-y plane. We will start with a program that places an arrow, representing force, at every point on a 15 x 15 grid. The point (0,0) will be in the middle of the grid. We will begin with a constant and uniform force field before modeling a gravitational wave. A physical example is gravity near the surface of the Earth. It is approximately constant in time[^footnote] and uniform on small scales.

1. Run the template provided for Exercise 1. The force used is $\vec{F} = <0,-mg,0>$, where $g = 9.8 ~m/s^2$.

2. The divergence of a vector field $\vec{A}$ is defined as $div \vec{A} = \frac{\partial A_x}{\partial x}+\frac{\partial A_y}{\partial y}+\frac{\partial A_z}{\partial z}$. The value of the divergence may depend on (x,y,z), in which case it varies from point to point in the field. Physically, it represents the net flux (flow) from an infinitesimal volume surrounding a point. One can get an intuitive feel for whether a field has non-zero divergence in a certain region by visually examining the flux.
We will imagine a small volume centered on a certain point in the field. Roughly speaking, a field that appears to be (overall) "exiting" the volume has a positive divergence; a field with a net "entrance" into the volume has a negative divergence. If there is equal entrance and exit, there is no divergence. Before continuing, it may help to get some practice identifying positive and negative divergences. The following links give examples where the divergence is either positive at all points, or negative at all points:

https://mathinsight.org/divergence_idea
https://mathinsight.org/divergence_subtleties

When you feel comfortable recognizing divergence, return to examining your template.

* Looking at the output of this program, predict whether the divergence of the field is positive, negative, or zero.
* Check your prediction with a direct calculation (on paper).
* Modify the code to display a force field with positive divergence. (The force field does not have to have physical significance.)

**Exercise 2: Gravitational Wave Polarization**

1. Change your code so that it visualizes the following gravitational wave:
$$F_{x}= \frac{-m}{2} h_{+} x\omega^{2}\cos(\omega t),$$
$$F_{y}= \frac{m}{2} h_{+} y\omega^{2}\cos(\omega t).$$
Each grid point has a different $(x,y)$ value, and thus a different value of force. The following parameters are recommended:

| Parameter | Symbol | Value |
| ------------- |:-------------:| -----:|
| Amplitude of gravitational wave | $h_{+}, h_{\times}$ | $ 0.2 $ |
| Angular frequency of oscillation | $\omega$ | $50 ~rad/s $ |
| Mass of object subject to force field | $m$ | $0.1~ kg$ |
| Time | $t$ | $0$ |

This will show the output at a time $\omega t =0$.

2. Let's check to make sure the force field is plausible. Suppose we examine the arrow at the upper right-hand corner. $F_x$ at this point depends on $x$, $\cos(\omega t)$, and a factor $\frac{-m}{2} h_{+} \omega^{2}$. Let's figure out whether $F_x$ is overall negative or positive by examining these in turn. First, what is (x,y) for this grid point? The middle of the plot is (0,0), and the points range from -7 to +7 in each direction, incrementing in integer steps. Therefore, the value of (x,y) at this point is (+7,+7). The factor $x$ is positive.
* Is $\cos(\omega t)$ positive, negative, or zero at $\omega t =0$?
* What about $\frac{-m}{2} h_{+} \omega^{2}$?
* Use your answers above to determine if $F_{x}$ is negative, zero, or positive.
* Do the same for $F_{y}$.
* Based on your answer, determine the direction of the force arrow at this point. Compare to the output of the code. If they disagree, resolve the discrepancy.

Another way to check the plausibility of the output is to calculate the divergence.

3. Looking at the output of your program, predict whether the divergence is positive, negative, or zero.

4. Calculate the divergence of this force (assume the z-component is zero). Hint: for $F_x$, as far as the variable "x" is concerned, the force looks like: $ F_x = const*x$. The time-dependent cosine term has no explicit x dependence, so in calculating $\frac{\partial F_x}{\partial x}$, it is treated like a constant.

5. Finally, check that your plot of the force field resembles a plus sign. This wave is called plus-polarized. The term "polarization" refers to the direction of the force field (not the direction the wave is travelling). Here, the wave travels in the z-direction, but the polarization is in the x-y plane. Light waves work similarly - if an electromagnetic plane-wave is travelling in the z direction, the polarization (direction of the associated electric field) is in the x-y plane. (A short field-plotting sketch in Python is given below as one possible check.)
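Since the exercises are platform-independent, a check of this kind can also be done outside GlowScript. The following minimal Python/matplotlib sketch is not the Exercise 1/2 template referred to above; it simply draws the recommended 15 x 15 grid of force arrows for the plus-polarized wave and estimates the divergence numerically, using the parameter values from the table.

```python
import numpy as np
import matplotlib.pyplot as plt

# Recommended parameters from the exercise text
h_plus, omega, m, t = 0.2, 50.0, 0.1, 0.0

# 15 x 15 grid of integer points centred on (0, 0)
coords = np.arange(-7, 8)
X, Y = np.meshgrid(coords, coords)

def plus_polarized_force(x, y, t):
    """Force on a mass m at (x, y) due to a plus-polarized wave travelling in z."""
    Fx = -0.5 * m * h_plus * x * omega**2 * np.cos(omega * t)
    Fy = +0.5 * m * h_plus * y * omega**2 * np.cos(omega * t)
    return Fx, Fy

Fx, Fy = plus_polarized_force(X, Y, t)

# Numerical divergence dFx/dx + dFy/dy; it should vanish everywhere for this field
dFx_dx = np.gradient(Fx, coords, axis=1)
dFy_dy = np.gradient(Fy, coords, axis=0)
print("max |divergence| on the grid:", np.abs(dFx_dx + dFy_dy).max())

plt.quiver(X, Y, Fx, Fy)
plt.gca().set_aspect("equal")
plt.title("Plus-polarized force field at $\\omega t = 0$")
plt.show()
```

Swapping the roles of $x$ and $y$ in the two force components, as in item 6 below, produces the cross-shaped pattern, and the same numerical check should again give zero divergence.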
Light waves work similarly - if an electromagnetic plane-wave is travelling in the z direction, the polarization (direction of the associated electric field) is in the x-y plane. These types of waves are known as "transverse."

*Gravitational Waves: Cross Polarization*

6. Repeat the exercise above for the cross polarization, $$ F_{x}= \frac{-m}{2} h_{\times}y \omega^{2}\cos(\omega t),$$ $$ F_{y}= \frac{-m}{2} h_{\times}x \omega^{2}\cos(\omega t).$$ Note that the placements of $x$ and $y$ are reversed. As before, perform at least 3 plausibility checks on the output:
7. Pick a point and determine the direction of the force at that point using the above equations. Compare with the arrow direction at that point.
8. All gravitational waves in this Newtonian approximation are divergence-free. Check that the divergence is zero (visually and mathematically).
9. Check that the output resembles a cross.

[^footnote]: The small changes of Earth's surface gravity with time (due to mass shifting around) are a major factor that limits the precision of ground-based gravitational wave detectors. For this reason, space-based gravitational wave detectors (e.g., LISA) have been designed.

**Exercise 3: Effect of Gravitational Waves on a Ring**

In this exercise, we will allow the force due to the gravitational wave to evolve in time, and show the effect on a ring of test masses in the x-y plane. Each test mass will have a different value of $(x_0,y_0)$ that it will oscillate around as a result of being in the force field. In this Newtonian approximation, force determines the acceleration of the test masses through Newton's 2nd law. In the x direction, this gives (for the plus polarization): $$ F_x = m \ddot x,$$ which gives $$\ddot x= \frac{-1}{2} h_{+} x\omega^{2}\cos(\omega t). \label{ac1}$$ The motion of the test mass at location $x_0$ can be written $x(t) = x_0 + \delta x$, where the displacements $\delta x$ are small compared to $x_0$. Since $x_0$ has no time dependence, $\ddot{x_0} = 0$, and thus eq. \ref{ac1} actually represents $\ddot{ \delta x}$: $$\ddot{\delta x}= \frac{-1}{2} h_{+} x\omega^{2}\cos(\omega t). \label{acc}$$ To simulate this system numerically, starting with acceleration and allowing the program to determine the subsequent motion, we need to set the initial conditions for each test mass. The point $(x_0,y_0)$ is not (necessarily) the initial position of each mass; it is the equilibrium position that the mass oscillates around. We will call the initial positions and velocities $(x_i,y_i)$ and $(v_{xi},v_{yi})$. A convenient choice for the initial conditions is to set them so that the resulting motion is purely oscillatory around the point $(x_0,y_0)$. In the calculations that follow, you will use calculus to determine what these initial values will need to be. So that we can solve this equation analytically, we will approximate $x(t) \approx x_0$ in eq. \ref{acc}, and take $\omega$ and $h_{+}$ to be constant. Thus, the equation to solve is: $$\ddot{ \delta x}= \frac{-1}{2} h_{+} x_0\omega^{2}\cos(\omega t).$$

* Integrate this equation twice to derive $\delta x$ as a function of time. Show that the solution has the form: $\delta x= \frac{h_{+}}{2}x_0\cos(\omega t)+v_{xi} t + c_2$, or equivalently: $$x(t) = x_0 + \delta x = x_0 + \frac{h_{+}}{2}x_0\cos(\omega t)+v_{xi} t + c_2 \label{sol} $$

To get motion that oscillates around $x_0$, we need to set $v_{xi} = 0$ and $c_2 = 0$. But our simulation doesn't allow us to input $c_2$ directly; our inputs are the initial conditions.
* Find the value of $x_i$ that corresponds to $c_2 = 0$ by setting $t = 0$ and $x(0) = x_i$ in eq. \ref{sol}. Hint: you should find that the test mass has to be initially displaced from its equilibrium value $x_0$ in order to get $c_2 = 0$.
* Now, find the initial conditions for the y component of the motion similarly. Integrate the acceleration in the y direction twice, and find the value of $y_i$ that corresponds to $c_2 = 0$. As before, determine the value of $v_{yi}$ that eliminates the linear term in $y(t)$.
* Simulate the effect of this gravitational wave on a ring of test masses using the template for Exercise 3. You'll need to put in the force and the initial conditions. Use $x_0$ instead of $x$, and $y_0$ instead of $y$, when you enter the force. (Otherwise, the force you are simulating will not exactly match the force you used to determine the desired initial conditions.)

Plausibility checks:

* Do you observe behavior consistent with a "plus" polarization?
* Do the masses oscillate around $(x_0,y_0)$? If the motion is shifted, the initial position is not quite right. If they fly off to infinity, the initial velocity is not quite right. (Physically, there is nothing wrong with making the initial conditions anything you like, but for the purposes of understanding the effect of the force, it is helpful to remove the "flying off" behavior.)

*Analysis*

For each of the following parameters in the simulation, describe in words what effect the parameter has on the motion of the test masses. If it has no effect, explain why not. For example, try changing $R$, the radius of the ring of test masses. Does the amount of deformation of the ring change as a result? Why or why not?

| Parameter | Effect on motion (if none, explain why not) |
| ------------- |:----------------------------------------------------------:|
| $h_{+}$ | $ $ |
| $\omega$ | $ $ |
| $R$ | $ $ |

* Copy your code into a new file so that you don't overwrite your existing code. Modify your simulation so that it shows the cross polarization. Make sure to re-derive the initial positions, since they will be different than before.
* Perform plausibility checks, as before, to confirm that your program is working correctly.

**Exercise 4: Orbiting Systems and Circular Polarization**

Objects that orbit each other create gravitational waves. In this section, our source of gravitational waves is two masses, $m_1$ and $m_2$, that orbit in the x-y plane, with the origin at the center of mass of the system. This could represent a system where the two masses are of comparable size, such as a binary star system, or one in which a planet orbits a star. The gravitational waves produced by this system will be the same as would arise from a single mass $\mu$ orbiting at a radius $A$. (We will assume that this orbit is circular.) The effective radius $A$ is the separation between the two masses (the magnitude of the difference of their position vectors $\vec{r_i}$). Specifically, $$\mu = \frac{m_1 m_2}{m_1+m_2}\label{mu}$$ and $$A \equiv |\vec{r_1} - \vec{r_2}|$$ Let's imagine that our gravitational wave detector (which we will again model as a ring of test masses) is placed on the z axis, a far distance $r$ from the source of the gravitational waves. In this case, the resulting gravitational wave will have both plus and cross polarizations at the same time. The angular frequency of the gravitational wave turns out to be twice the angular frequency of the orbiting sources. The angular frequency of the source will be denoted $\omega_s$.
$$h_{+}(t) = \frac{4G\mu \omega_{s}^{2} A^2}{rc^4}\cos(2\omega_s t), $$ $$ h_{\times}(t) = \frac{4G\mu \omega_{s}^{2} A^2}{rc^4}\sin(2\omega_s t). $$ Note that Newton's gravitational constant (G) appears, because these equations come from the Einstein equation. The Einstein equation relates the stress-energy tensor (information about mass/energy) to the curvature of spacetime. The constant $G$ tells spacetime how strongly to curve in response to the mass/energy present.

* Explain what happens to the amplitudes $h_{+}$ and $h_{\times}$ as the detector gets farther from the source. (The distance between them is $r$.) Why does this make sense?
* Determine the initial conditions that will produce purely oscillatory motion for $\delta x$ and $\delta y$ (no constant or linear terms). You'll have to obtain acceleration from the force equations, and then integrate twice to find $\delta x$ and $\delta y$. As before, approximate $x$ and $y$ as $x_0$ and $y_0$. Hint: You should find that the initial velocity in the x direction must be $\dot {\delta x}(0)= \frac{G\mu \omega_{s}^{3} A^2}{rc^4} y_0$, in order to make the first constant of integration equal zero. Once you have determined the four initial conditions, you are ready to simulate this gravitational wave.
* Without overwriting your previous program, create a program to model this gravitational wave. The net force will be the sum of the contributions from the two polarizations: $$ F_{x}= \frac{-m}{2}[ h_{\times}(t) y +h_{+}(t) x],$$ $$F_{y}= \frac{m}{2}[ h_{+}(t) y- h_{\times}(t)x ].$$

The following parameters are recommended (taken from measurements of the Alpha Centauri system, a group of stars about 4 light-years from Earth).

| Parameter | Symbol | Value |
| ------------- |:-------------:| -----:|
| Reduced mass of system | $\mu$ | $ 0.50~ M_{\odot} $|
| Angular frequency of source | $\omega_s$ | $2.5\cdot 10^{-9} ~rad/s $ |
| Radius of source orbit | $A$ | $ 18~AU $ |
| Distance from test masses to source| $r$ | $267,000~ AU $ |
|Radius of ring of test masses |$R$ | $6.0~m$|

Masses are conveniently expressed in "solar masses," $M_\odot = 1.989\cdot 10^{30}~kg$. An astronomical unit is $AU = 1.496\cdot10^{11}~m$. When you write your code, make sure to convert all units to SI to avoid inconsistencies. If you run your code with these parameters, you won't see any displacement of the test masses. They will just sit there. This makes sense because if ripples in spacetime were large enough to see the distortion of objects, we would see them in real life, and wouldn't need a kilometer-long detector to find them. In reality, gravitational waves, while omnipresent, are so small that they require heroic experimental efforts to detect even with 21st century instruments. To be able to get visible results in our program, we'll need to exaggerate the amplitude of the wave.

* Scale up your parameters so that the apparent amplitude of the wave is $\sim 0.2$ or higher.

Plausibility checks

* Do you observe that each mass is moving in a small circle?
* Do the masses oscillate around $(x_0,y_0)$? If the motion is shifted, the initial position is not quite right.
* Do the masses stay near $(x_0,y_0)$, or do they fly off to infinity? If they fly off, the initial velocity should be checked. Also, make sure that you have approximated $(x,y)$ in the force as $(x_0,y_0)$.

This circling behavior is a signature of a circularly-polarized wave.
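A compact numerical cross-check of this circling behavior is sketched below in plain Python (an illustration only, not the GlowScript template). For readability it writes the force in the same form as Exercises 2–3 (constant strain amplitude multiplied by the square of the gravitational-wave angular frequency), uses an exaggerated amplitude of 0.2 and a convenient frequency rather than the Alpha Centauri values, and re-derives the purely oscillatory initial conditions for that specific force by the same integrate-twice procedure used above:

```python
# Illustrative sketch: a ring of test masses driven by a right-circularly-polarized
# wave, h_+ ~ cos(Omega t) and h_x ~ sin(Omega t), integrated with Euler-Cromer steps.
# The amplitude is exaggerated (h0 = 0.2) so that the motion would be visible.
import numpy as np

h0, Omega, R = 0.2, 50.0, 6.0               # strain amplitude, GW angular frequency (rad/s), ring radius (m)
theta = np.linspace(0, 2*np.pi, 16, endpoint=False)
x0, y0 = R*np.cos(theta), R*np.sin(theta)   # equilibrium positions on the ring

# Initial conditions that remove the constant and linear terms for this force
x,  y  = x0 + 0.5*h0*x0, y0 - 0.5*h0*y0
vx, vy = 0.5*h0*Omega*y0, 0.5*h0*Omega*x0

dt, steps = 1e-4, 20000                     # roughly 16 wave periods
r_max = 0.0
for n in range(steps):
    t = n*dt
    ax = -0.5*h0*Omega**2*(x0*np.cos(Omega*t) + y0*np.sin(Omega*t))
    ay = -0.5*h0*Omega**2*(x0*np.sin(Omega*t) - y0*np.cos(Omega*t))
    vx += ax*dt; vy += ay*dt                # Euler-Cromer: update velocity, then position
    x  += vx*dt; y  += vy*dt
    r_max = max(r_max, np.max(np.hypot(x - x0, y - y0)))

# Each mass should stay on a small counter-clockwise circle of radius ~ h0*R/2 = 0.6 m
# around its equilibrium point (x0, y0).
print("largest displacement from equilibrium:", round(r_max, 3), "m")
```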
As written, the masses circulate counter-clockwise; this is called "right-hand circularly polarized," since it matches the orientation of your right hand when your thumb points back at you.

* Convert your wave to a left-hand circularly polarized wave by multiplying $F_y$, but not $F_x$, by a minus sign. (Make sure to modify $v_{yi}$ and $y_i$ accordingly.) Check your output to make sure you get the masses circulating in small clockwise circles.

We have been assuming that the detector is on the z-axis, so that it looks at the orbital plane "head on." If the orbit of this source had been oriented differently relative to the detector, the polarization would no longer be purely circularly-polarized. Thus, experimentalists are interested in determining the polarization of a gravitational wave, since it reveals the orientation of the orbit of the source.

**Exercise 5: Detecting Gravitational Waves**

In the previous section, we found that the gravitational waves produced by Alpha Centauri have tiny amplitudes - on the order of $10^{-23}$ when they reach Earth. Surprisingly, it is not this tiny amplitude that is the biggest limitation to detecting the stars' gravitational waves. Gravitational wave detectors, such as LIGO, can only sense gravitational waves with frequencies above $1~Hz$. The optimum frequency for Advanced LIGO is $10^2 - 10^3~Hz$. For our purposes, we'll assume that our detector can pick up a gravitational wave signal if its amplitude is at least $10^{-22}$ and its frequency falls within this range. Frequency, when measured in Hz, indicates how many cycles per second the system is making. How can a binary star system possibly make 10 revolutions *per second*? Would it have to go faster than the speed of light? Let's find out. The speed of an object moving in a circle is determined by both its frequency and the radius of the orbit ($A$).

* Use the formula for tangential velocity ($v = \omega A$) and a frequency of $10~Hz$ to determine how large the radius of the orbit can be, while keeping $v$ no more than one-tenth the speed of light. (Recall that $\omega = 2\pi f$.)

Your answer will turn out to be smaller than the radius of the Earth (about 6,400 km). And yet, this is a typical scale for the orbit of "compact binaries" - binary systems composed of neutron stars and/or black holes. These are the sources of gravitational waves that LIGO is designed to find. Our final goal is to simulate the effect of a compact binary system. First, we need to make sure that the value used for the frequency of the orbit is consistent with the orbital radius - as stars get closer to each other, it takes less and less time for them to complete an orbit. In this simulation, we ignore the inward spiralling of the orbit, and approximate the radius as constant. Undergoing uniform circular motion at radius $A$, the centripetal acceleration of the system is $$a_c = \frac{v^2}{A} = \frac{G(m_1+m_2)}{A^2}. $$ This equation, combined with $v = \omega A$ and $m_1 = m_2 = m_s$, requires $A =( 2 G m_s/\omega_s^{2})^{1/3}$.

* With the values in the table below (and eq. \ref{mu}), simulate a binary neutron star system. Copy your code from Exercise 3 into a new file so that you don't overwrite it.
| Parameter | Symbol | Value |
| ------------- |:-------------:| -----:|
| Mass of each star | $m_i$ | $1.4~ M_{\odot} $|
| Angular frequency of source | $\omega_s$ | $2\pi(10~Hz) ~rad/s $ |
| Radius of source orbit | $A$ | $ (2 G m_s/\omega_s^{2})^{1/3}$ |
| Distance from test masses to source| $r$ |(variable) |

* Experiment with the variable $r$ (keeping it much larger than the radius of the orbit, $A$). Can the detector get close enough to the star system to make the effect of the gravitational wave visible to the naked eye?
* How far away can a compact binary be and still be detectable by our detector - assumed to be sensitive to amplitudes as small as $10^{-22}$? Express the answer in Mega-parsecs ($1~Mpc = 3.086\cdot 10^{22}~m$).

Credits and Licensing: Deva O'Neil and Parker Cline, "Visualizing Effects of a Gravitational Wave with a Ring of Test Masses," Published in the PICUP Collection, June 2018. The instructor materials are ©2018 Deva O'Neil and Parker Cline. The exercises are released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license.
Volume 18 Supplement 11: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2016: bioinformatics

XBSeq2: a fast and accurate quantification of differential expression and differential polyadenylation

Yuanhang Liu (1,2), Ping Wu (3), Jingqi Zhou (1,4), Teresa L. Johnson-Pais (3), Zhao Lai (1), Wasim H. Chowdhury (3), Ronald Rodriguez (3) and Yidong Chen (1,5)

BMC Bioinformatics 2017, 18(Suppl 11):384

RNA sequencing (RNA-seq) is a high throughput technology that profiles gene expression in a genome-wide manner. RNA-seq has been mainly used for testing differential expression (DE) of transcripts between two conditions and has recently been used for testing differential alternative polyadenylation (APA). In the past, many algorithms have been developed for detecting differentially expressed genes (DEGs) from RNA-seq experiments, including the one we developed, XBSeq, which paid special attention to the context-specific background noise that is ignored in conventional gene expression quantification and DE analysis of RNA-seq data. We present several major updates in XBSeq2, including an alternative statistical test and parameter estimation method for detecting DEGs, the capacity to directly process alignment files and methods for testing differential APA usage. We evaluated the performance of XBSeq2 against several other methods by using simulated datasets in terms of area under the receiver operating characteristic (ROC) curve (AUC), number of false discoveries and statistical power. We also benchmarked different methods concerning execution time and computational memory consumed. Finally, we demonstrated the functionality of XBSeq2 by using a set of in-house generated clear cell renal carcinoma (ccRCC) samples. We present several major updates to XBSeq. By using simulated datasets, we demonstrated that, overall, XBSeq2 performs equally well as XBSeq in terms of several statistical metrics and both perform better than DESeq2 and edgeR. In addition, XBSeq2 is faster in speed and consumes much less computational memory compared to XBSeq, allowing users to evaluate differential expression and APA events in parallel. XBSeq2 is available from Bioconductor: http://bioconductor.org/packages/XBSeq/

Keywords: Differential expression analysis, XBSeq, XBSeq2, Alternative polyadenylation

Next generation sequencing (NGS) technologies have revolutionized biomedical research. RNA sequencing, different from microarray technology, offers high resolution and has been widely used for transcriptome studies, such as alternative splicing detection, allele-specific expression profiling, alternative polyadenylation site identification and, most commonly, differential expression (DE) of transcripts between two conditions (e.g. tumor vs normal). The abundance level of a transcript is expected to be directly correlated with the number of sequenced fragments that map to that transcript as measured by RNA-seq. Because of this unique characteristic, DE testing methods developed for microarray technology may not be appropriate if directly adopted for RNA-seq. In recent years, various efforts have been made to develop statistical methods for identifying DEGs between two conditions. Poisson and negative binomial models are the two most commonly used statistical models among all the statistical methods developed for DE analysis [1–3]. The main differences among DE algorithms lie in the way they estimate dispersion and the particular statistic used for inference.
For instance, DESeq2 [4], the latest version of DESeq [2], uses a shrinkage-based method for estimation of dispersion, which improves stability; the Wald test or a likelihood ratio test is then applied to assess significance. edgeR-robust [5], the latest version of edgeR [3], moderates dispersion estimates toward a trended-by-mean estimate, and a likelihood ratio test is then used to assess statistical significance. Recent comparative studies have shown that no single method dominates across a broad spectrum of scenarios [6, 7]. It is worth noting, however, that none of the abovementioned methods takes into consideration reads that align to non-exonic regions of the genome, as proposed in our earlier study [8]. Alternative polyadenylation (APA) is a widespread mechanism, where alternative poly(A) sites are used by a gene to encode multiple mRNA transcripts of different 3′ untranslated region (UTR) lengths [9]. Approximately 70% of known human genes have been identified with multiple poly(A) sites in their 3′UTR regions [10], which significantly contributes to transcriptome diversity. APA events affect the fate of mRNA in several ways, for instance, by altering the binding sites of RNA binding proteins and miRNAs. Experimental methods utilizing sequencing technology to quantify relative usage of APA are still under development [11, 12], and in the past it was not known whether RNA-seq, a routine method used for gene expression quantification, could be applied directly to infer APA usage. Recently, several computational methods have been developed for analyzing APA usage using RNA-seq datasets [13, 14], which demonstrates the potential of using RNA-seq for identification of APA events. Previously we developed an algorithm, XBSeq, for testing differential expression in RNA-seq, where non-exonic mapped reads are used to model background noise. To significantly increase the processing speed and functionality, here we provide an updated version, XBSeq2, which includes: 1) updated background annotation files; 2) functionality to directly process alignment files (.bam files) using featureCounts [15]; 3) alternative parameter estimation using maximum likelihood estimation (MLE); 4) an alternative statistical test for differential expression using a beta distribution approximation; and 5) incorporation of roar [14] for testing differential APA usage.

Direct processing of bam files using featureCounts

One of the essential steps after genome alignment for RNA-seq is read summarization, or in other words, expression quantification. One read summarization algorithm, HTSeq [16], a Python package and probably the most widely used program for read summarization, is commonly run separately in the LINUX environment. To consolidate expression quantification and DE analysis into the R environment, we utilize a fast implementation of featureCounts as described below. Similar to featureCounts, summarizeOverlaps, a function from the GenomicRanges package [17], also enables users to carry out the read summarization procedure directly within the R environment. featureCounts is a read summarization program that can be used for reads generated from RNA or DNA sequencing technologies, and it implements highly efficient chromosome hashing and feature blocking techniques that make it considerably faster and less memory-intensive [15].
A previous study has shown that, compared to some other read summarization programs, featureCounts has similar summarization accuracy but is much faster and more memory efficient. Currently, featureCounts is available within the Subread program [18] and the Rsubread package from Bioconductor. In our implementation, we use the default options for featureCounts, so that, for example, reads across overlapping genes are not counted.

Poisson-negative binomial model

The read count that aligns to the exonic regions of gene $i$ is made up of two components: the underlying true signal $S_i$, which is directly related to the real expression intensity of gene $i$, and background noise $B_i$, which is largely due to sequencing error or misalignment. Previously, we developed an algorithm, XBSeq [8], which provides more accurate detection of differential expression for RNA-seq experiments based on a Poisson-negative binomial convolution model. A similar statistical model has also been successfully applied to MBDcap-seq [19]. Basically, we assume that the true signal $S_i$ (what we want to estimate) follows a negative binomial distribution and the background noise $B_i$ (sequencing errors or misalignment, etc.) follows a Poisson distribution. The observed signal (what we typically measure), $X_i$, is then a convolution of $S_i$ and $B_i$, which is governed by a Delaporte distribution [20]:

$$X_i = S_i + B_i, \qquad S_i \sim NB(r_i, p_i), \qquad B_i \sim Poisson(\lambda_i)$$

Estimation of parameters

The assumption is that the background noise $B_i$ and the true signal $S_i$ are independent. By default, a non-parametric method is used for parameter estimation. Details regarding non-parametric parameter estimation can be found in our previous publication of XBSeq [8]. When the sample size is relatively large (> 10, Additional file 1: Table S2), we provide a new way to estimate the parameters by using maximum likelihood estimation (MLE). The likelihood function is given by:

$$L(\theta_i) = \prod_{j=1}^{m} p(X_{ij} \mid \alpha_i, \beta_i, \lambda_i) \cdot \prod_{j=1}^{m} p(B_{ij} \mid \lambda_i) = \prod_{j=1}^{m} \sum_{k=0}^{X_{ij}} \frac{\Gamma(\alpha_i + k)\, \beta_i^{k}\, \lambda_i^{X_{ij}-k}\, e^{-\lambda_i}}{\Gamma(\alpha_i)\, k!\, (1+\beta_i)^{\alpha_i + k}\, (X_{ij}-k)!} \cdot \prod_{j=1}^{m} \frac{\lambda_i^{B_{ij}} e^{-\lambda_i}}{B_{ij}!}$$

which has no closed form. We applied the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to estimate the parameters by iterative updating. $\alpha_i$ and $\beta_i$ are the parameters of the gamma portion of the Delaporte distribution, which are related to the negative binomial parameters by:

$$r_i = \alpha_i, \qquad p_i = 1/(\beta_i + 1)$$

Differential expression testing

After all parameters have been successfully estimated, differential expression testing between two groups (with read counts $x$ and $y$) is carried out using a moderated Fisher's exact test:

$$p = \frac{\sum_{p(a,b) \le p(x,y)} p(a,b)}{\sum_{all} p(a,b)}$$

where $a$ and $b$ are constrained by $a + b = x + y$. This step requires heavy computation when $a$ and $b$ are relatively large. Here we also provide an updated way of testing for differential expression by using a beta distribution approximation when the counts are relatively large.
For gene $i$ with read counts $x$ and $y$ in the two groups, we have:

$$z = x + y, \qquad \mu = z/(n_1 + n_2)$$

where $n_1$ and $n_2$ are the number of samples in each condition. The two parameters of the beta distribution can then be estimated:

$$\alpha = n_1 \cdot \mu / (1 + n_1/\mu), \qquad \beta = n_2 \cdot \mu / (1 + n_2/\mu)$$

The center point is then defined as:

$$med = qbeta(0.5, \alpha, \beta)$$

where $qbeta$ is the quantile function of the beta distribution. The p value is then calculated by:

$$p = 2 \cdot k^{\alpha - 1}(1-k)^{\beta - 1} / B(\alpha, \beta)$$

where $B(\alpha, \beta)$ is the beta function, $B(\alpha, \beta) = \Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha + \beta)$, and $k = (x + 0.5)/z$ if $\frac{x+0.5}{z} < med$ and $k = (x - 0.5)/z$ if $\frac{x-0.5}{z} > med$.

Prediction of APA sites

APA sites are predicted by using the POLYAR program [21], which applies an Expectation Maximization (EM) approach based on 12 different previously mapped poly(A) signal (PAS) hexamers [22]. The APA sites predicted by POLYAR are classified into three classes: PAS-strong, PAS-medium and PAS-weak. Only APA sites in the PAS-strong class are selected to construct the final APA annotation. APA annotations for human and mouse genomes of different versions have been built and are available to download from github: https://github.com/Liuy12/XBSeq_files

Testing for differential APA usage

The differential APA usage test is carried out using the roar package [14]. Basically, the ratio of expression between the shorter and longer isoforms of the transcript, the m/M ratio, is first estimated by:

$$\frac{m}{M} = \frac{l_{post}\, r_{pre}}{l_{pre}\, r_{post}} - 1$$

where $l_{pre}$ is the length of the shorter isoform, $l_{post}$ is the extra length of the longer isoform, and $r_{pre}$ and $r_{post}$ are the number of reads that map to the shorter isoform and to the portion present only in the longer isoform, respectively. Differential APA usage between the two groups is then assessed using Fisher's exact test. For groups with multiple samples, every combination of comparisons is examined and significance is inferred based on a combined p value using Fisher's method.

In order to evaluate the performance of our updated statistical method using the beta approximation, we generated a set of simulated datasets where we can control the differential expression status of each gene. In this study, we simulated the true signal S from a negative binomial distribution and the background noise B from a Poisson distribution with parameters estimated from a real RNA-seq dataset. We compared XBSeq2 with XBSeq along with DESeq2 [4] and edgeR [3], the two most widely used R packages for testing differential expression in RNA-seq datasets. We followed a simulation procedure similar to the one described in our previous paper on XBSeq [8]. Simply speaking, 5000 genes were randomly selected with replacement after discarding genes with relatively low mapped reads or large dispersion (top 10%). The true signal S was simulated from a negative binomial distribution with parameters estimated from the 5000 selected genes. 10% of the genes were randomly selected to be differentially expressed with a 1.5-fold change. We simulated experiments with 3 samples per group. Background noise B was generated in three different scenarios, with different levels of dispersion, to examine the performance of different methods in normal and noisy conditions.
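As an aside, the beta-distribution approximation described above is compact enough to transcribe directly. The sketch below is an illustrative Python rendering of the formulas as written, not the R implementation inside the XBSeq2 package; scipy's beta quantile and beta function stand in for qbeta and B(α, β), and the two guards at the end are additions for numerical safety:

```python
# Illustrative transcription of the beta-approximation test described above.
# x, y: read counts of one gene in the two groups; n1, n2: samples per group.
from scipy.stats import beta as beta_dist
from scipy.special import beta as beta_fn

def beta_approx_pvalue(x, y, n1, n2):
    z = x + y
    mu = z / (n1 + n2)
    a = n1 * mu / (1 + n1 / mu)          # alpha
    b = n2 * mu / (1 + n2 / mu)          # beta
    med = beta_dist.ppf(0.5, a, b)       # qbeta(0.5, alpha, beta)
    if (x + 0.5) / z < med:
        k = (x + 0.5) / z
    elif (x - 0.5) / z > med:
        k = (x - 0.5) / z
    else:                                # guard: x/z straddles the center point
        return 1.0
    p = 2 * k**(a - 1) * (1 - k)**(b - 1) / beta_fn(a, b)
    return min(p, 1.0)                   # guard against values above 1

print(beta_approx_pvalue(x=150, y=420, n1=3, n2=3))
```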
Background noise with different dispersion levels was simulated from a hybrid model:

$$B_{inc} \sim M \cdot Norm(\mu, \sigma)$$

where $\mu$ is drawn from a Poisson distribution, $\mu \sim Poisson(\lambda + NF)$. In our simulation, we set $M = 100$ and $\sigma = 3$. The noise factor NF can be chosen from 0, 7, 20, each representing experiments with low, intermediate and high background noise, respectively. Simulations were repeated 100 times and statistical metrics were evaluated based on the average performance. We evaluated XBSeq2 against several other algorithms for their ability to discriminate between differentially expressed and non-differentially expressed genes in terms of the area under the ROC curve, the number of false discoveries, and statistical power. The performance of the different methods for genes expressed at high and low levels was also examined to see whether the algorithms are affected by the expression intensity of the gene.

RNA-seq dataset for testing

Tumor and adjacent normal tissues from six clear cell renal cell carcinoma (ccRCC) patients were obtained from the UTHSCSA Genitourinary Tissue Bank. Total RNA was used for stranded mRNA-Seq library preparation by following the KAPA Stranded RNA-Seq Kit with RiboErase (HMR) sample preparation guide. RNA-Seq libraries were sequenced with 100 bp paired-end sequencing runs on the Illumina HiSeq 2000 platform. After sequencing, alignment was carried out using BWA, and differential expression and differential APA usage testing were carried out using XBSeq2.

Comparison with other algorithms

We compared XBSeq2 (1.3.2) with several other methods including XBSeq (1.2.2), DESeq2 (1.8.2) and edgeR (3.10.5). All the analysis and evaluation were carried out using R version 3.2.0 and Bioconductor version 1.20.3.

Updates of the XBSeq algorithm

Previously, we developed an algorithm, XBSeq, for detecting differentially expressed genes in RNA-seq datasets by taking background noise into consideration. Here we present several major updates to XBSeq. Firstly, we update the background annotation files (utilizing the same procedures as given in [8]) needed for measuring background noise for human and mouse genomes of various builds. Secondly, we incorporate functionalities of the Rsubread and GenomicRanges packages to enable direct processing of alignment files (.bam) within the R environment. Thirdly, besides the non-parametric method for parameter estimation proposed in the original paper, we provide one additional method for estimating parameters by using maximum likelihood estimation (Eq. 2). Fourthly, we provide a beta distribution approximation method for testing DEGs, which is much faster and more memory efficient compared to the original statistical method (Eq. 11). Fifthly, XBSeq2 now supports differential APA usage inference by using the functionalities provided by the roar package. The background annotation files as well as the APA annotation files for various genome builds are available to download from github: https://github.com/Liuy12/XBSeq_files.

Discrimination between DE and non-DE genes

In order to compare XBSeq2 with edgeR, DESeq2 and XBSeq, we generated synthetic datasets where we can control the differential expression status of each gene by following the procedure described in the methods section.
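A schematic of this simulation design is sketched below in illustrative Python. The negative binomial mean, dispersion and Poisson rate are made-up placeholders standing in for the parameters that were estimated from the real RNA-seq dataset in the study; only the overall structure (NB signal plus hybrid Poisson-derived noise, 10% of genes shifted 1.5-fold) follows the description above:

```python
# Schematic of the synthetic-data design: NB true signal S, hybrid noise B,
# observed counts X = S + B, with 10% of genes differentially expressed at 1.5-fold.
# NB mean/dispersion and the Poisson rate lambda are made-up placeholder values.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_per_group, fold = 5000, 3, 1.5
mean, disp = 5000.0, 0.2                     # placeholder NB mean and dispersion
M, sigma, NF, lam = 100, 3.0, 7, 2.0         # hybrid noise model; NF in {0, 7, 20}

de = np.zeros(n_genes, dtype=bool)
de[rng.choice(n_genes, n_genes // 10, replace=False)] = True   # 10% DE genes

def nb_counts(mu, size):
    # numpy's NB is parametrized by (n, p); convert from mean/dispersion
    n = 1.0 / disp
    p = n / (n + mu)
    return rng.negative_binomial(n, p, size)

def noise(size):
    mu_b = rng.poisson(lam + NF, size)                   # mu ~ Poisson(lambda + NF)
    return np.clip(M * rng.normal(mu_b, sigma), 0, None).astype(int)

S1 = nb_counts(mean, (n_genes, n_per_group))
S2 = nb_counts(mean, (n_genes, n_per_group))
S2[de] = nb_counts(mean * fold, (de.sum(), n_per_group))  # shift the DE genes
X1 = S1 + noise(S1.shape)                                 # observed = signal + noise
X2 = S2 + noise(S2.shape)
print(X1.shape, X2.shape, de.sum(), "DE genes")
```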
Basically, 5000 genes and their corresponding background noise were first simulated from negative binomial and Poisson distributions, respectively, with parameters estimated from a real RNA-seq dataset after discarding genes with relatively low mapped reads or large dispersion (top 10%). We showed that by discarding genes with high dispersions, we did not introduce bias toward a certain method (Additional file 1: Table S4). 500 genes were randomly selected to be differentially expressed with a 1.5-fold change. Background noise with different dispersion levels was simulated. All statistical metrics were calculated based on the average of 100 simulations. We compared the different methods for their ability to discriminate between differentially expressed genes and non-differentially expressed genes by examining the area under the ROC curve. As shown in Fig. 1 & Additional file 1: Table S1, in general, XBSeq2 and XBSeq perform better than the other two methods, with larger AUCs. To be specific, when background noise is at a low level, XBSeq2 achieved an AUC of 0.84, which is very close to XBSeq (AUC: 0.85), while the AUCs for DESeq2 and edgeR are both 0.73. When we increased the dispersion level of the background noise, all four methods had decreased AUCs. XBSeq2 and XBSeq are still the best methods, with AUCs of 0.75 under high background noise, compared to DESeq2 and edgeR (AUC: 0.68 for DESeq2, 0.67 for edgeR). We also investigated the performance of the different methods separately for genes with either high (> 75% quantile) or low (< 25% quantile) expression levels. As shown in Fig. 1b and c, for genes with relatively high expression intensity, XBSeq and XBSeq2 still perform equally well (AUC = 0.88 for both under low background noise) and only slightly better than DESeq2 and edgeR (AUCs, 0.84 for both under low background noise). On the other hand, for genes with relatively low expression intensity, XBSeq and XBSeq2 perform much better than DESeq2 and edgeR under low background noise (AUCs, 0.78 for XBSeq and XBSeq2, 0.58 for DESeq2 and edgeR). However, all methods show poor performance for genes with relatively low expression under high background noise (AUCs, 0.58 for XBSeq and XBSeq2, 0.52 for DESeq2 and edgeR). We also evaluated the MLE-based method for parameter estimation compared to the original non-parametric based method. As shown in Additional file 1: Table S2, both non-parametric (NP) based estimation and maximum likelihood estimation (MLE) based estimation showed better performance than DESeq2, with larger area under the ROC curve (AUC). NP-based estimation has slightly better performance than MLE-based estimation when the sample number is smaller than 10. When the sample number is large enough, there seems to be no difference in terms of performance. Last but not least, we evaluated the parameter big_count, which defines the cutoff for genes with large counts. As shown in Additional file 1: Table S3, the parameter has only a slight influence on the performance of XBSeq2, which indicates that the beta distribution approximation test has similar performance compared to the original statistical test. Overall, XBSeq2 performs equally well as XBSeq in terms of AUC under various conditions, and both methods perform better than DESeq2 and edgeR, especially for genes with relatively low expression intensity.

Fig. 1 ROC curves of different methods under various levels of background noise.
ROC curves of DESeq2, edgeR, XBSeq and XBSeq2 under low, intermediate or high levels of background noise (a); ROC curves of the different methods for highly expressed genes only (genes above the 75% quantile of expression intensity) (b); ROC curves of the different methods for genes expressed at low levels only (genes below the 25% quantile of expression intensity) (c). Simulations were carried out 100 times and average AUCs were used. A dataset with 3 replicates per condition and 10% DEGs with a 1.5-fold change was used.

Control of false discoveries

We also compared the different methods in terms of the number of false discoveries encountered among the top-ranked differentially expressed genes based on p value. As shown in Fig. 2 & Additional file 1: Table S1, overall, XBSeq2 and XBSeq perform better than DESeq2 and edgeR. To be specific, under low background noise, XBSeq2 identified 243 false discoveries out of 500, comparable to XBSeq (# of FDs, 240). Both methods perform better than DESeq2 and edgeR (# of FDs, 313 and 312, respectively). With increased background noise, all four methods detect an increased number of false discoveries. We then compared the performance of the different methods separately for genes expressed at high and low levels, as we did earlier. For genes with relatively high expression, XBSeq and XBSeq2 perform only slightly better than DESeq2 and edgeR (# of FDs, 53 for XBSeq and XBSeq2, 58 for DESeq2 and edgeR). However, for genes expressed at low levels, XBSeq and XBSeq2 performed much better than DESeq2 and edgeR under low background noise (# of FDs, 72 for XBSeq, 73 for XBSeq2, 102 for DESeq2, 101 for edgeR). Overall, XBSeq2 performs equally well as XBSeq in terms of the number of false discoveries under various conditions, and both methods perform better than DESeq2 and edgeR, especially for genes expressed at low levels.

Fig. 2 False discovery curves of different methods under various levels of background noise. False discovery curves of DESeq2, edgeR, XBSeq and XBSeq2 under low, intermediate or high levels of background noise (a); false discovery curves of the different methods for highly expressed genes only (genes above the 75% quantile of expression intensity) (b); false discovery curves of the different methods for genes expressed at low levels only (genes below the 25% quantile of expression intensity) (c). Simulations were carried out 100 times and average numbers of false discoveries were used. A dataset with 3 replicates per condition and 10% DEGs with a 1.5-fold change was used.

Statistical power

We compared the different methods in terms of the statistical power achieved at a pre-selected p value cutoff (p value = 0.05). As shown in Fig. 3 & Additional file 1: Table S1, overall, all four methods have similar statistical power, with edgeR slightly better than the other methods (Power: 0.35–0.36 for XBSeq2 and XBSeq, 0.35 for DESeq2, 0.37 for edgeR under low background noise). All methods have decreased statistical power when the dispersion of the background noise is increased. We also compared the different methods separately for genes expressed at high and low levels, as we did earlier. As shown in Fig. 3b and c, all four methods achieved similar statistical power for highly expressed genes. For genes expressed at low levels, DESeq2 and edgeR perform better than XBSeq and XBSeq2 (Power, 0.16 for DESeq2, 0.14 for edgeR, 0.08 for XBSeq and XBSeq2 under low background noise).
However, when background noise is increased, all methods exhibit poor performance, with similar statistical power for genes expressed at low levels. Overall, XBSeq2 performs comparably well with the other methods regarding statistical power.

Fig. 3 Statistical power of different methods under various levels of background noise. Bar chart of statistical power for DESeq2, edgeR, XBSeq and XBSeq2 under low, intermediate or high levels of background noise (a); bar chart of statistical power for the different methods for highly expressed genes only (genes above the 75% quantile of expression intensity) (b); bar chart of statistical power for the different methods for genes expressed at low levels only (genes below the 25% quantile of expression intensity) (c). Simulations were carried out 100 times and average statistical power was used. A dataset with 3 replicates per condition and 10% DEGs with a 1.5-fold change was used.

Identification of APA events from an RNA-seq dataset derived from ccRCC tumors and adjacent normal tissues

By utilizing the XBSeq2 algorithm, we carried out differential APA usage analysis and differential expression analysis with RNA-seq samples derived from ccRCC tumors and adjacent normal tissues (see Methods section). The APA annotation was generated by using the POLYAR program as described in the methods section. In total, we identified 179 genes with differential APA usage, with a roar value (ratio of ratios, the fold change) larger than 1.5, average expression intensity above the second quantile of all genes and an adjusted p value smaller than 0.1. MYH9, one of the top-ranked genes with differential APA usage, has previously been demonstrated to be associated with end-stage renal disease in African Americans [23]. We then proceeded to identify DEGs between the two groups using XBSeq2. In total, we identified 417 genes that are differentially expressed between tumor and adjacent normal samples with a fold change larger than 1.5, average expression intensity above the second quantile of all genes and an adjusted p value smaller than 0.1. We also compared the DEGs identified by XBSeq2, DESeq2 and edgeR (Additional file 1: Figure S1). 399 out of 417 DEGs identified by XBSeq2 were also identified by DESeq2 and edgeR. Intriguingly, only two of the genes we identified earlier with differential APA usage were found to be differentially expressed, PAG1 and FAM171A1, which might indicate that regulation through APA usage is independent of regulation through gene expression level.

In this paper, we present several major updates to XBSeq, a method we previously developed for testing differential expression for RNA-seq. In order to compare different statistical methods for their ability to correctly identify DEGs, we carried out simulation studies to generate synthetic RNA-seq datasets with different levels of background noise. While the Flux Simulator algorithm [24] provides a simulation path starting from the very beginning, in this report we directly simulate the expression levels with negative binomial and Poisson distributions for the signal and noise, allowing us to efficiently estimate the accuracy of our proposed algorithm. The Sequencing Quality Control (SEQC) project provides unique resources for comprehensively evaluating RNA-seq accuracy, reproducibility and information content [25]. However, the background noise for the SEQC data cannot be simply quantified, which makes it difficult to evaluate algorithms under different background noise.
Taking all this into consideration, we decided to apply a simulation procedure similar to that of XBSeq [8]. As shown in the results section, XBSeq2 performed equally well as XBSeq, and both performed better than DESeq2 and edgeR in terms of AUC (Fig. 1) and number of FDs (Fig. 2). For statistical power (Fig. 3), all four methods have similar performance, with edgeR being slightly better. Finally, we benchmarked all the methods with regard to time and memory consumption. As shown in Fig. 4, XBSeq2 consumes the least amount of time compared to the other three methods and also shows a significant increase in efficiency compared to XBSeq. Taken together, XBSeq2 and XBSeq are robust against background noise and provide more accurate detection of DEGs. In addition, XBSeq2 is faster and more memory efficient than XBSeq.

Fig. 4 Benchmark of different methods under a low level of background noise. Benchmark of DESeq2, edgeR, XBSeq and XBSeq2 in terms of computation time (a) and total amount of computational memory allocated (b). Methods were benchmarked with datasets of 3, 5, or 10 replicates per condition and 10% DEGs with a 1.5-fold change. The benchmark procedure was carried out on a MacBook Pro, 2.7 GHz Intel Core i5, 8 GB 1867 MHz DDR3.

We incorporated functionalities for testing differential APA usage from the roar package. As we mentioned earlier, DaPars is a novel algorithm for de novo identification and quantification of dynamic APA events between tumor and matched normal tissues, regardless of any prior APA annotation. By contrast, roar requires the user to provide an APA annotation and lacks the ability to identify novel APA sites. The only reason we incorporate roar instead of DaPars is programming language compatibility. We demonstrated the functionality of XBSeq2 for testing differential APA usage by using our in-house ccRCC dataset. We found 179 genes with differential APA usage. Interestingly, only 2 out of the 179 genes were found to be differentially expressed between tumor and normal samples. It could be that the APA annotation we generated is far from complete and some novel APA sites might be overlooked. Another possible explanation is that APA usage regulates transcriptomic activity through a different mechanism without affecting gene expression intensity.

We presented the latest updates of XBSeq in this report. The updated XBSeq2 package provides much faster execution and is implemented in a memory-efficient manner, allowing users to process data directly from BAM files, test differential expression for RNA-seq datasets much faster, and identify differential APA usage, all within one XBSeq2 package. XBSeq2 is available from Bioconductor: http://bioconductor.org/packages/XBSeq/.

Abbreviations: AUC: area under the ROC curve; BFGS: Broyden–Fletcher–Goldfarb–Shanno; ccRCC: clear cell renal cell carcinoma; DE: differential expression; DEGs: differentially expressed genes; EM: expectation maximization; MLE: maximum likelihood estimation; ROC: receiver operating characteristic; UTR: untranslated region

This research was supported in part by the Genome Sequencing Facility of the Greehey Children's Cancer Research Institute, UTHSCSA, which provided RNA-seq service. Funding for this research was provided partially by the National Institutes of Health Cancer Center Shared Resources (NIH-NCI P30CA54174) to YC and NIGMS (R01GM113245) to YC and YL, and the Cancer Prevention and Research Institute of Texas (CPRIT RP120685-C2) to YC. The publication costs for this article were funded by the aforementioned CPRIT grants to YC.
Availability of data and materials

XBSeq2 is available from Github: https://github.com/Liuy12/XBSeq and Bioconductor: https://bioconductor.org/packages/XBSeq/. Supporting datasets including annotations for background noise and predicted alternative polyadenylation sites can be downloaded from: https://github.com/Liuy12/XBSeq_files.

About this supplement

This article has been published as part of BMC Bioinformatics Volume 18 Supplement 11, 2017: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2016: bioinformatics. The full contents of the supplement are available online at <https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-18-supplement-11>.

Authors' contributions

All authors contributed to the manuscript. YL, ZJ and YC conceived and designed the study. YL implemented updates for the algorithm and carried out the simulation procedure. TLJ provided the protocol for collecting ccRCC samples. PW, WC, RR coordinated the experiment and provided consent information. TLJ provided ccRCC samples. ZJ carried out differential APA usage testing. All authors read and approved the final manuscript.

Ethics approval and consent to participate

The patients were consented under the GUTB protocol "HSC20050234H" to collect the tissue, and the samples were de-identified and provided to Dr. Johnson-Pais under the non-human protocol "HSC20150509N".

Consent for publication

The authors agree to the consent for publication.

Competing interests

The authors declare no competing interest in preparing the paper and developing the software associated with this paper.

Additional file

Additional file 1: Figures and Tables to provide additional analysis results. (PDF 310 kb)

Author affiliations

1. Greehey Children's Cancer Research Institute, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
2. Department of Cellular and Structural Biology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
3. Department of Urology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
4. Cornell University, Ithaca, NY, USA
5. Department of Epidemiology & Biostatistics, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA

References

1. Li J, Witten DM, Johnstone IM, Tibshirani R. Normalization, testing, and false discovery rate estimation for RNA-sequencing data. Biostatistics. 2012;13(3):523–38.
2. Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010;11(10):R106.
3. Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26(1):139–40.
4. Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014;15(12):550.
5. Zhou X, Lindsay H, Robinson MD. Robustly detecting differential expression in RNA sequencing data using observation weights. Nucleic Acids Res. 2014;42(11):e91.
6. Rapaport F, Khanin R, Liang Y, Pirun M, Krek A, Zumbo P, Mason CE, Socci ND, Betel D. Comprehensive evaluation of differential gene expression analysis methods for RNA-seq data. Genome Biol. 2013;14(9):R95.
7. Soneson C, Delorenzi M. A comparison of methods for differential expression analysis of RNA-seq data. BMC Bioinformatics. 2013;14:91.
8. Chen HI, Liu Y, Zou Y, Lai Z, Sarkar D, Huang Y, Chen Y.
Differential expression analysis of RNA sequencing data by incorporating non-exonic mapped reads. BMC Genomics. 2015;16(Suppl 7):S14.
9. Di Giammartino DC, Nishida K, Manley JL. Mechanisms and consequences of alternative polyadenylation. Mol Cell. 2011;43(6):853–66.
10. Derti A, Garrett-Engele P, Macisaac KD, Stevens RC, Sriram S, Chen R, Rohl CA, Johnson JM, Babak T. A quantitative atlas of polyadenylation in five mammals. Genome Res. 2012;22(6):1173–83.
11. Hoque M, Ji Z, Zheng D, Luo W, Li W, You B, Park JY, Yehia G, Tian B. Analysis of alternative cleavage and polyadenylation by 3′ region extraction and deep sequencing. Nat Methods. 2013;10(2):133–9.
12. Chang H, Lim J, Ha M, Kim VN. TAIL-seq: genome-wide determination of poly(A) tail length and 3′ end modifications. Mol Cell. 2014;53(6):1044–52.
13. Xia Z, Donehower LA, Cooper TA, Neilson JR, Wheeler DA, Wagner EJ, Li W. Dynamic analyses of alternative polyadenylation from RNA-seq reveal a 3′-UTR landscape across seven tumour types. Nat Commun. 2014;5:5274.
14. Grassi E. roar: Identify differential APA usage from RNA-seq alignments. Version 1.9.1. Bioconductor; 2016.
15. Liao Y, Smyth GK, Shi W. featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics. 2014;30(7):923–30.
16. Anders S, Pyl PT, Huber W. HTSeq--a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015;31(2):166–9.
17. Lawrence M, Huber W, Pages H, Aboyoun P, Carlson M, Gentleman R, Morgan MT, Carey VJ. Software for computing and annotating genomic ranges. PLoS Comput Biol. 2013;9(8):e1003118.
18. Liao Y, Smyth GK, Shi W. The subread aligner: fast, accurate and scalable read mapping by seed-and-vote. Nucleic Acids Res. 2013;41(10):e108.
19. Yuanhang Liu DW, Leach RJ, Chen Y. Model-based and context-specific background correction and differential methylation testing for MBDCap-seq. In: BIBM 2015. Washington, DC: IEEE; 2015.
20. Johnson NL, Kemp AW, Kotz S. Univariate discrete distributions. 3rd ed. Hoboken: Wiley; 2005.
21. Akhtar MN, Bukhari SA, Fazal Z, Qamar R, Shahmuradov IA. POLYAR, a new computer program for prediction of poly(A) sites in human sequences. BMC Genomics. 2010;11:646.
22. Tian B, Hu J, Zhang H, Lutz CS. A large-scale analysis of mRNA polyadenylation of human and mouse genes. Nucleic Acids Res. 2005;33(1):201–12.
23. Kao WH, Klag MJ, Meoni LA, Reich D, Berthier-Schaad Y, Li M, Coresh J, Patterson N, Tandon A, Powe NR, et al. MYH9 is associated with nondiabetic end-stage renal disease in African Americans. Nat Genet. 2008;40(10):1185–92.
24. Griebel T, Zacher B, Ribeca P, Raineri E, Lacroix V, Guigo R, Sammeth M. Modelling and simulating generic RNA-Seq experiments with the flux simulator. Nucleic Acids Res. 2012;40(20):10073–83.
25. Consortium SM-I. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the sequencing quality control consortium. Nat Biotechnol. 2014;32(9):903–14.
Why find big prime numbers? Why study weird arithmetic sequences?

Right now I am sitting at my desk thinking about a technique to help me identify BIG prime numbers in certain arithmetic sequences using algebraic geometry. This was part of my last project for my PhD, which I think was very interesting. When I say "BIG" I mean thousands of digits, maybe millions. When I say "primes in arithmetic sequences" I mean separating the prime numbers in a list like $latex \{ \alpha_n \}_{n=0}^\infty$.

A popular, a prostituted, an important and a "dumb" sequence. Now we will describe some non-trivial sequences and get a glimpse of why they are important (or not). This is of course just my opinion, since there are infinitely many sequences which I am unaware of and which could be more interesting. In fact there is a sequence of "uninteresting numbers", which contains all numbers that appear to have no known or interesting properties (like being prime, a Mersenne prime, a triangular number, a cube, etc.). But then this sequence is itself interesting, since it is unique, and we get a paradox.

The popular sequence

The popular one is a very famous arithmetic sequence, the "Mersenne sequence", $latex \{ 2^n-1\}_{n=1}^\infty\subset\mathbb{N}$: $latex \{ 1,3,7,15,31,63,127,255,...\}$ This is the sequence of all Mersenne numbers. We denote by $latex M_n$ the number $latex 2^n-1$. To identify the primes in this sequence, note that if $latex n=2k$ is even (with $latex k>1$) then $latex 2^n-1=(2^k-1)(2^k+1)$ and hence... not a prime number. This means that in the sequence we can discard all the values $latex n\in 2\mathbb{Z}$ beyond $latex n=2$, that is, $latex M_{2k}$ is not prime. A less trivial fact is that if $latex n$ is composite, namely $latex n=ab$ with $latex a,b>1$, then we have that: $latex 2^n-1=2^{ab}-1=(2^a-1)(\sum_{k=0}^{b-1}2^{ka})=(2^b-1)(\sum_{k=0}^{a-1}2^{kb})$ and then $latex M_{ab}$ is also not prime. So, we are left with the elements of the sequence of the form $latex M_p$ for $latex p$ a prime number. A quick inspection shows that $latex M_2, M_3, M_5, M_7$ are Mersenne primes. But $latex M_{11} = 2047 =23\cdot 89$, which is not prime. So, which Mersenne numbers are prime? This is a difficult question; there are big computer grids working on it, trying to find the most spectacular prime. The biggest Mersenne prime known (and the biggest prime in general) is $latex 2^{74207281}-1$, which has approximately 22.3 million digits. The way to check it without putting so much effort into algebraic geometric or number theoretic rigor is the following. To check that $latex 2^p -1$ is prime: consider the sequence $latex \{ \sigma_1=4, \sigma_2=14,...,\sigma_j=(\sigma_{j-1}^2)-2,...\}$; then $latex M_p=2^p-1$ is prime if and only if $latex \sigma_{p-1}\equiv 0 \bmod M_p$. This is the fastest way known today; it is called the Lucas-Lehmer test. There are other elegant tests (but not necessarily faster) using elliptic curves, which I learned about when I was talking with Benedict Gross at the conference on L-Functions at Harvard last year, and in some sense I have to do something similar as part of my PhD program.

The prostituted

Another famous and prostituted sequence is the so-called "Fibonacci sequence": $latex \{ 1,1,2,3,5,8,13,...,F_{n-1}+F_{n-2}, ...\} $ This is famous and is always presented as the building block of beauty in the universe. This is just an exaggeration, but well, that is another story. It is known that $latex \lim_{n\to\infty} \frac{F_{n}}{F_{n-1}}=\phi=1.618033...$.
This number $latex \phi$ has the property that its square is equal to $latex \phi+1$, so it appears as a root of the polynomial $latex x^2 -x -1=0$. The value of $latex \phi$ can be found when you divide the lengths of certain diagonals in some polygons by the lengths of the sides. Also in your body: if you divide your height by the distance from your feet to your belly button, and in many other parts of our body. This number can be seen in a Leonardo da Vinci drawing called "Le proporzioni del corpo umano secondo Vitruvio", which can be seen here. This is why $latex \phi$ has some kind of mystic and esoteric significance for some people, and that is why it is sometimes called the Golden Ratio. There are no good ways to identify primes in the Fibonacci sequence. This is mainly because the sequence depends heavily on the "addition" operation and not on "multiplication", and believe it or not, addition in number theory is a mystery. A very important conjecture in mathematics about this mystery, the abc conjecture, predicts a relation between the multiplication and the addition of integers through their prime factorizations. So Fibonacci is difficult as a problem in terms of primality but, as we saw, has more significance in geometry.

The important sequence

Now, the important sequence is this one: $latex \{2, 1729, 87539319, 6963472309248, 48988659276962496, 24153319581254312065344,...\}$ Do you see it? Well, it is not too easy to see. It is called the "Hardy-Ramanujan sequence": if you denote by $latex \tau(n)$ the $latex n^{th}$ element of the sequence, then $latex \tau(n)$ is the smallest number that can be written in $latex n$ different ways as a sum of two cubes. Fermat proved hundreds of years earlier that there are infinitely many such numbers, but they caught the attention of Hardy through Ramanujan in a very nice story. When Ramanujan was dying at the hospital in England, Hardy went to visit him. Hardy told him that the number of his taxi to the hospital was a dull, boring number, 1729. Ramanujan said: 'No, Hardy, it is a very interesting number. It is the smallest number expressible as the sum of two cubes in two different ways.' These numbers are called "taxicab numbers" because of this story, and that is why we use the letter $latex \tau$. The importance of these numbers lies in the new theory in arithmetic geometry that arose around them, in which I am very interested, personally and professionally. In effect, Ramanujan had discovered, in his own way of thinking, integral solutions to the equation $latex x^3+y^3=z^3+w^3$, which nowadays is studied over $latex \mathbb{Q}$ and then twisted. These kinds of surfaces are called K3 surfaces (Kodaira-Kummer-Kähler: smooth minimal complete surfaces with trivial canonical bundle), and there is no smart way of obtaining these numbers other than using the algebraic geometry of these surfaces. An example of such a surface with a parametrized family of rational curves (the curves may contain taxicab solutions) appeared here as a figure [image not reproduced]. And yes, 1729 has the desired property, that is, $latex \tau(2)$ is a sum of two cubes in exactly two different ways.
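The defining property of 1729 is easy to confirm by brute force for small cases; here is a short, illustrative Python sketch that finds the smallest number expressible as a sum of two positive cubes in at least two ways:

```python
# Brute-force check of the taxicab property: count representations n = a^3 + b^3
# (a <= b, positive integers) and report the smallest n with at least two of them.
from collections import defaultdict

LIMIT = 100          # search a, b up to this bound (keeps the search tiny)
reps = defaultdict(list)
for a in range(1, LIMIT + 1):
    for b in range(a, LIMIT + 1):
        reps[a**3 + b**3].append((a, b))

two_way = sorted(n for n, ways in reps.items() if len(ways) >= 2)
print(two_way[0], reps[two_way[0]])   # -> 1729 [(1, 12), (9, 10)]
```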
The important sequence Now for the important sequence: $latex \{2, 1729, 87539319, 6963472309248, 48988659276962496, 24153319581254312065344,...\}$ Do you see it? Well, it is not too easy to see. It is called the "Hardy-Ramanujan sequence": if you denote by $latex \tau(n)$ the $latex n^{th}$ element of the sequence, then $latex \tau(n)$ is the smallest number that can be written in $latex n$ different ways as a sum of two cubes. Fermat had proved hundreds of years earlier that there are infinitely many such numbers, but they caught Hardy's attention through Ramanujan in a very nice story. When Ramanujan was dying in hospital in England, Hardy went to visit him. Hardy told him that the number of the taxi that had brought him to the hospital was a dull, boring number, 1729. Ramanujan said: 'No, Hardy, it is a very interesting number. It is the smallest number expressible as the sum of two cubes in two different ways.' That is why we use the letter $latex \tau$: these numbers are called "taxicab numbers" because of this story. The importance of these numbers lies in the new theory in arithmetic geometry that grew around them, in which I am very interested, personally and professionally. Practically, Ramanujan had discovered, in his own way of thinking, integral solutions to the equation $latex x^3+y^3=z^3+w^3$, which nowadays is studied over $latex \mathbb{Q}$ and then twisted. Surfaces of this kind are called K3 surfaces (Kodaira-Kummer-Kähler: smooth minimal complete surfaces with trivial canonical bundle), and there is no smart way of obtaining these numbers other than using the algebraic geometry of these surfaces. There are examples of such surfaces carrying parametrized families of rational curves (the curves may contain taxicab solutions on them). And yes, 1729 has the desired property, that is, $latex \tau(2)$ is a sum of two cubes in exactly two different ways. Srinivasa Ramanujan was a human computer; in fact, to verify this, we can check that $latex \tau(2)=1729$ and $latex \tau(6)=24153319581254312065344$ can be written as a sum of 2 cubes in 2 and 6 different ways respectively: $latex \begin{matrix}\tau(2)&=&1729&=&1^3 + 12^3 \\&&&=&9^3 + 10^3\end{matrix}$ $latex \begin{matrix}\tau(6)&=&24153319581254312065344&=&582162^3 + 28906206^3 \\&&&=&3064173^3 + 28894803^3 \\&&&=&8519281^3 + 28657487^3 \\&&&=&16218068^3 + 27093208^3 \\&&&=&17492496^3 + 26590452^3 \\&&&=&18289922^3 + 26224366^3\end{matrix}$ The "dumb" sequence There are plenty of other sequences; for example, there are some "dumb" sequences that in the end... are not so dumb. Consider the following one: $latex \{1, 11, 21, 1211, 111221, 312211, 13112221, ... \}$ Do you see the pattern? Well, the pattern is visual: you begin with "1" and each term is the "description of the previous one". That is, you ask yourself "What symbols are in the preceding element of the sequence?", and you say "one one" (11), then the next is "two ones" (21), and so on... This sequence is called the "look and say sequence". There are a lot of things you can do in your spare time with this sequence, for example, prove that a "4" can never appear in any element of the sequence. The number 13112221 is in fact the biggest known prime in this sequence; are there others? This apparently dumb sequence has an amazing property which transforms it from dumb to analytic, and it is due to John Conway. Consider the $latex k^{th}$ element of the sequence, call it $latex a_k$, and define $latex \ell(a_k)=$ the number of digits of $latex a_k$. It is proved that $latex \lim_{k\to\infty} \frac{\ell(a_k)}{\ell(a_{k-1})}=\lambda=1.303577269034...$. The surprising thing is that $latex \lambda$ is an algebraic number: it is a root of a polynomial of degree 71. This was proved by John Conway; for more information, look at Wikipedia.
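Just for fun, here is a tiny Python sketch that generates the look and say sequence and watches the digit-count ratio $latex \ell(a_k)/\ell(a_{k-1})$ creep towards Conway's constant $latex \lambda\approx 1.3035$ (the convergence is slow, so do not expect many correct digits from a few terms):

```python
from itertools import groupby

def look_and_say(term):
    """Next term: read the previous one aloud, run by run ('1211' -> 'one 1, one 2, two 1s' -> '111221')."""
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

terms = ["1"]
for _ in range(45):
    terms.append(look_and_say(terms[-1]))

# ratios of consecutive digit counts approach Conway's constant
for k in range(40, 45):
    print(k, len(terms[k]), len(terms[k]) / len(terms[k - 1]))
```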
But why? Why would you want to find big primes or classify properties of sequences? I have been asked plenty of times "Why do you want to find big primes?"; today was one of those days, when a master's student asked me. There is a FALSE answer which is very popular among a lot of people, namely "It has cryptographic applications, since prime numbers are the basis of today's e-commerce and of cryptographic schemes." This is false, since public-key cryptographic schemes need prime numbers with no more than about 1000 decimal digits (and this is already very long). Using more than 1000 digits might be more secure, but the speed would decrease dramatically; that is why you don't use 1 million bits of security. Identifying these "cryptographic" prime numbers at random with certain properties (like being a Sophie Germain prime) to generate a 4096-bit public key, which has approximately 1234 digits (already paranoid against public cryptanalysis techniques), takes less than a second on my workstation (try: time openssl genrsa 4096). So cryptography is not a good excuse to generate BIG prime numbers; when I say big, I mean thousands of digits, millions. The answer in my case is easy... To collect them... They are difficult to find, but they are present in a lot of shapes, for example in the last sequence: which elements are prime? Which is the biggest known? How to identify them with geometric techniques? That is how theory in mathematics is born, with a question regarding classification. So, finding big primes is difficult, and therefore valuable; there are only 49 known Mersenne primes and nobody has proved that there are infinitely many of them (but it is believed heuristically that there are). A couple of years ago I wrote here about finding a big prime of the form $latex 3\cdot 8^n -1$, which I found here; there are a lot of possible primes disguised in infinitely many shapes, and you could just define whatever recursive relation you like and analyze it. Other people would say: to test a big cluster, or to get some money (yes, you get money if you break the records). Maybe an analytic number theorist would say: to understand better the distribution of prime numbers, which is unknown. So, in conclusion, I think it is good to collect primes and analyze sequences, to find something hidden in the unknown nature of the logic of sequences of natural numbers, since we don't yet know how the prime numbers arise. There is still a mystery in how the prime numbers are distributed among the natural numbers; even with quantum computers it is hard to find arbitrarily large ones, so there is still work to do in mathematics. For more information on all the known published sequences (yes, whatever you think of will be there) go to www.oeis.org, the Online Encyclopedia of Integer Sequences. Posted by beck on Wednesday, April 19, 2017. Tags: algebraic geometry, arithmetic, English, primes, sequences. primenumbers said... To prove whether a number is a prime number, first try dividing it by 2, and see if you get a whole number. If you do, it can't be a prime number. If you don't get a whole number, next try dividing it by prime numbers: 3, 5, 7, 11 and so on. beck said... And according to your proposed proof of primality, to check if n is prime I need to divide by k and check whether the division gives a whole number for every k < sqrt(n). Imagine that the number you want to test has smallest prime factor 2^74,207,281 − 1 (this is the 49th Mersenne prime); this factor has 22 million digits, so your method would not finish even with a quantum computer. What you are describing is a sieve to identify primes less than n by checking whether they are divisible by the k < sqrt(n). But this sieving doesn't work in practice to decide whether an arbitrary integer is prime.
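To make the point in that reply concrete, here is the naive trial-division test in Python; it is perfectly fine for everyday small numbers, and perfectly hopeless for numbers whose smallest prime factor has millions of digits, since the loop length grows like sqrt(n):

```python
from math import isqrt

def is_prime_trial(n):
    """Naive primality test: divide by every k up to sqrt(n)."""
    if n < 2:
        return False
    for k in range(2, isqrt(n) + 1):
        if n % k == 0:
            return False        # found a factor
    return True

print([n for n in range(2, 60) if is_prime_trial(n)])
```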
Regional differences in expression of VEGF mRNA in rat gastrocnemius following 1 hr exercise or electrical stimulation Tom D Brutsaert, Timothy P Gavin, Zhenxing Fu, Ellen C Breen, Kechun Tang, Odile Mathieu-Costello, Peter D Wagner BMC Physiology, 2002, DOI: 10.1186/1472-6793-2-8 Abstract: Total muscle VEGF mRNA (via Northern blot) was upregulated 3.5-fold with both exercise and with electrical stimulation (P = 0.015). Quantitative densitometry of the VEGF mRNA signal via in situ hybridization reveals significant regional differences (P ≤ 0.01) and protocol differences (treadmill, electrical stimulation, and control, P ≤ 0.05). Mean VEGF mRNA signal was higher in the oxidative region in both treadmill run (~7%, N = 4 muscles, P ≤ 0.05) and electrically stimulated muscles (~60%, N = 4, P ≤ 0.05). These regional differences were not significantly different from control muscle (non-exercised, non-stimulated, N = 2 muscles), although nearly so for electrically stimulated muscle (P = 0.056). Moderately higher VEGF mRNA signal in oxidative muscle regions is consistent with regional differences in capillary density. However, it is not possible to determine if the VEGF mRNA signal difference is important in either the maintenance of regional capillarity differences or exercise induced angiogenesis. The adaptations of skeletal muscle to endurance-type training are well characterized. They include increases in mitochondrial volume density and increases in the activity of enzymes involved in oxidative phosphorylation to produce ATP [1]. The increased metabolic capacity of trained muscle is accompanied by an angiogenic response which increases capillary density and/or capillary to fiber ratio [2,3], preserving the functional match between oxygen delivery and metabolic demand within the muscle. The angiogenic response in skeletal muscle is thought to be mediated by a number of angiogenic factors including, most importantly, vascular endothelial growth factor (VEGF). VEGF is a 45 kDa heparin-binding homodimeric glycoprotein with a predominant specificity to vascular endothelial cells [4-7]. Recent investigations demonstrate that VEGF increases vascular permeability [4], endothelial cell proliferation in vitro [8], and angiogenesis in vivo [9]. We have previously demons The ERI-6/7 Helicase Acts at the First Stage of an siRNA Amplification Pathway That Targets Recent Gene Duplications Sylvia E. J. Fischer, Taiowa A. Montgomery, Chi Zhang, Noah Fahlgren, Peter C. Breen, Alexia Hwang, Christopher M. Sullivan, James C. Carrington, Gary Ruvkun Abstract: Endogenous small interfering RNAs (siRNAs) are a class of naturally occurring regulatory RNAs found in fungi, plants, and animals. Some endogenous siRNAs are required to silence transposons or function in chromosome segregation; however, the specific roles of most endogenous siRNAs are unclear. The helicase gene eri-6/7 was identified in the nematode Caenorhabditis elegans by the enhanced response to exogenous double-stranded RNAs (dsRNAs) of the null mutant. eri-6/7 encodes a helicase homologous to small RNA factors Armitage in Drosophila, SDE3 in Arabidopsis, and Mov10 in humans. Here we show that eri-6/7 mutations cause the loss of 26-nucleotide (nt) endogenous siRNAs derived from genes and pseudogenes in oocytes and embryos, as well as deficiencies in somatic 22-nucleotide secondary siRNAs corresponding to the same loci.
About 80 genes are eri-6/7 targets that generate the embryonic endogenous siRNAs that silence the corresponding mRNAs. These 80 genes share extensive nucleotide sequence homology and are poorly conserved, suggesting a role for these endogenous siRNAs in silencing of and thereby directing the fate of recently acquired, duplicated genes. Unlike most endogenous siRNAs in C. elegans, eri-6/7–dependent siRNAs require Dicer. We identify that the eri-6/7–dependent siRNAs have a passenger strand that is ~19 nt and is inset by ~3–4 nts from both ends of the 26 nt guide siRNA, suggesting non-canonical Dicer processing. Mutations in the Argonaute ERGO-1, which associates with eri-6/7–dependent 26 nt siRNAs, cause passenger strand stabilization, indicating that ERGO-1 is required to separate the siRNA duplex, presumably through endonucleolytic cleavage of the passenger strand. Thus, like several other siRNA–associated Argonautes with a conserved RNaseH motif, ERGO-1 appears to be required for siRNA maturation. Hadamard Renormalisation of the Stress Energy Tensor on the Horizons of a Spherically Symmetric Black Hole Space-Time Cormac Breen,Adrian C. Ottewill Physics , 2011, DOI: 10.1103/PhysRevD.85.064026 Abstract: We consider a quantum field which is in a Hartle-Hawking state propagating in a general spherically symmetric black hole space-time. We make use of uniform approximations to the radial equation to calculate the components of the stress tensor, renormalized using the Hadamard form of the Green's function, on the horizons of this space-time. We then specialize these results to the case of the `lukewarm' Reissner-Nordstrom-de Sitter black hole and derive some conditions on the stress tensor for the regularity of the Hartle-Hawking state. Extended Green-Liouville asymptotics and vacuum polarization for lukewarm black holes Abstract: We consider a quantum field on a lukewarm black hole spacetime. We introduce a new uniform approximation to the radial equation, constructed using an extension of Green-Liouville asymptotics. We then use this new approximation to construct the renormalized vacuum polarization in the Hartle-Hawking vacuum. Previous calculations of the vacuum polarization rely on the WKB approximation to the solutions of the radial equation, however the nonuniformity of the WKB approximations obscures the results of these calculations near both horizons. The use of our new approximation eliminates these obscurities, enabling us to obtain explicitly finite and easily calculable values of the vacuum polarization on the two horizons. Hadamard Renormalization of the Stress Energy Tensor in a Spherically Symmetric Black Hole Space-Time with an Application to Lukewarm Black Holes Abstract: We consider a quantum field which is in a Hartle-Hawking state propagating in a spherically symmetric black hole space-time. We calculate the components of the stress tensor, renormalized using the Hadamard form of the Green's function, in the exterior region of this space-time. We then specialize these results to the case of the `lukewarm' Riessner-Nordstrom-de Sitter black hole. Carbon dioxide kinetics and capnography during critical care Cynthia T Anderson, Peter H Breen Critical Care , 2000, DOI: 10.1186/cc696 Abstract: Carbon dioxide is produced in the tissues by aerobic plus/minus anaerobic metabolism (Fig. 1a), transported in blood to the lung by venous return (essentially equal to cardiac output [QT]), and eliminated from the lung by minute ventilation (VE) [1]. 
In this model the lung is a simple mixing chamber and the alveolar fractional carbon dioxide (FACO2) is given by FACO2 = V̇CO2,ti/V̇A + FICO2 (1), where V̇CO2,ti is the tissue carbon dioxide production, V̇A is alveolar ventilation, and FICO2 is the inspired FCO2. If one assumes no diffusion defect for carbon dioxide, then the partial carbon dioxide tension (PCO2) of arterial blood (PaCO2) leaving the lung is the perfusion-weighted average alveolar PCO2 (PACO2). Note that pulmonary shunt will add mixed venous blood with high PCO2 (PVCO2) to arterial blood and slightly increase PaCO2 [2]. V̇A is the product of respiratory frequency and expired tidal volume (VT). Expired VT is composed of alveolar VT and total physiologic dead space (VDphy). The fraction VDphy/VT is given by VDphy/VT = (PaCO2 - PĒCO2)/PaCO2 (2), where PĒCO2 is the mixed expired PCO2 [2]. In turn, VDphy is partitioned into anatomic dead space (VDana; conducting airways that do not participate in gas exchange) and alveolar dead space (VDalv; ventilated alveolar units that are devoid of perfusion; Fig. 2). VDalv/VTalv is given by VDalv/VTalv = (PaCO2 - PACO2)/PaCO2 (3), where PACO2 is the alveolar PCO2, estimated either from PETCO2 or PĒCO2 [2] (see below). The PaCO2-PETCO2 gradient results from the presence of VDalv or high alveolar ventilation-to-blood flow (VA/Q) lung regions (see also Capnography during weaning from mechanical ventilation, below). The normal capnogram is the measurement of PCO2 at the airway opening during the ventilatory cycle (Fig. 1b) [1]. Phase I (inspiratory baseline) reflects inspired gas, which is normally devoid of carbon dioxide. Phase II (expiratory upstroke) is the transition between VDana, which does not participate in gas exchange, and alve A Preliminary Examination of Risk in the Pharmaceutical Supply Chain (PSC) in the National Health Service (NHS) [PDF] Liz Breen Journal of Service Science and Management (JSSM), 2008, DOI: 10.4236/jssm.2008.12020 Abstract: The effective management of pharmaceuticals in the National Health Service (NHS) is critical to patient welfare thus any risks attached to this must be identified and controlled. At a very basic level, risks in the pharmaceutical supply chain are associated with product discontinuity, product shortages, poor performance, patient safety/dispensing errors, and technological errors (causing stock shortages in pharmacies) to name but a few, all of which incur risk through disruption to the system. Current indications suggest that the pharmaceutical industry and NHS practitioners alike have their concerns as to the use of generic supply chain strategies in association with what is perceived to be a specialist product (pharmaceuticals). The aim of the study undertaken was to gain a more realistic understanding of the nature and prevalence of risk in the Pharmaceutical Supply Chain (PSC) to be used as a basis for a more rigorous research project incorporating investigation in the UK, Europe and USA. Data was collected via a workshop forum held in November 2005. The outputs of the workshop indicated that there were thirty-five prevalent risks. The risks were rated using risk assessment categories such as impact, occurrence and controllability. The findings indicated that the risks identified are similar to those prevalent in industrial supply chains, regardless of the idiosyncrasies of pharmaceuticals.
However, the group consen-sus was that caution must be applied in how such risks are addressed, as there are aspects of the product that highlight its uniqueness e.g. criticality. Psychosocial factors and their predictive value in chiropractic patients with low back pain: a prospective inception cohort study Jennifer M Langworthy, Alan C Breen Chiropractic & Manual Therapies , 2007, DOI: 10.1186/1746-1340-15-5 Abstract: A prospective inception cohort study of patients presenting to a UK chiropractic practice for new episodes of non-specific low back pain (LBP) was conducted. Baseline questionnaires asked about age, gender, occupation, work status, duration of current episode, chronicity, aggravating features and bothersomeness using Deyo's 'Core Set'. Psychological factors (fear-avoidance beliefs, inevitability, anxiety/distress and coping, and co-morbidity were also assessed at baseline. Satisfaction with care, number of attendances and pain impact were determined at 6 weeks. Predictors of poor outcome were sought by the calculation of relative risk ratios.Most patients presented within 4 weeks of onset. Of 158 eligible and willing patients, 130 completed both baseline and 6-week follow-up questionnaires. Greatest improvements at 6 weeks were in interference with normal work (ES 1.12) and LBP bothersomeness (ES 1.37). Although most patients began with moderate-high back pain bothersomeness scores, few had high psychometric ones. Co-morbidity was a risk for high-moderate interference with normal work at 6 weeks (RR 2.37; 95% C.I. 1.15–4.74). An episode duration of >4 weeks was associated with moderate to high bothersomeness at 6 weeks (RR 2.07; 95% C.I. 1.19 – 3.38) and negative outlook (inevitability) with moderate to high interference with normal work (RR 2.56; 95% C.I. 1.08 – 5.08).Patients attending a private UK chiropractic clinic for new episodes of non-specific LBP exhibited few psychosocial predictors of poor outcome, unlike other patient populations that have been studied. Despite considerable bothersomeness at baseline, scores were low at follow-up. In this independent health sector back pain population, general health and duration of episode before consulting appeared more important to outcome than psychosocial factors.Recovery from persistent low back pain is determined not solely by clinical factors but also by the individual's psychological state [1]. Such psychologic Gravothermal oscillations in two-component models of star clusters Philip G. Breen,Douglas C. Heggie Physics , 2011, DOI: 10.1111/j.1365-2966.2011.20036.x Abstract: In this paper, gravothermal oscillations are investigated in two-component clusters with a range of different stellar mass ratios and total component mass ratios. The critical number of stars at which gravothermal oscillations first appeared is found using a gas code. The nature of the oscillations is investigated and it is shown that the oscillations can be understood by focusing on the behaviour of the heavier component, because of mass segregation. It is argued that, during each oscillation, the re-collapse of the cluster begins at larger radii while the core is still expanding. This re-collapse can halt and reverse a gravothermally driven expansion. This material outside the core contracts because it is losing energy both to the cool expanding core and to the material at larger radii. The core collapse times for each model are also found and discussed. 
For an appropriately chosen case, direct N -body runs were carried out, in order to check the results obtained from the gas model, including evidence of the gravothermal nature of the oscillations and the temperature inversion that drives the expansion. Gravothermal oscillations in multi-component models of star clusters Abstract: In this paper, gravothermal oscillations are investigated in multi-component star clusters which have power law initial mass functions (IMF). For the power law IMFs, the minimum masses ($m_{min}$) were fixed and three different maximum stellar masses ($m_{max}$) were used along with different power-law exponents ($\alpha$) ranging from 0 to -2.35 (Salpeter). The critical number of stars at which gravothermal oscillations first appear with increasing $N$ was found using the multi-component gas code SPEDI. The total mass ($M_{tot}$) is seen to give an approximate stability condition for power law IMFs with fixed values of $m_{max}$ and $m_{min}$ independent of $\alpha$. The value $M_{tot}/m_{max} \simeq 12000$ is shown to give an approximate stability condition which is also independent of $m_{max}$, though the critical value is somewhat higher for the steepest IMF that was studied. For appropriately chosen cases, direct N-body runs were carried out in order to check the results obtained from SPEDI. Finally, evidence of the gravothermal nature of the oscillations found in the N-body runs is presented.
A quantum state is any possible state in which a quantum mechanical system can be. A fully specified quantum state can be described by a state vector, a wavefunction, or a complete set of quantum numbers for a specific system. A partially known quantum state, such as an ensemble with some quantum numbers fixed, can be described by a density operator. Paul A. M. Dirac invented a powerful and intuitive mathematical notation to describe quantum states, known as bra-ket notation. Basis states Any quantum state ∣ψ⟩ can be expressed in terms of a sum of basis states (also called basis kets), ∣ki⟩ ∣ψ⟩ = ∑ici∣ki⟩ where ci are the coefficients representing the probability amplitude, such that the absolute square of the probability amplitude, ∣ci∣2 is the probability of a measurement in terms of the basis states yielding the state ∣ki⟩. The normalization condition mandates that the total sum of probabilities is equal to one, ∑i∣ci∣2 = 1. The simplest understanding of basis states is obtained by examining the quantum harmonic oscillator. In this system, each basis state ∣n⟩ has an energy $E_n = \hbar \omega \left(n + {\begin{matrix}\frac{1}{2}\end{matrix}}\right)$. The set of basis states can be extracted using a construction operator a † and a destruction operator a in what is called the ladder operator method. Superposition of states If a quantum mechanical state ∣ψ⟩ can be reached by more than one path, then ∣ψ⟩ is said to be a linear superposition of states. In the case of two paths, if the states after passing through path α and path β are $|\alpha\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |0\rangle + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |1\rangle$, and $|\beta\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |0\rangle - \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |1\rangle$, then ∣ψ⟩ is defined as the normalized linear sum of these two states. If the two paths are equally likely, this yields $|\psi\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|\alpha\rangle + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|\beta\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}(\begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|0\rangle + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|1\rangle) + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}(\begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|0\rangle - \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|1\rangle) = |0\rangle$. Note that in the states ∣α⟩ and ∣β⟩, the two states ∣0⟩ and ∣1⟩ each have a probability of $\begin{matrix}\frac{1}{2}\end{matrix}$, as obtained by the absolute square of the probability amplitudes, which are $\begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}$ and $\begin{matrix}\pm\frac{1}{\sqrt{2}}\end{matrix}$. In a superposition, it is the probability amplitudes which add, and not the probabilities themselves. The pattern which results from a superposition is often called an interference pattern. In the above case, ∣0⟩ is said to constructively interfere, and ∣1⟩ is said to destructively interfere. For more about superposition of states, see the double-slit experiment. Pure and mixed states A pure quantum state is a state which can be described by a single ket vector, or as a sum of basis states. A mixed quantum state is a statistical distribution of pure states. The expectation value ⟨a⟩ of a measurement A on a pure quantum state is given by ⟨a⟩ = ⟨ψ∣A∣ψ⟩ = ∑iai⟨ψ∣αi⟩⟨αi∣ψ⟩ = ∑iai∣⟨αi∣ψ⟩∣2 = ∑iaiP(αi) where ∣αi⟩ are basis kets for the operator A, and P(αi) is the probability of ∣ψ⟩ being measured in state ∣αi⟩. 
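As a tiny numerical illustration of the last formula (a sketch using NumPy, with a Pauli-Z-like observable chosen purely for the example), the expectation value computed as ⟨ψ∣A∣ψ⟩ agrees with the probability-weighted sum over the eigenbasis of A:

```python
import numpy as np

# a qubit observable A with eigenvalues a_i = +1, -1 and eigenvectors |alpha_i> = |0>, |1>
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
a, alpha = np.linalg.eigh(A)                 # eigenvalues a_i and basis kets |alpha_i> (columns)

psi = np.array([1.0, 1.0]) / np.sqrt(2)      # the pure state (|0> + |1>)/sqrt(2)

probs = np.abs(alpha.conj().T @ psi) ** 2    # P(alpha_i) = |<alpha_i|psi>|^2
expval = np.sum(a * probs)                   # <a> = sum_i a_i P(alpha_i)
print(expval, psi.conj() @ A @ psi)          # same value as <psi|A|psi> (here 0)
```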
In order to describe a statistical distribution of pure states, or mixed state, the density operator (or density matrix), ρ, is used. This extends quantum mechanics to quantum statistical mechanics. The density operator is defined as ρ = ∑s ps∣ψs⟩⟨ψs∣ where ps is the fraction of each ensemble in pure state ∣ψs⟩. The ensemble average of a measurement A on a mixed state is given by $\left [ A \right ] = \langle \overline{A} \rangle = \sum_s p_s \langle \psi_s | A | \psi_s \rangle = \sum_s \sum_i p_s a_i | \langle \alpha_i | \psi_s \rangle |^2 = tr(\rho A)$ where it is important to note that two types of averaging are occurring, one being a quantum average over the basis kets of the pure states, and the other being a statistical average over the ensemble of pure states. See also: Quantum harmonic oscillator, Bra-ket notation, Orthonormal basis, Wavefunction, Probability amplitude, Density operator.
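The same check for a mixed state (again just a NumPy sketch, with made-up ensemble weights): building ρ from the ensemble and taking tr(ρA) reproduces the statistically weighted quantum averages.

```python
import numpy as np

# two pure states and their ensemble weights (chosen arbitrarily for the example)
psi_a = np.array([1.0, 0.0])                      # |0>
psi_b = np.array([1.0, 1.0]) / np.sqrt(2)         # (|0> + |1>)/sqrt(2)
p = [0.25, 0.75]

# density operator rho = sum_s p_s |psi_s><psi_s|
rho = sum(ps * np.outer(psi, psi.conj()) for ps, psi in zip(p, [psi_a, psi_b]))

A = np.array([[1.0, 0.0], [0.0, -1.0]])           # the observable used above

# ensemble average: statistical average of the quantum averages <psi_s|A|psi_s>
ensemble_avg = sum(ps * psi.conj() @ A @ psi for ps, psi in zip(p, [psi_a, psi_b]))
print(np.trace(rho @ A), ensemble_avg)            # both give tr(rho A)
```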
Spatially coupled turbo-coded continuous phase modulation: asymptotic analysis and optimization Tarik Benaddi ORCID: orcid.org/0000-0003-4112-68761 & Charly Poulliat2 For serially or parallel concatenated communication systems, spatial coupling techniques enable to improve the threshold of these systems under iterative decoding using belief propagation (BP). For the case of low-density parity-check (LDPC) codes, it has been shown that, under some asymptotic assumptions, spatially coupled ensembles have BP thresholds that approach the bitwise maximum a posteriori (MAP) threshold of the related uncoupled ensemble. This phenomenon is often referred to as threshold saturation, and it has sometimes very important consequences. For example, in the case of regular LDPC code ensembles, spatial coupling enables to achieve asymptotically the capacity for any class of binary memoryless symmetric channels. Since then, this threshold saturation has been conjectured or proved for several other types of concatenations. In this work, we consider a serially concatenated scheme which is the serial concatenation of a simple outer convolutional code and a continuous phase modulator (CPM) separated by an interleaver. Then, we propose a method to do the spatial coupling of several replicas of this serially concatenated scheme, aiming to improve the asymptotic convergence threshold. First, exploiting the specific structure of the proposed system, an original procedure is proposed in order to terminate the spatially coupled turbo-coded CPM scheme. In particular, the proposed procedure aims to ensure the continuity of the transmitted signal among spatially coupled replicas, enabling to keep one of the core characteristics and advantages of coded CPM schemes. Then, based on an asymptotic analysis, we show that the proposed scheme has very competitive thresholds when compared to carefully designed spatially coupled LDPC codes. Furthermore, it is shown how we can accelerate the convergence rate of the designed systems by optimizing the connection distributions in the coupling matrices. Finally, by investigating on different continuous phase modulation schemes, we corroborate the conjecture stating that spatially coupled turbo-coded CPM schemes saturate to a lower bound very close to the threshold given by the extrinsic information transfer (EXIT) area theorem. Continuous phase modulations (CPMs) belong to the class of nonlinear coded modulations [1]. They can be decomposed as the serial concatenation of a trellis-based encoder associated with a memoryless filter bank modulator [2]. For this type of modulation, the phase transitions are kept continuous by design from one symbol to the other. Consequently, these nonlinear waveforms exhibit narrower spectral main lobe and relatively lower side lobes when compared to classical memoryless linear modulations. This feature makes them popular for applications having strong constraints on the out-of-band rejection. Furthermore, for low-cost and stringent embedded wireless communication systems, the inherent constant envelope also enables embedded amplifiers to operate near the saturation regime and to ease operation in nonlinear channels. Because of these interesting features, CPM has been considered over time for several stringent applications and adopted in many standards, recommendations, or proprietary solutions (to cite a few: GSM [3], telemetry [4], Bluetooth [5], optical communications [6], tactical communications, etc.). 
For satellite communications, CPM has been adopted for the DVB-RCS2 standard [7], deep space communications [8], automatic identification system [9], tactical communications [10], etc. More recently, the CPM was pointed as a candidate for the fifth generation (5G) machine-to-machine (M2M) communications [11] and was proposed for the navigation's inter-satellite links [12]. The authors in [2] showed that the CPM operation can actually be divided into the concatenation of two modules. The first one is the continuous phase encoder (CPE) which is a state machine defined by the CPM parameters and mainly responsible for assuring the continuity of the phase. The second one, called the memoryless modulator (MM), is a filter bank composed of waveforms that compose the signal going to be transmitted by the emitter. Thanks to this decomposition, CPM has greatly benefited from the concept of turbo decoding. Several papers investigated the behavior and the joint optimization of iterative schemes of various CPM families with convolutional or BCH codes [13–20]. Concerning low-density parity-check (LDPC) codes, the first related work was conducted by [21, 22] where density evolution was used to optimize unstructured LDPC codes for the minimum shift keying (MSK) modulation. Ganesan [23] proposed a bit-interleaved coded-modulation approach to optimize codes for M-ary continuous-phase frequency-shift keying (CPFSK) modulations. Later, structured LDPC codes were considered such as irregular repeat accumulate (IRA) codes [24–26] and protograph-based LDPC codes [27, 28]. One of the relatively recent forward error correcting codes introduced in the literature is convolutional LDPC codes [29, 30]. They are constructed from LDPC block codes using a design strategy called spatial coupling. This latter provides them with a specific behavior, called saturation phenomenon, that makes them achieve very good thresholds in a various number of channels. The saturation phenomenon remains hard to explain until [31] where authors proved that, in the case of the binary erasure channel (BEC), the belief propagation (BP) threshold of convolutional LDPC codes actually converges to the maximum a posteriori (MAP) threshold of the corresponding LDPC block code. Afterwards, several studies extended the proof to other channels and introduced strategies to couple other error correcting codes such as turbo-codes. Authors in [32] for instance coupled a systematic serial and parallel turbo-codes and showed that, over the BEC channel, the threshold of the former outperforms the latter. Regarding braided convolutional codes (BCC), a similar study was conducted in [33] and concluded that coupled code ensembles exhibit better minimal distance than the uncoupled underlying ensemble. Recently, [34] presented a unified description of the construction of such codes and [35] identified the fact that spatial coupling of concatenated schemes is actually analog to coupling generalized multi-edge type (MET) LDPC code. Applying the proposed design in [35] to CPM schemes is not possible as it will not lead to a continuous signal. This is due to the fact that CPEs of different stages are not sharing their boundary states as it will be made clearer later. In this paper, we propose a method to spatially couple serially concatenated CPM schemes. First, by assuring the phase continuity at the transmitter, the encoding of the CPM signal can be efficiently performed without introducing any additional overhead (like termination sequences). 
Secondly, the continuity of the phase suggests that the decoding of the spatially coupled scheme should be done sequentially from the beginning of the signal. In order to allow parallel computations of the BP decoding, i.e., starting the decoding at all coupled stages at once, we propose a proper CPE trellis decoding initialization. We will also investigate the asymptotic performance of the system using the P-EXIT analysis [36] and minimize the number of iterations before convergence by optimizing a continuous-valued coupling matrix. Using the same analysis, when the coupling length increases, we will additionally show experimentally that the threshold of the spatially coupled CPM (SC-CPM) saturates to a value very close to the threshold given by the area theorem [37], which is a lower bound on achievable performance. Finally, we will show that, for various CPM schemes, very competitive results can be achieved with the classical (5,7)8 convolutional code when compared to the aforementioned error correcting codes. System description of coded CPM The proposed study in this paper holds for any coded CPM (C-CPM) scheme where the outer component is a forward error correcting (FEC) code and where the inner component is a CPM modulator as depicted in Fig. 1. At the beginning, a k-bit sequence s∈{0,1}k is encoded with a FEC code \(\mathcal {C}\mathcal {C}\) into an n-bit codeword u∈{0,1}n (the code rate is R=k/n). Without loss of generality, we consider in this paper the rate-1/2 convolutional code with octal generators (5,7)8. The obtained sequence u is then interleaved by an interleaver π to obtain v. Without loss of generality and to be consistent with the asymptotic analysis carried out in Section 5, we consider π as a random interleaver. v is then encoded by the CPM modulator to obtain the signal: $$\begin{array}{*{20}l} x(t) &= \sqrt{\frac{2E_{s}}{T}} cos\left(2\pi f_{0} t + \theta(t,\boldsymbol{v})+ \theta_{0} \right) \\ \text{with} & \\ \theta(t,\boldsymbol{v}) &= \pi h \sum_{i=0}^{N-1}{v_{i} q(t-iT)}, \quad q(t) = \left\{\begin{array}{l} \int_{0}^{t}{g(\tau)d\tau}\\ 1/2, \hspace{0.1cm} t>L_{c}T \end{array}\right. \end{array} $$ (Fig. 1: The serially concatenated coded CPM transmitter.) Here, θ0 is the initial phase, f0 the carrier frequency, g(t) the frequency pulse, θ(t,v) the information carrying phase, h the modulation index, Lc the memory, and ℜ(.) the real part. Practically, the value of Lc and the shape of q(t) (rectangular (REC), raised cosine (RC), Gaussian, etc.) govern the smoothness of the phase transitions.
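As a quick illustration of Eq. (1), here is a minimal Python/NumPy sketch (not taken from the paper: it assumes a rectangular LREC frequency pulse and a ±1 symbol alphabet purely for the example) that evaluates the information-carrying phase and lets one verify that it is continuous across symbol boundaries:

```python
import numpy as np

def cpm_phase(v, h, Lc, T=1.0, sps=16):
    """
    Information-carrying phase theta(t, v) = pi*h*sum_i v_i q(t - i*T) as in Eq. (1),
    sketched for a rectangular (LREC) frequency pulse g(t) = 1/(2*Lc*T) on [0, Lc*T],
    so that q(t) ramps linearly from 0 to 1/2 over Lc symbol periods.
    """
    n_sym = len(v)
    t = np.arange(0, n_sym * T, T / sps)

    def q(tau):
        return np.clip(tau, 0.0, Lc * T) / (2.0 * Lc * T)

    theta = np.zeros_like(t)
    for i, vi in enumerate(v):
        theta += np.pi * h * vi * q(t - i * T)
    return t, theta

# binary 1REC CPM with modulation index h = 1/2, symbols chosen in {-1, +1} for the example
t, theta = cpm_phase(v=[+1, -1, -1, +1, +1], h=0.5, Lc=1)
print(theta[::16])   # continuous, piecewise-linear phase trajectory for the REC pulse
```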
At the decoder side, a classical iterative turbo receiver is considered. First, the soft-input soft-output (SISO) CPM decoder is based on Rimoldi's decomposition [2]. As shown in Fig. 2, this decomposition splits the CPM modulator into a serial concatenation of the CPE, represented by a trellis, and the MM, seen as a filter bank. Indeed, [2] showed that the CPM operation can actually be divided into the concatenation of two main modules. The first one is the continuous phase encoder (CPE), which is a state machine defined by the CPM parameters and mainly responsible for ensuring the continuity of the phase. The second one, called the memoryless modulator (MM), is a filter bank composed of symbol-duration waveforms that make up the signal transmitted by the emitter during one symbol period. As shown in Fig. 2, prior to CPE encoding, the information bit sequence v is mapped into the symbol sequence U={Un∈{±1,...,±(M−1)}}, and the so-called tilted phase is given by: $$\begin{array}{*{20}l} &\overline{\psi}(\tau + nT, \boldsymbol{v})= \left\lbrace \left[ 2\pi h \sum_{i=0}^{n-L_{c}}{v_{i}} \right] \ mod \ p + W(\tau) \right. \\ &\left. + 4\pi h \sum_{i=0}^{L_{c}-1}{v_{n-i}q(\tau+iT)} \right\rbrace \ mod \ 2\pi \ , \ 0\leq \tau \leq T \end{array} $$ (Fig. 2: Rimoldi's decomposition of CPM.) Here, p is the denominator of h and W(τ) is a data-independent term [2]. Rimoldi's proposed CPE trellis is formed by \(pM^{L_{c}-1}\) states, each defined by the tuple \(\sigma _{n}=[U_{n-1},..., U_{n-L_{c}+1}, V_{n}]\) where Ui is an M-ary modified symbol and \(V_{n} = [ \sum _{i=0}^{n-L_{c}}{U_{i}} ] mod\ p \). The MM filter bank consists of \(pM^{L_{c}}\) different pulses {xi(t)}i corresponding to the CPE outputs \(X_{n} = [ U_{n},..., U_{n-L_{c}+1}, V_{n}]\). The signal x(t) is transmitted over an additive white Gaussian noise (AWGN) channel having a double-sided power spectral density N0/2. From Eq. (1), the received noisy complex baseband signal becomes: $$\begin{array}{*{20}l} y(t) = \sqrt{2E_{s}/T} exp\{j\psi(t,\boldsymbol{v})\} + n(t) \ , \ t>0 \end{array} $$ The outputs of the receiver matched filter bank {x∗(T−t)} are sampled once per symbol interval in order to obtain the correlator-based outputs: $$\begin{array}{*{20}l} \boldsymbol{y^{n}} = \left[ y_{i}^{n} = \int_{nT}^{(n+1)T}{y(l)x_{i}^{*}(l)dl} \right]_{1\leq i \leq pM^{L_{c}}} \end{array} $$ The samples yn can be shown to be sufficient statistics that are used to compute likelihood functions at the receiver. Following [14], the likelihood function p(yn/Xn) is given as follows: $$\begin{array}{*{20}l} p(\boldsymbol{y^{n}}/X_{n}) \varpropto exp\{2Re(y^{n}_{i})/N_{0}\} \end{array} $$ This likelihood gives the transition metrics of the CPE trellis when the BCJR algorithm [38] is used. The obtained extrinsic log-likelihood ratios (LLRs) of the demodulated bits, \(L_{e}({\mathcal {C}\mathcal {P}\mathcal {M}})\), are then used, after deinterleaving, as a priori LLRs, \(L_{a}({\mathcal {C}\mathcal {C}})\), by the outer decoder \({\mathcal {C}\mathcal {C}}^{-1}\). By running a BCJR algorithm again on the CC trellis, we obtain the extrinsic LLRs corresponding to the coded bits, denoted here by \(L_{e}({\mathcal {C}\mathcal {C}})\). Finally, these latter form the a priori LLRs of the demodulated bits, denoted \(L_{a}({\mathcal {C}\mathcal {P}\mathcal {M}})\), of the SISO \({\mathcal {C}\mathcal {P}\mathcal {M}}^{-1}\). This concludes one turbo iteration. After a fixed number of iterations, the decoded information bits are estimated from the a posteriori LLRs of the decoded bits \(L_{ap}({\mathcal {C}\mathcal {C}})\). A sketch of the turbo receiver architecture with the exchanged LLR messages is depicted in Fig. 3. (Fig. 3: The coded CPM turbo receiver.) As proposed in [35], we use a vectorized representation of the transmitter and the receiver as depicted in Fig. 4. The information blocks s and x are represented by white circles, the CPM and the FEC components are represented by rectangles, and the interleavers are placed above the corresponding edges.
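Before moving on to spatial coupling, the iterative schedule just described can be summarized by the following skeleton (a hedged sketch only: cpm_siso, cc_siso, interleave and deinterleave are placeholder callables standing for the two BCJR decoders and the interleaver π, not functions from any actual library):

```python
def turbo_receiver(y, cpm_siso, cc_siso, interleave, deinterleave, n_coded, n_iters=10):
    """One possible scheduling of the LLR exchange of Fig. 3 (placeholder interfaces)."""
    La_cpm = [0.0] * n_coded                   # no a priori information at the first iteration
    Lapp = None
    for _ in range(n_iters):
        Le_cpm = cpm_siso(y, La_cpm)           # BCJR on the CPE trellis using the CPM likelihoods
        La_cc = deinterleave(Le_cpm)           # extrinsic CPM LLRs become a priori CC LLRs
        Le_cc, Lapp = cc_siso(La_cc)           # BCJR on the CC trellis
        La_cpm = interleave(Le_cc)             # extrinsic CC LLRs become a priori CPM LLRs
    return Lapp                                # a posteriori LLRs of the decoded bits
```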
The vectorized (proto) graph corresponding to the C-CPM transmitter (left) and receiver (right) Spatially coupled turbo-coded CPM Spatial coupling is a general framework that aims to improve iterative decoding performance of some iterative systems by coupling them spatially. It has been introduced for the case of LDPC codes for which it has been shown, for example, that, despite their limited BP thresholds, regular LDPC codes can reach very good performance under iterative decoding when they are spatially coupled. Thus, with simple regular codes, we can achieve very good performance, close to the capacity under some asymptotic assumptions. In particular, it is shown that the thresholds of the spatially coupled ensembles under belief propagation (BP) decoding converge asymptotically to the MAP threshold of the underlying ensemble (i.e., to the MAP threshold of the uncoupled ensemble). This phenomenon is often referred to as threshold saturation. This phenomenon has been then observed for turbo-codes and some other serially concatenated systems. Here, we investigate on the case of CPM-based serially concatenated systems which will be shown to have some specificities. Coupling procedure In this section, we show how one can spatially couple the serially concatenated systems in Fig. 4. In this paper, we consider a framework similar to [35]; however, this latter cannot be applied directly to the CPM. More caution should be taken into account at both the transmitter and the receiver; otherwise, the modulation will fail to keep one of its main features, i.e., the phase continuity. Motivated by the spatially coupled protographs [30], spatially coupled turbo-codes are obtained by performing the general edge spreading-like (ESR) rule (also referred to as copy-and-permute procedure in the protograph literature) described as follows: The encoded bits u are split into ms+1 bundles. The obtained graph is then replicated L times. Finally, we interconnect the L replicated graphs by permuting the bundles of the same type. This final permutation step is a constrained step for which only bundles that belong to a given type can be exchanged. It is fully characterized by the coupling matrix B whose definition is given by: $$\begin{array}{*{20}l} B=[b_{0}, b_{1}, \ldots b_{m_{s}}] \end{array} $$ where bi represents the fraction of bits (width of the bundle) connecting the copy ℓ to the copy (ℓ+i). L can be referred to as the coupling length and ms as the syndrome former memory. It is straightforward that B should verify \(\sum _{i=0}^{m_{s}} b_{i} = 1\). We now consider the simple example as given in Fig. 5 to describe with more details the coupling procedure for a toy example considering the simplest case with B=[0.5,0.5]. We start from a classical concatenated system consisting of an outer convolutional code concatenated with an inner CPM separated with an interleaver π. The general aim of spatial coupling is to introduce some interconnections in a structured manner (enabling analysis and optimization) between replicated versions of this base concatenated system. The first step consists in introducing some multi-edge representation into the base concatenated system to enable simple description of the possible interconnections that can be made between replicas. In our case with B=[0.5,0.5], it just means that half of the coded bits of one replica will be sent to the CPM modulator of the same stage while it will exchange the other half with the replica next to it. 
It will also receive half of the coded bits of the preceding replica to be used during its CPM encoding step. To enable such interconnections and to have a suitable graphical representation of this coupled system, we need to introduce an intermediary representation of the base concatenated system as presented at step 2 of Fig. 5, which will be referred to as base or proto representation of the underlying concatenated system. To enable multi-edge type representation of the coded bits, we have to split the interleaver π into two interleavers πi and πo. Then, we introduce two bundle "ports" that explicitly show how many types of bundles (group of coded bits) are considered. Eventually, we can adapt the size of the port boxes to better represent the fractions bi,∀i=0⋯ms. In our example, they are of equal sizes. When considering only one replica as in step 2, giving our proto CC+CPM system, bundles of the same type are directly connected and the overall concatenated system is equivalent to the initial concatenated system but with a detailed representation or splitting of the interleaver π. This representation is equivalent to protograph-like representation as for the case of LDPC codes. Structured this way, we then apply the second step of the edge spreading rule which gives the third part of Fig. 5. It consists in copying L times the proto representation. L is also referred to as the coupling length. Then, the final step of the ESR is applied which is represented by the last part of Fig. 5. Except at the boundaries, each replica is connected to other replicas following the connection matrix \(\phantom {\dot {i}\!}B=[b_{0}, b_{1}, \ldots b_{m_{s}}]\). In our example, a replica at stage l is connected to replicas at stage l−1 and l+1. For the first and the last replicas, they are not connected to a preceding or a following replica, respectively. Thus, there is a degree of freedom to decide how to start and to end the obtained coupled chain. This point will be discussed below, in Section 4.2. SC TC transmitter. The spatial coupling is done according to B = [0.5,0.5] Moreover, since we are considering transmission using CPM, one important feature is the ability to keep phase continuity among the chain. This encoding issue is illustrated in Fig. 5 with some dashed arrows with the label SSI that stands for possible state side information. In this case, using a specific scheduling for the chain encoding, continuity of the phase along the chain can be preserved. All encoding strategies for the proposed scheme are discussed in Section 4.3. As a final remark, as it is done in the analysis of LDPC codes, and even if it seems to be quite artificial at the first sight when considering an uncoupled single stage (second step of Fig. 5), the introduction of the two interleavers will allow the study of the average behavior of the obtained spatially coupled scheme as it will be detailed in Section 5. In Fig. 5, we end up with unconnected bundles at both edges of the coupled diagram. One can tail-bite the graph by interconnecting these bundles all the way around (also referred to as wrapping-around procedure). It can be easily shown that the global design rate RL of this obtained SC-CPM is exactly R; however, this scheme does not exhibit the desired coupling gain since, locally, each stage behaves exactly as the underlying C-CPM scheme in Fig. 4. 
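As a concrete (and purely illustrative) sketch of the copy-and-permute / edge-spreading step described above, the snippet below assigns the coded bits of each replica to CPM stages according to B; the function name and the random bundle splitting are assumptions made for the example, not the paper's implementation. Note how the boundary stages receive fewer bundles: those vacant connections are exactly where the padding bits of the termination discussed next are inserted.

```python
import numpy as np

def spread_bits(n_coded, B, L, seed=0):
    """
    Edge-spreading sketch: for each replica l (0..L-1), split its n_coded coded bits into
    bundles of fractions B = [b_0, ..., b_ms]; bundle i is modulated by the CPM of stage l + i.
    Returns, for every CPM stage, the list of (source replica, bit index) pairs it modulates.
    """
    ms = len(B) - 1
    rng = np.random.default_rng(seed)
    stages = {j: [] for j in range(L + ms)}           # ms extra CPM stages terminate the chain
    for l in range(L):
        perm = rng.permutation(n_coded)               # plays the role of the interleaver of replica l
        sizes = np.round(np.array(B) * n_coded).astype(int)
        sizes[-1] = n_coded - sizes[:-1].sum()        # make the bundle sizes add up exactly
        start = 0
        for i, sz in enumerate(sizes):
            for b in perm[start:start + sz]:
                stages[l + i].append((l, int(b)))
            start += sz
    return stages

stages = spread_bits(n_coded=8, B=[0.5, 0.5], L=3)
for j, bits in stages.items():
    print("CPM stage", j, "modulates", bits)
```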
An alternative solution is as follows: Append ms CPM modulators at the end to link the right-hand unconnected bundles Add padding bits at the ms first and last CPM modulators to fill the vacant bundle connections For the obtained coupled graph illustrated in Fig. 5, as ms=1, we have appended one extra CPM modulator to modulate last fraction of coded bits from the last replica. Then, padding null bits are used to initialize the coupling chain and to terminate it. The ms black circles represent the block of padding bits. In this case, the overall code rate RL (also called design rate) of the coupled ensemble is lower than the rate R of a single replica and is given by : $$\begin{array}{*{20}l} R_{L} = R - \frac{m_{s}}{L+m_{s}}R \end{array} $$ Observe that the expression of RL is analogous to the rate of spatially coupled protographs and that the termination produces a rate loss of \(\frac {m_{s}}{L+m_{s}}R\). This loss vanishes to 0 as L→+∞. To summarize, the coupling procedure using the proposed termination is given by the following steps: Draw the vectorized (proto) graph corresponding to the coded CPM scheme of interest; Spatially couple this graph following the proposed edge spreading rule with respect to the matrix B; Insert known zero bits at the vacant bundles at the boundaries of the spatially coupled chain. Encoding strategies As discussed earlier, applying [35] to the CPM will not guarantee the phase continuity. In a classical setting as in [35], the encodings performed by the CPEs of different stages are done independently: they all start encoding from the same CPE state (say σ0) but finish at different states depending on the sequence v of each. The phase is then continuous within the signal generated by each stage of the coupled system, but is going to present discontinuities at the transitions between stages. Therefore, for this scheme, the encoding strategy particularly matters. In the following, we discuss three encoding strategies and show how phase continuity can be ensured. Strategy 1: Independent CPM encoders This first strategy is to simply not address the continuity of the phase, since the discontinuities are rare in comparison with the total length of the signal (as they occur only when transitioning from stage ℓ to stage ℓ+1). The advantage is that this method allows direct application of all spatial coupled encoding algorithms. The drawback is that the periodic phase discontinuities between some symbol intervals will increase the amount of the occupied spectrum outside the main lobe, due to the presence of these high frequency components, which may not be acceptable in some stringent applications. Strategy 2: Termination of CPM encoders The second solution is to enforce the CPM encodings of each stage to end at a predefined CPE state, e.g., the all-zero state σ0. This can be achieved by appending CPM termination sequences after all second interleavers in Fig. 5. As an example, in order to end at the all-zero state σ0, we should append a number \(\mathcal {N}\) of termination symbols, at the end of the CPM encoder input, equal to [39]: $$\begin{array}{*{20}l} \mathcal{N} = \left\lfloor{\frac{P-1}{M-1}}\right\rfloor + L_{c} \end{array} $$ Now, since the CPE of each stage starts encoding from the same state σ0, each stage can operate independently while assuring the phase continuity. In other words, the CPEs of all stages will start encoding in parallel, starting from state σ0, and this is achieved without sacrificing the phase continuity. 
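As a small numerical aside (a sketch, with example parameter values that are not from the paper), the two quantities just introduced, the design rate of Eq. (4) and the number of CPM termination symbols of Eq. (5), are easy to tabulate; the coupling rate loss indeed vanishes as the coupling length L grows:

```python
from math import floor

def design_rate(R, ms, L):
    """Design rate of the terminated coupled ensemble, Eq. (4): R_L = R - ms/(L + ms) * R."""
    return R * (1 - ms / (L + ms))

def n_termination_symbols(P, M, Lc):
    """Number of CPM termination symbols forcing the CPE back to the all-zero state, Eq. (5)."""
    return floor((P - 1) / (M - 1)) + Lc

for L in [4, 16, 64, 256]:
    print("L =", L, "R_L =", design_rate(R=0.5, ms=1, L=L))
print("termination symbols:", n_termination_symbols(P=2, M=2, Lc=2))
```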
The advantage of this solution is that the signal now is kept continuous during the total transmission. The disadvantage is that, due to the introduction of termination symbols, it leads to a small additional rate loss with respect to RL. Strategy 3: CPM encoders with SSI Even if both solutions are acceptable in some scenarios, they may be nonviable in stringent CPM applications where the properties of the CPM are of high interest. Instead, we propose to communicate a state side information (SSI) from one stage to the other. In other words, each stage ℓ will communicate its final CPE state to its next neighbor (ℓ+1), in order for this latter to start encoding from this same state, and hence generate a continuous CPM signal. This procedure can be interpreted as the following: if one puts the trellises of all the CPEs side by side, the final transmitted signal will form one continuous path which in turn will result in a continuous signal. The obtained scheme is represented in Fig. 5 where the SSI is depicted by the dashed arrows. Note that this does not come at the expense of an increase to complexity in comparison with classical CPM encoders, since we just modify the starting state of the CPM encoder according to the final state of the CPM encoder of the previous stage. One direct way to implement this strategy is to first perform the CPM encoding corresponding to the first spatial stage. After the encoding is done, this stage communicates its final CPE state to the neighboring CPE. That way, this later can start encoding the sequence at its input starting from that particular state. While this implementation is straightforward, it will lead to an additional encoding delay (especially with high values of L). This delay is caused by the fact that the stage ℓ has to wait until the encoding of the previous stage finishes in order to be provided with the state it should start encoding from. In general, by noting δc the encoding time taken by CPE, stage ℓ has to wait (ℓ−1)δc before starting the encoding. This drawback can be easily fixed. In order to minimize this delay, the state shared between stages can be actually quickly deduced directly from the information bits { Un} only in Fig. 2, using the CPE trellis state definition. Actually, given the sequence {Un} at the input of CPE of the stage ℓ, the CPE of the stage ℓ+1 should start encoding from the state: $$\begin{array}{*{20}l} \sigma =\left[ U_{n-1},..., U_{n-L_{c}+1}, V_{n} = \left[ \sum_{i=0}^{n-L_{c}}{U_{i}} \right] mod\ p \right] \end{array} $$ Thanks to this encoding strategy, all CPEs can now start encoding at the same time in parallel. NB: Contrary to what Fig. 5 may suggest, SSI connections and the exchanged bit connections are completely decorrelated. SSI is exchanged only from one stage to the next one (to assure the phase continuity), while the exchanged bit connections are between adjacent stages (which is given by the coupling matrix B). Summary of the proposed spatially coupled encoding To summarize, the encoding of spatially coupled coded CPM schemes is achieved by the following steps : Insert known zero bits at the vacant bundles at the boundaries of the spatially coupled chain; Feed all outer codes with corresponding information bit sequences and perform convolutional encoding, Interleave the output coded sequences and spread them between different stages according to the spatially coupled graph and interleave again; Using Eq. 
(6), compute the initial state at which the CPE encoder at stage ℓ should start as a function of the sequence {v} of the stage ℓ−1; Perform encoding at the CPE of all stages According to the output of each CPE, pick the corresponding waveform in the MM filter bank; Transmit the whole signal. At the receiver, it is well known that, for the BCJR algorithm, the probability of the transition (σn−1=σ1,σn=σ2) can be factored as p(σ1,σ2,y(t))=αn−1(σ1)γn(σ1,σ2)β(σ2), where γn(σ1,σ2) is given by Eq. (3) and where αn−1(σ1) and βn(σ2) are computed through the so-called forward and backward recursions. In our scheme, the starting and ending states of each stage are not known by the receiver (except the starting state of the first stage) and thus need to be estimated by the decoder. To take this into account, both the forward and backward recursions should be initialized equally likely as: $$\begin{array}{*{20}l} \alpha_{0}(\sigma_{i}) = \beta_{N}(\sigma_{i}) = \frac{1}{\text{number of CPM states}},\quad \forall \sigma_{i} \end{array} $$ The disadvantage of this method is that the MAP decoder is more complex because of the higher number of explored trellis paths. However, the advantage is that all the CPM properties are maintained and no additional rate loss is induced. Should the decoding complexity be of concern, low complexity BCJR variants could be implemented. This is outside the scope of this paper. Asymptotic convergence analysis and coupling optimization In this section, we study the asymptotic behavior of the proposed spatially coupled turbo-coded CPM scheme. For the considered iterative system, density evolution (DE) (which should be in addition implemented using a coset approach in our setting) [40] cannot be easily implemented due to the inner MAP CPM detector. Firstly, an analytic expression of the output probability density distribution is not easy to derive, and secondly, evaluating the threshold by tracking the evolution of the exchanged message densities between the SISO CPM and the SISO CC decoders over a Gaussian channel is a cumbersome task. Instead, EXIT analysis [41] can be alternatively exploited to evaluate the threshold of the overall system. EXIT chart analysis of the associated uncoupled serially concatenated coded CPM scheme An EXIT analysis is a one-dimensional parameter tracking method that enables to analyze asymptotically (i.e., in the infinite length regime) the convergence behavior of general concatenated iterative systems. This method has been introduced in [41] showing that iterative decoding using BCJR or BP algorithms can be well predicted tracking a one-dimensional parameter, e.g., the average mutual information (MI) between bits and associated LLRs. To this end, exchanged LLRs are usually modeled as consistent Gaussian random variables (r.v.). These consistent Gaussian r.v. can be characterized by a single parameter, usually their mean or variance as they are closely related [41]. Then, for the different SISO components, we can compute the so-called input-output transfer functions that give the average mutual information between bits and extrinsic LLRs at the output of a SISO component versus the average mutual information between bits and a priori LLRs at the input of a SISO component. In general, closed-form expressions do not exist for these input-output transfer functions that have finally to be estimated through intensive Monte-Carlo simulations and then approximated. 
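For reference, the "consistent Gaussian" modeling mentioned above leads to the usual J(σ) function that maps the LLR standard deviation to a mutual information value; here is a small Monte-Carlo sketch of it (any closed-form approximation of J could be used instead):

```python
import numpy as np

def J(sigma, n_samples=200_000, seed=0):
    """
    Mutual information between a bit and its LLR when the LLR is consistent Gaussian,
    i.e. L ~ N(sigma^2/2, sigma^2) conditioned on the transmitted bit.
    Estimated by Monte Carlo from I = 1 - E[log2(1 + exp(-L))].
    """
    rng = np.random.default_rng(seed)
    L = rng.normal(loc=sigma**2 / 2, scale=sigma, size=n_samples)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

print(J(0.1), J(1.0), J(3.0))   # grows from ~0 towards 1 as the LLRs become more reliable
```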
As an example, the CPM demodulator transfer function, referred to as \(T_{\mathcal{CPM}}(.)\), is depicted in Fig. 4. Based on the channel outputs, \(T_{\mathcal{CPM}}(.)\) gives the average MI between the extrinsic LLRs \(L_{e}(\mathcal{CPM})\) and the corresponding bits as a function of the average MI between the a priori LLRs \(L_{a}(\mathcal{CPM})\) and the corresponding bits. We refer to these quantities as \(I_{e}(\mathcal{CPM})\) and \(I_{a}(\mathcal{CPM})\), respectively. Similarly, the outer decoder transfer function, referred to as \(T_{\mathcal{CC}}(.)\), computes both the average MI between the extrinsic LLRs \(L_{e}(\mathcal{CC})\) and the corresponding bits and the average MI between the a posteriori LLRs \(L_{ap}(\mathcal{CC})\) and the corresponding bits from the average MI between the a priori LLRs \(L_{a}(\mathcal{CC})\) and the corresponding bits. These quantities are denoted as \(I_{e}(\mathcal{CC})\), \(I_{ap}(\mathcal{CC})\), and \(I_{a}(\mathcal{CC})\), respectively. Assuming no a priori information from the outer decoder at the first detection and decoding step, successive MI exchange updates between the two SISO components are then performed until we reach \(I_{ap}(\mathcal{CC})=1\) (convergence to zero error probability) or we reach the maximum number of iterations (no convergence).

SC-CPM EXIT analysis

Following the framework in [35], the EXIT chart analysis of the coupled system is summarized in Fig. 6 (the vectorized graph of the ith stage of the terminated SC-CPM receiver). At each decoding iteration, all CPM demodulation updates are performed, followed by all outer decoder updates. The update equations relative to stage i are the following:
1. The CPM demodulator at stage i, based on its transfer function \(T_{\mathcal{CPM}}(.)\) and the a priori MI \(I_{a}(\mathcal{CPM}_{i})\), computes the extrinsic mutual information \(I_{e}(\mathcal{CPM}_{i})\). In other words, \(I_{e}(\mathcal{CPM}_{i}) = T_{\mathcal{CPM}}(I_{a}(\mathcal{CPM}_{i}))\).
2. After deinterleaving, the mutual information of the bits and corresponding LLRs, shared from stage i to the right-hand adjacent stage i+k, is given by \(I_{e}^{k}(i^{+}) = I_{e}(\mathcal{CPM}_{i})\, b_{k}\).
3. After a second deinterleaving, the a priori MI at the input of the outer decoder is given by the mixture \(I_{a}(\mathcal{CC}_{i}) = \sum I_{a}^{k}(i^{-})\, b_{k}\), where \(I_{a}^{k}(i^{-})\) is the MI between the LLRs and corresponding bits shared from the left-hand adjacent stage i−k to stage i. By convention, \(I_{a}^{0}(i^{-}) = I_{e}(\mathcal{CPM}_{i})\).
4. The outer decoder then updates its extrinsic MI as \(I_{e}(\mathcal{CC}_{i}) = T_{\mathcal{CC}}(I_{a}(\mathcal{CC}_{i}))\).
5. After interleaving, the mutual information of the bits and corresponding LLRs, shared from stage i to the left-hand adjacent stage i−k, is given by \(I_{e}^{k}(i^{-}) = I_{e}(\mathcal{CC}_{i})\, b_{k}\).
6. After a second interleaving, the a priori MI at the input of the CPM demodulator is given by the mixture \(I_{a}(\mathcal{CPM}_{i}) = \sum I_{a}^{k}(i^{+})\, b_{k}\), where \(I_{a}^{k}(i^{+})\) is the MI between the LLRs and corresponding bits shared from the right-hand adjacent stage i+k to stage i. By convention, \(I_{a}^{0}(i^{+}) = I_{e}(\mathcal{CC}_{i})\).

Wherever they are involved, the a priori MIs coming from the added padding bit nodes are taken equal to 1. The threshold of the SC-CPM is then defined as the lowest channel noise parameter such that \(I_{ap}(\mathcal{CC}_{i}) \rightarrow 1, \forall i\). Note that the two concatenated interleavers πi and πo in Fig. 5 are here to ensure that the conducted asymptotic analysis depicts the average behavior of the system. In other words, when computing the different average mutual informations \(I_{a}(\mathcal{I}_{i})\) and \(I_{a}(\mathcal{O}_{i})\), it is thanks to these two interleavers that we are able to write:

$$I_{a}(\mathcal{I}_{i}) = \sum I_{a}^{k}(i^{+})\, b_{k} \quad \text{and} \quad I_{a}(\mathcal{O}_{i}) = \sum I_{a}^{k}(i^{-})\, b_{k}$$

Without them, the computation of the threshold would refer to a particular family, corresponding to a particular realization of these two interleavers.

Coupling optimization

The asymptotic spatially coupled threshold described in the previous section assumes a large enough number of iterations. However, when designing practical turbo systems, speeding up the convergence rate is essential to minimize the decoding delay. With the coupling proposed in this paper, it is also possible to apply the optimization in [35] in order to reduce the decoding time without degrading the threshold. This is done as follows:
1. Draw an initial coupling matrix B. Several trials with different CPM schemes showed that the threshold of the SC-CPM corresponding to the uniform coupling matrix, bk = 1/(ms+1), is a good representative of the SC-CPM ensemble threshold.
2. Applying the EXIT procedure described in the previous section, evaluate the threshold of the obtained SC-CPM.
3. Find another coupling matrix B such that the SC-CPM keeps the same threshold yet converges with a smaller number of iterations.

Since this optimization problem is nonlinear, mainly due to the transfer functions \(T_{\mathcal{CPM}}(.)\) and \(T_{\mathcal{CC}}(.)\), two optimization approaches can be adopted: a greedy search over a set of candidates of the form \(\left\lbrace \{b_{k}\}_{k} \,\vert\, \sum b_{k} = 1 \text{ and } b_{k} \in \{\frac{a}{\Delta} \,\vert\, a \in [\![0, \Delta]\!]\} \right\rbrace\), where \(\Delta \in \mathbb{N}\) is a step constant, or the well-known differential evolution algorithm [42]. For both approaches, the search space can be greatly reduced by discarding symmetric duplicates of B; a small enumeration sketch for the greedy option is given below.
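As a purely illustrative sketch of that candidate set (Python; the scoring routine is a placeholder, since the real criterion is the EXIT-based threshold and the number of iterations to convergence), the enumeration with step constant Δ and removal of mirror-symmetric duplicates can be written as:

```python
from itertools import product
from fractions import Fraction

def coupling_candidates(ms, delta):
    """All b = (b_0, ..., b_ms) with b_k in {0, 1/delta, ..., 1} and sum(b) = 1,
    keeping only one representative of each mirror-symmetric pair."""
    seen, out = set(), []
    for combo in product(range(delta + 1), repeat=ms + 1):
        if sum(combo) != delta:                # enforce sum(b_k) = 1 on the a/delta grid
            continue
        key = min(combo, combo[::-1])          # discard symmetric duplicates
        if key in seen:
            continue
        seen.add(key)
        out.append(tuple(Fraction(a, delta) for a in combo))
    return out

def score(b):
    """Placeholder: in the actual optimization this would run the coupled EXIT
    recursion and return the threshold / iteration count for the candidate b."""
    return sum(float(x) ** 2 for x in b)       # dummy criterion, for illustration only

candidates = coupling_candidates(ms=2, delta=4)
best = min(candidates, key=score)
print(len(candidates), "candidates; best under the dummy score:", best)
```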
Further dimension reduction is achieved by replacing the equality constraint \(\sum b_{k} = 1\) by the hyperplane \(\sum_{i=0}^{m_{s}-1} b_{i} \leq 1\). Hence, the last component of B can be deduced at the end as \(b_{m_{s}} = 1 - \sum_{i=0}^{m_{s}-1} b_{i}\).

To illustrate the behavior and the performance of the proposed schemes, and without loss of generality, we first consider a serially concatenated coded CPM scheme using a systematic (5,7)8 outer convolutional code concatenated with three different CPMs given as follows: a binary CPM scheme with parameters (Lc=1, h=1/2, Gaussian pulse); a quaternary CPM scheme with parameters (Lc=1, h=1/3, rectangular pulse, natural mapping); and an octal CPM scheme with parameters (Lc=2, h=1/3, raised cosine pulse, natural mapping).

We first illustrate the spatial coupling gain using Fig. 7 and show how it improves the iterative decoding threshold of the underlying uncoupled serially concatenated system. In this figure, the evolution of the a posteriori MI, denoted as \(I_{ap}(\mathcal{CC})\), associated with each replica/stage of the coupled chain is given for different numbers of iterations. L refers to the coupling length, i.e., the number of graph replicas. The x-axis refers to the spatial index l of one of the replicas in the coupled chain, as illustrated in Fig. 5. This position index in the coupled chain is also equivalently denoted as the stage position in the label of the figure. For a given colored curve with label number n, the y-axis gives the average a posteriori mutual information observed at the output of the lth iterative decoder after n iterations of the BP decoding process. Reaching \(I_{ap}(\mathcal{CC})=1\) at a given position means that the corresponding replica/stage has been correctly decoded. The threshold of the coupled ensemble is defined as the infimum of the Es/N0 values such that all replicas/stages converge to \(I_{ap}(\mathcal{CC})=1\). When iterative decoding fails, the decoding process is stopped at some indexes with \(I_{ap}(\mathcal{CC})<1\); this latter value is a function of the signal-to-noise ratio.

(Fig. 7: Convergence of the binary SC-CPM stages as a function of the decoding iterations (number above the lines) at Es/N0 = −2.58 dB. Here, L = 20.)

Thus, the different colored curves in Fig. 7 help to illustrate the classical double wave effect due to the coupling of the L replicas across iterations when the signal-to-noise ratio is above the convergence threshold of the coupled ensemble, and how spatial coupling can help to improve iterative decoding performance. For our considered example, the threshold of the underlying uncoupled ensemble is Es/N0 = −1.86 dB, meaning that it is unable to asymptotically achieve an arbitrarily low probability of error for Es/N0 values below this threshold. However, we can observe that the coupled ensemble can converge at Es/N0 = −2.58 dB. The rationale behind this phenomenon is that, thanks to the padding bits, the stages at the boundaries are able to converge, i.e., \(I_{ap}(\mathcal{CC})\) tends to 1, at Es/N0 = −2.58 dB. Then, thanks to the spatial coupling defined by the coupling matrix B, i.e., the linking between the different stages, reliable LLR values are shared with adjacent stages, which helps these latter to converge. This step-by-step convergence from the boundaries to the center of the SC-CPM propagates following a wave-like phenomenon, making the whole system converge even at Es/N0 = −2.58 dB.
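A qualitative feel for this boundary-driven behaviour can be obtained by iterating the coupled update equations of the previous section with simple surrogate curves. The sketch below (Python) replaces the measured transfer functions by arbitrary monotone toy functions, pins the padding positions outside the chain to MI 1, and prints the per-stage MI profile over the iterations; with these toy curves it only shows the boundary stages pulling ahead of the interior ones, whereas with the actual measured transfer functions this effect develops into the full decoding wave of Fig. 7.

```python
import numpy as np

L, ms = 20, 1
b = np.array([0.5, 0.5])                     # coupling vector, sum(b) = 1

# Arbitrary monotone toy curves standing in for the measured T_CPM and T_CC.
def T_cpm(i_a):
    return np.clip(0.35 + 0.65 * i_a ** 1.5, 0.0, 1.0)

def T_cc(i_a):
    return np.clip(i_a ** 2.2, 0.0, 1.0)

def mix(i_e, towards):
    """A priori MI mixture: stage i receives b_k * I_e from stage i-k ('left')
    or i+k ('right'); positions outside the chain are the known padding bits (MI = 1)."""
    padded = np.concatenate((np.ones(ms), i_e, np.ones(ms)))
    out = np.zeros(L)
    for i in range(L):
        for k, bk in enumerate(b):
            j = i + ms - k if towards == "left" else i + ms + k
            out[i] += bk * padded[j]
    return out

ia_cpm = np.zeros(L)
for it in range(1, 51):
    ie_cpm = T_cpm(ia_cpm)
    ia_cc = mix(ie_cpm, "left")              # CPM demodulators -> outer decoders
    ie_cc = T_cc(ia_cc)
    ia_cpm = mix(ie_cc, "right")             # outer decoders -> CPM demodulators
    if it in (1, 10, 50):
        print(f"iteration {it:2d}: I_a(CC) profile =",
              " ".join(f"{x:.2f}" for x in ia_cc))
```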
This improved threshold is due to the so-called coupling gain. Figure 8 depicts the design rate RL of the different schemes versus the corresponding BP thresholds when ms=1 and B=[1/2,1/2]. These thresholds, referred to as "SC-CPM," are the BP thresholds of the corresponding spatially coupled ensembles. As in the case of LDPC codes, they are conjectured to converge to the MAP threshold of the corresponding serially concatenated scheme as the coupling length L increases. This phenomenon is often referred to as threshold saturation.

(Fig. 8: Threshold of coupled and uncoupled coded CPM. Comparison to the area under the EXIT curve is also depicted. The coupling matrix here is B=[1/2, 1/2].)

The performance of the coupled ensembles for different coupling lengths is compared to the following:
- The thresholds of the underlying uncoupled ensembles, which are given by only one operating point, referred to as "coded CPM." The obtained thresholds correspond to the BP thresholds of the uncoupled ensembles.
- The maximum achievable rate for a serially concatenated scheme using optimized LDPC codes [27], referred to as "LDPC+CPM." The obtained thresholds correspond to the BP threshold under iterative detection and decoding.
- An estimation of the maximum achievable rate computed using the area under the EXIT curve, referred to as "EXIT area." This curve is often conjectured to be a tight approximation of the normalized capacity associated with the inner detection scheme [37] (the inner CPM scheme in our context). It corresponds to an upper bound of the maximum achievable rate of any serially concatenated scheme involving the CPM scheme of interest.

For the binary CPM, spatial coupling provides a gain of 0.68 dB in comparison to the uncoupled family and is only 0.18 dB away from the threshold given by the area theorem. Observe that this was achieved without any code design or optimization for the outer code. As L increases, the design rate RL tends to 1/2 and the three SC-CPMs saturate to a value very close to the EXIT area theorem upper bound. This result corroborates the conjecture stating that spatially coupled serially concatenated schemes saturate to a value very close to the threshold given by the EXIT area theorem [35], which is conjectured to correspond to the MAP threshold of the concatenated ensemble. This simply shows that, despite the limited performance of the uncoupled ensemble, which does not operate close to the normalized capacity, very good thresholds can be achieved by spatial coupling for a scheme with a very regular structure. The same phenomenon has been observed for LDPC code ensembles, for which it has been shown that spatial coupling of regular codes improves the BP threshold towards the MAP threshold, which can be very close to the capacity. As a reference, we also plot the thresholds obtained by optimizing unstructured LDPC codes at different rates with a maximum variable node degree of 7. We use the same optimization procedure as defined in [27], where both degree-1 variable nodes and their corresponding stability condition were considered. We observe that, for the three schemes, by coupling a simple (5,7)8-coded CPM, we can reach or outperform the carefully designed serially concatenated coded CPM schemes using outer LDPC codes. Similar conclusions can be drawn from the two other subfigures corresponding to the quaternary and octal cases, respectively. Finally, Table 1 summarizes the thresholds of the considered schemes.
(Table 1: Thresholds Eb/N0 in decibels for the different schemes at rates close to 1/2.)

The preceding results show how spatial coupling can help improve the iterative decoding threshold, even in the case of a very simple coupling. We now discuss the benefit of optimizing the coupling matrix. Figure 9 shows the number of iterations when uniform and optimized coupling matrices are used in the case of ms=2. For the binary and the octal CPM schemes, we observe that the optimized schemes are able to converge around 17% and 13% faster, respectively, than the uniform coupling matrix scheme. This has been observed in several other contexts. From this observation, we can conclude that optimizing the coupling structure does not necessarily improve the asymptotic threshold but rather helps improve the convergence speed.

(Fig. 9: Iterations before convergence of different SC-CPM schemes with syndrome former memory ms=2.)

For the special case of the CPM used in the DVB-RCS2 standard [43], where h=1/5, Lc=2, α=1/3 and a natural mapping are used, we plot in Fig. 10 the thresholds computed for three schemes: the serially concatenated scheme with an outer (5,7)8 convolutional code; the spatially coupled version of the previous scheme with the coupling matrix B=[0.5, 0.5]; and the LDPC-coded scheme with an optimized rate-1/2 LDPC code whose degree polynomials are given as λ(x)=0.2006+0.6116x+0.1878x^6 and ρ(x)=0.2x^2+0.8x^3.

(Fig. 10: Quaternary CPM used in DVB-RCS2 with h=1/5, Lc=2, α=1/3 and natural mapping. The coupling matrix is always given by B=[1/2, 1/2].)

As we can see, the threshold of the SC-CPM scheme (i.e., 0.41 dB) outperforms the optimized LDPC code threshold (i.e., 0.45 dB). Both the spatially coupled and the LDPC-coded schemes exhibit a coding gain of approximately 0.9 dB over the DVB-RCS2 (5,7)8-coded CPM scheme (whose threshold is at 1.33 dB). The preceding results tend to show that the performance of serially concatenated coded CPM schemes can be drastically improved by spatial coupling, pushing their performance very close to the maximum achievable rate, whereas the uncoupled ensemble exhibits limited performance. Remarkably, one is able to compete with state-of-the-art optimized LDPC-coded solutions that are tailored for this application, but with simple system components and a very regular structure, as has been observed for regular LDPC codes. In general, as in the DVB-RCS2 context, any application considering concatenated schemes using CPMs, such as telemetry applications or tactical communications, can be upgraded by considering a coupling strategy. The performance of such systems can be improved by coupling, while each component of the emitter and the receiver (encoders and SISO decoders) can be reused from the initial setup and reassembled at a reasonable cost to enable coupling. Using an optimized coded CPM scheme with an outer LDPC code instead requires changing the complete layout of both the encoder and the decoder. Moreover, to achieve optimal performance, we cannot use off-the-shelf LDPC codes, which are not efficient in this context. A specific design must be carried out, and the developed core may not be usable for any other application, since the design is dedicated to only one specific application.

In this paper, we proposed a method to spatially couple coded CPM schemes. Using an EXIT analysis, this scheme shows very competitive thresholds when compared to a carefully optimized LDPC code. Moreover, we introduced a design procedure to accelerate the convergence rate by optimizing the coupling matrix.
Simulation results for different CPM schemes corroborate the conjecture that SC-CPM should also saturate to a value lower-bounded by the EXIT area theorem. Future work will investigate finite length performance and the optimization of the corresponding coupling base matrix.

Notes
This graph is very close to the compact graph in [32]. While the latter is not a directional graph and only a functional relationship between variable and factor nodes is shown (same graph for the encoder and the decoder), the vectorized graph simplifies the formalism of the coupling and its parameters, especially here for our CPM system.

Abbreviations
BEC: Binary erasure channel; BP: Belief propagation; CC: Convolutional code; CPE: Continuous phase encoder; CPFSK: Continuous-phase frequency-shift keying; C-CPM: Coded CPM; CPM: Continuous phase modulation; DE: Density evolution; EXIT: Extrinsic information transfer; FEC: Forward error correction; LLR: Log-likelihood ratio; MAP: Maximum a posteriori; MET: Multi-edge type; MI: Mutual information; MM: Memoryless modulator; MSK: Minimum shift keying; M2M: Machine-to-machine; LDPC: Low-density parity-check; SC-CPM: Spatially coupled concatenated CPM; SISO: Soft-input soft-output; SSI: State side information

References
1. J. B. Anderson, T. Aulin, C.-E. Sundberg, Digital phase modulation (Springer Science & Business Media, 1986).
2. B. E. Rimoldi, A decomposition approach to CPM. IEEE Trans. Inf. Theory 34(2), 260–270 (1988).
3. M. Mouly, M.-B. Pautet (foreword by T. Haug), The GSM system for mobile communications (Telecom Publishing, 1992).
4. M. Geoghegan, Description and performance results for a multi-h CPM telemetry waveform, in 21st Century Military Communications Conference Proceedings, MILCOM 2000, vol. 1 (2000), pp. 353–357. https://doi.org/10.1109/milcom.2000.904974.
5. L.-J. Lampe, R. Tzschoppe, J. B. Huber, R. Schober, Noncoherent continuous-phase modulation for DS-CDMA, in IEEE International Conference on Communications, ICC'03, vol. 5 (2003), pp. 3282–3286. https://doi.org/10.1109/icc.2003.1204051.
6. T. F. Detwiler, Continuous phase modulation for high speed fiber optic links. PhD thesis, Georgia Institute of Technology (2011).
7. B. F. Beidas, S. Cioni, U. De Bie, A. Ginesi, R. Iyer-Seshadri, P. Kim, L. Lee, D. Oh, A. Noerpel, M. Papaleo, et al., Continuous phase modulation for broadband satellite communications: design and trade-offs. Int. J. Satell. Commun. Netw. 31(5), 249–262 (2013).
8. M. K. Simon, Bandwidth-Efficient Digital Modulation with Application to Deep-Space Communications, vol. 2 (Wiley, 2005).
9. A. Scorzolini, V. De Perini, E. Razzano, G. Colavolpe, S. Mendes, P. Fiori, A. Sorbo, European enhanced space-based AIS system study, in 5th Advanced Satellite Multimedia Systems Conference (ASMS) and the 11th Signal Processing for Space Communications Workshop (SPSC), 2010 (2010), pp. 9–16. https://doi.org/10.1109/asms-spsc.2010.5586883.
10. Department of Defense Interface Standard, Interoperability standard for single-access 5-kHz and 25-kHz UHF satellite communications channels, MIL-STD-188-181B (USA Department of Defense, 1999).
11. Qualcomm, Waveform candidates. 3GPP TSG-RAN WG1, R1-162199 (waveform candidates), 3GPP, 11–15 (2016).
12. X. Rui, Y. Huan, C. Qinglin, Adaptive coded modulation based on continuous phase modulation for inter-satellite links of global navigation satellite systems. IEEE Access (2018).
13. A. Barbieri, D. Fertonani, G. Colavolpe, Spectrally-efficient continuous phase modulations. IEEE Trans. Wirel. Commun. 8(3), 1564–1572 (2009).
14. P. Moqvist, T. M. Aulin, Serially concatenated continuous phase modulation with iterative decoding. IEEE Trans. Commun. 49, 1901–1915 (2001).
15. K. R. Narayanan, G. L. Stuber, A serial concatenation approach to iterative demodulation and decoding. IEEE Trans. Commun. 47(7), 956–961 (1999).
16. A. G. i Amat, C. A. Nour, C. Douillard, Serially concatenated continuous phase modulation for satellite communications. IEEE Trans. Wirel. Commun. 8(6), 3260–3269 (2009).
17. C. Douillard, M. Jézéquel, C. Berrou, A. Picart, P. Didier, A. Glavieux, Iterative correction of intersymbol interference: turbo-equalization. Eur. Trans. Telecommun. 6(5), 507–511 (1995).
18. R. El Chall, F. Nouvel, M. Hélard, M. Liu, Iterative receivers combining MIMO detection with turbo decoding: performance-complexity trade-offs. EURASIP J. Wirel. Commun. Netw. 2015(1), 69 (2015).
19. S. K. Chronopoulos, V. Christofilakis, G. Tatsis, P. Kostarakis, Preliminary BER study of a TC-OFDM system operating under noisy conditions. J. Eng. Sci. Technol. Rev. 9(4), 13–16 (2016).
20. S. K. Chronopoulos, V. Christofilakis, G. Tatsis, P. Kostarakis, Performance of turbo coded OFDM under the presence of various noise types. Wirel. Pers. Commun. 87(4), 1319–1336 (2016).
21. K. R. Narayanan, I. Altunbas, R. Narayanaswami, On the design of LDPC codes for MSK, in IEEE Global Telecommunications Conference (GLOBECOM), vol. 2 (2001), pp. 1011–1015.
22. K. R. Narayanan, I. Altunbas, R. S. Narayanaswami, Design of serial concatenated MSK schemes based on density evolution. IEEE Trans. Commun. 51(8), 1283–1295 (2003).
23. A. Ganesan, Capacity estimation and code design principles for continuous phase modulation (CPM). PhD thesis, Texas A&M University (2003).
24. M. Xiao, T. M. Aulin, On analysis and design of low density generator matrix codes for continuous phase modulation. IEEE Trans. Wirel. Commun. 6(9), 3440–3449 (2007).
25. M. Xiao, T. Aulin, Irregular repeat continuous phase modulation. IEEE Commun. Lett. 9(8), 722–725 (2005).
26. T. Benaddi, C. Poulliat, M.-L. Boucheret, B. Gadat, G. Lesthievent, Design of systematic GIRA codes for CPM, in Proc. ISTC (2014). https://doi.org/10.1109/istc.2014.6955075.
27. T. Benaddi, C. Poulliat, M.-L. Boucheret, B. Gadat, G. Lesthievent, Design of unstructured and protograph-based LDPC coded continuous phase modulation, in 2014 IEEE International Symposium on Information Theory (ISIT) (2014), pp. 1982–1986. https://doi.org/10.1109/isit.2014.6875180.
28. T. Benaddi, C. Poulliat, M.-L. Boucheret, B. Gadat, G. Lesthievent, Protograph-based LDPC convolutional codes for continuous phase modulation, in IEEE International Conference on Communications (ICC) (2014), pp. 1982–1986.
29. A. Jimenez Felstrom, K. S. Zigangirov, Time-varying periodic convolutional codes with low-density parity-check matrix. IEEE Trans. Inf. Theory 45(6), 2181–2191 (1999).
30. D. G. Mitchell, M. Lentmaier, D. J. Costello, Spatially coupled LDPC codes constructed from protographs. IEEE Trans. Inf. Theory 61(9), 4866–4889 (2015).
31. S. Kudekar, T. Richardson, R. Urbanke, Threshold saturation via spatial coupling: why convolutional LDPC ensembles perform so well over the BEC. IEEE Trans. Inf. Theory 57(2), 803–834 (2011).
32. S. Moloudi, M. Lentmaier, A. G. i Amat, Spatially coupled turbo codes, in 8th IEEE International Symposium on Turbo Codes and Iterative Information Processing (ISTC) (2014), pp. 82–86.
33. S. Moloudi, M. Lentmaier, A. G. Amat, Finite length weight enumerator analysis of braided convolutional codes, in International Symposium on Information Theory and Its Applications (ISITA) (2016), pp. 488–492.
34. D. J. Costello, M. Lentmaier, D. G. Mitchell, New perspectives on braided convolutional codes, in 9th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), 2016 (IEEE, 2016), pp. 400–405. https://doi.org/10.1109/istc.2016.7593145.
35. T. Benaddi, C. Poulliat, R. Tajan, A general framework and optimization for spatially-coupled serially concatenated systems, in GLOBECOM 2017 - 2017 IEEE Global Communications Conference (2017), pp. 1–6. https://doi.org/10.1109/glocom.2017.8254239.
36. G. Liva, Block codes based on sparse graphs for wireless communication systems. PhD thesis, University of Bologna (2006).
37. J. Hagenauer, The EXIT chart - introduction to extrinsic information transfer in iterative processing, in Proc. 12th European Signal Processing Conference (EUSIPCO) (2004), pp. 1541–1548.
38. L. Bahl, J. Cocke, F. Jelinek, J. Raviv, Optimal decoding of linear codes for minimizing symbol error rate (corresp.). IEEE Trans. Inf. Theory 20, 284–287 (1974).
39. P. Moqvist, T. Aulin, Trellis termination in CPM. Electron. Lett. 36(23), 1940–1941 (2000).
40. T. Richardson, R. Urbanke, Modern coding theory (Cambridge University Press, 2008).
41. S. ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49, 1727–1737 (2001).
42. R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359 (1997).
43. Second Generation DVB Interactive Satellite System (DVB-RCS2), Digital Video Broadcasting (DVB) (2013).

Author information
IMT Atlantique, Lab-STICC, DEOS, Toulouse, France
Tarik Benaddi
INP Toulouse, ENSEEIHT, IRIT, Toulouse, France
Charly Poulliat

Authors' contributions
TB is the main author of the current paper. TB contributed to the development of the ideas, design of the study, theory, result analysis, and article writing. CP contributed to the development of the ideas, design of the study, theory, result analysis, and article writing. The authors read and approved the final manuscript.

Correspondence to Tarik Benaddi.

Cite this article
Benaddi, T., Poulliat, C. Spatially coupled turbo-coded continuous phase modulation: asymptotic analysis and optimization. J Wireless Com Network 2020, 159 (2020). https://doi.org/10.1186/s13638-020-01773-7

Keywords: Continuous phase modulation; Spatial coupling; EXIT analysis; Code design; Turbo decoding
Ultrafine fully vulcanized natural rubber modified by graft-copolymerization with styrene and acrylonitrile monomers Krittaphorn Longsiri1, Phattarin Mora2, Watcharapong Peeksuntiye1, Chanchira Jubsilp2, Kasinee Hemvichian3, Panagiotis Karagiannidis4 & Sarawut Rimdusit1 This research aims to modify ultrafine fully vulcanized powdered natural rubber (UFPNR) prepared by emulsion graft-copolymerization with styrene (St) and acrylonitrile (AN) monomers onto deproteinized natural rubber (DPNR). The effects of monomers content and St/AN weight ratio on grafting efficiency and thermal stability of the developed DPNR-g-(PS-co-PAN) were investigated. The results showed that grafting efficiency was enhanced up to 86% with monomers content 15 phr and weight ratio St:AN 80:20. The obtained DPNR-g-(PS-co-PAN) was radiated by an electron beam at various doses, followed by a spray drying process to produce UFPNR. The obtained modified UFPNR particles irradiated at dose up to 300 kGy were relatively spherical with a particle size of approximately 4.4 µm. Furthermore, the degradation temperature of 5wt% loss (Td5) of UFPNR was found in the range of 349–356 °C. The results revealed that the modified UFPNR is suitable as a toughening filler for a broader spectrum of polymers. Recently, ultrafine fully vulcanized powdered natural rubbers (UFPNRs) have become an alternative to traditional commodity synthetic rubber powder, due to renewable resources used for their preparation, which are environmentally friendly and reasonably priced (Gupta et al. 2016; Haile et al. 2021; Yang et al. 2021). UFPNRs are illustrated to be suitable toughening fillers in a polymer matrix (Lin et al. 2021; Wongkumchai et al. 2021). Ultrafine fully vulcanized powdered rubbers (UFPRs) were prepared by irradiation vulcanization followed by a spray drying process to produce a controllable spherical particle powder (Qiao et al. 2002). Mostly UFPRs were obtained from synthetic rubber latexes as raw materials, such as St-butadiene rubber (SBR) (Liu et al. 2014), carboxylated-St butadiene rubber (XSBR) (Taewattana et al. 2018), AN–butadiene rubber (NBR) (Pan and Liu, 2022; Wu et al. 2010), and carboxylated-nitrile butadiene rubber (XNBR) (Huang et al. 2002; Taewattana et al. 2018). It is well-known that UFPRs show predominantly reinforcement effect into other rubbers (Tian et al. 2006) or other polymer matrices and modify properties of composites, such as reduction of the wear mass loss and friction coefficient of epoxy composite for friction materials (Yu et al. 2008), and increase of toughness or heat resistance of PVC (Wang et al. 2005), epoxy resin (Huang et al. 2002), polypropylene (Liu et al. 2004), and phenolic resin (Liu et al. 2006; Ma et al. 2005). UFPRs show good elasticity and are dispersed easily in polymer matrices during blending, as a result of their particular microstructure, i.e., a spherical powder form with high crosslinking on the particle surface and moderate crosslinking in the inner part (Tian et al. 2006). Moreover, the advantages of UFPRs over the conventional rubber in a latex form are higher stability upon long-term storage and not harmful to the human body during blending (Qiao 2020; Wang et al. 2019). Despite the extensive use, unfortunately, the production of UFPNR is restricted because of aggregation between particles (Taewattana et al. 2018). 
Therefore, NR has been developed by enhancing the degree of crosslinking by adding polyfunctional monomers as a crosslinking agent or coagent in the rubber latex before the irradiation process. Lin et al. (2021) have studied the effect of acrylate coagents having different amounts of functional groups, i.e., dipropylene glycol diacrylate (DPGDA), trimethylol propane trimethaacrylate (TMPTMA), and ditrimethylol propane tetraacrylate (DTMPTA), on properties of UFPNR produced by radiation vulcanization and spray-drying. They suggested that DTMPTA, which has four functional acrylate groups, demonstrated high efficiency in enhancing the degree of crosslinking in NR that led to UFPNRs with much less agglomerated particles. However, NR which is a non-polar long chain hydrocarbon, lacks in some properties, i.e., it has poor solvent resistance, and limited application due to its immiscibility when blended with polar polymers (Arayapranee et al. 2002; Kangwansupamonkon et al. 2005). Therefore, it is necessary to modify the properties of NR before processing to overcome these problems. Chemical modification by graft copolymerization is one of the most attractive techniques. The NR molecular structure, which contains cis-1,4-polyisoprene with an electron-donating methyl group attached to the carbon–carbon double bond in its main chain, can facilitate the reaction with other vinyl monomers and covalently bunched onto the NR backbone. Several vinyl monomers have been used for grafting modification of NR, such as St (Dung et al. 2016), AN (Prukkaewkanjana et al. 2013), methyl methacrylate (MMA) (Kongparakul et al. 2008), and maleic anhydride (MA) (Pongsathit and Pattamaprom 2018) to improve solvent resistance, thermal stability, mechanical properties, and compatibility of. Rimdusit et al. (2021) have improved thermal stability and solvent resistance of UFPNR via graft-copolymerization with St and or AN, respectively. The results revealed that the proper monomer content was 5 phr and proper radiation dose was 300 kGy for producing UFPNR-g-PS and UFPNR-g-PAN with maintaining rather high thermal stability. Therefore, the combination of St and AN monomer to form St/AN copolymer grafting on NR backbone is expected to improve the thermal stability and solvent resistance of NR and probably the compatibility with various polymer matrices having different polarity (Angnanon et al. 2011; Dung et al. 2017; Fukushima et al. 1988; Indah Sari et al. 2020; Nguyen Duy et al. 2020; Nguyen et al. 2019; Prasassarakich et al. 2001). The present work is devoted on modifying UFPNR by graft-copolymerization with the combination of St and AN monomers onto deproteinized NR using tert-butyl hydroperoxide (TBHPO) and tetraethylenepentamine (TEPA) as a redox initiator. The effect of monomers content and St/AN weight ratios of DPNR-g-(PS-co-PAN) on monomers conversion and grafting efficiency were determined. The obtained DPNR-g-(PS-co-PAN) were irradiated by an electron beam in the presence of DTMPTA as coagent followed by spray drying process to produce UFPNR. The production of UFPNR-g-(PS-co-PAN) by grafting St-AN comonomers was studied to improve thermal stability and solvent resistance for using as toughening fillers. The effect of irradiation dose used on the morphology and thermal properties of UFPNR was also investigated. High ammonia natural rubber (HANR) latex containing 60% of dry rubber content (DRC) was obtained from Sri Trang Agro-Industry Public Co., Ltd. (Thailand). 
Sodium dodecyl sulfate (SDS; 99%, Merck), urea (99.5%), magnesium-sulfate heptahydrate (MgSO4·7H2O), St (99%), AN (99%), tert-butyl hydroperoxide (TBHPO; 70% in water), TEPA, DTMPTA, sodium hydroxide (NaOH), acetone (99.5%), and 2-butanone (99%) were purchased from Tokyo Chemical Industry Co., Ltd. (Tokyo, Japan). Preparation of DPNR latex and purification of St and AN monomers DPNR was prepared by incubating HANR with 0.1 wt% urea and 1 wt% SDS at room temperature for 60 min, followed by centrifugation at 10,000 rpm, 15 °C for 30 min. After centrifugation, the cream fraction was redispersed in 0.5 wt% of SDS solution. The latex was washed repeated twice by centrifugation. The final product of DPNR latex was stabilized in 0.8 wt% of SDS solution and diluted to 30% DRC (Kawahara et al. 2004). St and AN monomers were extracted with 10 wt% sodium hydroxide solution, washed with de-ionized water until neutral, and dried in MgSO4·7H2O to remove inhibitor (Dung et al. 2017). Graft copolymerization of DPNR with St and AN The graft copolymerization of DPNR with St and AN in the latex state was conducted in 500 cm3 glass reactor, equipped with a mechanical stirrer, water bath and nitrogen gas inlet. The DPNR latex and SDS were charged into a glass reactor under a nitrogen atmosphere and stirring at the speed of 400 rpm, 40 °C for approximately 2 h. Afterwards, TEPA and TBHPO were used as an initiator at a concentration of 3.5 × 10–5 mol/g of dry rubber. St/AN monomers with a ratio 20/80, 30/70, 50/50,70/30 or 80/20 wt% was added in a solution of dry rubber at concentration 1.5 × 10–3 mol/g (15 phr) or 0.5 × 10–3 mol/g of dry rubber (5 phr). Then, the grafting reaction was continued for 2.5 h (Nguyen Duy et al. 2020). After the reaction was finished, the unreacted monomers were removed by the rotary evaporator at 80 °C under reduced pressure for 1 h. The mixture was cast in a glass Petri dish and dried to constant weight in a vacuum oven at 50 °C. The monomers conversion of graft copolymerization reaction was determined by the gravimetric method, using the following equation: The increase in mass of grafted NR was equal to the amount of formed polymer and was used for the calculation of monomer conversion. $${\text{Monomer}}{\text{s}}\text{ conversion }\,(\%) = \frac{\text{weight of polymer in gross copolymer}}{\text{total weight of monomers}} \times 100$$ After drying, the product was extracted with a mixture of acetone/2-butanone in ratio 3/1 v/v for 48 h using a Soxhlet apparatus to remove the free homopolymer, i.e., PS and PAN. The grafting efficiency was calculated as follows: $$\text{Grafting efficiency }\,(\%) = \frac{\text{weight of polymer linked to NR}}{\text{total weight of the poly}\text{mer formed}} \times 100$$ Preparation of ultrafine fully vulcanized powdered rubber (UFPNR) The grafted DPNR was diluted to 20% DRC with de-ionized water in the presence of 3 phr of DTMPTA. The latex mixture was stirred for 15 min before being vulcanized by electron beam irradiation at the dose of 100, 200, or 300 kGy supported by Thailand Institute of Nuclear Technology (Public Organization). Then, the vulcanized grafted DPNR was dried by a spray dryer (model B-290 from BUCHI, Switzerland) with the inlet temperature at 150 °C, feed flow rate 7 mL/min, and air flow rate 500 L/hr. to achieve the ultrafine powdered rubbers as a bottom product. 
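Referring back to Eqs. (1) and (2), the two gravimetric quantities can be computed directly from the weighed masses. The short sketch below (Python) is purely illustrative; the input masses are hypothetical placeholders rather than measured data.

```python
def monomer_conversion(total_polymer_formed_g, total_monomer_charged_g):
    """Eq. (1): percentage of the charged St/AN monomers converted into polymer."""
    return 100.0 * total_polymer_formed_g / total_monomer_charged_g

def grafting_efficiency(polymer_linked_to_nr_g, total_polymer_formed_g):
    """Eq. (2): share of the formed polymer chemically linked to the NR backbone,
    i.e. what remains after Soxhlet extraction of the free PS and PAN homopolymers."""
    return 100.0 * polymer_linked_to_nr_g / total_polymer_formed_g

# Hypothetical masses in grams, for illustration only
formed, charged, grafted = 1.32, 1.50, 1.14
print(f"monomer conversion  = {monomer_conversion(formed, charged):.1f} %")
print(f"grafting efficiency = {grafting_efficiency(grafted, formed):.1f} %")
```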
Samples characterization The chemical structure of DPNR-g-(PS-co-PAN) obtained after Soxhlet extraction was studied using Fourier transform infrared spectroscopy (model 2000 FTIR, Perkin Elmer) with an attenuated total reflection (ATR) accessory (Waltham, Massachusetts, United States) in the range from 4000 to 600 cm−1, by averaging 128 scans at a resolution of 4 cm−1. 1H NMR spectra were recorded on a Bruker AV500D spectrometer 500 MHz (Bruker, Switzerland) and was used to confirm the FTIR results. The samples were dissolved in deuterated chloroform (CDCl3) using the pulse accumulation of 64 scans and LB parameter of 0.30 Hz. The morphology of NR latex particles was observed with a transmission electron microscope (TEM, model JEM-1400 from JEOL Ltd., Tokyo Japan) with an accelerating voltage of 80 kV. Before observation, 1 mL of NR latex (30% DRC) was diluted using 300 mL deionized water and placed on a carbon-coated copper grid. The NR latex was stained with 1.0 wt% osmium tetroxide (OsO4) to enhance the resolution (Chueangchayaphan et al. 2017; Gosecka and Gosecki 2015; Schneider et al. 1996). After staining, the samples were dried in ambient air before observation. After irradiation followed by spray drying process the obtained UFPNR was coated with thin gold using a JEOL ion sputtering device (model JFC-1200) for 4 min. The morphology was investigated by a scanning electron microscope (SEM, model JSM-6510A from JEOL Ltd., Tokyo Japan) with an accelerating voltage of 15 kV. The particle size of the UFPNR was measured using the Image J program. The change of polarity of NR after chemical modification was estimated by their wettability. The rubber films were prepared by drying latex under reduced pressure at room temperature for a week and examined by static contact angle (sessile drop) measurements using a contact angle meter (model DM300, Kyowa Interface Science Co., Ltd., Japan). Distilled water was used as the test liquid. The shape of the drops was observed with a microscope equipped with a CCD camera, and the contact angles of the same sample were measured at least 5 times in ambient air, and an average value established. The degradation temperature of the UFPNRs was evaluated using a thermogravimetric analyzer (model TGA1 module from Mettler-Toledo, Thailand). The samples ~ 10 mg was heated from 25 °C to 800 °C with a heating rate of 20 °C/min under nitrogen atmosphere at a nitrogen purge gas flow rate of 50 mL/min. The glass transition temperature (Tg) of UFPNR was determined using a differential scanning calorimeter (model DSC1 module from Mettler-Toledo, Thailand). The samples about ~ 10 mg was cooled to − 100 °C by liquid nitrogen and heated up to 25 °C with a constant rate of 10 °C/min under nitrogen atmosphere. Swelling properties and gel content of UFRNR were then evaluated. The weight of UFPNR (W1) was measured before immersing in toluene (ρs = 0.87 g/cm3, V1 = 106.5 mL/mol) at room temperature for 24 h. After that, the swollen UFPNR was immediately weighted (W2), followed by drying in a vacuum oven at 80 °C for 24 h to remove the solvent and obtain the dried weight (W3). The swelling ratio (Q), molecular weight between crosslinks (Mc), crosslink density (CLD) and gel fraction (g) were calculated using the following Eqs. 
(3)–(6) (the Flory–Rehner equation) (Flory and Rehner 1943):

$$Q=\frac{(W_{2}-W_{1})/\rho_{s}}{W_{1}/\rho_{r}} \quad (3)$$

$$M_{C}=\frac{-\rho_{r}V_{1}\left(\varphi_{r}^{1/3}-\frac{\varphi_{r}}{2}\right)}{\ln\left(1-\varphi_{r}\right)+\varphi_{r}+\chi_{12}\varphi_{r}^{2}}, \quad \text{where } \varphi_{r}=\frac{1}{1+Q} \quad (4)$$

$$\mathrm{CLD}=\frac{\rho_{r}N}{M_{C}} \quad (5)$$

$$g=\frac{W_{3}}{W_{1}} \quad (6)$$

where W1, W2, and W3 are the weights of the initial, swollen, and dried samples, respectively; ρs and ρr are the densities of the solvent (0.87 g/cm3 for toluene) and the rubber; ϕr is the volume fraction of polymer in the swollen sample; V1 is the molar volume of the toluene solvent (106.5 mL/mol); χ12 is the polymer–solvent interaction parameter (0.393 for toluene); and N is Avogadro's number (6.022 × 10^23).

Effect of St/AN monomers ratio and monomers content on graft copolymerization
The monomers conversion of the graft copolymerization reaction quantifies the percentage of monomers converted into grafted copolymer and homopolymers, while the grafting efficiency expresses the fraction of the PS-co-PAN actually attached to the polyisoprene chains by chemical linkage. At the same time, the monomers can be homopolymerized under the reaction conditions; this reaction occurs in the aqueous phase and the homopolymers are formed as short-chain free polymer products, i.e., free PS and free PAN, which are eliminated by Soxhlet extraction with a suitable solvent. The monomers conversion and grafting efficiency in the presence of St/AN at various weight ratios were estimated by gravimetric analysis using Eqs. (1) and (2), respectively, as shown in Figs. 1 and 2. The monomer conversion at a monomers content of 5 phr was found to be 23, 31, 38, 43, or 46%, whereas the grafting efficiency was 26, 35, 37, 42, or 48% for St/AN weight ratios of 20/80, 30/70, 50/50, 70/30, or 80/20, respectively. It was found that the monomers conversion and the grafting efficiency at a monomer content of 5 phr increased substantially with an increasing amount of St. The most likely reason is that the solubility of the monomers in the aqueous and organic phases governs the relative rate of monomer reaction. The St monomer, being hydrophobic, is close in polarity to the polyisoprene in NR. In contrast, the AN monomer is partially water soluble and is relatively more abundant in the aqueous phase. For that reason, with more St present the St monomer may preferentially react with polyisoprenyl macroradicals first to generate stable styryl macroradicals (DPNR-g-St·) capable of further copolymerization with the AN monomer (Arayapranee et al. 2002). Furthermore, the St monomer can act as an electron donor that activates the carbon–carbon double bond of the AN monomer, an electron acceptor, by creating a charge transfer complex (CTC). In consequence, St-AN copolymers with a predominantly alternating structure can be formed, generating oligomer radicals (AN-co-St·) which strongly activate the weakly reactive double bond of AN towards the rubber macroradicals (Indah Sari et al. 2015; Ji et al. 2005). The monomers conversion and the grafting efficiency at a monomers content of 15 phr showed the same trend but were higher than those at 5 phr.
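As an aside on the swelling characterization just described, Eqs. (3)–(6) translate directly into a few lines of code. The sketch below (Python) is an illustrative implementation using the stated constants for toluene; the input masses and the rubber density are hypothetical placeholders, not measured values from this study.

```python
import math

RHO_S = 0.87        # toluene density, g/cm^3
V1 = 106.5          # molar volume of toluene, mL/mol
CHI12 = 0.393       # polymer-solvent interaction parameter for NR/toluene
NA = 6.022e23       # Avogadro's number

def swelling_parameters(w1, w2, w3, rho_r):
    """Eqs. (3)-(6): swelling ratio Q, molecular weight between crosslinks Mc,
    crosslink density CLD and gel fraction g from the three weighings."""
    q = ((w2 - w1) / RHO_S) / (w1 / rho_r)                        # Eq. (3)
    phi_r = 1.0 / (1.0 + q)
    mc = (-rho_r * V1 * (phi_r ** (1.0 / 3.0) - phi_r / 2.0) /    # Eq. (4)
          (math.log(1.0 - phi_r) + phi_r + CHI12 * phi_r ** 2))
    cld = rho_r * NA / mc                                         # Eq. (5)
    gel = w3 / w1                                                 # Eq. (6)
    return q, mc, cld, gel

# Hypothetical weighings (g) and rubber density (g/cm^3), for illustration only
q, mc, cld, gel = swelling_parameters(w1=0.50, w2=3.00, w3=0.45, rho_r=0.93)
print(f"Q = {q:.2f}, Mc = {mc:.3g} g/mol, CLD = {cld:.3g} per cm^3, gel fraction = {gel:.2f}")
```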
The monomers conversion at monomers content 15 phr increased from 67, 74, 80, 86, to 89, as well as the grafting efficiency was increased from 69, 77, 79, 85, to 86% with St/AN weight ratio 20/80, 30/70, 50/50, 70/30, or 80/20, respectively. It can be explained that during the copolymerization at higher monomers content more monomer and oligomer radicals are produced which raised the chance of the reaction between monomer and oligomer radicals with NR molecules to form graft copolymer. Variation of St/AN weight ratio as a function of monomer conversion (%) at different monomer content Variation of St/AN weight ratio as a function of grafting efficiency (%) at different monomers content Morphology study of DPNR-g-(PS-co-PAN) by TEM In the grafting process by emulsion copolymerization, the NR particles and droplets of undissolved monomers were stabilized by SDS surfactant molecules absorbed on their surfaces to form micelle particles. The emulsion copolymerization process takes place in the organic phase, which is inside the micelle particles, and very little occurs in the aqueous phase. The graft-copolymerization begins with the redox initiation system, which has two components, that is the TBHPO (a hydrophobic oxidizing agent) which prefers to remain strongly onto the NR surface and the TEPA (a hydrophilic reducing agent) which prefers to remain in the aqueous phase. These are proposed to be thermally dissociated to generate initiator radicals (I·) at the rubber/water interface and form grafting sites present on the surface of NR particles, as shown in Fig. 3 (Schneider et al. 1996). In Fig. 4 is shown the mechanism of graft copolymerization. In the initiation step (a) the initiator radicals possibly react with the active sites of the polyisoprene backbone in two ways. Through abstraction of an allylic hydrogen, which is the hydrogen in –CH2 next to carbon–carbon double bond and transfer of the radical to form an active site at allylic carbon that is a secondary polyisoprene macroradicals (DPNR). In addition, the initiator radicals might interact through an addition reaction with carbon–carbon double bond, breaking the double bond of rubber main chain to give a tertiary polyisoprene macroradicals (DPNR·) (Kochthongrasamee et al. 2006). At the same time, initiator radicals can attach to the St and AN monomers to form monomer and oligomer radicals, i.e., (St·), (AN·), and (AN-co-St·) similar to the proposed mechanism reported by (Staverman 1979). In the second step of propagation (b), a propagating polymer chain can be formed in the presence of monomer radicals and macroradicals to form graft copolymer. The cycle of the growing chain of the polymer particle continues until the monomer conversion is essentially complete in the termination step (c) by chain transfer to macromolecules or combination reaction (Azanam and Ong 2017). Possible reaction sites in the modified NR latex in the bipolar redox initiation systems (adapted from (Schneider et al. 1996)) Mechanism of graft copolymerization studied As mentioned previously, the characteristic morphology of grafted DPNR (DPNR-g (PS-co-PAN)) particles was confirmed by TEM micrographs compared to virgin NR. The micrograph of virgin NR particles is shown in Fig. 5a. The dark domains describe the electron-dens of the carbon–carbon double bonds of isoprene units inside the NR as spherical particles, with sharp edges and smooth surfaces (Chueangchayaphan et al. 2017; Gosecka and Gosecki 2015). 
Meanwhile, the micrographs of grafted DPNR particles with monomer contents at 5 and 15 phr are shown in Fig. 5b and c. The figures illustrated the contrast variation of electron density between the middle and the edge of particles. The dark domains at the middle represent the DPNR particles attributing to the core-structure, surrounded by the distinct brighter domains represent to the grafted PS-co-PAN phase attributing to the shell-structure (Schneider et al. 1996). The results suggest that the grafting copolymerization of St/AN was occurred on the DPNR surface and form the core–shell structure which is in a good agreement with the proposed mechanism of grafting-copolymerization. Moreover, the thickness of the shell layer is 1.1 ± 2.7 µm larger than that of virgin NR particles and forms a perfect shell at monomer content 15 phr ascribable to the highest grafting efficiency of grafted NR up to 86%. Since the active grafting sites are totally occupied at maximum grafting, a sufficient percent of grafting was required to keep the bonding between core and shell strong enough to prevent the breaking of core–shell particles at the interphase. TEM micrographs of a Virgin NR b DPNR-g-(PS-co-PAN) at monomers content: 5 phr and c DPNR-g-(PS-co-PAN) at monomers content 15 phr (×100,000 magnification) The advantages of core–shell structure may prevent the agglomeration of the DPNR particles led to processing aids when produced UFPNR and improved interfacial adhesion between UFPNR and polymer matrix. Chemical structure study of DPNR-g-(PS-co-PAN) The chemical structure of the grafted DPNR with St/AN monomers at weight ratio of 80/20, and with different monomers content 5 and 15 phr, which showed the highest monomers conversion and grafting efficiency was investigated by FTIR spectroscopy. The chemical structure of the obtained grafted DPNR after Soxhlet extraction was examined by comparing its FTIR signal with DPNR, and the results are plotted in Fig. 6. In all FTIR spectra the characteristic absorption bands of NR. In the spectrum of DPNR shown in Fig. 6a, the distinctive absorption peaks at 2915 and 2852 cm−1 corresponding to C–H stretching of –CH3 and –CH2–, respectively, and the absorption band at 1664 cm−1 corresponding to vibration of C=C stretching are shown. C–H stretching vibration at –CH2– is expected around 1448 cm−1 and the absorption band at 1375 cm−1 is assigned to C–H asymmetry vibration of –CH3. In addition, the absorption band at 1244 cm−1 is attributed to vibration of C–C stretching next to C=C, which is (R2C=CH–R) or at cis1,4 addition position, while the peak of 836 cm−1 indicated C=C bending vibration in NR main chain (Dinsmore and Smith 1948; Kishore and Pandey 1986; Nallasamy and Mohan 2004). Confirmation of grafted NR can be seen by considering the spectrum of DPNA-g-(PS-co-PAN) with monomers content 15 phr shown in Fig. 6c. It could be pointed out by the existence of characteristic absorption bands at 760 and 700 cm−1 which are related to vibration of C–H bending of styrenic benzene rings of polystyrene (Nguyen Duy et al. 2020). Furthermore, the new absorption band appeared at wavenumber 2247 cm−1 ascribed to vibration of C≡N stretching in PAN. In addition, the intensity of absorbance peak can imply the quantity of functional groups in graft copolymer. The intensity of absorption bands at 2247, 760, and 700 cm−1 obviously increased with increasing monomers content from 5 up to 15 phr. The intensity of the shoulder peak around 1650–1655 cm−1 and 833 cm−1 was decreased. 
This is due to the consumption of double bond (C=C) in NR structure during the grafting process. FTIR spectra of DPNR and DPNR-g-(PS-co-PAN) at different monomer content The chemical structure of DPNA-g-(PS-co-PAN) with monomer content 5 and 15 phr were also analyzed by 1H-NMR spectroscopy. The chemical shifts of NR at 1.61, 1.97 and 5.10 ppm, showed in Fig. 7 were attributed to the methyl proton CH3 (c), unsaturated CH2 (b, b') and olefinic proton (a), respectively (Pongsathit and Pattamaprom 2018; Wongthong et al. 2013). Whereas the structure of St indicated the chemical shifts at 7.26 ppm attributed to the aromatic protons (d, d') were found (Pukkate et al. 2007), and the chemical shifts at 1.3–1.4 ppm (e), which corresponds to the methylene protons (e) of DPNR linked to St in DPNR-g-(PS-co-PAN) was obtained (Liu et al. 2013). Moreover, the chemical shifts appeared at 4.45 and 4.90 ppm assigned to the backbone protons (f, g) of AN units (Prukkaewkanjana et al. 2013). These results confirmed that grafting NR with St and AN can form DPNR-g-(PS-co-PAN) structure. 1H-NMR spectra of DPNR-g-(PS-co-PAN) at different monomer content Previous research (Rimdusit et al. 2021) showed that only graft copolymerization of NR with St or AN was not sufficiently crosslinked the NR molecules to reduce aggregated and tackiness between rubber particles and could not be produced modified UFPNR. Therefore, the suggestion is that only suitable high crosslinking density of NR particles by irradiation could produce UFPNR and solve the aggregation problem. Characterizations of ultrafine fully vulcanized NR grafted with polystyrene-co-polyacrylonitrile (UFPNR-g-(PS-co-PAN)) The crosslinking process or vulcanization of NR molecules is obtained by electron beam vulcanization, which enhances crosslinking efficiency. Moreover, the crosslinking of NR can be further enhanced by adding a polyfunctional monomer, known as a crosslinking coagent. Previous research (Lin et al.) suggested that the addition of 3 phr of DTMPTA could enhance crosslinking density and produce the smallest particle size of UFPNR. Therefore, DTMPTA at 3 phr was used with electron beam irradiation in this research. The mechanism of DTMPTA irradiation by electron beam is shown in Fig. 8a. The electrons released from electron beam accelerator would attack π-electrons at the double bonds on tetra functional groups of DTMPTA to form monomer free radicals. a DTMPTA structure was radiated by electron beam b possible structure of crosslinked UFPNR-g-(PS-co-PAN) in the presence of DTMPTA Meanwhile, electron beam irradiation also attacks π-electrons at the double bonds on NR structure to form polymeric free radicals. After that, the highly active free radicals can react to form chemical linkage between DTMPTA and the NR structure. There are two possible ways to promote the formation of a crosslinked network. First, the generated monomer free radicals would attach to the hydrogen at the AN segment on the grafted NR through hydrogen abstraction. Second, the generated monomer free radicals are attached with polymeric free radicals in NR chains to form crosslinked networks (Bee et al. 2018). The possible structure of crosslinked UFPNR-g-(PS-co-PAN) in the presence of DTMPTA to form three-dimensional crosslinking network illustrated in Fig. 8b. 
The chemical structures of UFPNR-g-(PS-co-PAN) The obtained UFPNR-g-(PS-co-PAN) at monomers content 15 phr with St/AN weight ratio 80/20, which provided the highest grafting efficiency was irradiated at different irradiation doses, i.e., 100, 200, and 300 kGy. Their structure was studied by FTIR spectroscopy as illustrated in Fig. 9. The FTIR spectra showed the distinctive absorption peaks of NR at 836 cm−1 and 1650–1655 cm−1 corresponding to vibration of C=C bending and C=C stretching of NR main chain. In addition, the absorption bands at 1080 and 1244 cm−1 are attributed to vibration of C–C stretching next to C=C, which is (R2C=CH–R) or at cis1,4 addition position, while the peak at 836 cm−1 indicated C=C bending vibration in NR main chain. However, the intensity of the shoulder peak at 1650–1655 cm−1 corresponding to absorption peaks 1080 and 1244 cm−1 was decreased with increasing irradiation dose indicating the consumption of double bond (C=C) in NR structure to form C–C in three-dimensional crosslinking network. In addition, the absorbance peak in range of 1755–1745 cm−1 exhibited the presence of ester group of DTMPTA which promotes the crosslinking during irradiation vulcanization. In addition, the additional high energy of irradiation decreased the intensity of absorption peak at 2242 cm−1 referred to nitrile group C≡N of AN which was broken to form C=N bond (Badawy and Dessouki 2003). From the results of FT-IR spectroscopy study it can be concluded that NR grafted with St/AN copolymer can be vulcanized to form three-dimensional crosslinking network by addition of DTMPTA and concurrent electron beam irradiation. FTIR spectra of grafted NR at monomers content 15 phr with St/AN weight ratio 80/20 at various irradiation doses: a unradiated, b 100, c 200, and d 300 kGy The effect of electron beam irradiation on gel formation and swelling behaviors of UFPNR-g-(PS-co-PAN) The swelling behavior of NR was evaluated in toluene to determine the ability to resist against this when immersed for 24 h at room temperature. The parameters of swelling behaviour which determine the degree of crosslinking are the swelling ratio (Q), molecular weight between crosslinks (Mc), crosslinking density (CLD) and gel fraction (g) based on Eqs. 3–6, respectively. The effect of used irradiation dose (100, 200, and 300 kGy) on these parameters for grafted NR at monomers content of 15 phr with St/AN 80/20 weight ratio are plotted in Fig. 10, and the numerical data are tabulated in Table 1. The results shown a significantly decreased of swelling ratio from 17.52 ± 0.88 to 8.83 ± 0.54 when grafted NR was radiated with an electron beam 100 kGy. Moreover, the swelling ratio of the grafted NR was decreased from 6.57 ± 0.44 to 4.99 ± 0.59 when the irradiation dose increased from 200 to 300 kGy, respectively. This can be explained by the sufficient electron beam irradiation which can activate the π-electrons of the double bonds and form highly active free radicals and a three-dimensional crosslinked network. Therefore, toluene molecules are more difficult to penetrate into the NR molecules. While the gel fraction continuously increased with the increase of irradiation dose. The results showed that the gel fraction increased from 0.58 ± 0.02 to 0.75 ± 0.02 and continues to increase from 0.89 ± 0.01 to 0.96 ± 0.01 when the irradiation dose increased from 200 to 300 kGy, respectively. 
The high degree of swelling in the lightly crosslinked samples was attributed to the long linear NR chains, whose solubility parameter is close to that of toluene; the grafted NR chains therefore interact strongly with toluene molecules, causing the chains to expand and the material to swell in the solvent. Increasing the irradiation dose strongly affects the gel fraction because it increases the intermolecular crosslinking between NR chains, forming a three-dimensional network that resists solvent penetration (Manshaie et al. 2011; Tuti et al. 2015).

(Black square) Swelling ratio and (black circle) gel fraction of UFPNR-g-(PS-co-PAN) at 15 phr with St/AN weight ratio 80/20 at various irradiation doses

Table 1 Swelling ratio, gel fraction, molecular weight between crosslinks and crosslink density of UFPNR-g-(PS-co-PAN)

The effect of electron beam irradiation on molecular weight between crosslinks and crosslink density of UFPNR-g-(PS-co-PAN)

The effect of irradiation dose on the crosslink density and the molecular weight between crosslinks of the grafted NR at a monomer content of 15 phr with an 80/20 St/AN weight ratio, after irradiation at 100, 200, and 300 kGy, is plotted in Fig. 11. The crosslink density of the unirradiated grafted NR was 0.83 × 10−22 ± 0.09, whereas the crosslink density of the irradiated grafted NR increased to 1.54 × 10−22 ± 0.09, 2.13 × 10−22 ± 0.07, and 2.45 × 10−22 ± 0.05 at irradiation doses of 100, 200, and 300 kGy, respectively. Higher irradiation doses generate more free radicals in the rubber latex, favoring the formation of a three-dimensional crosslinked network. This hinders solvent penetration into the NR molecules, reducing the swelling ratio. As expected, the molecular weight between crosslinks decreased with increasing irradiation dose.

(Black square) Molecular weight between crosslinks (Mc) and (black circle) crosslink density of UFPNR-g-(PS-co-PAN) at 15 phr with St/AN weight ratio 80/20 at various irradiation doses

The effect of irradiation dose on morphology of UFPNR-g-(PS-co-PAN)

In this section, the effect of irradiation dose on the morphology of UFPNR-g-(PS-co-PAN) with an St/AN weight ratio of 80/20 was investigated at monomer contents of 5 and 15 phr, the compositions that gave the highest grafting efficiency and hence good thermal stability and solvent resistance. The grafted NR was irradiated with an electron beam at 100, 200, or 300 kGy, followed by spray drying to produce the modified UFPNRs, and the morphology of the modified UFPNRs was observed by SEM. SEM micrographs of the modified UFPNR at a monomer content of 5 phr irradiated at 100 kGy are shown in Fig. 12a. The modified UFPNR still shows aggregated particles with tacky, rough surfaces and poorly defined spherical shapes, because the electron beam dose was not sufficient to generate enough free radicals for crosslinking. When the irradiation dose was increased to 200 and 300 kGy, as shown in Fig. 12b, c, respectively, the aggregation of the modified UFPNR decreased, and the particles showed the least aggregation at 300 kGy.
SEM micrographs (×1500 magnification) of UFPNR-g-(PS-co-PAN) at a monomer content of 5 phr with various irradiation doses: a 100 kGy, b 200 kGy, c 300 kGy

Figure 13a shows SEM micrographs of the modified UFPNR at a monomer content of 15 phr irradiated at 100 kGy. These particles have a smoother surface than the modified UFPNR at a monomer content of 5 phr at the same irradiation dose (Fig. 12a). It is worth noting that both the monomer conversion and the grafting efficiency at a monomer content of 15 phr are higher than those at 5 phr. The results confirm that at 15 phr the copolymer of St and AN was sufficiently grafted onto the rubber surface to reduce aggregation and tackiness between rubber particles, permitting the production of modified UFPNR. However, at 100 kGy the particles still showed aggregation, high tackiness, and poorly defined spherical shapes. When the irradiation dose was increased to 200 and 300 kGy, as shown in Fig. 13b, c, respectively, the aggregation of the modified UFPNR decreased, with the least aggregation observed at 300 kGy. These results show the influence of irradiation dose on free-radical generation, which forms a denser three-dimensional crosslinked network. This reduces the tackiness of the rubber particle surfaces, lessens aggregation, and yields smoother particles with smaller particle sizes (Rezaei Abadchi and Jalali-Arani 2014). Increasing the dose resulted in a systematic decrease in rubber particle size, attributed to the simultaneous main-chain scission and crosslinking of the rubber macromolecules that occur during irradiation, particularly at higher electron beam doses. From the SEM micrographs and the swelling study, it can be concluded that an electron beam irradiation dose of 300 kGy is suitable for the production of modified UFPNR. Moreover, the particle sizes of the developed UFPNR-g-(PS-co-PAN) are smaller than those of the UFPNR-g-PS and UFPNR-g-PAN investigated by Rimdusit et al. (2021), which had average sizes of 5.95 ± 3.03 µm and 6.39 ± 2.71 µm, respectively.

SEM micrographs (×1500 magnification) of UFPNR-g-(PS-co-PAN) at a monomer content of 15 phr with various irradiation doses: a 100 kGy, b 200 kGy, c 300 kGy

The effect of grafting efficiency on morphology of UFPNR-g-(PS-co-PAN)

The virgin NR and the grafted NR with an St/AN weight ratio of 80/20 at monomer contents of 5 and 15 phr were irradiated with an electron beam at an absorbed dose of 300 kGy, followed by spray drying to produce UFPNR and the modified UFPNRs. Figure 14 shows their SEM micrographs. Aggregation of the rubber particles was observed in the virgin UFPNR, which tended to fuse with each other even at an irradiation dose of 300 kGy. A different morphology is observed for the modified UFPNR at monomer contents of 5 and 15 phr in Fig. 14b, c, respectively: the modified UFPNR shows non-aggregated, relatively spherical particles with relatively smooth surfaces and a similarly uniform particle size distribution. The surface of the UFPNR-g-(PS-co-PAN) at a monomer content of 15 phr was smoother than that at 5 phr, probably owing to the formation of a denser crosslinked network and the modification of the rubber surface, which reduce the tackiness of the rubber particles and result in less aggregated, smoother particles (Rimdusit et al. 2021).
SEM micrographs (×1500 magnification) of a virgin UFPNR and UFPNR-g-(PS-co-PAN) at monomer contents of b 5 and c 15 phr at an irradiation dose of 300 kGy

The average particle sizes of the modified UFPNRs were measured from 500 particles; the results are shown in Fig. 15, and the numerical data are tabulated in Table 2. The average particle sizes of the modified UFPNR were 3.56 ± 1.70 and 4.38 ± 1.79 µm at monomer contents of 5 and 15 phr, respectively. The higher grafting efficiency at 15 phr (86%, compared with 48% at 5 phr) may explain why those particles are slightly larger.

Particle sizes of the modified UFPNR with monomer contents of (square with upper left to lower right fill) 5 and (black square) 15 phr at an irradiation dose of 300 kGy

Table 2 Particle size of the modified UFPNR

The effect of monomer ratio on surface properties of UFPNR-g-(PS-co-PAN)

The measurement of the water contact angle is one of the conventional methods for estimating the hydrophilicity of polymer surfaces. PS-co-PAN was purposely grafted onto the NR molecules to improve the affinity of NR for polar surfaces. Profiles of the water contact angle on the surfaces of unmodified NR and grafted NR are illustrated in Fig. 16. Unmodified NR showed the highest contact angle, 71 ± 0.1°, indicating the lowest hydrophilicity (Cabrera et al. 2017). For the grafted NR films, the contact angle decreased from 56 ± 0.1° to 40 ± 0.3°, reaching the lowest value of 27 ± 0.7°, as the AN content was increased from 20 to 50 and 80 wt%, respectively. The decrease in the water contact angle of NR after grafting is attributed to the presence of grafted poly(St-co-AN) chains on the NR surface, as indicated earlier by the TEM analysis. Because the AN groups in the grafted poly(St-co-AN) are capable of hydrogen bonding with water molecules, they facilitate the spreading and wetting of a water drop on the NR surface. This leads to a noticeable reduction in the water contact angle as the AN content increases, enhancing the hygroscopic character of NR (Safeeda Nv et al. 2016).

Contact angles of a water droplet on the surface of NR films after irradiation: a unmodified NR, and grafted NR at a monomer content of 15 phr with various St/AN monomer weight ratios of b 80/20, c 50/50, and d 20/80 wt%

The effect of grafting efficiency on thermal stability of UFPNR-g-(PS-co-PAN)

The effect of grafting efficiency and irradiation dose on thermal stability, i.e., the degradation temperature at 5% weight loss (Td5), a key indicator of performance at elevated temperatures, was studied. TGA thermograms of the virgin UFPNR and the modified UFPNR are shown in Fig. 17, and the numerical data are presented in Table 3. The Td5 of the virgin NR without irradiation was 334 °C and increased to 344 °C after irradiation at 300 kGy. This is because the electron beam initiates radical reactions by activating the double bonds of the NR structure and the tetra-acrylate groups of DTMPTA, generating highly reactive radicals that form intermolecular and intramolecular C–C bonds in the NR chains and a three-dimensional network structure (Akiba and Hashim 1997; Dawes et al. 2007). Because C–C single bonds are more stable than C=C double bonds, and the crosslinks stabilize the NR chains and restrict their molecular mobility, more thermal energy is required to cause degradation, leading to an enhancement of Td5 (Bandzierz et al. 2018).
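Td5 is read off a TGA thermogram as the temperature at which the residual weight falls to 95%. The short Python sketch below shows a simple linear-interpolation approach to extracting it; the thermogram points are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np

def degradation_temperature(temperature_c, weight_pct, loss_pct=5.0):
    """Temperature at a given % weight loss (Td5 by default), obtained by linear
    interpolation of a TGA thermogram given as temperature vs. residual weight %."""
    target = 100.0 - loss_pct                   # residual weight at the target loss
    # np.interp needs an increasing x-axis, so reverse the (decreasing) weight trace
    return np.interp(target, weight_pct[::-1], temperature_c[::-1])

# Hypothetical thermogram points bracketing the 95 % residual-weight crossing
temperature = np.array([300.0, 320.0, 340.0, 360.0, 380.0])   # deg C
residual_wt = np.array([99.5, 98.0, 96.0, 93.0, 88.0])        # %

print(f"Td5 = {degradation_temperature(temperature, residual_wt):.0f} deg C")
```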
Degradation temperature at 5% weight loss (Td5) of (black square) unmodified NR, (black circle) unmodified UFPNR at an irradiation dose of 300 kGy, and UFPNR-g-(PS-co-PAN) at monomer contents of (black diamond) 5 phr and (black up-pointing triangle) 15 phr at an irradiation dose of 300 kGy

Table 3 Degradation temperature at 5% weight loss (Td5) at various monomer contents

The degradation temperatures of the modified UFPNRs were then evaluated by comparing the Td5 of the unmodified UFPNR and the modified UFPNR at the same irradiation dose of 300 kGy. Td5 increased from 344 °C for the unmodified UFPNR to 350 and 356 °C for UFPNR-g-(PS-co-PAN) at monomer contents of 5 and 15 phr, respectively. It is worth noting that the thermal stability of the prepared UFPNR-g-(PS-co-PAN) is higher than that of the UFPNR-g-PS (343 °C) reported by Rimdusit et al. (2021). Grafting the St-AN copolymer onto the NR molecules therefore improves the Td5 of UFPNR, in addition to improving the surface polarity of NR. Moreover, the free radicals generated during the irradiation process abstract the hydrogen attached to the AN segments in the rubber structure to form a crosslinked PAN network (Badawy and Dessouki 2003; Park et al. 2014; Xue et al. 1997). Furthermore, a higher grafting efficiency was observed at a monomer content of 15 phr (86%) than at 5 phr (48%). The increased amount of St grafted onto the NR backbone, with its bulky aromatic side groups, makes it more difficult for the polymer chains to flow or slide past each other and generally increases the chain stiffness of the modified UFPNRs. Therefore, the thermal stability was enhanced by grafting with PS-co-PAN (Seleem et al. 2017). Overall, grafting with St/AN at a monomer content of 15 phr improved Td5 by 12 °C compared with the unmodified UFPNR and by 22 °C compared with the virgin NR.

The effect of monomer ratio on thermal stability of UFPNR-g-(PS-co-PAN)

The effect of the St/AN weight ratio of UFPNR-g-(PS-co-PAN) at a monomer content of 15 phr and an irradiation dose of 300 kGy on the degradation temperature at 5% weight loss (Td5) was studied. TGA thermograms are shown in Fig. 18 and the numerical data in Table 4. Raising the St fraction in the St/AN weight ratio from 20/80 to 30/70 and 50/50 increased Td5 from 351 to 354 and 356 °C, respectively, owing to the increase in monomer conversion and grafting efficiency. A sufficient irradiation dose of 300 kGy can activate the double bonds of the NR structure and the coagent to generate highly reactive radicals and form a three-dimensional network, so that more thermal energy is required to degrade the polymer chains. However, raising the St fraction in the St/AN weight ratio further, to 70/30 and 80/20, caused Td5 to decline to 352 and 349 °C, respectively. When the percentage of St is high (≥ 70 wt%), the aromatic benzene rings of the St units provide lower irradiation resistance.
Degradation temperature at 5% weight loss (Td5) of (black square) unmodified UFPNR at an irradiation dose of 300 kGy and UFPNR-g-(PS-co-PAN) at a monomer content of 15 phr with St/AN weight ratios of (black circle) 20/80, (black diamond) 30/70, (black up-pointing triangle) 50/50, (black down-pointing triangle) 70/30 and (black lower right triangle) 80/20 at an irradiation dose of 300 kGy

The physical properties of PS remain relatively stable even after high doses of irradiation, because the free radicals generated are of low reactivity owing to resonance stabilization by the aromatic rings extending through to the backbone chains; such radicals can induce chain scission in the NR molecules, counterbalancing the effects of crosslinking (Bee et al. 2018; Burlant et al. 1962). Moreover, the benzene rings increase the stiffness and reduce the flexibility of the NR chains, blocking neighbouring chain radicals or partner macroradicals from forming a crosslinked network and thereby favouring chain scission of the rubber chains (Bandzierz et al. 2018). The reduced thermal stability therefore implies that, under high-energy radiation, chain scission of the NR backbone predominates at high St contents, giving shorter chains and reducing the degree of crosslinking.

The glass transition temperature of UFPNR-g-(PS-co-PAN)

The glass transition temperature (Tg) was determined by DSC, as shown in Fig. 19, and the numerical data are tabulated in Table 5. The Tg of the virgin NR was −64 °C and increased to −63 °C after electron beam irradiation at 300 kGy. At high irradiation doses, a denser three-dimensional network structure is produced in the irradiated UFPNR-g-(PS-co-PAN), restricting chain movement and leaving fewer free chains available for the glassy-to-rubbery transition (Rezaei Abadchi and Jalali-Arani 2014; Taewattana et al. 2018). The modified UFPNRs with monomer contents of 5 and 15 phr at a 50/50 St/AN weight ratio, irradiated at a dose of 300 kGy, were taken as representative samples; their Tg was −62 °C. It is possible that the bulky styrenic benzene rings of the PS side chains interact with neighbouring chains and restrict their rotational freedom. In addition, the intermolecular forces in the grafted copolymers increase owing to the introduction of the polar nitrile groups of PAN at the NR interface, which may further limit the movement of the NR chains. Overall, however, the Tg of the modified UFPNRs was only slightly higher than that of the unirradiated NR (about −64 °C), in agreement with previous work (Lin et al. 2021; Taewattana et al. 2018; Wongkumchai et al. 2021).

Glass transition temperature of (black square) unmodified NR, (black circle) unmodified UFPNR at an irradiation dose of 300 kGy, and UFPNR-g-(PS-co-PAN) at monomer contents of (black diamond) 5 phr and (black up-pointing triangle) 15 phr at an irradiation dose of 300 kGy

Table 5 Glass transition temperature (Tg) of UFPNR

The modified NR latex, DPNR-g-(PS-co-PAN), was successfully prepared by grafting St-AN co-monomers onto DPNR latex via emulsion copolymerization, as confirmed by FTIR and 1H-NMR spectra. The addition of St monomer up to an St/AN weight ratio of 80/20 at a monomer content of 15 phr provided the highest monomer conversion and grafting efficiency, at 89 and 86%, respectively. Moreover, TEM micrographs confirmed that the DPNR-g-(PS-co-PAN) has the desired core–shell morphology. The modified NR was then used to produce modified UFPNR by electron beam irradiation in the presence of DTMPTA as a coagent, followed by spray drying.
An irradiation dose of 300 kGy improved the solvent resistance of the UFPNR-g-(PS-co-PAN) by reducing the swelling ratio and the molecular weight between crosslinks. Grafting and irradiation also improve the morphology of the UFPNR particles, which are relatively spherical and show a non-aggregated, smooth surface with a particle size of approximately 4.4 ± 1.8 µm. The thermal stability, i.e., the degradation temperature at 5% weight loss (Td5), of the UFPNR modified by grafting with various St/AN weight ratios at a monomer content of 15 phr was in the range of 349 to 356 °C. In contrast, the Tg values of the modified and unmodified UFPNR were not significantly changed, so the modified UFPNR retains its elastomeric properties. Furthermore, the contact angle measurements indicate that the modified UFPNR is suitable for use as a toughening filler in polymers of a wide range of polarities and types.

All data analyzed during this study are included in this article.

Akiba M, Hashim AS (1997) Vulcanization and crosslinking in elastomers. Prog Polym Sci 22(3):475–521. https://doi.org/10.1016/S0079-6700(96)00015-9 Angnanon S, Prasassarakich P, Hinchiranan N (2011) Styrene/acrylonitrile graft natural rubber as compatibilizer in rubber blends. Polym-Plast Technol Eng 50(11):1170–1178. https://doi.org/10.1080/03602559.2011.574667 Arayapranee W, Prasassarakich P, Rempel GL (2002) Synthesis of graft copolymers from natural rubber using cumene hydroperoxide redox initiator. J Appl Polym Sci 83(14):2993–3001. https://doi.org/10.1002/app.2328 Azanam SH, Ong SK (2017) Natural rubber and its derivatives. In Elastomers. Badawy S, Dessouki A (2003) Cross-linked polyacrylonitrile prepared by radiation-induced polymerization technique. J Phys Chem B 107(41):11273–11279. https://doi.org/10.1021/jp034603j Bandzierz KS, Reuvekamp LAEM, Przybytniak, Dierkes WK, Blume A, Bieliński DM (2018) Effect of electron beam irradiation on structure and properties of styrene-butadiene rubber. Radiat Phys Chem 149:14–25. https://doi.org/10.1016/j.radphyschem.2017.12.011 Bee S-T, Sin LT, Ratnam CT, Chew WS, Rahmat AR (2018) Enhancement effect of trimethylopropane trimethacrylate on electron beam irradiated acrylonitrile butadiene styrene (ABS). Polym Bull 75(11):5015–5037. https://doi.org/10.1007/s00289-018-2316-z Burlant W, Neerman J, Serment V (1962) γ-radiation of p-substituted polystyrenes. J Polym Sci 58(166):491–500. https://doi.org/10.1002/pol.1962.1205816627 Cabrera FC, Dognani G, Santos RJ, Agostini DLS, Cruz NC, Job AE (2017) Surface modification of natural rubber by sulfur hexafluoride (SF6) plasma treatment: a new approach to improve mechanical and hydrophobic properties. J Coat Sci Technol 3(3):116–120. https://doi.org/10.6000/2369-3355.2016.03.03.3 Chueangchayaphan W, Tanrattanakul V, Chueangchayaphan N, Muangsap S, Borapak W (2017) Synthesis and thermal properties of natural rubber grafted with poly(2-hydroxyethyl acrylate). J Polymer Res. https://doi.org/10.1007/s10965-017-1269-5 Dawes K, Glover LC, Vroom DA (2007) The effects of electron beam and γ-irradiation on polymeric materials. In (pp. 867–887). Dinsmore HL, Smith DC (1948) Analysis of natural and synthetic rubber by infrared spectroscopy. Anal Chem 20(1):11–24. https://doi.org/10.1021/ac60013a004 Dung T, Nhan N, Thuong N, Nghia P, Yamamoto Y, Kosugi K, Kawahara S, Thuy T (2016) Modification of Vietnam natural rubber via graft copolymerization with styrene. J Braz Chem Soc.
https://doi.org/10.21577/0103-5053.20160217 Dung TA, Nhan NT, Thuong NT, Viet DQ, Tung NH, Nghia PT, Kawahara S, Thuy TT (2017) Dynamic mechanical properties of vietnam modified natural rubber via grafting with styrene. Int J Polymer Sci 2017:1–8. https://doi.org/10.1155/2017/4956102 Flory PJ, Rehner J (1943) Statistical mechanics of cross-linked polymer networks I. rubberlike elasticity. J Chem Phys 11(11):512–520. https://doi.org/10.1063/1.1723791 Fukushima Y, Kawahara S, Tanaka Y (1988) Synthesis of graft copolymers from highly deproteinised natural rubber. J Rubber Res 1:154–166 Gosecka M, Gosecki M (2015) Characterization methods of polymer core–shell particles. Colloid Polym Sci 293(10):2719–2740. https://doi.org/10.1007/s00396-015-3728-z Gupta KK, Aneja KR, Rana D (2016) Current status of cow dung as a bioresource for sustainable development. Bioresour Bioprocess. https://doi.org/10.1186/s40643-016-0105-9 Haile A, Gelebo GG, Tesfaye T, Mengie W, Mebrate MA, Abuhay A, Limeneh DY (2021) Pulp and paper mill wastes: utilizations and prospects for high value-added biomaterials. Bioresour Bioprocess. https://doi.org/10.1186/s40643-021-00385-3 Huang F, Liu Y, Zhang X, Wei G, Gao J, Song Z, Zhang M, Qiao J (2002) Effect of elastomeric nanoparticles on toughness and heat resistance of epoxy resins. Macromol Rapid Commun 23:786–790. https://doi.org/10.1002/1521-3927(20020901)23:13%3c786::AID-MARC786%3e3.0.CO;2-T Indah Sari T, Handaya Saputra A, Bismo S, Maspanger R, Cifriadi DA (2015) The effect of styrene monomer in the graft copolymerization of arcylonitrile onto deproteinized natural rubber. Int J Technol. https://doi.org/10.14716/ijtech.v6i7.1266 Indah Sari T, Handaya Saputra A, Bismo S, Maspanger RD (2020) Deproteinized natural rubber grafted with polyacrylonitrile (pan)/polystirene (ps) and degradation of its mechanical properties by dimethyl ether. Int J Technol. https://doi.org/10.14716/ijtech.v11i1.1942 Ji B, Liu C, Huang W, Yan D (2005) Novel hyperbranched predominantly alternating copolymers made from a charge transfer complex monomer pair of p-(chloromethyl)styrene and acrylonitrile via controlled living radical copolymerization. Polym Bull 55(3):181–189. https://doi.org/10.1007/s00289-005-0429-7 Kangwansupamonkon W, Gilbert RG, Kiatkamjornwong S (2005) Modification of natural rubber by grafting with hydrophilic vinyl monomers. Macromol Chem Phys 206(24):2450–2460. https://doi.org/10.1002/macp.200500255 Kawahara S, Klinklai W, Kuroda H, Isono Y (2004) Removal of proteins from natural rubber with urea. Polym Adv Technol 15(4):181–184. https://doi.org/10.1002/pat.465 Kishore K, Pandey HK (1986) Spectral studies on plant rubbers. Prog Polym Sci 12(1):155–178. https://doi.org/10.1016/0079-6700(86)90008-0 Kochthongrasamee T, Prasassarakich P, Kiatkamjornwong S (2006) Effects of redox initiator on graft copolymerization of methyl methacrylate onto natural rubber. J Appl Polym Sci 101(4):2587–2601. https://doi.org/10.1002/app.23997 Kongparakul S, Prasassarakich P, Rempel GL (2008) Effect of grafted methyl methacrylate on the catalytic hydrogenation of natural rubber. Eur Polymer J 44(6):1915–1920. https://doi.org/10.1016/j.eurpolymj.2007.09.021 Lin Y, Amornkitbamrung L, Mora P, Jubsilp C, Hemvichian K, Soottitantawat A, Ekgasit S, Rimdusit S (2021) Effects of coagent functionalities on properties of ultrafine fully vulcanized powdered natural rubber prepared as toughening filler in rigid PVC. Polymers 13(2):289. 
https://doi.org/10.3390/polym13020289 Liu Y, Zhang X, Gao J, Huang F, Tan B, Wei G, Qiao J (2004) Toughening of polypropylene by combined rubber system of ultrafine full-vulcanized powdered rubber and SBS. Polymer 45(1):275–286. https://doi.org/10.1016/j.polymer.2003.11.001 Liu Y, Fan Z, Ma H, Tan Y, Qiao J (2006) Application of nano powdered rubber in friction materials. Wear 261(2):225–229. https://doi.org/10.1016/j.wear.2005.10.011 Liu D, Kang J, Chen P, Liu X, Cao Y (2013) 1H NMR and 13C NMR investigation of microstructures of carboxyl-terminated butadiene acrylonitrile rubbers. J Macromol Sci Part B 52(1):127–137. https://doi.org/10.1080/00222348.2012.695622 Liu X, Gao Y, Bian L, Wang Z (2014) Preparation and characterization of natural rubber/ultrafine full-vulcanized powdered styrene–butadiene rubber blends. Polym Bull 71(8):2023–2037. https://doi.org/10.1007/s00289-014-1169-3 Ma H, Wei G, Liu Y, Zhang X, Gao J, Huang F, Tan B, Song Z, Qiao J (2005) Effect of elastomeric nanoparticles on properties of phenolic resin. Polymer 46(23):10568–10573. https://doi.org/10.1016/j.polymer.2005.07.103 Manshaie R, Nouri Khorasani S, Jahanbani Veshare S, Rezaei Abadchi M (2011) Effect of electron beam irradiation on the properties of natural rubber (NR)/styrene–butadiene rubber (SBR) blend. Radiat Phys Chem 80(1):100–106. https://doi.org/10.1016/j.radphyschem.2010.08.015 Nallasamy P, Mohan S (2004) Vibrational spectra of cis-1,4-polyisoprene. Arab J Sci Eng 28(1A):17–26 Nguyen TH, Do QV, Tran AD, Kawahara S (2019) Preparation of hydrogenated natural rubber with nanomatrix structure. Polym Adv Technol 31(1):86–93. https://doi.org/10.1002/pat.4749 Nguyen Duy H, Rimdusit N, Tran Quang T, Phan Minh Q, Vu Trung N, Nguyen TN, Nguyen TH, Rimdusit S, Ougizawa T, Tran Thi T (2020) Improvement of thermal properties of Vietnam deproteinized natural rubber via graft copolymerization with styrene/acrylonitrile and diimide transfer hydrogenation. Polym Adv Technol 32(2):736–747. https://doi.org/10.1002/pat.5126 Pan C, Liu P (2022) Fluorinated nitrile-butadiene rubber (F-NBR) via metathesis degradation: closed system or open system? Eur Polymer J 162:110886. https://doi.org/10.1016/j.eurpolymj.2021.110886 Park M, Choi Y, Lee S-Y, Kim H-Y, Park S-J (2014) Influence of electron-beam irradiation on thermal stabilization process of polyacrylonitrile fibers. J Ind Eng Chem 20(4):1875–1878. https://doi.org/10.1016/j.jiec.2013.09.006 Pongsathit S, Pattamaprom C (2018) Irradiation grafting of natural rubber latex with maleic anhydride and its compatibilization of poly(lactic acid)/natural rubber blends. Radiat Phys Chem 144:13–20. https://doi.org/10.1016/j.radphyschem.2017.11.006 Prasassarakich P, Sintoorahat P, Wongwisetsirikul N (2001) Enhanced graft copolymerization of styrene and acrylonitrile onto natural rubber. J Chem Eng Jpn 34(2):249–253. https://doi.org/10.1252/jcej.34.249 Prukkaewkanjana K, Kawahara S, Sakdapipanich J (2013) Influence of reaction conditions on the properties of nano-matrix structure formed by graft-copolymerization of acrylonitrile onto natural rubber. Adv Mater Res 844:365–368. https://doi.org/10.4028/www.scientific.net/AMR.844.365 Pukkate N, Kitai T, Yamamoto Y, Kawazura T, Sakdapipanich J, Kawahara S (2007) Nano-matrix structure formed by graft-copolymerization of styrene onto natural rubber. Eur Polymer J 43(8):3208–3214. https://doi.org/10.1016/j.eurpolymj.2007.04.037 Qiao J (2020) Elastomeric nano-particle and its applications in polymer modifications. 
Adv Ind Eng Polymer Res 3(2):47–59. https://doi.org/10.1016/j.aiepr.2020.02.002 Qiao J., Wei G., Zhang Xiaohong, Zhang Shijun, Gao Jianming, Zhang Wei, . . . ., Y. H. (2002). US 6,423,760 B1. United States Patent. Rezaei Abadchi M, Jalali-Arani A (2014) The use of gamma irradiation in preparation of polybutadiene rubber nanopowder; its effect on particle size, morphology and crosslink structure of the powder. Nucl Instrum Methods Phys Res, Sect B 320:1–5. https://doi.org/10.1016/j.nimb.2013.11.016 Rimdusit N, Jubsilp C, Mora P, Hemvichian K, Thuy TT, Karagiannidis P, Rimdusit S (2021) Radiation graft-copolymerization of ultrafine fully vulcanized powdered natural rubber: effects of styrene and acrylonitrile contents on thermal stability. Polymers. https://doi.org/10.3390/polym13193447 Safeeda Nv F, Gopinathan J, Indumathi B, Thomas S, Bhattacharyya A (2016) Morphology and hydroscopic properties of acrylic/thermoplastic polyurethane core–shell electrospun micro/nano fibrous mats with tunable porosity. RSC Adv 6(59):54286–54292. https://doi.org/10.1039/C6RA08650K Schneider M, Pith T, Lambla M (1996) Preparation and morphological characterization of two- and three-component natural rubber-based latex particles. J Appl Polym Sci 62(2):273–290. https://doi.org/10.1002/(SICI)1097-4628(19961010)62:2%3c273::AID-APP3%3e3.0.CO;2-U Seleem S, Hopkins M, Olivio J, Schiraldi DA (2017) Comparison of thermal decomposition of polystyrene products vs bio-based polymer aerogels. Ohio J Sci. https://doi.org/10.18061/ojs.v117i2.5828 Staverman AJ (1979) Science and technology of rubber, F. R. Eirich, Ed., Academic, New York, 1978, 670 pp. Journal of Polymer Science: Polymer Letters Edition, 17(2). https://doi.org/10.1002/pol.1979.130170209 Taewattana R, Jubsilp C, Suwanmala P, Rimdusit S (2018) Effect of gamma irradiation on properties of ultrafine rubbers as toughening filler in polybenzoxazine. Radiat Phys Chem 145:184–192. https://doi.org/10.1016/j.radphyschem.2018.02.002 Tian M, Tang Y-W, Lu Y-L, Qiao J, Li T, Zhang L-Q (2006) Novel rubber blends made from ultra-fine full-vulcanized powdered rubber (UFPR). Polym J 38(1):50–56. https://doi.org/10.1295/polymj.38.50 Tuti IS, Asep HS, Setijo B, Dadi RM, Adi C (2015) The effect of styrene monomer in the graft copolymerization of arcylonitrile onto deproteinized natural rubber. Int J Technol. https://doi.org/10.14716/ijtech.v6i7.1266 Wang Q, Zhang X, Liu S, Gui H, Lai J, Liu Y, Gao J, Huang F, Song Z, Tan B, Qiao J (2005) Ultrafine full-vulcanized powdered rubbers/PVC compounds with higher toughness and higher heat resistance. Polymer 46(24):10614–10617. https://doi.org/10.1016/j.polymer.2005.08.074 Wang J, Zhang X, Jiang L, Qiao J (2019) Advances in toughened polymer materials by structured rubber particles. Prog Polym Sci 98:101–160. https://doi.org/10.1016/j.progpolymsci.2019.101160 Wongkumchai R, Amornkitbamrung L, Mora P, Jubsilp C, Rimdusit S (2021) Effects of coagent incorporation on properties of ultrafine fully vulcanized powdered natural rubber prepared as toughening filler in polybenzoxazine. SPE Polymers 2(3):191–198. https://doi.org/10.1002/pls2.10038 Wongthong P, Nakason C, Pan Q, Rempel GL, Kiatkamjornwong S (2013) Modification of deproteinized natural rubber via grafting polymerization with maleic anhydride. Eur Polymer J 49(12):4035–4046. 
https://doi.org/10.1016/j.eurpolymj.2013.09.009 Wu F, Xie T, Yang G (2010) Properties of toughened poly(butylene terephthalate) by blending with reactive ultra-fine full-vulcanized acrylonitrile butadiene rubber particles (UFNBRP). Polym Bull 65(7):731–742. https://doi.org/10.1007/s00289-010-0281-2 Xue TJ, McKinney MA, Wilkie CA (1997) The thermal degradation of polyacrylonitrile. Polym Degrad Stab 58:193–202 Yang M, Zhu W, Cao H (2021) Biorefinery methods for extraction of oil and protein from rubber seed. Bioresour Bioprocess. https://doi.org/10.1186/s40643-021-00386-2 Yu S, Hu H, Ma J, Yin J (2008) Tribological properties of epoxy/rubber nanocomposites. Tribol Int 41(12):1205–1211. https://doi.org/10.1016/j.triboint.2008.03.001

Electron beam irradiation was supported by Thailand Institute of Nuclear Technology (Public Organization). The authors would like to express their sincere appreciation to the National Research Council of Thailand (NRCT) and the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B), Thailand [grant number B05F640086] for financial support throughout the research. This research was also funded by the 90th Anniversary of Chulalongkorn University Scholarship (50 3/2564), Chulalongkorn University, and Thailand Institute of Nuclear Technology (Public Organization), Thailand, through its program of TINT to University.

Research Unit in Polymeric Materials for Medical Practice Devices, Department of Chemical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand: Krittaphorn Longsiri, Watcharapong Peeksuntiye & Sarawut Rimdusit
Department of Chemical Engineering, Faculty of Engineering, Srinakharinwirot University, Nakhonnayok, 26120, Thailand: Phattarin Mora & Chanchira Jubsilp
Thailand Institute of Nuclear Technology, Nakhonnayok, 26120, Thailand: Kasinee Hemvichian
School of Engineering, Faculty of Technology, University of Sunderland, Sunderland, SR6 0DD, UK: Panagiotis Karagiannidis

Conceptualization and design of the research, KL and PM; experimental work, KL and WP; design of the research and discussion of the results, KH, CJ and KL; writing—original draft of the manuscript, KL and PM; writing—review and editing, PK and SR; supervision, conceptualization, and funding acquisition, SR. All authors read and approved the final manuscript.

Correspondence to Sarawut Rimdusit.

Longsiri, K., Mora, P., Peeksuntiye, W. et al. Ultrafine fully vulcanized natural rubber modified by graft-copolymerization with styrene and acrylonitrile monomers. Bioresour. Bioprocess. 9, 85 (2022). https://doi.org/10.1186/s40643-022-00577-5

Keywords: Graft copolymer; DPNR-g-(PS-co-PAN); UFPNR; Electron beam vulcanization
Global genetic differentiation of complex traits shaped by natural selection in humans

Jing Guo1, Yang Wu (ORCID: 0000-0002-0128-7280)1, Zhihong Zhu1, Zhili Zheng1,2, Maciej Trzaskowski1, Jian Zeng1, Matthew R. Robinson (ORCID: 0000-0001-8982-8813)1,3, Peter M. Visscher (ORCID: 0000-0002-2143-8760)1,4 & Jian Yang (ORCID: 0000-0003-2001-2474)1,4

Nature Communications volume 9, Article number: 1865 (2018)

Subjects: Evolutionary genetics; Genetic association study

There are mean differences in complex traits among global human populations. We hypothesize that part of the phenotypic differentiation is due to natural selection. To address this hypothesis, we assess the differentiation in allele frequencies of trait-associated SNPs among African, Eastern Asian, and European populations for ten complex traits using data of large sample size (up to ~405,000). We show that SNPs associated with height (P = 2.46 × 10−5), waist-to-hip ratio (P = 2.77 × 10−4), and schizophrenia (P = 3.96 × 10−5) are significantly more differentiated among populations than matched "control" SNPs, suggesting that these trait-associated SNPs have undergone natural selection. We further find that SNPs associated with height (P = 2.01 × 10−6) and schizophrenia (P = 5.16 × 10−18) show significantly higher variance in linkage disequilibrium (LD) scores across populations than control SNPs. Our results support the hypothesis that natural selection has shaped the genetic differentiation of complex traits, such as height and schizophrenia, among worldwide populations.

Many human complex traits, including quantitative traits (e.g., height1) and complex disorders (e.g., cardiovascular diseases2,3), are substantially differentiated among worldwide populations. For example, the mean height in Northern Hemisphere populations generally increases with latitude1,4. European Americans have a lower body mass index (BMI) (~1.3 kg/m2) than African Americans but a higher BMI (1.9–3.2 kg/m2) than Asians, such as Chinese, Indonesians, and Thais for the same body fat percentage5,6. For the mortality rates associated with ischemic heart disease in the UK, African Caribbeans are at a lower risk while South Asians are at a higher risk than Europeans7. While environmental factors certainly play a role, since most complex traits have a genetic component, the question is whether or not the phenotypic differentiation is partly due to genetic differentiation and, if so, whether the genetic differentiation is a consequence of genetic drift or natural selection. There has been evidence suggesting that natural selection has caused genetic differentiation among worldwide populations8,9,10 and the signals of selection are enriched in specific parts of the genome8,11. However, it is not straightforward to investigate whether the signals of natural selection are enriched at genetic variants associated with a complex trait for two main reasons.
First, because of the polygenic nature of most complex traits12, the signals of genetic differentiation are usually diluted among many trait-associated loci, each having a small effect too weak to be detected using methods that target a complete selective sweep13,14,15. Second, genetic differentiation can be masked by environmental factors, thereby increasing the difficulty of directly detecting and quantifying the polygenic selection signals4,16. Over the past decade, genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with a number of traits in humans17. These findings have provided critical knowledge for understanding the polygenic architecture of human complex traits18 or detecting the signature of natural selection4,16,19,20. The large amount of GWAS data available in the public domain also allows researchers to address the question whether genetic variants associated with a complex trait have been under natural selection. For example, utilizing data from GWAS of large sample size, recent studies have shown that genetic variants associated with human height have been under directional selection in European populations4,19,21. In this study, we seek to address the question whether the differentiation in allele frequency and linkage disequilibrium (LD) of variants associated with a complex trait among global populations is shaped by natural selection. We first test if allele frequencies of the trait-associated SNPs are more differentiated across African, East Asian, and European populations than expected under genetic drift for 10 complex traits (Supplementary Table 1) utilizing summary-level data from published GWAS with large sample sizes (n = up to ~405,000). These traits include three quantitative traits (adult height, BMI, and waist-to-hip ratio adjusted by BMI (WHRadjBMI)), two common diseases and related biochemical traits (coronary artery disease (CAD) and type II diabetes (T2D) and high-density lipoprotein (HDL) cholesterol and low-density lipoprotein (LDL) cholesterol), two neurological/psychiatric disorders (Alzheimer's disease (AD) and schizophrenia (SCZ)), and one behavioral trait (educational attainment years (EAY)). For the traits for which the associated SNPs are significantly more differentiated among the global populations than expected under drift, we then measure the direction of genetic differentiation using polygenic scoring4,16 or the mean frequency of the trait-increasing alleles, and compare it with the observed direction of phenotypic differentiation in the three populations. Moreover, previous studies have shown that both strong and soft selective sweeps can alter linkage disequilibrium (LD) between a genetic variant and its surrounding variants22,23,24,25. If a trait-associated SNP has been under selection, one would also expect to see a population differentiation of the LD score of this SNP with its surrounding SNPs, more than expected under a drift model, where the LD score is defined as the sum of the LD r2 between the focal SNP and all nearby SNPs in a 10 Mb window26,27. In this context, we further test whether the trait-associated SNPs show greater differences in LD across populations than the control SNPs. 
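As a concrete illustration of the LD score just defined (the sum of LD r2 between a focal SNP and all SNPs within a 10 Mb window), the following Python sketch computes LD scores from a small simulated genotype matrix. It is a toy in-memory calculation; analyses at the scale described in this study would use dedicated software on full cohort genotypes.

```python
import numpy as np

def ld_scores(genotypes, positions_bp, window_bp=10_000_000):
    """LD score of each SNP: the sum of r^2 between the focal SNP and all SNPs
    within +/- window_bp, computed from a genotype matrix coded 0/1/2
    (rows = individuals, columns = SNPs). The focal SNP itself contributes r^2 = 1."""
    n, m = genotypes.shape
    z = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)  # standardize
    scores = np.empty(m)
    for j in range(m):
        in_window = np.abs(positions_bp - positions_bp[j]) <= window_bp
        r = z[:, in_window].T @ z[:, j] / n        # correlations with the focal SNP
        scores[j] = np.sum(r ** 2)
    return scores

# Toy example: 200 individuals and 50 SNPs on one chromosome
rng = np.random.default_rng(1)
genotypes = rng.binomial(2, 0.3, size=(200, 50)).astype(float)
positions = np.sort(rng.integers(1, 50_000_000, size=50))

print(np.round(ld_scores(genotypes, positions)[:5], 2))
```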
Enrichment of FST in trait-associated SNPs

If the genetic loci associated with a complex trait have been under natural selection, an excess of among-population differentiation will be observed in the frequencies of the trait-associated alleles compared with what is expected under a drift model28. We used Wright's fixation index (FST) to measure the extent to which a particular SNP varied in allele frequency among three worldwide populations. We focused only on common variants (minor allele frequency, MAF > 0.01) because rare variants were not available in most GWAS summary data used in this study. The FST values of the SNPs were calculated from unrelated individuals (SNP-derived genetic relatedness < 0.05) of European (EUR, n = 1099), African (AFR, n = 1099), and East Asian (EAS, n = 1099) ancestry from the Genetic Epidemiology Research on Adult Health and Aging (GERA) cohort after quality controls (QC) (Methods; Supplementary Table 2). We selected a list of nearly independent SNPs (LD r2 < 0.01) associated with each trait using PLINK29 clumping analysis (Methods) of the summary statistics from the latest meta-analysis of GWAS (Supplementary Table 1) and a set of "control" SNPs randomly sampled from the genome with MAF and LD scores matched with those of the associated SNPs (Methods). All GWAS summary statistics were from studies on individuals of European ancestry. We tested whether the mean FST value of the trait-associated SNPs was significantly higher than that of the control SNPs, a method we call the FST enrichment test (Methods), similar to the approach used in Zhang et al.30. We demonstrate by simulation (Methods; Supplementary Table 3) that there was no inflation in the test statistics of the method under the null model of genetic drift (Supplementary Fig. 1a) and that selecting associated SNPs at a clumping P-value threshold of 5 × 10−6 provided higher detection power than at P < 5 × 10−8 under the alternative model (Supplementary Fig. 1b).

We performed the FST enrichment analysis for each of the 10 traits using the trait-associated SNPs clumped at P < 5 × 10−6 with FST values computed from the three populations in GERA. We found that the mean FST values for the trait-associated loci for height (P = 2.46 × 10−5), WHRadjBMI (P = 2.77 × 10−4), and SCZ (P = 3.96 × 10−5) were significantly higher than those of the control SNPs after Bonferroni correction for multiple tests (Table 1), indicating that the genetic loci associated with these traits have been under natural selection. We further confirmed the results using FST values calculated from the 1000 Genomes Project (1000G; unrelated EUR n = 494, AFR n = 591, and EAS n = 491; Supplementary Table 2). The results remained significant for height (P = 4.93 × 10−6), WHRadjBMI (P = 2.62 × 10−3), and SCZ (P = 6.83 × 10−5) correcting for multiple tests (Table 1 and Fig. 1).

Table 1 Enrichment test of the differentiation of trait-associated SNPs (clumped at P < 5 × 10−6) in allele frequency against the control SNPs among the three populations in GERA and 1000G

Mean FST values of the associated SNPs across 1000G populations against the null distribution for height, WHRadjBMI and SCZ. The red dashed line represents the mean FST of the trait-associated SNPs clumped at P < 5 × 10−6. The histogram represents the distribution of mean FST values of the sets of control SNPs. WHRadjBMI, waist-to-hip ratio adjusted by BMI; SCZ, schizophrenia
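As a rough illustration of the logic of the FST enrichment test (not the authors' actual pipeline), the Python sketch below uses a simplified frequency-based FST estimator for each SNP across three populations and derives an empirical one-sided P-value by comparing the mean FST of the trait-associated SNPs with the distribution of means over randomly drawn control sets. All inputs are simulated, and the control sets here are not MAF- or LD-matched as they are in the study.

```python
import numpy as np

def fst_per_snp(freqs):
    """Simplified per-SNP FST across populations: Var(p) / (p_bar * (1 - p_bar)),
    where freqs is an (n_populations x n_snps) array of allele frequencies.
    (Illustrative only; not necessarily the estimator used in the study.)"""
    p_bar = freqs.mean(axis=0)
    return freqs.var(axis=0) / (p_bar * (1.0 - p_bar))

def enrichment_p_value(fst_trait, fst_control_sets):
    """Empirical one-sided P-value: the proportion of control-set mean FST values
    that are at least as large as the mean FST of the trait-associated SNPs."""
    null_means = fst_control_sets.mean(axis=1)
    return (np.sum(null_means >= fst_trait.mean()) + 1) / (len(null_means) + 1)

rng = np.random.default_rng(7)
n_snps, n_control_sets = 100, 1000

# Simulated allele frequencies of 100 trait-associated SNPs in AFR, EAS and EUR
trait_freqs = rng.uniform(0.05, 0.95, size=(3, n_snps))
fst_trait = fst_per_snp(trait_freqs)

# Simulated control sets (in the real analysis these are MAF- and LD-matched)
fst_controls = np.stack([fst_per_snp(rng.uniform(0.05, 0.95, size=(3, n_snps)))
                         for _ in range(n_control_sets)])

print(f"mean FST of trait SNPs = {fst_trait.mean():.3f}, "
      f"enrichment P = {enrichment_p_value(fst_trait, fst_controls):.3f}")
```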
Direction of genetic differentiation

The analyses above showed that the genetic variants associated with height, WHRadjBMI, and SCZ are more differentiated than expected by random drift. However, these analyses did not show the direction of genetic differentiation. To demonstrate the direction of differentiation, we used the SNPs clumped at P < 5 × 10−6 (see above) to compute a polygenic risk score (PRS)4,16,29 for each individual in 1000G for height, WHRadjBMI, and SCZ. We then estimated the deviation of the mean PRS of each population from the overall mean in standard deviation (s.d.) units (Methods). The deviation is expected to be zero under drift, as demonstrated by the null distribution computed from the 10,000 control SNP sets (Fig. 2). The results showed that the mean PRS for height in the EUR subjects was higher than that in the AFR and EAS subjects (Fig. 2), consistent with the observed mean phenotypic differences between the populations1. For SCZ, the mean PRS in AFR was higher than that in EUR (Fig. 2), in line with the results from recent studies suggesting that SCZ is more prevalent in people of AFR ancestry than EUR ancestry and Asians31,32,33. For WHRadjBMI, EAS showed a higher mean PRS than both AFR and EUR (Fig. 2). WHRadjBMI measures the fat distribution at the abdomen region after adjusting for BMI to exclude the influence of overall adiposity. Previous studies have reported that after correcting for age, gender and BMI, Asian Americans tend to have a higher accumulation of excess visceral adipose tissue (VAT) than European Americans34, whereas African Americans have a lower VAT than European Americans35,36. We repeated the PRS analysis in GERA and found that the results were consistent with those in 1000G (Supplementary Fig. 2). It should be noted that we first used the FST enrichment analysis to seek evidence that the SNPs associated with a trait have been under natural selection, and then used the PRS analysis to illustrate the direction of selection without repeating the significance test. An observed differentiation of PRS values among populations alone is not evidence for selection unless it is more than expected under drift.

Direction of genetic differentiation for height, WHRadjBMI and SCZ in the 1000G populations. The colored dot represents the estimated deviation (in s.d. units) of the mean PRS based on the trait-associated SNPs clumped at P < 5 × 10−6 of a population from the overall mean across populations. The gray dot represents the mean of mean PRS values of 10,000 sets of control SNPs, with the gray dashed line indicating the 95% confidence interval of the distribution of mean PRS values. WHRadjBMI, waist-to-hip ratio adjusted by BMI; SCZ, schizophrenia; EUR, European; AFR, African; EAS, East Asian

The PRS analysis used the effect sizes of SNPs estimated from GWAS samples of EUR ancestry. Strong genetic heterogeneity between EUR and non-EUR (e.g., if the effect sizes in EUR are different from those in non-EUR populations) could possibly bias PRS prediction in non-EUR populations37,38.
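To make the PRS comparison concrete, the sketch below computes per-individual polygenic scores from simulated genotypes and EUR-estimated effect sizes, and expresses each population's mean score as a deviation from the overall mean in s.d. units. Everything here (effect sizes, allele frequencies, sample sizes) is made up for illustration; it only mirrors the form of the calculation described above.

```python
import numpy as np

def polygenic_scores(genotypes, betas):
    """Per-individual PRS: allele counts (0/1/2) weighted by GWAS effect sizes."""
    return genotypes @ betas

def population_deviations(scores, labels):
    """Deviation of each population's mean PRS from the overall mean,
    in units of the overall PRS standard deviation."""
    mu, sd = scores.mean(), scores.std(ddof=1)
    return {pop: (scores[labels == pop].mean() - mu) / sd
            for pop in np.unique(labels)}

rng = np.random.default_rng(3)
n_snps, n_per_pop = 500, 300
betas = rng.normal(0.0, 0.02, n_snps)   # hypothetical effect sizes from a EUR GWAS

# Simulated genotypes: each population gets its own (arbitrary) allele frequencies
pop_freqs = {"AFR": rng.uniform(0.1, 0.9, n_snps),
             "EAS": rng.uniform(0.1, 0.9, n_snps),
             "EUR": rng.uniform(0.1, 0.9, n_snps)}
genotypes = np.vstack([rng.binomial(2, p, size=(n_per_pop, n_snps))
                       for p in pop_freqs.values()])
labels = np.repeat(list(pop_freqs.keys()), n_per_pop)

scores = polygenic_scores(genotypes, betas)
print(population_deviations(scores, labels))
```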
To avoid this potential bias, we examined the direction of genetic differentiation by directly comparing the difference in the mean frequency of the trait-increasing alleles (fTIA) across SNPs (clumped at P < 5 × 10−6) between two populations, because, in comparison with the magnitude, the direction of a SNP effect estimated from GWAS is less prone to bias due to population structure37. We also calculated the difference in fTIA between the two populations for the control SNPs. Note that the TIA of a control SNP simply means the allele for which the estimated SNP effect is positive in the GWAS summary data. Under a drift model, fTIA is expected to be 0.5 and the difference in fTIA values between two populations is expected to be zero. The results were consistent with those from the PRS analysis (Figs. 2, 3). On average, the height-increasing alleles were more frequent in EUR than in EAS, SCZ risk alleles were more frequent in AFR than in EUR, and WHRadjBMI-increasing alleles were more frequent in EAS than in EUR (see Fig. 3 for the results from 1000G and Supplementary Fig. 3 for the results from GERA). Of note, the mean difference in fTIA for the control SNPs was not zero, especially for height and SCZ, which was likely because both height and SCZ are highly polygenic39,40 and some of the control SNPs could be in LD with the causal variant(s) by chance. This result also implies that SNPs other than those selected at low association P-values have also been under polygenic selection10,14.

Mean difference in frequencies of the trait-increasing alleles between two 1000G populations for height, WHRadjBMI and SCZ. The red dashed line represents the mean difference in fTIA of the trait-associated SNPs clumped at P < 5 × 10−6. The histogram represents the distribution of the difference in fTIA for the control SNPs. The gray dashed line represents the expected difference in fTIA (i.e., 0) under genetic drift. WHRadjBMI, waist-to-hip ratio adjusted by BMI; SCZ, schizophrenia; EUR, European; AFR, African; EAS, East Asian

LD pattern of complex trait loci altered by selection

Because our findings indicated that natural selection has shaped the frequencies of the trait-associated alleles, we then sought to test whether selection has also differentiated the LD pattern at the trait-associated genomic loci among populations more than expected under drift22,23,24,25. If this is the case, we should expect to see a larger among-population difference in LD score at the trait-associated SNPs than at the control SNPs. We first computed the LD score of each SNP in the unrelated individuals in GERA and 1000G, respectively. We then calculated the coefficient of variation of the LD score (LDCV) across the AFR, EAS, and EUR populations for each SNP (Methods), and tested whether the mean LDCV of the trait-associated SNPs (clumped at P < 5 × 10−6) significantly deviates from that of the control SNPs, a procedure we call the LDCV enrichment analysis. Note that we used the coefficient of variation rather than the variance because SNPs with higher LD scores tend to have larger between-population variation, a mean-variance relationship. Using the LDCV from GERA, we found a significant excess of LDCV for the trait-associated SNPs over the control SNPs for height (P = 2.01 × 10−6), EAY (P = 2.37 × 10−8), and SCZ (P = 5.16 × 10−18) after Bonferroni correction for multiple tests (Table 2).
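The sketch below illustrates the LDCV enrichment idea under the same kind of simplifications as the earlier FST example: per-SNP LD scores in the three populations are simulated rather than computed from genotype data, and the control sets are not matched on MAF or LD. Only the form of the statistic (coefficient of variation across populations, compared with a null distribution of control-set means) follows the description above.

```python
import numpy as np

def ld_cv(ld_scores):
    """Coefficient of variation of each SNP's LD score across populations:
    s.d. / mean over rows, where rows are populations and columns are SNPs."""
    return ld_scores.std(axis=0, ddof=1) / ld_scores.mean(axis=0)

def ldcv_enrichment_p(ldcv_trait, ldcv_control_sets):
    """Empirical one-sided P-value for an excess of mean LDCV at trait SNPs."""
    null_means = ldcv_control_sets.mean(axis=1)
    return (np.sum(null_means >= ldcv_trait.mean()) + 1) / (len(null_means) + 1)

rng = np.random.default_rng(11)
n_snps, n_control_sets = 100, 1000

# Simulated LD scores for the trait SNPs in AFR, EAS and EUR (rows x columns)
trait_ld = rng.gamma(shape=5.0, scale=20.0, size=(3, n_snps))
ldcv_trait = ld_cv(trait_ld)

# Simulated control sets of the same size
ldcv_controls = np.stack([ld_cv(rng.gamma(5.0, 20.0, size=(3, n_snps)))
                          for _ in range(n_control_sets)])

print(f"mean LDCV of trait SNPs = {ldcv_trait.mean():.3f}, "
      f"enrichment P = {ldcv_enrichment_p(ldcv_trait, ldcv_controls):.3f}")
```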
The results for height (P = 1.99 × 10−4) and SCZ (P = 2.74 × 10−8) remained significant (after correcting for multiple tests) when using the LDCV from 1000G (Table 2 and Supplementary Fig. 4). There was no significant correlation between FST and LDCV for either height or SCZ, suggesting that the between-population differentiation in LD was not confounded by the differentiation in FST (Supplementary Fig. 5). Together, these results suggest that natural selection has altered both the frequency and LD properties of the SNPs associated with height and SCZ.

Table 2 Enrichment test of the differentiation of trait-associated SNPs (clumped at P < 5 × 10−6) in the LD pattern against the control SNPs among the three populations in GERA and 1000G

This study sought to address whether natural selection has differentiated allele frequencies and LD scores of complex-trait-associated SNPs among worldwide populations more than expected under drift. To this end, we established a strategy to test whether the among-population variation in allele frequency or LD score at the trait-associated SNPs is significantly higher than that at MAF-matched and LD-matched control SNPs. We detected significant signals in allele frequencies in the GERA populations for height (P = 2.46 × 10−5), WHRadjBMI (P = 2.77 × 10−4), and SCZ (P = 3.96 × 10−5) (Table 1) and significant signals in LD for height (P = 2.01 × 10−6) and SCZ (P = 5.16 × 10−18) (Table 2). There are two plausible models that are compatible with the observed results: (1) the trait itself has undergone natural selection; (2) variants affecting the trait have been under selection because of their pleiotropic effects on fitness, and variants with larger effects on the trait tend to be more differentiated via such pleiotropic effects41.

Height is a classic complex trait. Previous studies have shown that height-associated SNPs have been under natural selection and that this might contribute to the mean height differences among EUR populations4,16,19,20,21. Our results suggest that the phenotypic differences in height among AFR, EAS, and EUR populations are also partially due to natural selection on height-associated SNPs (Fig. 2 and Supplementary Fig. 2). The mean PRS of AFR was higher than that of EAS but lower than that of EUR, consistent with the observed phenotypic differences in mean height1. Waist-to-hip ratio is an anthropometric index of abdominal obesity. Previous studies have shown that after adjusting for BMI, European Americans tend to have a larger amount of visceral fat on average than African Americans35,36 but a lower amount than those of Asian descent6,34. This finding could be explained by the enrichment of the WHRadjBMI-increasing alleles in EAS as shown in our results; we did not have enough power to detect a difference between EUR and AFR (Fig. 2 and Supplementary Fig. 2). It is worth noting that, compared with height, direct measurements of WHRadjBMI are not widely available for worldwide populations and are less consistent across studies, which may be related to insufficient sample sizes42,43 or the influence of environmental factors44. For SCZ, the enrichment of risk alleles in AFR (Fig. 2 and Supplementary Fig. 2) identified by our analyses appears to support the reported discrepancy in the prevalence of SCZ between African Americans and European Americans31,32.
Moreover, Fearon et al.33 found that, among the ethnic minority groups in the UK, the incidence rate ratios (IRRs) for SCZ (adjusted by age and sex) in individuals of Chinese descent (IRR = 3.5) is higher than non-British Whites (IRR=2.5) and lower than individuals of African descent (IRR = 9.1 for African-Caribbean and 5.8 for Black African), consistent with our findings. However, the diagnosis of SCZ can be potentially biased by non-genetic factors, such as socioeconomic status and access to hospitalization31. The trait-associated SNPs used to detect signatures of natural selection were ascertained from EUR-based meta-analyses of GWAS. Such ascertainment might be biased because FST is a function of MAF and the MAF or LD properties of the trait-associated SNPs could be different from that of the non-associated SNPs26,45. For example, SNPs with higher MAF in EUR tend to have higher power to be detected at a certain significance level, resulting in a difference in mean FST even in the absence of natural selection. This potential bias can be mitigated by matching the control SNPs with the associated SNPs by MAF and LD score. Via simulations, we confirmed that inflation did not occur in the test statistics under the null model (Supplementary Fig. 1a). Our FST enrichment results are unlikely to be driven by the EUR bias because among the traits that showed significant signal in the FST enrichment test (Table 1) only height showed higher mean fTIA in EUR than non-EUR (mean fTIA was higher in EAS for WHRadjBMI and higher in AFR for SCZ compared to EUR) (Fig. 3). Moreover, we did not observe a correlation between the worldwide FST (using the AFR, EAS and EUR samples in 1000G) and EUR FST (using the CEPH, Finnish, British, Spanish, and Tuscan samples in 1000G-EUR) for the trait-associated SNPs that show evidence of selection (Supplementary Fig. 6). This result suggests that the differentiation of PRS in global populations is unlikely to be confounded by the biases in the estimated SNP effects due to population stratification in EUR. In addition, genetic drift that takes place during particular demographic events (e.g., population bottleneck or expansion in Europeans) would result in a difference in the frequency spectrum between the causal variants and neutral variants46. The difference, however, was very unlikely lead to a bias in our results because there was no difference in MAF between the trait-associated SNPs and the matched control SNPs. It is confirmed by simulation that there was no inflation in the FST enrichment test-statistics when the causal variants were simulated to have lower MAF than null SNPs in the absence of selection (A and B in Supplementary Table 4 and Supplementary Fig. 7a). We further demonstrated by simulation that the FST enrichment analysis method was robust to different levels of heritability (C and D in Supplementary Table 4 and Supplementary Fig. 7b), different degrees of LD between causal variants (Supplementary Figs. 8 and 9), different strategies of sampling control SNPs (matching the control SNPs with the trait-associated SNPs by only MAF computed from EUR or by both MAF and LD scores computed from AFR; Supplementary Fig. 10), or whether the causal variants were included in the analysis (Supplementary Fig. 11). We also applied different strategies of matching control SNPs to the analyses of real data and observed little differences in results (Supplementary Table 5). 
In addition, we compared the variance of FST values of a set of associated SNPs with a random set of control SNPs (with MAF and LD matched) across 100 independently simulated traits and did not observe a significant difference in variance of FST values between the trait-associated SNPs and control SNPs, regardless of whether the traits were simulated based on a single variant or multiple variants at each of the 1,000 causal loci (Supplementary Fig. 12). We observed from the LD score calculation in GERA that there were three regions (on chromosomes 6, 11 and 17) presenting extremely large LD scores (Supplementary Fig. 13). The chr6 region (Supplementary Fig. 14a) harbors genes that encode the major histocompatibility complex (MHC; hg19 chr6:28,477,797-33,448,354), a well-known protein complex that is essential for the immune response. It is not surprising that the MHC locus has been under selection47. The chr17 region (Supplementary Fig. 14b) harbors an inversion (hg16 chr17:44.1-45.0 Mb) that almost exclusively occurs in EUR and has been shown to be under positive selection in Icelandic females48. The chr11 region (Supplementary Fig. 14c) stretches over the centromere with a length >10 Mb, and it contains multiple polymorphic inversions49. More than half of the genes contained in this region (48–60 Mb; Supplementary Fig. 14d) are olfactory receptor genes (ORs) (186 ORs/308 genes)50. This gene family has been found to show the greater evolutionary acceleration between humans and chimpanzees than other functional gene classes, such as nuclear transport and reproduction51. Moreover, this region included a large number of nearly independent SNPs that show pleiotropic effects on HDL cholesterol and height (Supplementary Fig. 14d). Nevertheless, several limitations were associated with this study. First, the FST enrichment test is underpowered for highly polygenic traits because some of the control SNPs might be in LD with the causal variants by chance under a polygenic model (as demonstrated by the deviation of fTIA differences in the control SNPs from the expected values for height and SCZ; Fig. 3). Fortunately, the loss of power was remedied by the use of data from studies with very large sample sizes (Supplementary Table 1). Second, the GWAS summary statistics used in this study were generated from EUR samples (see the discussion of EUR bias above). This is because non-EUR GWAS of large sample size (on a similar scale as the sample sizes of the EUR GWAS used in this study) are not available for most complex traits. If there is genetic heterogeneity between EUR and non-EUR populations, the PRS computed in AFR and EAS based on SNP effects estimated from EUR studies will be biased37,38. We showed that this potential bias could be mitigated by the fTIA analysis, which ignores the magnitude of estimated SNP effects (Fig. 3 and Supplementary Fig. 3). In fact, the results from the fTIA analysis (Fig. 3 and Supplementary Fig. 3) were largely consistent with those from the PRS analysis (Fig. 2 and Supplementary Fig. 2), suggesting that the direction of genetic differentiation as indicated by the PRS for height, WHRadjBMI, or SCZ is unlikely to be substantially biased by the between-population differences in SNP effects52. In addition, our result that the EUR FST was almost independent of the global FST (Supplementary Fig. 6) implies that the mean values of PRS in non-EUR samples were not biased by possible confounding in the estimated SNP effects owing to population stratification in EUR. 
Third, because we tested the mean FST (or LDCV) across all the associated loci against that of the control SNPs, we could not distinguish whether the population differentiation of mean FST (or LDCV) at the trait-associated variants (more than expected by drift) is due to natural selection on different loci, the same loci but different alleles, or the same alleles but different levels of selection pressure in different populations. Fourth, regarding the type of selection on the trait-associated variants, our results seem to indicate that the excess of genetic differentiation at the trait-associated SNPs is a consequence of local selection (different alleles are favored in different populations). However, we cannot rule out the possibility that the increase in FST (or LDCV) at the trait-associated SNPs is due to background selection53 (i.e., those SNPs are in LD with causal variants under negative selection). This was confirmed by forward simulation using SLiM54 (Methods). The simulation result shows that background selection reduces genetic diversity and increases between-population differentiation at genetic variants in LD with the variants under negative selection (Supplementary Fig. 15). Last, we used the FST and LDCV enrichment analyses to assess the excess of population genetic differentiation at the trait-associated loci as a means to detect signatures of natural selection. These analyses, however, cannot determine when the selection occurred in the history of human evolution and whether there are other types of natural selection within a population. Studies in progress have developed methods to model polygenic selection in an admixture graph (representing the historical divergences and admixture events in the human populations through time) to infer which branches are most likely to have experienced polygenic selection55, and to model the relationship between variance in SNP effect and MAF to detect signatures of negative selection on variants associated with complex traits56,57.

In summary, we proposed a robust statistical approach to test whether SNPs associated with a complex trait of interest are more differentiated across worldwide populations than MAF-matched and LD-matched control SNPs, and used the results to infer whether the trait-associated genetic variants have undergone natural selection. Our simulations indicated that the test statistics of the proposed approach were not inflated under the null model of random drift. Using this approach, we identified that for height, WHRadjBMI, and SCZ, the trait-associated alleles were differentiated significantly more than the matched control SNPs, in directions consistent with those of phenotypic differentiation. We showed that the results were robust to the potential biases in ascertaining the trait-associated SNPs (e.g., population stratification in EUR). These results support our hypothesis that the observed phenotypic differentiation among worldwide populations is (at least partly) genetic and a consequence of natural selection on the trait-associated variants because of selection on the trait or through their pleiotropic effects on fitness since the divergence of these populations16. Our findings further suggest that natural selection has also driven differentiation in LD among populations at genomic loci associated with height and SCZ. These findings expand our understanding of the role of natural selection in shaping the genetic architecture of complex traits in human populations.
The methods developed in this study are general and applicable to other complex traits, including endophenotypes, such as gene expression and DNA methylation. Data and quality control (QC) The 1000G samples used in this study comprised individuals of EUR (n = 503), AFR (n = 661) and EAS origins (n = 504) (Supplementary Table 2). SNPs with MAF <0.01 and Hardy–Weinberg equilibrium (HWE) \(P < 10^{ - 6}\) were removed from the genotype data for each population, respectively, resulting in 6,160,018 SNPs in common across the three populations. We used GCTA58 to construct the genetic relationship matrix (GRM) using the SNPs present in HapMap phase 3 project (HapMap3; m = ~1.2 million SNPs) and generate a set of unrelated individuals at a relatedness threshold of 0.05 in each population, resulting in 494 unrelated EUR, 591 unrelated AFR and 491 unrelated EAS (Supplementary Table 2). There were 60,586 EUR, 3826 AFR, and 5188 EAS genotyped on Affymetrix Axiom arrays in the GERA data. The SNP genotype data were cleaned according to the following QC criteria: sample/SNP call rate <98%, MAF<0.01, and HWE test \(P < 10^{ - 6}\). After QC, the SNP genotypes were imputed to the 1000G (phase 1) reference panels using IMPUTE259 (Supplementary Table 6). The imputed SNPs with imputation INFO scores <0.3, MAF<0.01, or HWE \(P < 10^{ - 6}\) in any of the populations (AFR, EAS, and EUR) were removed (Supplementary Table 6), resulting in ~5.8 million remaining SNPs in common across the three populations. We performed a principal component analysis60 (PCA) in a combined GERA and 1000G sample using 820,460 HapMap3 SNPs. For a particular population (e.g., GERA-EUR), any GERA individuals who were more than 3 s.d. away from the mean of the corresponding 1000G population (e.g., 1000G-EUR) were removed (Supplementary Fig. 16). We further calculated the GRM for each GERA population using the HapMap3 SNPs (\(m = 820,460\)), and removed one of each pair of individuals with estimated genetic relatedness >0.05. This resulted in 53,629 unrelated EUR, 1099 unrelated AFR and 3365 unrelated EAS. To avoid heterogeneity in the subsequent analyses, we harmonized the sample sizes by randomly sampling 1,099 individuals from the EUR and EAS samples (Supplementary Table 2). In summary, ~6.1 million SNPs in 1000G and ~5.8 million SNPs in GERA after QC were used in the subsequent analyses. We used the LD-based clumping approach in PLINK29 to select trait-associated SNPs from GWAS summary data. The clumping approach filters out SNPs with P-values larger than a specific threshold, clusters the remaining SNPs by LD and physical distance between SNPs, and selects the top associated SNP from each clump. We used an LD r2 threshold of 0.01 and a distance threshold of 1 Mb to ensure that all the selected trait-associated SNPs were nearly independent. The Atherosclerosis Risk in Communities (ARIC) data set (~8.8 million 1000G-imputed SNPs and 7703 unrelated European Americans) was used as the reference to compute LD r2 between SNPs. The ARIC sample was genotyped on Affymetrix 6.0 arrays. Genotype QC and imputation have been detailed elsewhere61 (Supplementary Table 6). F ST enrichment test Similar to the approach used in Zhang et al.30, we compared the mean FST value of the trait-associated SNPs with that of the control SNPs with MAF and LD score matched. 
First, we divided all the SNPs (in either 1000G or HapMap2 depending on the SNPs used in GWAS for the trait) into 20 MAF bins from 0 to 0.5 with an increment of 0.025 (excluding the SNPs with MAF < 0.01). Each of the MAF bins was further grouped into 20 bins according to the 20 quantiles of LD score distribution. The MAF and LD values were computed from the EUR samples in GERA or 1000G described above. Second, we allocated the trait-associated SNPs to the MAF and LD stratified bins, randomly sampled a matched number of "control" SNPs from each bin, computed a mean FST value for the control SNPs sampled from all bins, and repeated this process 10,000 times to generate a distribution of mean FST under drift (approximately normally distributed; see Fig. 1). Third, a P-value was computed from a two-tailed test by comparing the observed mean FST value for the associated SNPs against the null distribution quantified by the control SNPs, assuming normality of the null distribution. Simulation based on real GWAS genotype data To verify whether the test statistics used in the FST enrichment analyses are well calibrated, we used GCTA58 to perform GWAS simulations based on the 1000G-imputed genotypes of 53,629 unrelated European Americans in GERA after QC. Two SNP panels (1000G and HapMap2) were respectively used to mimic those used in the real data to sample "causal variants" (GWAS for EAY, AD, CAD, and SCZ were based on 1000G and those for the other traits such as height and BMI were based on HapMap2) (Supplementary Tables 1 and 3). First, we simulated 100 quantitative traits under the null hypothesis (i.e., genetic drift) with heritability (h2) of 0.5 based on 1000 causal variants randomly sampled from a SNP panel for each trait (1000G or HapMap2). The trait phenotype was simulated based on an additive genetic model, i.e., $$y = g + e = \mathop {\sum }\limits_i^m \,w_iu_i + e$$ where y is the phenotype, m is the number of causal variants, w i is the standardized genotype of the i-th causal variant with its effect u i drawn from N(0, 1), and e is the residual generated from \(N[0,{\mathrm{var}}\left( g \right)\left( {\frac{1}{{h^2}} - 1} \right)]\). To demonstrate the statistical power under the alternative hypothesis (i.e., natural selection), we sampled 1000 causal variants for each trait from the top 50% of the FST distribution of the 1000G (median FST of 0.084) or HapMap2 (median FST of 0.093) SNPs, where the FST values were computed from the three populations in 1000G. Second, we performed a GWAS analysis for each trait (under null or alternative hypothesis) with the first 10 PCs fitted as covariates to control for population stratification. Independent trait-associated SNPs were selected by PLINK-clumping at two P-value thresholds (\(5 \times 10^{ - 8}\) and \(5 \times 10^{ - 6}\)) alongside a LD r2 threshold of 0.01 and window size of 1 Mb (Supplementary Table 3). We included a less stringent threshold (i.e., \(5 \times 10^{ - 6}\)) in the simulation to test whether the increased number of true positives passing this lower threshold could outweigh the increased number of false positives, which might result in an increase in power for the FST enrichment test. Finally, we performed the FST enrichment test for each trait to see if there is inflation under the null hypothesis and whether there is a difference in statistical power between the two P-value thresholds under the alternative hypothesis. 
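To make the enrichment test described at the start of this section concrete, the following is a minimal Python sketch of the resampling procedure (20 × 20 MAF-by-LD-score bins, 10,000 sets of bin-matched control SNPs, and a two-tailed P-value computed against an approximately normal null). The input table, its column names, and the use of pandas here are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of the F_ST enrichment test (not the authors' implementation).
# snp_table is assumed to hold one row per SNP with columns 'id', 'fst',
# 'maf_bin' (20 bins of width 0.025) and 'ld_bin' (20 LD-score quantile bins).
import numpy as np
import pandas as pd
from scipy.stats import norm

def fst_enrichment_p(snp_table, assoc_ids, n_resamples=10_000, seed=1):
    rng = np.random.default_rng(seed)
    assoc = snp_table[snp_table["id"].isin(assoc_ids)]
    observed = assoc["fst"].mean()

    # Number of associated SNPs falling in each MAF x LD-score bin.
    bin_counts = assoc.groupby(["maf_bin", "ld_bin"]).size()
    # Control SNPs are drawn from the remaining SNPs, matched bin by bin.
    pool = snp_table[~snp_table["id"].isin(assoc_ids)].groupby(["maf_bin", "ld_bin"])

    null_means = np.empty(n_resamples)
    for i in range(n_resamples):
        draws = [
            pool.get_group(b).sample(n=k, random_state=int(rng.integers(1 << 31)))["fst"]
            for b, k in bin_counts.items()
        ]
        null_means[i] = pd.concat(draws).mean()

    # Two-tailed P-value, assuming the null distribution of the mean F_ST is ~normal.
    z = (observed - null_means.mean()) / null_means.std(ddof=1)
    return 2 * norm.sf(abs(z))
```

The same routine could in principle be reused for the LDCV enrichment test by substituting the LD coefficient of variation for the FST column.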
Note that, for consistency, we used the same SNP panel to simulate the causal variants and to sample control SNPs. Four sets of results were generated from the combinations of two P-value thresholds (\(5 \times 10^{ - 8}\) and \(5 \times 10^{ - 6}\)) and two SNP panels (1000G and HapMap2) (Supplementary Table 3). We further performed simulations under the null hypothesis in three additional scenarios with 1) larger proportion of lower-MAF causal variant (1000 random causal variants + 500 causal variants with MAF < 0.1; A and B in Supplementary Table 4), 2) a lower level of heritability (i.e., h2 = 0.2; C and D in Supplementary Table 4), and 3) two additional causal variants sampled from a 1-Mb flanking region of each primary causal variant (3000 causal variants in total). We also performed the FST enrichment analysis of the original simulation data (see above) and the real GWAS summary data in two additional scenarios, i.e., 1) matching the control SNPs with the trait-associated SNPs by MAF (computed from 1000G-EUR) only, and 2) matching the control SNPs by both MAF and LD score computed from 1000G-AFR. The inflation of test statistics under the null hypothesis and the statistical power under the alternative hypothesis were illustrated by quantile-quantile (QQ) plots. The analysis below uses a similar method introduced in Robinson et al. to quantify the population genetic differentiation of a complex trait4. The PRS of an individual is computed as \(\hat g = \mathop {\sum }\limits_l^m x_l\hat b_l\), where m is the number of SNPs used to create the PRS, \(x\) represents the SNP genotype (coded as 0, 1, or 2) and \(\hat b\) is the estimate of SNP effect from the GWAS summary data. To investigate the direction of genetic differentiation among populations, we fitted a linear model $$\hat g_i = \mu + v_j + e_i$$ where \(\hat g_i\) represents the standardized PRS calculated from the trait-associated SNPs (clumping \(P < 5 \times 10^{ - 6}\)) for each of the unrelated individuals \(i\) in either the 1000G or GERA samples; μ is the mean term; \(v_j\) is the deviation of the mean PRS of population j from μ; and \(e_i\) represents the residual. This method provides the estimate of the deviation (in s.d. units) of the mean PRS of a population from the overall mean. In data analysis, we also applied this method to estimate \(v_j\) for control SNPs (10,000 random sets for each trait in each population) to demonstrate the variability of \(\hat v_j\) under drift. LD variation among populations Similar to the FST, we created LDCV for each SNP to measure LD variation among the AFR, EAS and EUR populations. We first calculated LD score of each SNP as the sum of the LD r2 between the focal SNP and all the flanking SNPs (including the focal SNP itself) within a 10-Mb window. SNPs with LD r2 values < 0.01 were excluded from the calculation to avoid chance correlations between SNPs. Each SNP obtained three LD scores estimated in the unrelated individuals from each of the three populations. We computed LDCV of a SNP as the ratio of the s.d. of the three LD scores to the mean in GERA and 1000G, respectively. Forward simulation We simulated two independent 10-Mb segments using SLiM54 where new mutations occurred with 5% probability to be deleterious for fitness and 95% probability to be neutral on the first segment and 100% probability to be neutral on the second segment. The deleterious mutations were under negative selection with a selection coefficient of −0.01. The mutation rate was set to be 2.36 × 10−8 (ref. 
62). The population samples were generated based on a commonly used demographic model63, mimicking the "Out-of-Africa" event allowing migration between populations and population expansion. We started the simulation with 7310 individuals. After 58,000 generations as suggested in Gravel et al.63, we obtained 34,039 Europeans and 14,474 Africans along with ~260,000 variants segregating in Europeans and ~225,000 variants in Africans. We sampled 5000 individuals from each population, extracted common variants (MAF > 0.01) and calculated the \(F_{{\mathrm{ST}}}\) (or LDCV) values of the variants in common between the two populations. We conducted the simulation with 30 independent replicates. The average number of common variants across the 30 replicates was 81,055 in Africans and 43,715 in Europeans with 29,710 variants in common. We then compared the mean \(F_{{\mathrm{ST}}}\) (or LDCV) value of the neutral variants on segment #2 with that of the non-deleterious variants with matching MAF and LD on segment #1 (some of which were under background selection because of the LD with deleterious mutations). GCTA: http://cnsgenomics.com/software/gcta SLiM: https://messerlab.org/slim PLINK: https://www.cog-genomics.org/plink2 ARIC data: https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000090.v4.p1 GERA data: https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000674.v2.p2 GWAS summary data for Height, BMI, and WHRadjBMI: https://www.broadinstitute.org/collaboration/giant/index.php/GIANT_consortium_data_files HDL and LDL: http://csg.sph.umich.edu//abecasis/public/lipids2013/ EAY: http://www.thessgac.org/data AD: http://web.pasteur-lille.fr/en/recherche/u744/igap/igap_download.php CAD (coronary artery disease): http://www.cardiogramplusc4d.org/data-downloads/ SCZ: https://www.med.unc.edu/pgc/downloads T2D: http://diagram-consortium.org/downloads.html All the data used in this study were obtained from the public domain (see the URLs above). Stulp, G. & Barrett, L. Evolutionary perspectives on human height variation. Biol. Rev. 91, 206–234 (2016). Menotti, A. et al. Food intake patterns and 25-year mortality from coronary heart disease: cross-cultural correlations in the Seven Countries Study. Eur. J. Epidemiol. 15, 507–515 (1999). Tunstall-Pedoe, H. et al. Contribution of trends in survival and coronary-event rates to changes in coronary heart disease mortality: 10-year results from 37 WHO MONICA Project populations. Lancet 15, 507–515 (1999). Robinson, M. R. et al. Population genetic differentiation of height and body mass index across Europe. Nat. Genet. 47, 1357–1362 (2015). Deurenberg, P., Deurenberg-Yap, M. & Guricci, S. Asians are different from Caucasians and from each other in their body mass index/body fat per cent relationship. Obes. Rev. 3, 141–146 (2002). Deurenberg, P., Yap, M. & van Staveren, W. A. Body mass index and percent body fat: a meta analysis among different ethnic groups. Int. J. Obes. Relat. Metab. Disord. 22, 1164–1171 (1998). Chaturvedi, N. Ethnic differences in cardiovascular disease. Heart 89, 681–686 (2003). Barreiro, L. B., Laval, G., Quach, H., Patin, E. & Quintana-Murci, L. Natural selection has driven population differentiation in modern humans. Nat. Genet. 40, 340–345 (2008). Novembre, J. & Di, R. A. Spatial patterns of variation due to natural selection in humans. Nat. Rev. Genet. 10, 745–755 (2009). Fu, W. & Akey, J. M. Selection and adaptation in the human genome. Annu. Rev. Genom. Hum. Genet 14, 467–489 (2013). 
Tennessen, J. A. & Akey, J. M. Parallel adaptive divergence among geographically diverse human populations. PLoS Genet. 7, e1002127 (2011). Yang, J. et al. Ubiquitous polygenicity of human complex traits: genome-wide analysis of 49 traits in Koreans. PLoS Genet. 9, e1003355 (2013). Hancock, A. M., Alkorta-Aranburu, G., Witonsky, D. B. & Di Rienzo, A. Adaptations to new environments in humans: the role of subtle allele frequency shifts. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 365, 2459–2468 (2010). Pritchard, J. K., Pickrell, J. K. & Coop, G. The genetics of human adaptation: hard sweeps, soft sweeps, and polygenic adaptation. Curr. Biol. 20, R208–R215 (2010). Pritchard, J. K. & Di Rienzo, A. Adaptation–not by sweeps alone. Nat. Rev. Genet. 11, 665–667 (2010). Berg, J. J. & Coop, G. A population genetic signal of polygenic adaptation. PLoS Genet. 10, e1004412 (2014). Welter, D. et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Res. 42, D1001–D1006 (2014). Visscher, P. M. et al. 10 years of GWAS discovery: biology, function, and translation. Am. J. Hum. Genet. 101, 5–22 (2017). Turchin, M. C. et al. Evidence of widespread selection on standing variation in Europe at height-associated SNPs. Nat. Genet. 44, 1015–1019 (2012). Field, Y. et al. Detection of human adaptation during the past 2,000 years. Science 354, 760–764 (2016). Zoledziewska, M. et al. Height-reducing variants and selection for short stature in Sardinia. Nat. Genet. 47, 1352–1356 (2015). Sabeti, P. C. et al. Genome-wide detection and characterization of positive selection in human populations. Nature 449, 913–918 (2007). Sabeti, P. C. et al. Detecting recent positive selection in the human genome from haplotype structure. Nature 419, 832–837 (2002). Pennings, P. S. & Hermisson, J. Soft sweeps III: The signature of positive selection from recurrent mutation. PLoS Genet. 2, e186 (2006). Voight, B. F., Kudaravalli, S., Wen, X. & Pritchard, J. K. A map of recent positive selection in the human genome. PLoS Biol. 4, 0446–0458 (2006). Yang, J. et al. Genetic variance estimation with imputed variants finds negligible missing heritability for human height and body mass index. Nat. Genet. 47, 1114–1120 (2015). Bulik-Sullivan, B. K. et al. LD Score regression distinguishes confounding from polygenicity in genome-wide association studies. Nat. Genet. 47, 291–295 (2015). Lewontin, R. C. & Krakauer, J. Distribution of gene frequency as a test of the theory of the selective neutrality of polymorphisms. Genetics 74, 175–195 (1973). Purcell, S. et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am. J. Hum. Genet. 81, 559–575 (2007). Zhang, G., Muglia, L. J., Chakraborty, R., Akey, J. M. & Williams, S. M. Signatures of natural selection on genetic variants affecting complex human traits. Appl. Transl. Genom. 2, 77–93 (2013). Schwartz, R. C. & Blankenship, D. M. Racial disparities in psychotic disorder diagnosis: a review of empirical literature. World J. Psychiatry 4, 133–140 (2014). Bresnahan, M. et al. Race and risk of schizophrenia in a US birth cohort: another example of health disparity? Int. J. Epidemiol. 36, 751–758 (2007). Fearon, P. et al. Incidence of schizophrenia and other psychoses in ethnic minority groups: results from the MRC AESOP Study. Psychol. Med. 36, 1541–1550 (2006). Park, Y. W., Allison, D. B., Heymsfield, S. B. & Gallagher, D.
Larger amounts of visceral adipose tissue in Asian Americans. Obes. Res. 9, 381–387 (2001). Hill, J. O. et al. Racial differences in amounts of visceral adipose tissue in young adults: the CARDIA (Coronary Artery Risk Development in Young Adults) study. Am. J. Clin. Nutr. 69, 381–387 (1999). Hoffman, D. J., Wang, Z., Gallagher, D. & Heymsfield, S. B. Comparison of visceral adipose tissue mass in adult African Americans and whites. Obes. Res. 13, 66–74 (2005). Carlson, C. S. et al. Generalization and dilution of association results from European GWAS in populations of non-European ancestry: the PAGE study. PLoS Biol. 11, e1001661 (2013). Martin, A. R. et al. Human demographic history impacts genetic risk prediction across diverse populations. Am. J. Hum. Genet. 100, 635–649 (2017). Yang, J. et al. Common SNPs explain a large proportion of the heritability for human height. Nat. Gen. 42, 565–569 (2010). Lee, S. H. et al. Estimating the proportion of variation in susceptibility to schizophrenia captured by common SNPs. Nat. Genet. 44, 247–250 (2012). Johnson, T. & Barton, N. Theoretical models of selection and mutation on quantitative traits. Philos. Trans. R. Soc. B Biol. Sci. 360, 1411–1425 (2005). Goh, L. G. H., Dhaliwal, S. S., Welborn, T. a., Lee, A. H. & Della, P. R. Ethnicity and the association between anthropometric indices of obesity and cardiovascular risk in women: a cross-sectional study. BMJ Open 4, e004702 (2014). Lean, M. E. J. et al. Ethnic differences in anthropometric and lifestyle measures related to coronary heart disease risk between South Asian, Italian and general-population British women living in the west of Scotland. Int. J. Obes. 25, 1800–1805 (2001). Cheng, C.-Y. et al. Admixture mapping of obesity-related traits in African Americans: the Atherosclerosis Risk in Communities (ARIC) study. Obesity 18, 563–572 (2010). Gusev, A. et al. Partitioning heritability of regulatory and cell-type-specific variants across 11 common diseases. Am. J. Hum. Genet. 95, 535–552 (2014). Marth, G. T., Czabarka, E., Murvai, J. & Sherry, S. T. The allele frequency spectrum in genome-wide human variation data reveals signals of differential demographic history in three large world populations. Genetics 166, 351–372 (2004). Meyer, D. & Thomson, G. How selection shapes variation of the human major histocompatibility complex: a review. Ann. Hum. Genet. 65, 1–26 (2001). Stefansson, H. et al. A common inversion under selection in Europeans. Nat. Genet. 37, 129–137 (2005). Martínez-Fundichely, A. et al. InvFEST, a database integrating information of polymorphic inversions in the human genome. Nucleic Acids Res. 42, D1027–D1032 (2014). Taylor, T. D. et al. Human chromosome 11 DNA sequence and analysis including novel gene identification. Nature 440, 497–500 (2006). Clark, A. G. et al. Inferring nonneutral evolution from human-chimp-mouse orthologous gene trios. Science 302, 1960–1963 (2003). Marigorta, U. M. & Navarro, A. High trans-ethnic replicability of GWAS results implies common causal variants. PLoS Genet. 9, e1003566 (2013). Charlesworth, B., Nordborg, M. & Charlesworth, D. The effects of local selection, balanced polymorphism and background selection on equilibrium patterns of genetic diversity in subdivided populations. Genet. Res. 70, 155–174 (1997). Messer, P. W. SLiM: simulating evolution with selection and linkage. Genetics 194, 1037–1039 (2013). Racimo, F., Berg, J. J. & Pickrell, J. K. Detecting polygenic adaptation in admixture graphs. Genetics 208, 1565–1584 (2018). Zeng, J. 
et al. Signatures of negative selection in the genetic architecture of human complex traits. Nat. Genet. https://doi.org/10.1038/s41588-018-0101-4 (2018). Schoech, A. et al. Quantification of frequency-dependent genetic architectures and action of negative selection in 25 UK Biobank traits. Preprint at https://www.biorxiv.org/content/early/2017/09/13/188086 (2017). Yang, J., Lee, S. H., Goddard, M. E. & Visscher, P. M. GCTA: A tool for genome-wide complex trait analysis. Am. J. Hum. Genet. 88, 76–82 (2011). Howie, B. N., Donnelly, P. & Marchini, J. A flexible and accurate genotype imputation method for the next generation of genome-wide association studies. PLoS Genet. 5, e1000529 (2009). Patterson, N., Price, A. L. & Reich, D. Population structure and eigenanalysis. PLoS Genet. 2, e190 (2006). Zhu, Z. et al. Dominance genetic variation contributes little to the missing heritability for human complex traits. Am. J. Hum. Genet. 96, 377–385 (2015). Palamara, P. F. et al. Leveraging distant relatedness to quantify human mutation and gene-conversion rates. Am. J. Hum. Genet. 97, 775–789 (2015). Gravel, S. et al. Demographic history and rare allele sharing among human populations. Proc. Natl Acad. Sci. 108, 11983–11988 (2011). This research was supported by the Australian National Health and Medical Research Council (1107258, 1078037, 1103418, and 1113400), the Australian Research Council (DP160101343), the US National Institutes of Health (MH100141 and MH077139), and the Sylvia and Charles Viertel Charitable Foundation (Senior Medical Research Fellowship). This study uses data from dbGaP (accessions: phs000090 and phs000674). A full list of acknowledgments to these data sets can be found in the Supplementary Note. Institute for Molecular Bioscience, The University of Queensland, Brisbane, QLD, 4072, Australia Jing Guo, Yang Wu, Zhihong Zhu, Zhili Zheng, Maciej Trzaskowski, Jian Zeng, Matthew R. Robinson, Peter M. Visscher & Jian Yang The Eye Hospital, School of Ophthalmology and Optometry, Wenzhou Medical University, 325027, Zhejiang, China Zhili Zheng Department of Computational Biology, University of Lausanne, 1011, Lausanne, Switzerland Matthew R. Robinson Queensland Brain Institute, The University of Queensland, Brisbane, QLD, 4072, Australia Peter M. Visscher & Jian Yang Jing Guo Yang Wu Zhihong Zhu Maciej Trzaskowski Jian Zeng Peter M. Visscher Jian Yang J.Y. conceived and designed the study. J.G. performed simulations and statistical analyses under the assistance and guidance from Y.W., Z.Z., Z.L.Z., M.T., J.Z., M.R.R., P.M.V., and J.Y. J.G. and J.Y. wrote the manuscript with the participation of all authors. All authors reviewed and approved the final manuscript. Correspondence to Jian Yang. The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Guo, J., Wu, Y., Zhu, Z. et al. Global genetic differentiation of complex traits shaped by natural selection in humans. Nat Commun 9, 1865 (2018). https://doi.org/10.1038/s41467-018-04191-y
School health services and its practice among public and private primary schools in Western Nigeria. Olugbenga Temitope Kuponiyi1, Olorunfemi Emmanuel Amoran1 & Opeyemi Temitola Kuponiyi2. BMC Research Notes volume 9, Article number: 203 (2016).

Globally the number of children reaching school age is estimated to be 1.2 billion children (18 % of the world's population) and rising. This study was therefore designed to determine the school health services available and their practice in primary schools in Ogun state, Western Nigeria.

The study was a comparative cross-sectional survey of private and public primary schools in Ogun state using a multi-stage sampling technique. Participants were interviewed using a structured, interviewer administered questionnaire and a checklist. Data collected was analyzed using the SPSS version 15.0.

A total of 360 head teachers served as respondents for the study with the overall mean age of 45.7 ± 9.9 years. More than three quarters of the respondents in both groups could not correctly define the school health programme. There were no health personnel or a trained first aider in 86 (47.8 %) public and 110 (61.1 %) private schools, but a nurse/midwife was present in 57 (31.7 %) and 27 (15.0 %) public and private schools respectively (χ2 = 17.122, P = 0.002). In about 95 % of the schools, the teacher carried out routine inspection of the pupils while periodic medical examination for staff and pupils was carried out in only 13 (7.2 %) public and 31 (17.2 %) private schools (χ2 = 8.398, P = 0.004). A sick bay/clinic was present in 26 (14.4 %) and 67 (37.2 %) public and private schools respectively (χ2 = 24.371, P = 0.001). The practice of school health programme was dependent on the age (χ2 = 12.53, P = 0.006) and the ethnicity of the respondents (χ2 = 6.330, P = 0.042). Using multivariate analysis, only one variable (type of school) was found to be a predictor of the school health programme (OR 4.55, CI 1.918–10.79).

The study concludes that the practice of the various components of school health services was poor but better in private primary schools in Nigeria. Routine inspection by teachers was the commonest form of health appraisal. This may suggest that more health personnel need to be employed to cater for the health of the school children in Nigeria and other similar developing countries.

School health services refer to the health care delivery system that is operational within a school or college. These services aim at promoting and maintaining the health of school children so as to give them a good start in life. In addition, these services seek to enable children to benefit optimally from their school learning experience [1, 2]. Globally the number of children reaching school age is estimated to be 1.2 billion children (18 % of the world's population) and rising [3]. In many homes across the world, children start to attend school from as early as 5–6 months because mothers have to wean early to return to their work place [3]. The purpose of the school health services is to help children at school to achieve the maximum health possible for them to obtain full benefit from their education. School health services deal with health appraisals, control of communicable diseases, record keeping and supervision of the health of school children and personnel [3, 4]. It is the aspect that concerns itself with evaluating the health of an individual objectively.
Health appraisals afford the school authorities the opportunity to detect signs and symptoms of common diseases as well as signs of emotional disturbances that could impede the learning activities of children [4]. School health services are both preventive and curative services and help in providing information to parents and school personnel on the health status of school children [5]. They also provide advisory and counselling services for the school community and parents. They include pre-entry medical screening, routine health screening/examination, school health records, sick bay, first aid and referral services. Other services rendered include health observation (which involves physical inspection of the physiology and behaviours of children), health examinations (screening tests and medical diagnosis) and health records (keeping of records of the health histories of children) [4, 5].

A national study of the school health system in Nigeria by the Federal Ministries of Health and Education revealed that only 14 % of head teachers indicated that pre-enrolment medical examination was mandatory in their schools and that 30 % of the students had a low body mass index (BMI). It further indicated that the common health conditions that contribute to absenteeism include fever (56 %), headache (43 %), stomach ache (29 %), cough/catarrh (38 %) and malaria (40 %) [4, 5]. There is a dearth of school health clinics in Nigeria and, where they exist, the services are not comprehensive enough or not organized to meet the needs of the pupils [5]. Studies have shown that primary school children in Nigeria were not provided with basic health examination services and pre-entrance medical examinations, thus baseline health information about them was absent. There is also a lack of routine medical examination which would have picked up deviations from normal; this makes early referral impossible and leaves children vulnerable to preventable diseases [6, 7].

School health has been described as the neglected component of Primary Health Care in Africa [8, 9]. Since almost every small community has a primary school, in those communities without health centres, it should be possible to use the primary school as a centre for primary health care delivery not just for the pupils but also for the community [10]. A well organized and properly executed school health programme can be used to create a safe environment for school children [9]. The school health programme can become one of the strategies for promoting primary health care services [11]. All efforts at addressing the school health programme in Nigeria have remained largely at policy level, with minimal implementation. Where implementation has been attempted the emphasis has been on outside rather than within the schools [12–14].

This study was therefore designed to determine the school health services available and their practice in primary schools in Ogun state, Nigeria. This has implications for the primary health care of the school children and reduction in incidence of preventable diseases early in life.

The study was carried out in Ogun state, South West Nigeria. Ogun state was created on February 3rd 1976 out of the defunct Western Nigeria. The state is named after Ogun River which runs right across it from North to South. Ogun state is situated on latitude 7.00° N and longitude 3.35° E in the Greenwich Meridian. It covers a total land area of 16,409.26 km2 within the South West region of the country.
It is bounded in the north by Oyo and Osun states, in the east by Ondo state, in the west by the Republic of Benin which makes it an access route to the expansive market of the Economic Community of West African States (ECOWAS) and in the south by Lagos state and the Atlantic Ocean. The state Capital Abeokuta, lies about 100 km north of Lagos state, Nigeria's business Capital [15]. The projected population of the state as at 2012 is 5.1 million. The people of the state belong to the Yoruba ethnic group of South–West Nigeria. The main ethnic groups of the state are Egbas, Ijebus, Remos, Yewas, Eguns and Aworis. Major occupations in the state are farming trading, artisan and white collar jobs. The three major religion of the people are Christianity, Islam and traditional religion. A greater proportion of the state lies in the tropical rain forest zone [15]. The state has twenty (20) Local Government Areas (LGA). Each LGA is headed by an Executive Chairman. It has three (3) Senatorial Districts and is divided into four (4) geo-political zones. The study population consisted of all the head teachers in public and private primary schools in Ogun state and their schools. The Ogun State Universal Basic Education Board (SUBEB) is in charge of primary school education and activities within the state under the Ministry of Education. The state operate a 6-3-3-4 system of education which means 6 years in primary school, 3 years in junior secondary schools, 3 years in senior secondary school and 4 years in the University. There are One thousand, four hundred and forty nine (1449) registered public primary schools and one thousand, six hundred and ninety four (1694) registered private primary schools within the state making a total of 3143 primary schools [6]. The schools have an Administrative Head known as the head teacher and he/she supervises all school activities and the activities of the teaching and non-teaching staff. The head teacher and other staff within the public schools are employed by the State's Ministry of Education while the private school Heads and Staff are employed by a Proprietor/Proprietress who may also function as the head teacher. All the public schools run the six (6) year programme but some private schools run a five (5) year programme. The private schools usually have an attached Crèche and Nursery Units. The Zonal Education Office (ZEO) is responsible for compliance and adherence to the Educational standards as specified by the Ministry of Education for all public and private schools within each Zone. In each Local Government, the Local Government Education Authority (LGEA) is directly responsible for the supervision and human resource management of public primary schools. The three (3) Local Government Areas where the study was carried out are Sagamu, Abeokuta South and Ado-Odo/Ota [6]. The study design was a comparative cross sectional study that assessed the school health services in public and private primary schools in Ogun state. All fully registered public and private primary schools in the selected LGAs were included in the sampling frame while all unregistered schools were excluded. A prevalence of 40.4 % of private schools compared to 31.0 % of public schools [16] was used to estimate the sample size using the formula for comparative study proportions between two groups. 
$$ N = \frac{{Z_{\alpha } \sqrt {P_{1} (1 - P_{1} )} + Z_{\beta } \sqrt {P_{2} (1 - P_{2} )} }}{{(P_{1} - P_{2} )^{2} }} $$ Thus, a minimum sample size of 153 head teachers is required per group, however a total of 360 participants were recruited into the study. Sampling technique A multi-stage sampling technique was employed. A simple random sampling method was used to select three local government areas, one from each of the three senatorial district's sampling frame which consist of nine from Ogun East, six from Ogun Central and five from Ogun West senatorial districts respectively. A simple random sampling method was used to select 60 public and 60 private primary schools from the sampling frame of all the public and private schools in each of the three LGA selected making a total of 360 schools. The head teachers of each schools were recruited into the study. Data collection instrument A self-administered semi-structured questionnaire with open and closed ended questions for the head teachers was designed for the study. It was adapted from that used by Ofovwe and Ofilli [16] in a similar study in 2004. The questionnaire consists of: Section A: Socio-economic and demographic characteristics such as age, sex, marital status, highest educational qualification and length of time as a head teacher. This section gave insight into the respondents' socio-economic and demographic background. Section B: This section contained questions that assessed the head teachers' knowledge of school health programme. Section C: This section assessed some of practices of school health programme by the head teachers in their various schools. The section served to augment the main Instrument that was used to assess practice of school health within the schools which was the observational checklist. The observational checklist was adapted from the school health programme evaluation scale by the Federal Ministry of Education's sanitary inspection form [17]. The checklist covered all the domains of the school health programme. It was the main Instrument used to evaluate the practice of school health programme. Data collection technique The instruments for data collection: a self-administered semi-structured questionnaire for the head teachers and an observational checklist for the schools were pre-tested in ten (10) public and ten (10) private primary schools in Ibadan North East Local Government and modified as appropriate. Twenty (20) research assistants were recruited and trained in the correct use of the questionnaire (Additional file 1) and the checklist (Additional file 2). Identification tags with pictures were issued to the Research Assistants to facilitate school entry. School entry was made by approaching the head teacher. Some of the schools randomly recruited into the study were located in 'hard to reach' areas with difficult terrains that required very long treks on foot and crossing of rivers in canoes. Once the head teacher give his consent by signing the informed consent form, he/she was given a copy of the questionnaire to fill in the presence of a research assistant who explained grey areas when necessary. Observational checklist was also used to assess practice of school health programme usually in the company of a teacher nominated by the head teacher. Data were collected over a three (3) month period. Analysis of results Quantitative data collected was checked for errors, cleaned, entered and analyzed using the SPSS version 15.0. 
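Returning to the sample-size formula quoted earlier in this section, it can be checked numerically. The short sketch below plugs in the two prevalence figures (40.4 % and 31.0 %) together with conventional critical values of Zα = 1.96 (5 % two-sided significance) and Zβ = 0.84 (80 % power); these Z values are assumptions, as the paper does not state them. Evaluating the expression exactly as printed reproduces the reported minimum of 153 head teachers per group.

```python
# Sketch of the sample-size calculation using the formula exactly as printed above.
# Z_alpha = 1.96 and Z_beta = 0.84 are assumed conventional values (5% two-sided
# significance, 80% power); the paper itself does not list them.
from math import sqrt, ceil

def sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    numerator = z_alpha * sqrt(p1 * (1 - p1)) + z_beta * sqrt(p2 * (1 - p2))
    return ceil(numerator / (p1 - p2) ** 2)

print(sample_size(0.404, 0.31))  # -> 153, matching the reported minimum per group
```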
Data was summarized with proportions and means and presented using frequency tables. The data analysis focused on univariate frequency table and bivariate cross tabulations that identify important relationships between variables. Respondents were categorized into good and poor knowledge status by identifying the correct answer as indicated in the National school health evaluation scale [17]. Practice of school health programmes was described as indicated in the evaluation scale form by the Federal Ministry of Education. Inferential statistics to test for associations between variables was done using the Chi square test, t test was used to compare the difference between the mean. Logistic regression was then used to estimate predictors of willingness to practice school health programme. Variables that were found to be significant at 0.05 for factors affecting Implementation of school health programme were fed into the Logistic regression model in order to assess the effect of confounding factors. The level of statistical significance was set at 5 %. Ethical approval Ethical approval to conduct the study was obtained from the Ethical Committee of the Olabisi Onabanjo University Teaching Hospital, Sagamu. Official permission was obtained from the office of the permanent secretary, state Ministry of Education and the three (3) Local Government Authorities where the schools were sited. Furthermore, the zonal education officers of Sagamu, Abeokuta South and Ado-Odo/Ota local government areas were also informed. Written informed consent was obtained from all the participants after study objectives were explained to them. They were assured that participation was voluntary and they would incur no loss if they decided not to participate. Study participants were assured of strict confidentiality and this was indicated on the questionnaire. Data collected was only used for research purposes and was kept confidential on a password protected computer. Research assistants were also trained not to disclose the information divulged by the respondents during the interview. Anonymity was assured as names or any other personal identifying information was not required from subjects. Socio-dermographic characteristics All the schools surveyed provide school health services. The mean age of the head teachers in public schools was 53.0 ± 3.6 years while that for the private schools was 37.4 ± 8.0 years. Table 1 shows the socio-dermographic characteristics of the respondents. Table 1 Characteristics of respondents' socio-demographic variables Knowledge of the respondents about school health services More than three quarters of the head teachers in both groups could not provide a basic definition of the school health programme. Majority, 166 (92.2 %) of the public and 167 (92.8 %) of the private school head teacher gave a poor definition of SHP (χ2 = 2.043, P = 0.360). Furthermore, 164 (91.1 %) of the public and 167 (92.8 %) of the respondents were unable to correctly list the components of the SHP (χ2 = 3.327, P = 0.189). Few of the respondents, 40 (22.2 %) of the public and 50 (27.8 %) of the private school head teachers did not know if basic life support is an integral skill needed by the school's first aider, (χ2 = 1.398, P = 0.237). This is shown in Table 2 below. Table 2 Knowledge of respondents' about school health programme Services available in the schools There was no health personnel or a trained first aider in 86 (47.8 %) public schools and 110 (61.1 %) private schools. 
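To illustrate how the chi-square comparisons described under the analysis of results translate into the statistics quoted in this paper, the sketch below recomputes one of them: the presence of a sick bay/clinic in public versus private schools, using the counts reported in the Results that follow (26 of 180 public and 67 of 180 private schools). Without a continuity correction this gives χ2 ≈ 24.37, in line with the χ2 = 24.371 quoted in the text; the use of scipy and of no correction are assumptions for illustration only.

```python
# Illustrative re-computation of one chi-square comparison from the Results:
# sick bay/clinic present in 26/180 public vs 67/180 private schools.
from scipy.stats import chi2_contingency

table = [[26, 180 - 26],    # public schools: sick bay present / absent
         [67, 180 - 67]]    # private schools: sick bay present / absent

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.2g}, dof = {dof}")  # ~24.372, P < 0.001, dof = 1
```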
In addition, a nurse/midwife was present in only 57 (31.7 %) and 27 (15.0 %) public and private schools respectively (χ2 = 17.122, P = 0.002). Periodic medical examination for staff and pupils was carried out in only 13 (7.2 %) public and 31 (17.2 %) private schools. This was a statistically significant finding (χ2 = 8.398, P = 0.004). Essential drugs and materials were totally absent in 66 (36.7 %) of public and 40 (22.2 %) of private schools (χ2 = 9.039, P = 0.003). A sick bay/clinic was present in only 26 (14.4 %) and 67 (37.2 %) public and private schools respectively (χ2 = 24.371, P = 0.001). While an ambulance/school bus was present in 5 (2.8 %) of the public schools, 44 (24.4 %) of the private schools had an ambulance or a school bus (χ2 = 35.931, P = 0.001). First aid of any type was unavailable in 33 (18.3 %) public schools and 13 (7.2 %) private schools (χ2 = 9.970, P = 0.002). Wash hand basins and stands were present in 32 (17.8 %) and 54 (30.0 %) public and private schools respectively (χ2 = 7.394, P = 0.007). This is as shown in Table 3. Routine inspection by teachers was the commonest form of health appraisal done in this study. In about 95 % of the schools, the teacher carried out routine inspection of the pupils. Periodic medical examination was carried out by 17 % of private schools as against 7 % of public schools.

Table 3 Practice of school health services in public and private schools

Factors influencing implementation of school health services

The public school head teachers reported lack of infrastructure (51.7 %), lack of funds (42.8 %) and inadequate health personnel (31.1 %) as the three most important challenges that they face in running the SHP. On the other hand, the private school head teachers listed lack of funds (24.4 %), inadequate health personnel (20.6 %) and friction between parents and the school management (16.1 %) as the three major challenges faced while trying to implement the school health programme. The study revealed, as indicated in Table 4, that the practice of SHP was dependent on the age (χ2 = 12.53, P = 0.006) and the ethnicity of the respondents (χ2 = 6.330, P = 0.042). It was however not dependent on sex, marital status, religion, highest educational qualification and years of experience (P > 0.05). The practice score of the respondents in public and private schools when compared was dependent on the type of school (χ2 = 29.120, P = 0.001).

Table 4 Practice of school health programme and socio-demographic variables of respondents

Table 5 shows the multiple logistic regression model. Only one variable (type of school) was found to be a predictor of the school health programme (OR 4.551, CI 1.918–10.799).

Table 5 Predictors of practice of school health programme (multivariate analysis)

The importance of a good and functional SHP as a component of Primary Health Care in the overall development of children and the citizenry of a nation cannot be over emphasized. Various studies in the last 20 years or more in Nigeria have indicated a poor status of the school health programme [5, 17, 18]. Knowledge of the school health services was generally poor. The generally poor knowledge of school health services has been demonstrated in other previous studies [19–22]. School health services constitute one of the major components of the SHP and deal with the maintenance of the health of the school children. Effective school health services facilitate early detection and diagnosis with prompt intervention in order to prevent mortality and reduce morbidity.
This study showed that almost all of the schools studied did not have the services of a doctor, and only one out of every six of the schools in this study had someone trained in first aid. This dearth of health personnel has been reported repeatedly in various studies conducted in Nigeria [23, 24]. This shows that there has not been any improvement in the supply of health personnel to school health care in the last 10 years in various parts of Nigeria. The figures from this study and all the other studies above are in sharp contrast with a 1972 study in Ibadan, which reported that about two-thirds of the schools had a trained first aider [25]. This may imply a steady deterioration in the SHP within the last four decades, as noted by some authors [26, 27]. Every teacher should be trained to be able to administer first aid within the primary school system. However, as a minimum requirement, three persons trained in first aid should be available at all times in the schools [28]. This study shows that about a quarter of the schools had a sick bay/clinic, while fewer still had any form of school ambulance or bus to convey sick children to hospitals in case of an emergency. Several authors have reported similar findings in their studies [23–25]. The absence of sick bays and of a school ambulance or bus reflected the poor state of school health services in the schools, with the private schools just slightly better. Routine inspection by teachers was the commonest form of health appraisal done in this study. In about 95 % of the schools, the teacher carried out routine inspection of the pupils. This figure is close to those reported in other studies [19, 29]. Other authors, however, reported a general absence of health appraisal services [1, 2]. Screening tests for growth defects, handicaps and disabilities were available in only 7 % of the schools in this study. Several studies in Nigeria have reported similar findings [2, 5]. These low figures suggest that most handicaps and disabilities would be discovered much later, at a time when they might have become permanent and irreversible. It has been postulated that a teacher must never be in doubt about the seeing and hearing status of the pupils in his or her class [30]. Periodic medical examination was carried out in few of the respondents' schools. Higher figures have been reported in some previous studies [16, 23], while other studies showed poor medical examination practices in schools [19, 31]. Medical officers and other health workers should have schools placed under their watch, which they would oversee and in which they would help conduct routine medical examinations. Pre-entrance medical screening must become an admission requirement into all public and private schools in Ogun state, complemented by good record-keeping practices at the schools. On further analysis, the age and ethnicity of the head teacher and the type of school were strong determinants of the practice of school health services. School health services are four times more likely to be implemented in a private school than in a public school (OR 4.55, CI 1.92–10.80). Private schools have better access to funding because they are also run as profit-oriented businesses. Some of the structures that complement school health programme activities are available because private schools have to compete with other private schools for pupils. They therefore have a tendency to provide some of the services not because they have an understanding of the requirements of the SHP but as a business model to attract clientele.
Public schools on the other hand have to wait for the Government in order to have funds available for all activities. They are usually barred from fund raising activities and when they do the funds are very limited. The study findings are limited in terms of overall generalization and impact because there may be variation in the availability of resources and political will in the operation of school health services in various LGA in Nigeria and other low income countries. Furthermore, the limitations of a cross-sectional study to explore risk and protective factors are important limitations of this study. Despite these limitations, we believe that our data provide useful information for the assessment of the school health programme in Nigeria and identify factors associated with its practice in Nigeria and other low income countries. The study concludes that the practice of the various components of school health services was poor. The health care personnel available in these schools were inadequate but the situation was generally better in the private schools. School health services are four times more likely to be implemented in a private school when compared to the public school. Routine inspection by teachers was the commonest form of health appraisal. This may suggest that more health personnel need to be employed to cater for the health of the school children in Nigeria and other similar developing countries. Medical officers and other health workers should have schools placed under their watch which they would oversee and help conduct routine medical examination. Pre-entrance medical screening must become an admission requirement into all public and private schools in Nigeria and other countries with similar public health challenges. These inadequacies need to be addressed if health targets such as MDG goals needs to be achieved in Nigeria and other developing countries. Okafor JO. A functional approach to School Health Education. Awka: Meks Publishers; 1991. Schools and health. Impact of health on education. http://www.schoolsandhealth.org/pages/Anthropometricstatusgrowth.aspx. Ogbuji CN. School health services. In: Ezedum CE, editor. School health education. Nsukka: Topmost Press; 2003. p. 58–72. UNICEF. Launch of National School Health Policy and the National Education Sector HIV/AIDS Strategic Plan. 2007. http://www.unicef.org/nigeria/media_2216.htlm. Federal Ministry of Education, Nigeria. National School Health Policy. Abuja: Federal Ministry of Education; 2006. p. 1–32. Ola JA, Oyeledun B. School health in Nigeria: national strategies. WHO information series on school health (PDF); 1998. Ojugo AI. Status of health appraisal services for primary school children in Edo state Nigeria. Int Electron J Health Educ. 2005;8:146–52. Eke AN. School education: a neglected primary health component. Nig Sch Hlth J. 1988;7(1):105–9. Adegbenro CA. The effect of a school health programme on ensuring safe environments for primary school children. J R Soc Health. 2007;127(1):29–32. Akani NA, Nkanginieme KEO, Orumabo RS. The school health programme: a situational revisit. Niger J Paediatr. 2001;28(1):1–6. Nemir A. The school health programme. Philadelphia: WB Sanders Co.; 1975. p. 269–367. Mbarie IA, Ofovwe GE, Ibadin MO. Evaluation of the performance of primary schools in Oredo Local Government Area of Edo state in the school health programme. J Community Med Prim Health Care. 2010;22(2):22–32. Adeniyi JD. Effective teaching of health education in primary schools: the challenge in the 90s. 
Niger School Health J. 1993;8(1):26–34. Imoge AO. An evaluation of primary healthcare program in secondary schools in Oredo Local Government Area of Bendel state. Nig Sch Hlth J. 1987;7(1):99–104. Nigerian National Population Commission. National Population Census 2006. Ofovwe GE, Ofili AN. Knowledge, attitude and practice of school health programme among head teachers of primary schools in Egor Local Government Area of Edo state, Nigeria. Ann Afr Med. 2007;6(3):99–103. Federal Ministry of Education Nigeria. Implementation guidelines on national school health programme; 2006. Sofowora OA. Improving the standard and quality of primary education in Nigeria: a case study of Oyo and Osun states. Int J Cross-Disciplines Subjects in Education (IJCDSE). 2010;2:156. Ireti FA. Teacher effectiveness among female teachers in primary and secondary schools in Southwestern Nigeria. J Educ Leadersh Action. Lindenwood University; 2014. Toma BO, Tinuade O, Gabriel IO, Agaba E. School Health Services in Primary Schools in Jos, Nigeria. Open Science Journal of Clinical Medicine. 2014;2(3):83–8. Ejifugha AU. Awareness of school health services among primary school teachers in Enugu state. Nig Sch Hlth J. 1993;10(2):54–61. Maduagwu CO. A1 survey of school health services in old Njikoka LGA of Anambra state. Niger School Health J. 1995;7(4):51. Oyinlade OA, Ogunkunle OO, Olanrewaju DM. An evaluation of the school health services in Sagamu, Nigeria. Niger J Clin Pract. 2014;17:336–42. Akpabio II. Problems and challenges of school health nursing in Akwa Ibom and cross river states of Nigeria. Cont J Nurs Sci. 2010;2:17–28. Anderson CL, Creswell WH. School health practice. St. Louis: The CV Mosby Company; 1980. p. 1–185. Ochor JOS. Analysis of the primary health care activities in Bendel state primary schools. Niger School Health J. 1988;7:50–60. Fajewonyomi BA, Afolabi JS. The state of health services and needs of nursing school children A case study of nursing schools in Ile-Ife, Osun state, Nigeria. Niger School Health J. 1993;8:62–7. Folawiyo AFA. Primary health care system through school health education. Niger School Health J. 1988;7:69–74. Ezeonu CT, Akani NA. Evaluating school health appraisal scheme in primary schools within Abakaliki Metropolis, Ebonyi state, Nigeria. Ebonyi Med J. 2010;9:1597–11260. Nwachukwu CN. Mental health provisions in the national policy on education. Couns. 1996;14(1):82–8. Ilika AL, Obionu CO. Personal hygiene practice and school-based health education of children in Anambra state, Nigeria. Niger Postgrad Med J. 2002;9(2):79–82. KOT1 participated in the study design and conducted data collection. OEA conceived the study theme, participated in the study design, supervised data collection and prepared the final manuscript. KOT2 was involved in Data collection and analysis. All authors read and approved the final manuscript. We hereby acknowledge all the research assistant for their participation, encouragement and motivation during the design and conduct of the study. The conduct of this research was funded by the contribution of the authors. The supporting data are included as additional files. 
Department of Community Medicine and Primary Care, Olabisi Onabanjo University Teaching Hospital, Sagamu, Nigeria Olugbenga Temitope Kuponiyi & Olorunfemi Emmanuel Amoran Department of Paediatrics, College of Health Sciences, Olabisi Onabanjo University Teaching Hospital, Sagamu, Nigeria Opeyemi Temitola Kuponiyi Olugbenga Temitope Kuponiyi Olorunfemi Emmanuel Amoran Correspondence to Olorunfemi Emmanuel Amoran. Questionnaire. Kuponiyi, O.T., Amoran, O.E. & Kuponiyi, O.T. School health services and its practice among public and private primary schools in Western Nigeria. BMC Res Notes 9, 203 (2016). https://doi.org/10.1186/s13104-016-2006-6 Accepted: 23 March 2016
Prove that if 33 rooks are placed on a chessboard, at least five don't attack one another The question asks to prove that when 33 rooks are placed on an $8 \times 8$ chessboard that there are a total of 5 rooks that aren't attacking each other. What I know: 64 squares Rooks attack in straight lines at least 1 row must have more than 5 rooks at least 1 column must have more than 5 rooks I've set up an empty chessboard and randomly picked a row and place 5 and a column that contained five and found that no matter where you place them 1 X-shaped diagonal will have four rooks in it. So i counted all the holes that would have to have a rook placed in it to complete the diagonal of 5 and came up with 12. I just don't know how to use that. I understand that concept that it works via diagonals and that 32 is the max number you can place on a chessboard with only having 4 in each diagonal. The minute you place the 33 one you make atleast one diaganol have 5 in it. Making it so that 5 aren't attacking each other. But I don't know how to write that into the form of a proof. The proffesor said to use the pigeonhole principle, but I'm not sure what is the pigeon and what is the hole. discrete-mathematics pigeonhole-principle templatetypedef Fmonkey2001Fmonkey2001 $\begingroup$ This problem looked familiar.. "Problem Solving Strategies", 4. The Box Principle, Problems, 74: Thirty-three rooks are placed on an 8x8 chessboard. Prove that you can choose five of them which are not attacking each other. $\endgroup$ – heinrich5991 Aug 28 '13 at 22:11 $\begingroup$ The problem is badly stated here, though this has little effect on the intended solution. Leaving apart the subtlety that in chess pieces can only attack one another if they belong to opposite camps (pieces from the same camp can defend one another; so even with only 9 rooks there would be 5 from the same camp, therefore mutually non-attacking), the rules of chess do not allow rooks to jump over occupied squares, so with a board so full, there can easily be families of non-attacking rooks of which some share a rank or file. $\endgroup$ – Marc van Leeuwen Feb 20 at 12:40 Look at the extended diagonals, which I've numbered from $1$ through $8$ in the diagram below: $$\begin{array}{|c|c|c|c|c|c|c|c|} \hline 1&2&3&4&5&6&7&8\\ \hline 2&3&4&5&6&7&8&1\\ \hline 3&4&5&6&7&8&1&2\\ \hline 4&5&6&7&8&1&2&3\\ \hline 5&6&7&8&1&2&3&4\\ \hline 6&7&8&1&2&3&4&5\\ \hline 7&8&1&2&3&4&5&6\\ \hline 8&1&2&3&4&5&6&7\\ \hline \end{array}$$ There are eight of them, each comprising eight squares, so one of them must contain at least five of the rooks. Brian M. ScottBrian M. Scott $\begingroup$ Thanks a lot! I was trying this similar idea, but I was using the horizontal or vertical rows and trying to number them. After all the time I focused on the diagonals I don't know why I didn't to do that! Thanks! $\endgroup$ – Fmonkey2001 Aug 28 '13 at 19:44 $\begingroup$ @Jerry: You're welcome! $\endgroup$ – Brian M. Scott Aug 28 '13 at 19:45 $\begingroup$ What a beautiful argument! It brightened up this gloomy day for me. $\endgroup$ – Prism Aug 28 '13 at 22:15 $\begingroup$ @Prism: Thanks! I'm sorry to hear that it's been such a gloomy day, but glad to have relieved a bit of the gloom. $\endgroup$ – Brian M. Scott Aug 28 '13 at 22:17 I suppose the problem here is to prove the statement is true regardless of their arrangement. 
Your points about the rules are correct, but I would first correct you that 32 pieces could be placed on a chessboard with only four on any given rank or file (by placing all pieces on the white squares or all the black squares), so by placing the 33rd piece, one row and one column must have "five or more", not "more than five". To be pedantic, a piece that can capture another piece on its turn is not "attacking", it's "threatening". And there is one more rule of note, just to be painfully obvious: a rook cannot move "through" another piece to capture a piece beyond it. You seem to have overlooked this, because you seem to be hung up on keeping the rooks on diagonals. That leads to one possible proof; because one row must have five or more rooks no matter how they're placed, and one column must have five or more rooks regardless of placement, then there are three rooks in the row of five and three rooks in that column of five that are separated by at least one other rook from each other. In the worst case, two of these rooks are the same (the 33rd rook) and so, given a grid of rooks placed on only the black squares, by placing the 33rd rook on any row or column, you identify three in a row and two additional rooks in a column that can't touch each other. This actually seems an extremely low estimate, because if you place 32 rooks on all the black spaces, there is a diagonal of 8 that don't threaten each other. So, let's imagine the worst-case scenario: Any given rook threatens a maximum of only four other pieces; those being the four positioned nearest him on the same row or column. Those same 4 rooks are the only ones that threaten the original rook. The other 28 rooks are either on a different row and column, or they are separated from the rook in question by one or more other rooks. So, literally speaking, any one rook would be "neutral" to (neither threatening nor being threatened by) at least 28 other pieces, and as many as 30 (for a piece in a corner), and each of those would be a pair of rooks that don't attack each other. Of those 28, each of them can only attack four others at most, and so the number of pairs of non-threatening rooks is at least the number of combinations of 29 things taken 2 at a time, which is way more than five (406 in fact). So, given one starting rook, there are no more than 4 that can attack him and no fewer than 28 that cannot. Let's pick the worst-case from the remaining 28; a rook threatened by four completely different rooks than the ones threatening the first. There are four rooks that threaten this rook as I just said, so out of 33 rooks we have two that simply cannot attack each other because there's a rook on each side, and have eliminated 8 more as possibilities because they threaten one of these two rooks we've identified. There are 23 more rooks to pick from; let's continue our pattern and choose another rook threatened by four others, so we have three rooks that can't attack each other and 18 left. Two more repetitions of this and you have a group of five rooks that don't threaten each other (because they're each threatened by four unique rooks that are definitely not any other rook in the group), 20 more that threaten one and only one rook out of those five (and so can't be part of the group), and you still have 8 additional rooks that could neither threaten nor be threatened by any of the five we've already identified. 
That's the worst-case; basically five groups of five rooks each in cross formations, with eight other rooks placed arbitrarily, and we're choosing the five rooks that are threatened by the most other rooks, and so eliminate the most possibilities for other members of this group. We have more than enough additional rooks to repeat this cross motif one more time (and there are a few permutations of placements that allow six of these cross formations to fit on an 8x8 board), so in fact we can identify 6 pieces out of 30, worst-case, that can't threaten each other because each of them is already threatened by four other unique rooks. On top of that, the 31st piece, no matter where you place it, cannot threaten any of the six we have identified and vice-versa (although this piece is now threatened by at least one rook from the ones threatening the other 6), so in fact it is possible to identify a group of at least seven rooks of these 33, in any permutation of placements, that do not threaten any other rook in the group. The 32nd piece can trivially threaten the 31st; however, the last one placed, the 33rd, cannot form any formation in which the 31st, 32nd and 33rd pieces all threaten each other; two of them will be mutually non-threatening, and as we've established, cannot threaten the 6 cross centerpieces, and so there are, at least, 8 rooks in any formation of 33 that cannot threaten any other rook in that group. KeithS

Here's a thought: if you take a row with five or more rooks in it (call this row $r_1$), the first, third, and fifth rooks don't attack one another because there are other rooks in-between them. We know that such a row has to exist via the pigeonhole principle ($\lceil \frac{33}{8} \rceil = 5$). So now all you need to do is find two rooks that don't attack any of these three. If any of the columns that these rooks are in have four or more additional rooks in them, we can apply the same argument to the column to get two more rooks that don't attack any of the others, and we're done. So suppose that each of the columns containing one of the initial rooks has at most three other rooks in it. That means that of the 33 rooks we started with, there are 33 - 5 - 15 = 13 remaining rooks to place, and there are five columns where they could go. Pigeonholing again ($\lceil \frac{13}{5}\rceil = 3$) tells us that one of these columns must have at least three rooks in it, and the first and last rook in the column don't attack one another. So now we have our five rooks - the three from the row we started with, plus two in a column disjoint from any of the columns containing the initial rooks. templatetypedef

$\begingroup$ Nice try, but the argument is not correct. Certainly not all rooks that remain when you remove your initial row have to be in one of the five columns you singled out. The rest, I cannot even follow (where is the value $27$ used?) $\endgroup$ – Marc van Leeuwen Feb 20 at 12:44 $\begingroup$ @MarcvanLeeuwen Whoops, you're right - that original argument didn't work. I've corrected it with a more nuanced argument and I believe the math still checks out in this case. $\endgroup$ – templatetypedef Feb 20 at 20:57
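As a supplement to the answers above (not part of the original thread), the extended-diagonal argument is easy to check numerically. The sketch below labels each square by $(\text{row}+\text{column}) \bmod 8$, which matches Brian M. Scott's numbering up to a relabelling, and confirms on random placements of 33 rooks that the fullest diagonal always yields five rooks that pairwise share no row or column (attacks are idealised, i.e. blocking is ignored, as in the intended statement of the problem).

```python
import random
from itertools import combinations

def five_non_attacking(rooks):
    """Pick five mutually non-attacking rooks using the extended-diagonal
    argument: squares with equal (row + col) mod 8 never share a row or a
    column, and 33 rooks spread over 8 such diagonals force one diagonal
    to hold at least ceil(33/8) = 5 of them."""
    diagonals = {}
    for r, c in rooks:
        diagonals.setdefault((r + c) % 8, []).append((r, c))
    fullest = max(diagonals.values(), key=len)
    assert len(fullest) >= 5
    return fullest[:5]

def attack(a, b):
    # Idealised rook attack: same row or same column (blocking ignored).
    return a[0] == b[0] or a[1] == b[1]

squares = [(r, c) for r in range(8) for c in range(8)]
for _ in range(10_000):
    rooks = random.sample(squares, 33)
    chosen = five_non_attacking(rooks)
    assert all(not attack(a, b) for a, b in combinations(chosen, 2))
print("Found 5 pairwise non-attacking rooks in every random placement.")
```

Random testing is of course not a proof; the proof is the pigeonhole count itself: 33 rooks on 8 extended diagonals force at least one diagonal to contain at least $\lceil 33/8 \rceil = 5$ rooks, and distinct squares on a common extended diagonal never share a rank or file.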
Brain Informatics Methods for inferring neural circuit interactions and neuromodulation from local field potential and electroencephalogram measures Pablo Martínez-Cañada1,2, Shahryar Noei1,3 & Stefano Panzeri ORCID: orcid.org/0000-0003-1700-89094,1 Brain Informatics volume 8, Article number: 27 (2021) Cite this article Electrical recordings of neural mass activity, such as local field potentials (LFPs) and electroencephalograms (EEGs), have been instrumental in studying brain function. However, these aggregate signals lack cellular resolution and thus are not easy to be interpreted directly in terms of parameters of neural microcircuits. Developing tools for a reliable estimation of key neural parameters from these signals, such as the interaction between excitation and inhibition or the level of neuromodulation, is important for both neuroscientific and clinical applications. Over the years, we have developed tools based on neural network modeling and computational analysis of empirical data to estimate neural parameters from aggregate neural signals. This review article gives an overview of the main computational tools that we have developed and employed to invert LFPs and EEGs in terms of circuit-level neural phenomena, and outlines future challenges and directions for future research. Neural activity is often recorded at the level of aggregate electrical signals. These signals are recorded invasively in animals (for example, local field potentials, LFPs, and electrocorticograms, ECoGs [1, 2]) or non-invasively in humans (for example, electroencephalograms, EEG, and magnetoencephalograms, MEG [1, 3,4,5]). These different aggregate brain signals largely share the same neural sources and have major applications in both scientific research and clinical diagnosis. They are easy to record, capture many circuit-level aggregate phenomena, including key synaptic integrative signals at different organization levels from mesoscopic to macroscopic brain scales, and can reveal oscillatory activity over a wide range of frequencies [1, 2, 6,7,8,9]. However, neural aggregate signals are more difficult to interpret than spiking activity of individual neurons, because they conflate and add together contributions from many complex neural processes [1, 2, 6,7,8]. It is therefore notoriously difficult to link them to individual neural circuit features. For example, we still cannot interpret simple modulations of EEG/LFP features, such as a change in LFP or EEG oscillatory power, in terms of excitation, inhibition, and their interaction. This hinders us from understanding cognitive computations in humans and animals, understanding the neural underpinnings of brain disorders, and developing effective interventions. Being able to separate contributions of different neural phenomena to LFPs or EEGs, and to quantify how neural parameters change with manipulations of neural circuits or in brain disorders, will enhance our understanding of how best to use LFPs or EEGs to study brain function and dysfunction. Over the years, we have developed numerous computational tools to address this challenge. Our approach includes advanced methods to identify meaningful bands in the frequency domain in neural recordings, neural network models to predict key neural phenomena, and computationally guided perturbations of neural activity to causally validate model predictions. 
This paper summarizes progress achieved by our lab in the interpretation of aggregate electrical signals and introduces new directions and challenges for future research in this field. Since this is an extended review of our work presented as Plenary Talk at the 14th International Conference on Brain Informatics BI 2021 [10], here we have principally focused on describing the computational methods and the results coming from our own Laboratory. We would like, though, to remind the readers of the large number of very important contributions in this field made by many other authors, summarized in recent important reviews [1, 2, 5, 11, 12]. Cortical oscillations and their role in neural computation Much of our work has been aimed at understanding the neural mechanisms and functions for information processing of brain oscillations captured by LFPs and EEGs. We thus briefly describe some basic features of neural oscillatory activity that are relevant for our review. Aggregate electrical signals recorded in the cerebral cortex often display prominent oscillatory activity. A large bulk of evidence shows that oscillations seen in neural activity are not simply an epiphenomenon, but are a core mechanism in a variety of cognitive, sensory and information transmission functions [4, 13,14,15,16,17,18,19,20,21,22,23,24]. Synchronization of neuronal oscillations at different frequencies is a pervasive feature of neuronal activity and is thought to facilitate the transmission and integration of information in the cerebral cortex. Neural aggregate signals have been thus decomposed and interpreted in the frequency domain [1, 6, 8]. Traditionally, neural oscillations have been divided into canonical frequency bands such as the widely used delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (15–30 Hz) and gamma (30–100 Hz) bands. Associations robustly found between band-limited power signals and distinct behavioral states or sensory inputs strongly support the validity of this approach [6, 23, 25,26,27]. Gamma-band oscillations have received much attention in the last few decades [13,14,15, 20, 21, 24, 28]. There is a general acceptance that gamma oscillations reflect the interaction between excitation and inhibition in local cortical circuits [20, 21, 29,30,31]. The power of gamma oscillation encodes information about sensory stimuli, motor and cognitive variables [4, 23, 24, 32,33,34,35,36,37,38,39,40]. It has been shown that gamma oscillations are also implicated in facilitating or modulating inter-areal or within-area communication [4, 18, 21, 41,42,43,44,45,46]. Moreover, and of particular importance for the interpretation of neuroimaging experiments in humans, the gamma band is the frequency band that correlates the most with the functional magnetic resonance imaging (fMRI) signal [47, 48]. The slower theta, alpha and beta rhythms have been involved in many cognitive functions. These slower oscillations have been proposed to mediate top-down perceptual decision processes, encoded in long-range cortical inputs, which could also interact with gamma-band synchronization [4, 20, 24, 49]. Thus, several cortical rhythms coexist in the cerebral cortex, which are often nested into each other and cooperate to shape brain functions and neuronal information processing [20]. Analytical methods to identify regions of the frequency spectrum capturing different neural phenomena of interest Numerous studies have characterized the role of the different frequency bands in brain function. 
However, the individuation and definitions of the exact boundaries of individual frequency bands are often largely arbitrary, based on heuristic criteria and vary substantially between studies [2]. Thus, a first major problem when trying to infer neural mechanisms from aggregate signals is to provide an objective approach to separate aggregate neural signals into different bands each reflecting a different neural phenomenon, and to establish a correspondence between specific frequency regions of the LFP or EEG power spectrum and the underlying neural mechanisms. One difficulty in this endeavor is that the average neural power spectrum (over either time epochs or trials) of a typical recording (see Fig. 1A for an example of LFP recordings in visual cortex during naturalistic stimulation) is dominated by a power-law aperiodic component and often lacks easily identifiable oscillatory peaks [23, 50]. This could lead us to think that there is no distinctive structure in the power spectrum and, thus, there is no possibility for a clear and objective separation in frequency bands. However, the average spectrum may mask individual variations that correspond to different processing modalities or functions, especially for complex tasks or during stimulation with naturalistic sensory stimuli. Comparison of power and information spectra. Data were taken from primary visual cortex of anaesthetized macaques during stimulation with naturalistic movies. A Power spectrum. B Information conveyed by power spectrum. Recomputed from data first published in [23, 48] To capture how individual Fourier frequencies vary their power over time in relation to stimulus variations, we developed an information theoretic algorithm (illustrated in Fig. 2) that quantifies the amount of information about each possible stimulus that is carried by the LFP power at a given frequency [40]. The theoretical foundations of information theory (see [51, 52]) demonstrate that mutual information is the best measure to capture all possible ways in which a neural signal can carry information about any sensory variable of interest. To create the stimulus set, we divided the presentation time of the movie into different time windows (Fig. 2A), each considered a different stimulus \(s\) (in other words, a different movie scene). We computed the information between the stimulus window in the movie \(s\) that was being presented and the power of the LFP at a given frequency \(f\), as follows: $$I\left( {S;R_{f} } \right) = \mathop \sum \limits_{s} P\left( s \right)\mathop \sum \limits_{{r_{f} }} P\left( {r_{f} |s} \right)\log_{2} \frac{{P\left( {r_{f} |s} \right)}}{{P\left( {r_{f} } \right)}},$$ where \(P\left( s \right)\) is the probability of presentation of the stimulus window \(s\) (here, this is the inverse of the total number of time windows in which we divided the movie sequence), \(P\left( {r_{f} |s} \right)\) is the probability of observing a power \(r_{f}\) at a frequency \(f\) in response to the stimulus \(s\) in a single trial (Fig. 2C and D), and \(P\left( {r_{f} } \right)\) is the probability of observing the power \(r_{f}\) across all trials in response to any stimulus (Fig. 2B). Illustration of computation of the mutual information carried by LFP power about movie scenes. A Simulation of single-trial LFP power in the gamma band (from 70 to 80 Hz) using a sparsely connected recurrent network of excitatory and inhibitory neurons [40]. 
To simulate periods of low and high LFP power, which approximate the different movie scenes used in the original publication [40], we modulated the external input rate of the model by superposition of a sine wave with frequency 1 Hz and a constant rate signal. The spectrogram was computed over half a cycle of the sinusoid. Every time window of the spectrogram was considered a different scene \(s\) (\(s_{1}\) and \(s_{2}\) are a period of low and high LFP gamma power, respectively). Thus, the probability of each scene \(P\left( s \right)\) is the inverse of the number of time windows. B Probability distribution \(P\left( r \right)\) of the LFP gamma power across all trials and scenes. Probability distribution \(P\left( {r|s} \right)\) of the LFP gamma power across all trials given the presented scenes \(s_{1}\) (C) and \(s_{2}\) (D) To facilitate the sampling of response probabilities, the space of power values at each frequency was binned [23]. Information is non-negative and quantifies the average reduction of the uncertainty about the stimulus that can be gained from observing a single-trial neural response. We measured it in units of bits, one bit corresponding to a reduction of uncertainty by a factor of two. Importantly, the quantification of information based on the division of the movie into stimulus windows or scenes without defining which visual feature (e.g., contrast, orientation, etc.) is represented in each movie frame allowed us to capture all information about any possible visual features (including both static image features and their variation from frame to frame) present in the movie. The information spectrum computed for V1 data during naturalistic stimulation showed a clear structure that was invisible in the average spectrum: there were only two bands that carried stimulus information, a low-frequency (1–8 Hz) and a high-frequency gamma band (60–100 Hz), whereas middle frequencies carried little information (Fig. 1B). It is important to note that the gamma band has been traditionally implicated in the coding of information about specific visual features, such as orientation or contrast of a visual input [35, 37]. The low-frequency band, to our knowledge, was not implicated in the coding of visual information in V1 by any previous study. This discovery of an extra information channel not considered before was in our view enabled by two key features. First, from an experimental point of view, it was crucial to use a complex dynamics visual stimulus (a movie) that included not only a rich variety of image features from one frame to the next, but also a rich variety of naturalistic temporal dynamics of those features. Second, from a computational point of view, we used a formalism that accounted for all possible sources of information, and, in this way, allowed us to identify the sources of coding of information and neural pathways not considered before. This gives us an example of the potential of using an information theoretic analysis for discovering channels that carry different kinds of neural information and thus need to be included in models as partly different neural pathways. Extending the information theory approach to the multivariate case of information carried by pairs of frequencies (see [53]) allowed us to characterize specific regions of the information spectrum as belonging to only one or multiple bands. 
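To make the estimator concrete, the sketch below computes a plug-in estimate of \(I\left( {S;R_{f} } \right)\) from single-trial power values and their scene labels, using the binning of power values mentioned above (here into equipopulated bins). The number of bins and the toy data are illustrative choices made for this example; the published analyses [23, 40] additionally used bias-correction procedures for limited sampling that are not shown here.

```python
import numpy as np

def mutual_information(power, scene, n_bins=6):
    """Plug-in estimate (in bits) of the information that single-trial power
    values at one frequency carry about the scene presented in each trial.
    Power is discretized into equipopulated bins."""
    power, scene = np.asarray(power), np.asarray(scene)
    edges = np.quantile(power, np.linspace(0, 1, n_bins + 1))
    r = np.digitize(power, edges[1:-1])       # bin index of each trial, 0..n_bins-1
    p_r = np.array([np.mean(r == b) for b in range(n_bins)])
    info = 0.0
    for s in np.unique(scene):
        p_s = np.mean(scene == s)
        r_s = r[scene == s]
        p_r_s = np.array([np.mean(r_s == b) for b in range(n_bins)])
        nz = p_r_s > 0
        info += p_s * np.sum(p_r_s[nz] * np.log2(p_r_s[nz] / p_r[nz]))
    return info

# Toy example: 10 scenes, 30 trials each; power loosely modulated by the scene.
rng = np.random.default_rng(0)
scene = np.repeat(np.arange(10), 30)
power = rng.gamma(shape=2.0 + scene, scale=1.0)
print(f"I(S; R_f) = {mutual_information(power, scene):.2f} bits")
```

Applying such an estimator independently at every frequency of the spectrogram yields an information spectrum of the kind shown in Fig. 1B.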
This partition into functionally meaningful bands can be achieved very precisely (and even to the point of individuating the optimal frequency values determining the boundaries between different bands) by quantifying patterns of redundancy or independence between the information carried by different frequencies [48]. For example, if the information carried by one frequency is independent of amplitude variations in another frequency, then these two frequencies probably capture different neural contributions to the LFP. If the two frequencies carry redundant information instead, they likely originate from common neural phenomena. Application of this approach to visual cortical data has revealed three different functional bands in the information spectrum [23]. Frequencies in the gamma (60–100 Hz) range exhibited high visual information and had large redundancy among them, indicating that neural responses at these frequencies have a common component that is stimulus-driven. The same applies to low frequencies (1–8 Hz), where there was high redundancy between frequencies. Importantly, low and high-frequency frequencies carried independent information, indicating that they act as independent visual information channels and probably originate from separate neural processes. Finally, frequencies between 15 and 38 Hz exhibited high correlations between them but not with stimulus information. Based only on these results of the information theoretic analysis, we hypothesized that signals in this middle frequency range are generated by a common process unrelated to the visual stimuli—for example, a neuromodulatory input [23]. We will discuss in the Section "Perturbation experiments guided by predictions of computational models to study the effect of neuromodulation on cortical oscillations" how this hypothesis could be tested causally by pharmacological intervention. Importantly, the principles of information theory can be used to understand not only how information is encoded in the oscillatory power or phase of each frequency band, but also how activity in different bands is involved in transmission of information across different neural populations. We used the same recordings of LFP activity in macaque V1 during natural movie stimulation discussed above. Our information theoretic methods (in particular, directed measures of information transfer such as transfer entropy) allowed us to investigate how oscillations of cortical activity in the gamma frequency band may influence dynamically the direction and strength of information flow across different groups of neurons. We found that the local phase of gamma-band rhythmic activity exerted a stimulus-modulated and spatially asymmetric directed effect on the firing rate of spatially separated populations within the primary visual cortex [45]. The relationships between gamma phases at different sites could be described as a stimulus-modulated gamma-band wave propagating along the spatial directions with the maximal flow of information transmitted between neural populations. We observed that gamma waves changed direction during presentation of different movie scenes, and when this occurred, the strength of information flow in the direction of the gamma wave propagation was transiently reinforced. Given that travelling gamma waves indicated the direction of causation in neural activity, we hypothesized that these shifts were associated to a propagation of gamma oscillations along the horizontal connections of V1. 
Interestingly, we found support for this hypothesis from the fact that the properties of gamma waves were compatible with known physiological and anatomical properties of lateral connectivity. First, travelling gamma waves had an average propagation speed (approximately 364 cm/s) that was similar in magnitude to the signal propagation speed along axons of excitatory horizontal connections reported in the literature [54,55,56,57]. Second, information transfer mediated by gamma waves was quantitatively stronger among pairs with similar orientation preference, compatible with the finding that horizontal connections are more likely among populations with similar orientation preferences [58,59,60]. These effects were specific to the gamma band and were not found in other low-frequency bands [45]. These results suggest that traveling gamma waves mark and causally mediate the dynamic reconfiguration of functional connections and the transfer of visual information within V1 [45]. Together, these examples show the power of information theoretic approaches to interpret individual frequencies in terms of variations with stimuli or behavioral state and to identify a minimal set of meaningful bands whose origin can then be investigated with the aid of computational models and perturbation experiments, as we illustrate in the next sections. Mathematical modeling of neural network dynamics Neural network models to identify neural mechanisms for information encoding The above information theoretic analysis individuated two frequency bands that were shown to carry different channels of visual information. The question that arises is what neural circuit mechanisms are expressed by each band. To address this question, we developed a formalism based on fitting recurrent network models of interacting excitatory and inhibitory point neurons (Fig. 3A) to data. These models reduce the morphology of neurons to a single point in space and their dynamics are described by a set of coupled differential equations that can be solved efficiently numerically and often also analytically. Despite their simplicity, these models have been widely used to describe important properties of cortical microcircuits [61], such as sensory information coding [40, 62], working memory [63, 64], attention [65] or sleep slow waves [66]. In particular, we developed a recurrent network model of leaky integrate-and-fire (LIF) neuronal populations composed of 5000 neurons. Consistent with the ratio of excitatory and inhibitory neurons found in the cerebral cortex, 4000 neurons were excitatory (i.e., their projections onto other neurons formed AMPA-like excitatory synapses) and 1000 inhibitory (i.e., their projections formed GABA-like synapses), randomly connected with a connection probability between each pair of neurons of 0.2. All neurons in the model receive external inputs (both a sensory-driven thalamic input and a noisy intracortical input) to predict some key aspects of neural activity in primary visual cortex during naturalistic visual stimulation and spontaneous activity [40, 62, 67]. A Recurrent inhibitory–excitatory (I–E) network of LIF point neurons. Excitatory and inhibitory neurons receive two different types of external inputs: a sensory-driven input and a cortico-cortical input. B Network of multicompartment neuron models used in the hybrid modeling approach [72, 73] to compute the ground-truth EEG signal. 
C Raster plots of spiking activity (top panels) of the LIF network model for the asynchronous irregular (AI), synchronous irregular (SI) and synchronous regular (SR) network states. Comparison between ground-truth EEGs and outputs of the current-based ERWS1 and ERWS2 proxies (bottom panels) Specifically, in ref. [40, 62], we found that by studying this simulated network we could capture the translation rules between stimulus dynamics and LFP frequency bands. Confirming theoretical results that showed that gamma power in a recurrent network tends to increase with the strength of the input to the network [30], we found that the network encoded the overall strength of the input into the power of gamma-band oscillations generated by inhibitory–excitatory neural interactions. In addition, we found that the network encoded slow dynamic features of the input into slow LFP fluctuations mediated (through entrainment to the inputs) by stimulus–neural interactions. Thus, our recurrent network model could provide evidence for the dual encoding of information in both the low-frequency information channel (carrying temporal information of the dynamics of sensory-driven thalamic inputs) and the gamma-band information channel (reflecting excitatory inhibitory interactions modulated by the strength of thalamic inputs). Interestingly, the model also reproduced other higher order features of the dynamics of visual cortex, including the independence of the information carried by low- and high-frequency information channels when using naturalistic visual stimuli [23], and the cross-frequency coupling between the EEG delta-band phase and gamma-band amplitude [67]. However, our model [40, 62, 67] could not reproduce the excess in power and the strong within-band correlations observed in real data for the mid-range (19–38 Hz) band in visual cortex [23]. Our model did not include changes in neural activity induced by neuromodulation, further corroborating the idea that stimulus-independent neuromodulatory factors are needed to model the dynamics of this mid-range band. Realistic computation of field potentials from point-neuron network models The above studies compared qualitatively and quantitatively information patterns in neural network models and real data to make inferences about which neural pathway contribute to each frequency band. As demonstrated in ref. [30], this question can be addressed even without having to compute a realistic LFP or EEG from the network models, because basic oscillation properties of the network can be observed both at the level of spiking activity of neurons and at the level of aggregate signals. We have then begun to investigate the more difficult problem of trying to measure, or to infer, the precise value of microscopic neural parameters, such as the activity of individual classes of neurons within a network, from aggregate activity measures such as EEG or LFP. To obtain a more precise estimation of network parameters, it is necessary to compute a realistic LFP or EEG from these network models of point neurons. However, these neuron models lack a spatial structure, which prevents modelers from being able to compute the spatially separated transmembrane currents that are necessary to generate LFPs and EEGs in real biological networks. In our initial studies [40, 62, 67], we estimated the LFP and EEG based on the sum of absolute values of synaptic currents from simulation of the network model. 
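As a minimal illustration, the sketch below computes two such proxies from population-summed synaptic currents: the sum of absolute values of synaptic currents used in our initial studies, and a delayed weighted sum of the kind evaluated later in [72, 73]. The current traces are synthetic placeholders standing in for simulator output, and the delay and weight of the second proxy are free parameters chosen here only for illustration, not the values fitted in those studies.

```python
import numpy as np

dt = 0.1                       # time step (ms) of the placeholder simulation output
t = np.arange(0, 1000, dt)
rng = np.random.default_rng(1)

# Placeholder population-summed synaptic currents onto excitatory neurons
# (in a real run these would come from the point-neuron network simulator).
i_ampa = 1.0 + 0.3 * np.sin(2 * np.pi * 0.03 * t) + 0.1 * rng.standard_normal(t.size)
i_gaba = -1.5 - 0.3 * np.sin(2 * np.pi * 0.03 * t + 0.5) - 0.1 * rng.standard_normal(t.size)

# Proxy 1: sum of absolute values of synaptic currents.
lfp_abs_sum = np.abs(i_ampa) + np.abs(i_gaba)

# Proxy 2: weighted sum with a delay applied to the excitatory current
# (delay and weight are illustrative free parameters, not fitted values).
delay_ms, weight = 5.0, 1.5
shift = int(delay_ms / dt)
ampa_delayed = np.roll(i_ampa, shift)
ampa_delayed[:shift] = i_ampa[0]               # pad the initial segment
lfp_weighted = np.abs(ampa_delayed) + weight * np.abs(i_gaba)

print(lfp_abs_sum[:5], lfp_weighted[:5])
```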
Other studies have proposed different approaches to compute extracellular potentials using other variables of the simulation, such as the average membrane potentials [66, 68], the average firing rate [30, 69] or the sum of all synaptic currents [70, 71]. We then evaluated systematically [72, 73] the limitations and caveats of using such ad hoc simplifications to estimate the LFP or EEG from neuron models without spatial structure (i.e., point-neuron models). We compared how well different approximations of field potentials (termed proxies) proposed in the literature reconstructed a ground-truth signal obtained by means of the hybrid modeling approach [72, 74] (Fig. 3B). This approach includes a network of unconnected multicompartment neuron models with realistic three-dimensional (3D) spatial morphologies. Each multicompartment neuron is randomly assigned to a unique neuron in the network of point neurons and receives the same input spikes of the equivalent point neuron. Since the multicompartment neurons are not connected to each other, they are not involved in the network dynamics and their only role is to transform the spiking activity of the point-neuron network into a realistic estimate of the LFP or EEG that is used as the ground-truth signal against which we compared different candidate proxies (Fig. 3C). We found that a specific weighted sum of synaptic currents from the point-neuron network model, for a specific network state (i.e., asynchronous irregular), performed remarkably well in predicting the LFP [72]. We then extended our study to the EEG [73] by including a head model that approximated the different geometries and electrical conductivities of the head necessary for computing a realistic EEG signal recorded by scalp electrodes. We chose the four-layered spherical head model [75, 76] that included different layers that represented the brain tissue, cerebrospinal fluid (CSF), skull, and scalp. We also validated our EEG proxies across the repertoire of network states displayed by recurrent network models [30, 77], namely the asynchronous irregular (AI), synchronous irregular (SI), and synchronous regular (SR) (Fig. 3C). The states generated by the LIF neuron network were produced by systematically varying across simulations the firing rate of the thalamic input (\(\upsilon_{0}\)) and the relative strength between inhibitory and excitatory synapses (\(g = g_{{\text{I}}} /g_{{\text{E}}}\)). The validation of our proxies for a wide range of values of \(g\) and \(\upsilon_{0}\) is important to solve the inverse modeling approach and to ensure that our proxies can be used to robustly predict these network parameters from the varied shapes of experimentally recorded EEGs (see Sect. 4.3). We found that a new class of linear EEG proxies, based on a weighted sum of synaptic currents, outperformed previous approaches and worked well under a wide range of network configurations with different cell morphologies, distributions of presynaptic inputs and positions of the EEG electrode. We also evaluated whether our proxies could perform well when combined with a more complex and anatomically detailed human head model: the New York head model [78], which takes into account the folded cortical surface of the human brain. The EEG topographic maps calculated by applying our proxies to the New York head model correctly predicted time traces of the EEG signal at different electrode positions. 
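To illustrate the forward step that such head models implement, the sketch below evaluates the potential generated by a single current dipole using the textbook expression for an infinite homogeneous volume conductor. This is deliberately much cruder than the four-sphere or New York head models discussed above and is meant only to show the structure of the computation; the electrode positions, conductivity and dipole moment are arbitrary example values.

```python
import numpy as np

def dipole_potential(electrodes, dipole_pos, dipole_moment, sigma=0.3):
    """Potential (V) of a current dipole in an infinite homogeneous medium:
    phi(r) = p . (r - r0) / (4 * pi * sigma * |r - r0|^3).
    electrodes: (N, 3) positions in m; dipole_moment in A*m; sigma in S/m."""
    r = np.asarray(electrodes, dtype=float) - np.asarray(dipole_pos, dtype=float)
    dist = np.linalg.norm(r, axis=1)
    return r @ np.asarray(dipole_moment, dtype=float) / (4 * np.pi * sigma * dist**3)

# Example: a radially oriented dipole about 1 cm below three "scalp" sites.
electrodes = np.array([[0.0, 0.0, 0.09], [0.02, 0.0, 0.09], [0.04, 0.0, 0.088]])
phi = dipole_potential(electrodes, dipole_pos=[0.0, 0.0, 0.08],
                       dipole_moment=[0.0, 0.0, 1e-8])
print(phi * 1e6, "microvolts")
```

Realistic head models replace this closed-form kernel with a lead field that accounts for the conductivities and geometries of brain, CSF, skull and scalp, but the mapping from source currents to electrode potentials keeps the same linear structure.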
Changes in excitation–inhibition (E/I) balance in simulated neural aggregate signals Our realistic estimations of aggregate signals from simple point-neuron networks allowed us to invert and use these models to estimate some neural parameters of circuit activity that are not directly accessible from the EEG and LFP. For example, we considered how we could use network models to estimate from such recordings the ratio between excitation and inhibition. The theory of neural network models [30] and the empirical electrophysiological data have reported that the E/I ratio has profound effects on the spectral shape of neural activity. Its imbalance has been implicated in neuropsychiatric conditions, including Autism Spectrum Disorder. In ref. [79], we investigated different biomarkers computed on the power spectrum of LFPs and fMRI blood oxygen level-dependent (BOLD) signal that could be used to reliably estimate the E/I ratio. These biomarkers were the exponent of the 1/f spectral power law, slopes for the low- and high-frequency regions of the spectrum and the Hurst exponent (H). We simulated the LFP (Fig. 4A) and BOLD signal from our recurrent network model, and studied how these biomarkers changed when we manipulated the E/I ratio by independently varying the strengths of the inhibitory (\(g_{{\text{I}}}\)) and excitatory (\(g_{{\text{E}}}\)) synaptic conductances [80]. Part of our results are shown in Fig. 4. A flattening of 1/f slopes (Fig. 4C) was found in the excitation-dominated region where the E/I ratio is shifted in favor of E than the reference value used previously [40, 62, 67] to capture cortical power spectra. We also observed that H decreased in the excitation-dominated region (Fig. 4D). However, shifting the E/I balance towards stronger inhibition had a weaker effect on slopes and H. We then validated our model against in vivo chemogenetic manipulations in mice that either increased neurophysiological excitation or silenced the local activity in the network. When modeling effects of chemogenetic manipulations within the recurrent network model, we found that DREADD manipulations that enhanced excitability of pyramidal neurons reduced steepness of the slopes and led to a decrease in H. Then, we used the predictions of our model of how the ratio \(g\) between inhibition and excitation affects spectral properties such as slopes and H (see Fig. 4) to interpret the spectra of resting state fMRI (rsfMRI) in the medial prefrontal cortex (MPFC) of subjects within the autism spectrum disorder. We found that H was reduced in the MPFC of autistic males but not females, and using our model we interpreted this change in spectral properties as an indicator of increased excitation in males. LFPs (A) and PSDs (B) generated for two different ratios between inhibitory and excitatory conductances (\(g = g_{I} /g_{E}\)). The relationship between 1/f slopes (C) and Hurst exponents (D) are plotted as a function of \(g\) for two different firing rates of thalamic input (1.5 and 2 spikes/second). The reference value of \(g\) (which has shown in previous studies to reproduce cortical data well) is represented by a dashed black line. Recomputed and replotted from data published in ref. [79] Perturbation experiments guided by predictions of computational models to study the effect of neuromodulation on cortical oscillations Biophysically realistic computational models and information theoretic methods can be used to generate predictions and test them with suitably designed perturbation experiments. 
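Before turning to perturbation experiments, the sketch below shows how one of the biomarkers discussed in the previous subsection, the 1/f slope of the power spectrum, can be estimated from a simulated or recorded trace. The synthetic signal, the Welch parameters and the frequency range of the fit are illustrative choices made here and are not the settings used in [79]; the Hurst exponent estimation used in that study is not shown.

```python
import numpy as np
from scipy.signal import welch

def one_over_f_slope(signal, fs, fmin=30.0, fmax=70.0):
    """Slope of log10(power) vs log10(frequency) in [fmin, fmax] Hz,
    estimated from a Welch PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

# Synthetic example: 1/f-like noise built by shaping white noise in the Fourier domain.
fs, dur = 1000.0, 60.0
rng = np.random.default_rng(2)
freqs = np.fft.rfftfreq(int(fs * dur), d=1 / fs)
spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spectrum[1:] /= freqs[1:]                 # amplitude ~ 1/f, so power ~ 1/f^2 (slope ~ -2)
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum)
print(f"estimated slope: {one_over_f_slope(signal, fs):.2f}")
```

In this convention, a flattening of the slope (a less negative value) with increasing excitation corresponds to the behaviour shown in Fig. 4C.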
Finding the best strategy to design such model-guided perturbation experiments is an active topic of research. In what follows, we briefly review our attempts to address this challenge. As reviewed above, based on our information theoretic analysis, we have proposed that the mid-frequency range (15–38 Hz approx.), which exhibited high correlations within frequency bands but contained little visual information, may reflect a single source of neuromodulatory inputs. We designed a perturbation experiment to test this hypothesis [81]. We recorded the LFP in primary visual cortex (V1) of anesthetized macaques during spontaneous activity and during visual stimulation with naturalistic movies while pharmacologically perturbing dopaminergic neuromodulation by systemic injection of L-DOPA (a metabolic precursor of dopamine). We found that dopaminergic neuromodulation had marked effects on both spontaneous and movie-evoked neural activity. During spontaneous activity, dopaminergic neuromodulation increased the power of the LFP specifically in the 19–38 Hz band, suggesting that the power of endogenous visual cortex oscillations in this band can be used as a robust marker of dopaminergic neuromodulation. These results confirmed the hypothesis that we made in earlier work [23] based on information theoretic analysis of field potentials. Moreover, dopamine increased visual information encoding over all frequencies during movie stimulation. The information increase due to dopamine was prominent in the supragranular layers of visual cortex that project to higher cortical areas and in the gamma band of the LFP power spectrum, which has been previously implicated in mediating feedforward information transfer. We concluded that dopamine may promote the readout of relevant sensory information by strengthening the transmission of information from primary to higher areas [81]. These observations, which in our view could not have been made by either computational analyses or blind design of perturbation experiments alone, illustrate the power of effectively combining them. Understanding the microcircuit dynamics and computations underlying EEG and LFP features has the potential to allow researchers to make fundamental discoveries about brain function and to effectively use measures extracted from aggregate electrical signals as reliable biomarkers of brain pathologies. In this paper, we have reviewed our approach based on computational modeling and advanced analytical tools of neural network dynamics to interpret neural aggregate signals in terms of neural circuit parameters. We have developed tools to partition the LFP and EEG power spectrum into different meaningful frequency bands and to identify frequency channels and neural pathways that process largely independent and different kinds of neural information. We have shown preliminary work on estimating neural circuit parameters, such as excitation, inhibition and their interaction, from aggregate neural signals. Here we outline some limitations of our approach and the major challenges that we must address in the future.
First, we need to develop statistical tools that can infer neural parameters (such as the ratio between excitation and inhibition or properties of network connectivity) from EEG and LFP spectral features by fitting such models to empirically measured spectra. Then, we need to carefully validate the statistical inference approaches on real brain data in which neural circuit parameters can been manipulated by the experimenter, for example by means of chemogenetic manipulations [79]. We could validate the inference algorithm by studying if it is able to predict the type of controlled manipulation that has been applied in each dataset (e.g., whether a manipulation produced an increase or decrease of the E/I balance). Although we used realistic modeling of neurons and networks, our models do not capture the full complexity of the brain. It would be particularly important to extend our models to include different classes of neurons, such as different types of interneurons. We could include inter-areal interactions between different recurrent networks to generate wider oscillation ranges than the gamma oscillations mostly considered in our work, which would be useful to study the relationship between local oscillations and functional connectivity [82]. It would also be important to model the effects of different kinds of neuromodulators on distributed processing. In previous work [81], we developed methods to study the effect of global and diffuse patterns of neuromodulation. However, an emergent view [83] is that neuromodulation can be non-global and depend on target specificity and the differentiated spatiotemporal dynamics within brain stem nuclei. It will be important to implement analytical tools to identify first individual ensembles in the locus coeruleus (LC) and to understand then how neural activity of these LC ensembles drive cortical states [84]. Given the above limitations, and although more work is needed to be able to interpret empirical aggregate signals such as EEGs and LFPs in terms of network model parameters and neuromodulation, we expect that future research can build on the encouraging results presented in this paper and lead to a credible, robust and biologically plausible estimation of neural parameters from neural aggregate signals. This is a review that contains no new data. Software for simulation of neural network models of spiking point neurons and multicompartment neurons and for computation of EEG proxies can be found at https://github.com/pablomc88/EEG_proxy_from_network_point_neurons. Software for the information theoretic calculations can be found at https://sicode.eu/results/software.html. AI: Asynchronous irregular BOLD: Blood oxygen level-dependent ECoG: Electrocorticogram EEG: fMRI: LC: Locus coeruleus LIF: Leaky integrate-and-fire LFP: Local field potential MEG: Magnetoencephalogram MPFC: Medial prefrontal cortex rsfMRI: Resting state fMRI Synchronous irregular Synchronous regular Buzsáki G, Anastassiou CA, Koch C (2012) The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nat Rev Neurosci 13(6):407–420. https://doi.org/10.1038/nrn3241 Einevoll GT, Kayser C, Logothetis NK, Panzeri S (2013) Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat Rev Neurosci 14(11):770–785. https://doi.org/10.1038/nrn3599 Lopes da Silva F (2013) EEG and MEG: relevance to neuroscience. Neuron 80(5):1112–1128. 
https://doi.org/10.1016/j.neuron.2013.10.017 Siegel M, Donner TH, Engel AK (2012) Spectral fingerprints of large-scale neuronal interactions. Nat Rev Neurosci 13(2):121–134. https://doi.org/10.1038/nrn3137 Cohen MX (2017) Where does EEG come from and what does it mean? Trends Neurosci 40(4):208–218. https://doi.org/10.1016/j.tins.2017.02.004 Nunez PL, Srinivasan R (2006) Electric fields of the brain: the neurophysics of EEG. Oxford University Press, Oxford. https://doi.org/10.1093/acprof:oso/9780195050387.001.0001 Başar E (1980) EEG-brain dynamics: relation between EEG and brain evoked potentials. Elsevier, Amsterdam Mitra P, Bokil H (2007) Observed brain dynamics. Oxford University Press, Oxford Mahmud M, Vassanelli S (2016) Processing and analysis of multichannel extracellular neuronal signals: state-of-the-art and challenges. Front Neurosci 10:248. https://doi.org/10.3389/fnins.2016.00248 Martínez-Cañada P, Noei S, Panzeri S (2021) Inferring neural circuit interactions and neuromodulation from local field potential and electroencephalogram measures. In: Mahmud M, Kaiser MS, Vassanelli S, Dai Q, Zhong N (eds) Brain informatics. Lecture notes in computer science. Springer, Berlin, pp 3–12. https://doi.org/10.1007/978-3-030-86993-9_1 Wang X-J, Krystal John H (2014) Computational psychiatry. Neuron 84(3):638–654. https://doi.org/10.1016/j.neuron.2014.10.018 Pesaran B, Vinck M, Einevoll GT, Sirota A, Fries P, Siegel M, Truccolo W, Schroeder CE, Srinivasan R (2018) Investigating large-scale brain dynamics using field potential recordings: analysis and interpretation. Nat Neurosci 21(7):903–919. https://doi.org/10.1038/s41593-018-0171-8 Buzsaki G, Wang XJ (2012) Mechanisms of gamma oscillations. Annu Rev Neurosci 35:203–225. https://doi.org/10.1146/annurev-neuro-062111-150444 Buzsaki G (2004) Neuronal oscillations in cortical networks. Science 304(5679):1926–1929. https://doi.org/10.1126/science.1099745 Jadi MP, Sejnowski TJ (2014) Regulating cortical oscillations in an inhibition-stabilized network. Proc IEEE 102(5):830–842. https://doi.org/10.1109/jproc.2014.2313113 Wilson HR, Cowan JD (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 12(1):1–24. https://doi.org/10.1016/s0006-3495(72)86068-5 Whittington MA, Traub RD, Kopell N, Ermentrout B, Buhl EH (2000) Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int J Psychophysiol 38(3):315–336. https://doi.org/10.1016/s0167-8760(00)00173-2 Singer W (1999) Neuronal synchrony: a versatile code for the definition of relations? Neuron 24(1):49–65. https://doi.org/10.1016/s0896-6273(00)80821-1 Wang X-J (2010) Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev 90(3):1195–1268. https://doi.org/10.1152/physrev.00035.2008 Fries P (2015) Rhythms for cognition: communication through coherence. Neuron 88(1):220–235. https://doi.org/10.1016/j.neuron.2015.09.034 Buzsáki G, Schomburg EW (2015) What does gamma coherence tell us about inter-regional neural communication? Nat Neurosci 18(4):484–489. https://doi.org/10.1038/nn.3952 Scheeringa R, Fries P (2019) Cortical layers, rhythms and BOLD signals. Neuroimage 197:689–698. https://doi.org/10.1016/j.neuroimage.2017.11.002 Belitski A, Gretton A, Magri C, Murayama Y, Montemurro MA, Logothetis NK, Panzeri S (2008) Low-frequency local field potentials and spikes in primary visual cortex convey independent visual information. J Neurosci 28(22):5696–5709. 
https://doi.org/10.1523/jneurosci.0009-08.2008 Donner TH, Siegel M (2011) A framework for local cortical oscillation patterns. Trends Cogn Sci 15(5):191–199. https://doi.org/10.1016/j.tics.2011.03.007 Lakatos P, Karmos G, Mehta AD, Ulbert I, Schroeder CE (2008) Entrainment of neuronal oscillations as a mechanism of attentional selection. Science 320(5872):110–113. https://doi.org/10.1126/science.1154735 Steriade M, Hobson J (1976) Neuronal activity during the sleep-waking cycle. Prog Neurobiol 6(3–4):155–376 Ungerleider L, Ray S, Maunsell JHR (2011) Different origins of gamma rhythm and high-gamma activity in macaque visual cortex. PLoS Biol. https://doi.org/10.1371/journal.pbio.1000610 Veit J, Hakim R, Jadi MP, Sejnowski TJ, Adesnik H (2017) Cortical gamma band synchronization through somatostatin interneurons. Nat Neurosci 20(7):951–959. https://doi.org/10.1038/nn.4562 Bartos M, Vida I, Jonas P (2007) Synaptic mechanisms of synchronized gamma oscillations in inhibitory interneuron networks. Nat Rev Neurosci 8(1):45–56. https://doi.org/10.1038/nrn2044 Brunel N, Wang X-J (2003) What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J Neurophysiol 90(1):415–430. https://doi.org/10.1152/jn.01095.2002 Cardin JA, Carlén M, Meletis K, Knoblich U, Zhang F, Deisseroth K, Tsai L-H, Moore CI (2009) Driving fast-spiking cells induces gamma rhythm and controls sensory responses. Nature 459(7247):663–667. https://doi.org/10.1038/nature08002 Gray CM, König P, Engel AK, Singer W (1989) Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature 338(6213):334–337. https://doi.org/10.1038/338334a0 Belitski A, Panzeri S, Magri C, Logothetis NK, Kayser C (2010) Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands. J Comput Neurosci 29(3):533–545. https://doi.org/10.1007/s10827-010-0230-y Juergens E, Guettler A, Eckhorn R (1999) Visual stimulation elicits locked and induced gamma oscillations in monkey intracortical- and EEG-potentials, but not in human EEG. Exp Brain Res 129(2):247–259. https://doi.org/10.1007/s002210050895 Kayser C, König P (2004) Stimulus locking and feature selectivity prevail in complementary frequency ranges of V1 local field potentials. Eur J Neurosci 19(2):485–489. https://doi.org/10.1111/j.0953-816X.2003.03122.x Kayser C, Petkov CI, Logothetis NK (2007) Tuning to sound frequency in auditory field potentials. J Neurophysiol 98(3):1806–1809. https://doi.org/10.1152/jn.00358.2007 Henrie JA, Shapley R (2005) LFP power spectra in V1 cortex: the graded effect of stimulus contrast. J Neurophysiol 94(1):479–490. https://doi.org/10.1152/jn.00919.2004 Pesaran B, Pezaris JS, Sahani M, Mitra PP, Andersen RA (2002) Temporal structure in neuronal activity during working memory in macaque parietal cortex. Nat Neurosci 5(8):805–811. https://doi.org/10.1038/nn890 Frien A, Eckhorn R, Bauer R, Woelbern T, Gabriel A (2000) Fast oscillations display sharper orientation tuning than slower components of the same recordings in striate cortex of the awake monkey. Eur J Neurosci 12(4):1453–1465. https://doi.org/10.1046/j.1460-9568.2000.00025.x Mazzoni A, Brunel N, Cavallari S, Logothetis NK, Panzeri S (2011) Cortical dynamics during naturalistic sensory stimulations: experiments and models. J Physiol Paris 105(1–3):2–15. 
https://doi.org/10.1016/j.jphysparis.2011.07.014 Bosman Conrado A, Schoffelen J-M, Brunet N, Oostenveld R, Bastos Andre M, Womelsdorf T, Rubehn B, Stieglitz T, De Weerd P, Fries P (2012) Attentional stimulus selection through selective synchronization between monkey visual areas. Neuron 75(5):875–888. https://doi.org/10.1016/j.neuron.2012.06.037 van Kerkoerle T, Self MW, Dagnino B, Gariel-Mathis M-A, Poort J, van der Togt C, Roelfsema PR (2014) Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proc Natl Acad Sci 111(40):14332–14341. https://doi.org/10.1073/pnas.1402773111 Womelsdorf T, Schoffelen J-M, Oostenveld R, Singer W, Desimone R, Engel AK, Fries P (2007) Modulation of neuronal interactions through neuronal synchronization. Science 316(5831):1609–1612. https://doi.org/10.1126/science.1139597 Fries P (2009) Neuronal gamma-band synchronization as a fundamental process in cortical computation. Annu Rev Neurosci 32(1):209–224. https://doi.org/10.1146/annurev.neuro.051508.135603 Kohn A, Besserve M, Lowe SC, Logothetis NK, Schölkopf B, Panzeri S (2015) Shifts of gamma phase across primary visual cortical sites reflect dynamic stimulus-modulated information transfer. PLOS Biol. https://doi.org/10.1371/journal.pbio.1002257 Ferro D, van Kempen J, Boyd M, Panzeri S, Thiele A (2021) Directed information exchange between cortical layers in macaque V1 and V4 and its modulation by selective attention. Proc Natl Acad Sci 118(12):e2022097118. https://doi.org/10.1073/pnas.2022097118 Logothetis NK (2008) What we can do and what we cannot do with fMRI. Nature 453(7197):869–878. https://doi.org/10.1038/nature06976 Magri C, Schridde U, Murayama Y, Panzeri S, Logothetis NK (2012) The amplitude and timing of the BOLD signal reflects the relationship between local field potential power at different frequencies. J Neurosci 32(4):1395–1407. https://doi.org/10.1523/jneurosci.3985-11.2012 Engel AK, Fries P (2010) Beta-band oscillations—signalling the status quo? Curr Opin Neurobiol 20(2):156–165. https://doi.org/10.1016/j.conb.2010.02.015 Donoghue T, Haller M, Peterson EJ, Varma P, Sebastian P, Gao R, Noto T, Lara AH, Wallis JD, Knight RT, Shestyuk A, Voytek B (2020) Parameterizing neural power spectra into periodic and aperiodic components. Nat Neurosci 23(12):1655–1665. https://doi.org/10.1038/s41593-020-00744-x Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27(3):379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x Quian Quiroga R, Panzeri S (2009) Extracting information from neuronal populations: information theory and decoding approaches. Nat Rev Neurosci 10(3):173–185. https://doi.org/10.1038/nrn2578 Pola G, Thiele A, Hoffmann KP, Panzeri S (2003) An exact method to quantify the information transmitted by different mechanisms of correlational coding. Network 14(1):35–60. https://doi.org/10.1088/0954-898x/14/1/303 Bringuier V, Fdr C, Glaeser L, Frégnac Y (1999) Horizontal propagation of visual activity in the synaptic integration field of area 17 neurons. Science 283(5402):695–699. https://doi.org/10.1126/science.283.5402.695 Nauhaus I, Busse L, Carandini M, Ringach DL (2008) Stimulus contrast modulates functional connectivity in visual cortex. Nat Neurosci 12(1):70–76. https://doi.org/10.1038/nn.2232 Grinvald A, Lieke EE, Frostig RD, Hildesheim R (1994) Cortical point-spread function and long-range lateral interactions revealed by real-time optical imaging of macaque monkey primary visual cortex. 
J Neurosci 14(5):2545–2568. https://doi.org/10.1523/jneurosci.14-05-02545.1994 Sato Tatsuo K, Nauhaus I, Carandini M (2012) Traveling waves in visual cortex. Neuron 75(2):218–229. https://doi.org/10.1016/j.neuron.2012.06.029 Stettler DD, Das A, Bennett J, Gilbert CD (2002) Lateral connectivity and contextual interactions in macaque primary visual cortex. Neuron 36(4):739–750. https://doi.org/10.1016/s0896-6273(02)01029-2 Roerig B, Chen B (2002) Relationships of local inhibitory and excitatory circuits to orientation preference maps in ferret visual cortex. Cereb Cortex 12(2):187–198. https://doi.org/10.1093/cercor/12.2.187 Kisvarday Z (1997) Orientation-specific relationship between populations of excitatory and inhibitory lateral connections in the visual cortex of the cat. Cereb Cortex 7(7):605–618. https://doi.org/10.1093/cercor/7.7.605 Einevoll GT, Destexhe A, Diesmann M, Grün S, Jirsa V, de Kamps M, Migliore M, Ness TV, Plesser HE, Schürmann F (2019) The scientific case for brain simulations. Neuron 102(4):735–744. https://doi.org/10.1016/j.neuron.2019.03.027 Mazzoni A, Panzeri S, Logothetis NK, Brunel N (2008) Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Comput Biol 4(12):e1000239. https://doi.org/10.1371/journal.pcbi.1000239 Compte A (2000) Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex 10(9):910–923. https://doi.org/10.1093/cercor/10.9.910 Mongillo G, Barak O, Tsodyks M (2008) Synaptic theory of working memory. Science 319(5869):1543–1546. https://doi.org/10.1126/science.1150769 Deco G, Thiele A (2011) Cholinergic control of cortical network interactions enables feedback-mediated attentional modulation. Eur J Neurosci 34(1):146–157. https://doi.org/10.1111/j.1460-9568.2011.07749.x Hill S, Tononi G (2005) Modeling sleep and wakefulness in the thalamocortical system. J Neurophysiol 93(3):1671–1698. https://doi.org/10.1152/jn.00915.2004 Mazzoni A, Whittingstall K, Brunel N, Logothetis NK, Panzeri S (2010) Understanding the relationships between spike rate and delta/gamma frequency bands of LFPs and EEGs using a local cortical network model. Neuroimage 52(3):956–972. https://doi.org/10.1016/j.neuroimage.2009.12.040 Bazhenov M, Stopfer M, Rabinovich M, Huerta R, Abarbanel HDI, Sejnowski TJ, Laurent G (2001) Model of transient oscillatory synchronization in the locust antennal lobe. Neuron 30(2):553–567. https://doi.org/10.1016/s0896-6273(01)00284-7 Buehlmann A, Deco G (2010) Optimal information transfer in the cortex through synchronization. PLoS Comput Biol. https://doi.org/10.1371/journal.pcbi.1000934 Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K (2008) The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol 4(8):e1000092. https://doi.org/10.1371/journal.pcbi.1000092 Compte A, Sanchez-Vives MV, McCormick DA, Wang X-J (2003) Cellular and network mechanisms of slow oscillatory activity (<1 Hz) and wave propagations in a cortical network model. J Neurophysiol 89(5):2707–2725. https://doi.org/10.1152/jn.00845.2002 Mazzoni A, Linden H, Cuntz H, Lansner A, Panzeri S, Einevoll GT (2015) Computing the local field potential (LFP) from integrate-and-fire network models. PLoS Comput Biol 11(12):e1004584. https://doi.org/10.1371/journal.pcbi.1004584 Martinez-Canada P, Ness TV, Einevoll GT, Fellin T, Panzeri S (2021) Computation of the electroencephalogram (EEG) from network models of point neurons. 
PLoS Comput Biol 17(4):e1008893. https://doi.org/10.1371/journal.pcbi.1008893 Hagen E, Dahmen D, Stavrinou ML, Lindén H, Tetzlaff T, van Albada SJ, Grün S, Diesmann M, Einevoll GT (2016) Hybrid scheme for modeling local field potentials from point-neuron networks. Cereb Cortex 26(12):4461–4496. https://doi.org/10.1093/cercor/bhw237 Næss S, Halnes G, Hagen E, Hagler DJ, Dale AM, Einevoll GT, Ness TV (2021) Biophysically detailed forward modeling of the neural origin of EEG and MEG signals. Neuroimage. https://doi.org/10.1016/j.neuroimage.2020.117467 Næss S, Chintaluri C, Ness TV, Dale AM, Einevoll GT, Wójcik DK (2017) Corrected four-sphere head model for EEG signals. Front Hum Neurosci. https://doi.org/10.3389/fnhum.2017.00490 Brunel N (2000) Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neurons. Neurocomputing 32–33:307–312. https://doi.org/10.1016/s0925-2312(00)00179-x Huang Y, Parra LC, Haufe S (2016) The New York Head—a precise standardized volume conductor model for EEG source localization and tES targeting. Neuroimage 140:150–162. https://doi.org/10.1016/j.neuroimage.2015.12.019 Trakoshis S, Martínez-Cañada P, Rocchi F, Canella C, You W, Chakrabarti B, Ruigrok ANV, Bullmore ET, Suckling J, Markicevic M, Zerbi V, Consortium MA, Baron-Cohen S, Gozzi A, Lai M-C, Panzeri S, Lombardo MV (2020) Intrinsic excitation-inhibition imbalance affects medial prefrontal cortex differently in autistic men versus women. Elife 9:e55684. https://doi.org/10.7554/eLife.55684 Martínez-Cañada P, Panzeri S (2021) Spectral properties of local field potentials and electroencephalograms as indices for changes in neural circuit parameters. In: Mahmud M, Kaiser MS, Vassanelli S, Dai Q, Zhong N (eds) Brain informatics. Lecture notes in computer science. Springer, Berlin, pp 115–123. https://doi.org/10.1007/978-3-030-86993-9_11 Zaldivar D, Goense J, Lowe SC, Logothetis NK, Panzeri S (2018) Dopamine is signaled by mid-frequency oscillations and boosts output layers visual information in visual cortex. Curr Biol 28(2):224–235. https://doi.org/10.1016/j.cub.2017.12.006 Canella C, Rocchi F, Noei S, Gutierrez-Barragan D, Coletta L, Galbusera A, Vassanelli S, Pasqualetti M, Iurilli G, Panzeri S, Gozzi A (2020) Cortical silencing results in paradoxical fMRI overconnectivity. bioRxiv. https://doi.org/10.1101/2020.08.05.237958 Totah NK, Neves RM, Panzeri S, Logothetis NK, Eschenko O (2018) The locus coeruleus is a complex and differentiated neuromodulatory system. Neuron 99(5):1055-1068.e1056. https://doi.org/10.1016/j.neuron.2018.07.037 Noei S, Zouridis IS, Logothetis NK, Panzeri S, Totah NK (2020) Distinct ensembles in the noradrenergic locus coeruleus evoke diverse cortical states. bioRxiv. https://doi.org/10.1101/2020.03.30.015354 We are most grateful to the organizers and participants of the 14th International Conference on Brain Informatics (BI 2021) for their feedback on the work presented here. We are also deeply grateful to the wonderful colleagues who collaborated with us on these topics over the years, in particular N.K. Logothesis, C. Kayser, O. Eschenko, N. Brunel, G.T. Einevoll, A. Mazzoni, C. Magri, M. Besserve, N.K. Totah, T. Fellin and A. Gozzi. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 
893825-ESNECO to P.M.C., the NIH Brain Initiative (Grants U19NS107464 and NS108410 to S.P.), the Simons Foundation (SFARI Explorer 602849 to S.P.), and by the EU FESR-FSE PON "Ricerca & Innovazione 2014-2020".

Affiliations: Neural Computation Laboratory, Istituto Italiano di Tecnologia, Genova and Rovereto, Italy (Pablo Martínez-Cañada, Shahryar Noei & Stefano Panzeri); Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova, Italy (Pablo Martínez-Cañada); CIMeC, University of Trento, Rovereto, Italy (Shahryar Noei); Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany (Stefano Panzeri).

Contributions: All authors wrote the paper. All authors read and approved the final manuscript. Correspondence to Stefano Panzeri.

Cite this article: Martínez-Cañada, P., Noei, S. & Panzeri, S. Methods for inferring neural circuit interactions and neuromodulation from local field potential and electroencephalogram measures. Brain Inf. 8, 27 (2021). https://doi.org/10.1186/s40708-021-00148-y

Keywords: Local field potential (LFP); Neural oscillation; Neural network model; Leaky integrate-and-fire (LIF) neuron model
What is meant by `DiracDelta'[t]`?

While calculating an inverse Laplace transform, Wolfram Alpha returned the following output:

7 + 2 DiracDelta[-1 + t] + 14 DiracDelta[t] + HeavisideTheta[-1 + t] + 16 DiracDelta'[t]

What does `DiracDelta'[t]` mean? A derivative of the Dirac delta function? Wouldn't that be infinite at $0$ and zero everywhere else, that is, basically the Dirac delta function itself?

Answer (by Hurkyl): "Infinite at zero and zero everywhere else" is a woefully inadequate description of the Dirac delta. The best (and usually literal) definition of the Dirac delta is basically that the notation resembling an integral containing a Dirac delta is defined to mean evaluation:
$$ \int_{-\infty}^{\infty} f(x) \delta(x-a) \, \mathrm{d}x := f(a) $$
whenever $f$ is continuous at $a$. Notation involving the derivative is defined by a similar formula:
$$ \int_{-\infty}^{\infty} f(x) \delta'(x-a) \, \mathrm{d}x := -f'(a) $$
where $f$ is continuously differentiable at $a$. The idea behind the definition is that it is meant to invoke partial integration; imagine the hypothetical calculation
$$ \int_{-\infty}^{\infty} \left( f(x) \delta'(x-a) + f'(x) \delta(x-a) \right) \, \mathrm{d}x = \left( f(x) \delta(x-a) \right)\Big|_{x=-\infty}^{x=\infty} = 0. $$
There is a systematic approach to this sort of object: such objects are called distributions. On a suitable space of test functions, this partial integration formula is the definition of the derivative.
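A quick numerical sanity check of the second formula (our own illustration in Python/NumPy, not output from Wolfram Alpha): approximate $\delta'(x-a)$ by the derivative of a narrow Gaussian bump and watch the integral converge to $-f'(a)$ as the width shrinks.

```python
import numpy as np

# Approximate delta'(x - a) by the derivative of a narrow Gaussian and check
# that  integral f(x) * delta'(x - a) dx  ->  -f'(a)  as the width eps -> 0.
def delta_prime(x, a, eps):
    g = np.exp(-(x - a) ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
    return -(x - a) / eps ** 2 * g          # d/dx of the Gaussian bump

f = lambda x: np.sin(x) + x ** 2            # any smooth test function
fprime = lambda x: np.cos(x) + 2 * x

a = 0.7
x = np.linspace(a - 1, a + 1, 200001)
for eps in (1e-1, 1e-2, 1e-3):
    approx = np.trapz(f(x) * delta_prime(x, a, eps), x)
    print(eps, approx, -fprime(a))          # approx tends to -f'(a)
```

This distributional reading is exactly how computer algebra systems treat the object: in an inverse Laplace transform a term like 16 DiracDelta'[t] typically arises from a term proportional to $s$ (since the Laplace transform of $\delta'(t)$ is $s$), and SymPy writes the same object as DiracDelta(t, 1).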
Use of SAR and Optical Time Series for Tropical Forest Disturbance Mapping

Manuela Hirschmugl 1,*, Janik Deutscher, Carina Sobe, Alexandre Bouvet, Stéphane Mermoz 2,3 and Mathias Schardt

1 Joanneum Research Forschungsgesellschaft mbH, 8010 Graz, Austria
2 CESBIO (CNRS/UPS/IRD/CNES), 31401 Toulouse, France
3 GlobEO (Global Earth Observation), 31400 Toulouse, France

Remote Sens. 2020, 12(4), 727; https://doi.org/10.3390/rs12040727
Received: 7 January 2020 / Revised: 3 February 2020 / Accepted: 17 February 2020 / Published: 22 February 2020
(This article belongs to the Special Issue Forest Canopy Disturbance Detection Using Satellite Remote Sensing)

Frequent cloud cover and fast regrowth often hamper tropical forest disturbance monitoring with optical data. This study aims at overcoming these limitations by combining dense time series of optical (Sentinel-2 and Landsat 8) and SAR (Sentinel-1) data for forest disturbance mapping at test sites in Peru and Gabon. We compare the accuracies of the individual disturbance maps from optical and SAR time series with the accuracies of the combined map. We further evaluate the detection accuracies by disturbance patch size and by an area-based sampling approach. The results show that the individual optical and SAR based forest disturbance detections are highly complementary, and their combination improves all accuracy measures. The overall accuracies increase by about 3% in both areas; producer accuracies of the disturbed forest class increase by up to 25% in Peru when compared to only using one sensor type. The assessment by disturbance patch size shows that the amount of detections of very small disturbances (< 0.2 ha) can almost be doubled by using both data sets: for Gabon 30% as compared to 15.7–17.5%, for Peru 80% as compared to 48.6–65.7%.

Keywords: forest; disturbance; monitoring; Earth Observation; Sentinel; SAR; Gabon; Peru; FNF mapping; change detection

The mapping of forest disturbances is an important component in sustainable forest management and in implementing climate policy initiatives, such as the UN's Reducing Emissions from Deforestation and Forest Degradation (REDD+) programme. In this paper, we use the term forest disturbance as an umbrella term for all forest changes that result from both deforestation and from forest degradation. According to both FAO and UNFCCC, deforestation is a conversion from forest land to non-forest land. Since, in many cases, we do not know the future fate of the disturbance patches, we cannot further classify them. Instead, the analysis of this study puts a focus on the size of the detected disturbance patches. Today, remote sensing applications are widely used to monitor tropical forests and their changes at large spatial scales. Most tropical forest monitoring systems are based on optical data sets and focus on large deforestation areas, for which user accuracies around 90% and producer accuracies above 75% are reported [1,2,3].
In addition, recent developments have led to automated forest monitoring systems that are based on medium to high spatial resolution Earth Observation (EO) data and allow tracking forest changes in near real-time (NRT), such as Global Forest Watch Alerts for the humid tropical forests [4] and the DETER system in Brazil [5]. While methods for large area deforestation monitoring have improved considerably over the last years, there still is a lack of methods that accurately detect forest degradation and small forest changes. Forest degradation is thought to be a major source of carbon emissions [6], and thus needs to be better understood and integrated in REDD+ Monitoring & Measurement, Reporting and Verification (MRV) systems. A key driver of forest degradation is timber extraction by selective logging, where only a small subset of trees is harvested. The intensity of selective logging varies, depending on the amount of wood that is harvested, but usually the individual patches of change are small. Other less prominent drivers of forest degradation in developing countries include fuelwood collection and charcoal production, uncontrolled fire, and livestock grazing [7]. To accurately detect small patches of forest disturbance from selective logging and other degradation drivers, EO data of both high spatial and temporal resolution is required because gaps left by e.g., individual tree extraction are very small and quickly overgrow in tropical climates. EO data that have the potential to map forest degradation are now available: the Sentinel missions provide data at 10 m spatial resolution both in the optical and Synthetic Aperture Radar (SAR) domain and data are provided every five to 12 days. Apart from rapid forest regrowth, tropical forest disturbance mapping with optical data is also limited by the number of available cloud-free observations. C-band SAR data from the Sentinel-1 mission (central frequency of 5.404 GHz) can be acquired independent of weather conditions and daytime, leading to very dense time series of EO data. The two-satellite constellation of Sentinel-1 has a potential six-day exact repeat cycle, but the tropical regions are only covered by one satellite, leading to a 12-day repeat cycle. Some tropical areas are only covered by one descending or one ascending orbit. Information on revisit frequency and the available pass directions can be obtained from the Sentinel-1 observation scenario from ESA (https://sentinel.esa.int/web/sentinel/missions/sentinel-1/observation-scenario). By combining the vast amounts of new EO data from the Sentinel-1 (S-1), Sentinel-2 (S-2), and Landsat 8 (L8) missions, it is possible to strongly increase the temporal density of available EO data and additionally exploit multi-sensor information for forest monitoring. The Sentinel-2 constellation and Landsat 8 have repeat cycles of five days and 16 days, respectively. Together, these two missions provide up to eight images per month for tropical regions and strongly increase the temporal density of image time series. However, the number of usable cloud-free observations is site specific as it strongly depends on regional cloud cover. An assessment of available Landsat 7 and 8 images for Peru showed that, on average, less than 50% of all potential observations are cloud-free observations and cloud cover has significant intra-annual and regional variation [8]. 
The combination of all available cloud-free optical observations and SAR data allows to develop near real-time forest disturbance mapping systems at high spatial resolution, which can be used to more accurately detect small scale forest disturbances. The aim of this study is to analyze if the joint use of SAR (S-1) and optical (S-2 and L8) time series data in forest monitoring approaches can increase forest disturbance detection accuracies in the humid tropics. Forest disturbance maps from SAR and optical data are highly complementary, in that they detect different disturbed forest areas. Therefore, the assumption was that higher accuracy values can be obtained by merging detections from both sensor types. In this study, we demonstrate this at two humid tropical forest test sites located in Peru and in Gabon. Our approach includes the generation of a benchmark forest/non-forest mask, which serves as the starting point for the forest disturbance mapping. Separate forest disturbance maps are then calculated from the SAR and optical time series. The final forest disturbance maps combine the forest disturbance results from SAR and optical time series by a simple union process. The benchmark forest/non-forest masks and all the forest disturbance maps (SAR, optical, combined) are validated with a set of sample plots that were visually interpreted in VHR and HR imagery. Forest Disturbance Monitoring with Optical Data The opening of the U.S. Geological Survey (USGS) Landsat data archive in 2008 [9] and the launch of new satellite missions with an open data policy, such as ESA's Sentinel Missions [10], has formed the basis for time series analysis of optical satellite data at high spatial and high temporal resolution [9,11,12,13]. Although the Sentinel-2 Multi Spectral Instrument (MSI) and the Landsat 8 Operational Land Imager (OLI) bands are not fully compatible in terms of radiometry, recent investigations on radiometric consistency between the two sensors revealed a high correlation between corresponding bands. Dense optical image time series allow for developing new Land Use and Land Cover (LULC) mapping approaches and novel methods for detecting dynamic and gradual, as well as long-term, change processes. Satellite systems that operate at daily acquisition [14] rates, such as MODIS, can achieve consistent temporal coverage, even in tropical regions. But small disturbances are largely omitted due to the coarse spatial resolution of 250–500 m [15]. Many different algorithms have been proposed to detect forest changes while using time series of medium spatial resolution optical satellite imagery [16,17,18,19,20,21,22,23,24,25]. In the last decade, yearly deforestation mapping and the derivation of deforestation rates have become operational at global [1] and national level e.g., [2,16,17], but there is still only fragmented information available on the extent and magnitude of small forest disturbance and forest degradation [18]. As stated in the GOFC-GOLD REDD sourcebook [19], measuring forest degradation or forest regrowth and related forest carbon stock changes is more challenging than measuring deforestation alone, since degradation monitoring requires more frequent and better imagery and processing. Cloud cover strongly influences tropical forest disturbance detection accuracies in both space and time [4]. When there are temporal gaps in the time series, rapid vegetation recovery can obscure the signs of disturbance events [20]. 
Previous studies using multiple years of cloud-free Landsat data in less cloudy regions showed that the regrowth of trees rapidly masks the spectral signature, even of forest clear cuts [21,22]. Thus, a higher percentage of forest disturbances can only be detected with dense time series of optical data. Dense time series are essential for the continued development of near real-time monitoring systems, such as the Global Forest Watch Humid Tropical Forest Alerts [8], and can provide much needed alert information on location and exact timing of illegal logging activities [23]. In terms of methodology, we can divide the time series analysis methods that were used for forest change detection into four broad categories: (1) threshold based change detection; (2) curve fitting; (3) trajectory fitting; and (4) trajectory segmentation. Existing algorithms often use a combination of these categories. A more detailed description and comparison of these categories can be found in [24]. Thresholding procedures that separate forest from non-forest or intact from degraded forest in a time series are used in the Vegetation Change Tracker (VCT) and Global Forest Watch algorithm [8,25]. Curve fitting approaches for monitoring forest dynamics have been applied in several studies [26,27]. A large number of forest monitoring approaches today are based on trajectory fitting and trajectory segmentation. Most of the disturbance types show a distinct temporal behavior before and after a degradation event, resulting in a "characteristic spectro-temporal signature" that can be exploited to detect and classify forest changes [22]. Trajectory fitting and segmentation algorithms include LandTrendr, Breaks For Additive Season and Trend (BFAST) and the Continuous Change Detection and Classification (CCDC) algorithm [28,29,30,31]. The dense time series of satellite data also allow the detection of forest disturbances with harmonic regression models. The Exponentially Weighted Moving Average Change Detection (EWMACD) algorithm and its evolution, the dynamic algorithm Edyn [32,33], use the residuals from harmonic regression over many years of Landsat data in conjunction with statistical quality control charts to signal vegetation changes. As a conclusion from the literature review, we can state that algorithms that try to detect lesser-magnitude or small-scale disturbances show higher levels of commission error.

Forest Disturbance Monitoring with SAR data
SAR-based forest monitoring approaches are among the most promising remote sensing approaches for the NRT mapping of forest disturbances in the tropics, thanks to the ability of SAR to operate in all weather conditions at any time of day or night. However, SAR data based approaches for forest disturbance mapping have not been well developed yet and operational applications have not yet been implemented [8]. The release of the global JERS, PALSAR, and PALSAR-2 mosaics at 25 m resolution has fostered studies that are related to forest monitoring with SAR. ALOS PALSAR mosaics were used to produce the first SAR-based annual (2007–2010) global maps of forest and non-forest cover, from which some maps of forest losses and gain were generated based on thresholds [34]. Tropical forest change monitoring with SAR data is usually performed by measuring backscatter intensity changes over time. Forest disturbances and regrowth have been assessed at subcontinental scale over South-East Asia while using SAR intensity changes [35].
Indirect approaches relate the backscatter signal to forest biomass using regional empirical regression models [36,37] and then calculate biomass changes over time. Such an approach has been tested in a small area in Central Mozambique [38]: forest aboveground biomass (AGB) is estimated from SAR backscatter at two periods, and then changes in AGB are determined by subtracting the two estimates. This method is relevant in the context of REDD+ measurement, reporting, and verification (MRV), because both disturbance areas and biomass losses are estimated in one approach. However, the method suffers from error propagation, as errors in both maps may be summed. Applications that are based on ALOS PALSAR data are constrained by the small number of available observations: one observation per year in the case of the mosaics and one observation every 42 days at best with the original data. Most applications are therefore limited to bi-temporal analyses. Other SAR based forest disturbance mapping approaches use three-dimensional (3D) information from radargrammetry and InSAR to detect gaps in the forest canopy [39,40,41]. The complementarity of SAR sensors of different frequency for forest disturbance monitoring was demonstrated at a test site in the Republic of the Congo for Sentinel-1 C-band and TerraSAR-X data [24]. C-band SAR data is less suited for forest disturbance assessment and above-ground biomass estimation than L-band SAR due to its shorter wavelength, which limits the penetration into the canopy (as evidenced in [42]). C-band SAR has therefore been used to a much lesser degree than L-band data in past forest monitoring studies. However, the dense time series of the Sentinel-1 constellation offer a unique opportunity to systematically monitor forests at a repeat cycle of six to 12 days, depending on the data type and location. In addition, the continuity of Sentinel data is guaranteed up to 2030 with S-1C/D and S-2C/D, and the next generation of Sentinel satellites is already planned beyond 2030, allowing the development of long-term environmental monitoring systems. A number of recent studies have already used Sentinel-1 data for forest disturbance mapping. Most of the approaches measure changes in SAR backscatter intensity over time, either directly from image to image or by calculating the coefficient of variation of a data stack for a pre-defined time period. Empirical thresholds are then applied to derive the forest disturbances. Such approaches have been tested for forest disturbance mapping in the Republic of the Congo [14,43] and for mapping forest fire-affected areas in Indonesia [44]. A recent research study at a tropical forest site in Bolivia showed that the Sentinel-1 time series data can provide much more timely detections than Landsat and ALOS PALSAR-2 data [45]. Most of the studies for detecting disturbances from Sentinel-1 data assume that forest disturbances are necessarily characterized by a decrease in C-band backscatter within the disturbed area, which does not always seem to be the case [46]. Therefore, a new method that uses the geometric effects of SAR shadowing to detect forest change areas from Sentinel-1 SAR data has been proposed by [47]. Depending on viewing geometry, SAR shadowing occurs at forest edges and new forest edges thus result in new SAR shadows that can be easily detected in the time series. Ascending and descending orbit data are needed to detect new shadows on two sides of the forest disturbance patch.
The entire disturbed area is then reconstructed from the newly detected shadows using a convex envelope boundary operator. This new method is used for the SAR based forest disturbance detection in this study and Section 4 describes it in more detail.

Forest Disturbance Monitoring Combining SAR and Optical Data
Optical and SAR data have been combined in remote sensing applications for many years [48,49]. Combination methods on a data-level that preserve spectral as well as spatial characteristics of both sensors are difficult to design. The combination of SAR and optical data on a result-level avoids this problem by classifying each source individually. The results are then combined while using various methods, such as probabilistic theory, evidence theory, fuzzy theory, neural networks, or ensemble learning classifiers. This way, the characteristics of both sensors can successfully be preserved [49,50]. There are a number of recent studies that argue in support of a combined use of SAR and optical data for tropical forest monitoring [3,50,51,52,53]. Recent results indicate that a combined use can improve tropical forest monitoring for burnt area detection [14], for forest/non-forest mapping [52,54,55], for biomass assessment [56,57,58], and for deforestation and degradation monitoring [45,51,59]. A combination of ALOS PALSAR data and Landsat data was also used to enhance the discrimination of mature forest, secondary forest, and non-forest areas [60]. A recently presented workflow for near real-time deforestation detection integrates medium resolution optical data and SAR data in a Bayesian approach [45,50]. By integrating optical Landsat 7/8 data with ALOS PALSAR-2 L-band and Sentinel-1 C-band SAR data, deforestation areas in Bolivia were detected with a mean time lag of 31 days, and detections show a user accuracy of 88% and a producer accuracy of 89%. The time lag increased by a considerable six weeks when only using Landsat data and by one week when only using Sentinel-1 data [3]. However, the study only focuses on deforestation areas and does not address small forest disturbances that are difficult to detect from medium resolution data, such as Landsat, but require high resolution data instead. What is still missing in current research developments is an in-depth analysis of forest disturbance detection capabilities that can be achieved by combining data from the Sentinel-2 and Sentinel-1 missions. These two satellite missions currently have the highest temporal coverage of freely available data and provide imagery at high spatial resolution. One reason why these data sets have not been joined more extensively yet could be that many research studies using time series of data rely on processing services, such as Google Earth Engine (https://earthengine.google.com/), where Sentinel-2 surface reflectance data has only been included recently.

2. Material and Methods
2.1. Test Sites
The study is performed at two test sites in humid tropical forest regions: one test site is located in Peru, near the city of Yurimaguas, and the second test site is located in Gabon, near the city of Fougamou. The Peruvian test site (see Figure 1) covers approximately 6000 km² around the city of Yurimaguas (76°05′W, 5°45′S), which is located at the river Huallaga near the border between the Loreto and San Martin regions. It is the same test site that was previously used to demonstrate the SAR shadow method for forest disturbance detection [47].
Most of the study area lies within the tropical rainforest zone, which is characterized by a mean temperature of all months higher than 18 °C, an annual precipitation of at least 1500 mm, and at most three dry months during winter. In Yurimaguas, the mean long-term temperature is 26 °C and the mean annual rainfall is 2200 mm [61,62]. Only in June, July, August, and September the mean monthly rainfall is lower than 200 mm [61]. The Peruvian test site is intensively used and characterized by shifting cultivation, plantations, and mining activities. The patterns of forest degradation strongly differ in extent and temporal behavior. Areas with traditional subsistence farming based on shifting cultivation typically show forest disturbances smaller than 0.5 ha, which by FAO definition is forest degradation [63]. The forests in the area are partly primary and mostly secondary forests, i.e., previously disturbed forests. Forest/Non-forest maps that follow the FAO forest definition [63] and forest disturbance maps are derived for the entire test site. Due to limited availability of reference data, the validation was performed at a small subset of 88.33 km², which includes the major forest change drivers and is mainly characterized by areas of shifting cultivation and by forest disturbances that are smaller than 0.5 ha in size. The Gabonese study area (see Figure 2) is approximately 6480 km² large and it extends over parts of the provinces of Ngouine, Ogooue-Maritime and Moyen-Ogooue. It completely lies within the ecological zone of tropical rainforests. In Fougamou (10°35′E, 1°13′S), the largest city in the study area, the climatic conditions are very similar to Yurimaguas with a mean long-term temperature of 26 °C and a mean annual precipitation of 1995 mm with the dry season between June and September, where precipitation does not exceed 100 mm (https://de.climate-data.org/location/32332/#climate-graph). The presence of cloud cover in Gabon is nearly continuous throughout the year. The main drivers for deforestation at the test site are (i) mining activities in the north; (ii) large palm oil plantations in the south; and (iii) agricultural activities along roads and rivers. The main driver for forest degradation is industrial selective logging [64], which occurs in several concession areas and is characterized by a large number of very small forest disturbances (<0.2 ha) and related logging roads [65]. 2.2. Data Sets For Peru, the set of optical images consists of data from Landsat 8 (WRS: 008-064) and Sentinel-2 (granule: 18MUU) from 03/2015–05/2017. We used optical data from 03/2015 to 03/2016 to generate the initial forest/non-forest mask. Forest disturbance mapping is based on all optical and SAR data from 03/2016 to the end of 12/2016. Optical data from 2017 were only used to confirm the mapped disturbances in the confirmation loop that is described in the respective chapter on data processing. Sentinel-1 data is from both ascending and descending relative orbits (orbit 43—ascending; orbit 49—descending). The time interval between two consecutive acquisitions with the same orbit orientation is 24 days. Images in ascending orbit are acquired three days after images in descending orbit. The polarization mode is single VV until the end of 2016. In short, for Peru, we generate a forest/non-forest mask for status March 2016 and a disturbance map for the period March-December 2016. 
For Gabon, we used optical data from Landsat 8 (WRS: 185-061) and Sentinel-2 (granule: 32MPD) from 2013 to 2017 and Sentinel-1 SAR data (VV and VH polarizations) from 2015 to 2017. We calculated the initial benchmark forest/non-forest mask based on Landsat 8 data from the year 2013. Initially, we calculated all changes from 2013–2017 from optical data only. In line with the aim of this study to combine optical and SAR data, we only evaluate the changes between 15 December 2015 and 8 April 2017 in order to have temporally consistent stacks of data from both optical and SAR sensors. In short, for Gabon, we have a FNF map for status 2013 and a forest disturbance map for December 2015–April 2017.

2.3. Pre-Processing
Optical Data Pre-processing
We downloaded all Sentinel-2 data from the Copernicus Data Hub (https://scihub.copernicus.eu), as Level-1C products. Pre-processing of Sentinel-2 data is performed with the Sentinel-2 Reflectance Data Processing module implemented in the JOANNEUM RESEARCH in-house software IMPACT. First, all of the downloaded Level-1C S-2 images are atmospherically corrected to bottom-of-atmosphere (BoA) values using the integrated Sen2Cor processor version 2.5.5 [66]. The aerosol type is set to "rural", which better fits both test sites than the alternative options "maritime" or "auto". The atmosphere type and ozone content are set to "auto", leading to an automatic determination by the algorithm [66]. Subsequently, we resample the 20 m bands to 10 m spatial resolution and then stack all 10 m and 20 m bands to a 10-band output image. Clouds and cloud shadows must be masked out prior to classification to avoid later misclassifications. The Sen2Cor processor also generates a scene classification map based on threshold operations while using different single spectral bands, band ratios, and indices [66]. We use this scene classification to derive final cloud and cloud shadow masks. Areas classified as clouds with either "medium" or "high" probability and areas classified as cloud shadows are extracted from the scene classification and written to a binary mask. This mask is slightly altered by morphological operations (erode, expand), cloud holes are filled, and the masked areas are then removed from the pre-processed S-2 imagery. We also perform a topographic correction that is based on an implementation of the Minnaert correction in IMPACT software [67] using the Shuttle Radar Topography Mission (SRTM) model at 30 m spatial resolution as digital elevation model. Landsat 8 data are available as surface reflectance products (Level-2 data) through the USGS EarthExplorer (https://earthexplorer.usgs.gov). An accurate cloud mask is also available for each image. It is computed with the CFMask algorithm, which is the C version of the Function of Mask (FMask) algorithm that was developed by [68]. It performs an automated object-based cloud and cloud shadow detection in Landsat images [69,70]. Cloud and cloud shadows are removed from the imagery and the Landsat 8 data is resampled to 10 m cell size to be consistent with S-2 spatial resolution. When images from different sensors are used in a combined time series, geometric consistency is a requisite, since geometric errors almost certainly lead to misclassifications [12]. To register the L8 scenes to the S-2 scenes, we use a fully automated multi-modal image matching algorithm that is based on the concept of mutual information maximization [71].
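As an illustration of the cloud and cloud-shadow mask post-processing step described above, the following is a minimal sketch. The Sen2Cor scene classification (SCL) codes used are the standard ones (3 = cloud shadow, 8 = cloud medium probability, 9 = cloud high probability), but the kernel sizes and iteration counts are placeholders, not the values used in IMPACT.

```python
import numpy as np
from scipy import ndimage as ndi

# Build a cloud/shadow mask from a Sen2Cor scene classification (SCL) array
# and apply it to a reflectance stack. Morphological parameters are illustrative.
def cloud_mask_from_scl(scl, erode_iter=1, dilate_iter=3):
    mask = np.isin(scl, (3, 8, 9))                             # shadows + med/high prob clouds
    mask = ndi.binary_erosion(mask, iterations=erode_iter)     # drop single-pixel noise
    mask = ndi.binary_dilation(mask, iterations=dilate_iter)   # buffer cloud edges
    mask = ndi.binary_fill_holes(mask)                         # close holes inside clouds
    return mask

# Toy example: a 100 x 100 SCL array with one cloud patch and a 10-band image.
scl = np.full((100, 100), 4, dtype=np.uint8)                   # 4 = vegetation
scl[40:60, 40:60] = 9                                          # cloud, high probability
scl[50, 50] = 4                                                # hole inside the cloud
reflectance = np.random.rand(10, 100, 100).astype(np.float32)

mask = cloud_mask_from_scl(scl)
reflectance[:, mask] = np.nan                                  # remove masked pixels
print(f"masked {mask.sum()} of {mask.size} pixels")
```

In an operational setting the SCL raster and the band stack would of course be read from the pre-processed Sentinel-2 granule rather than generated synthetically.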
SAR Data Pre-processing We processed S-1 SAR data using the "s1tiling processing chain" [72], based on the free Orfeo Toolbox (website: www.orfeo-toolbox.org). Pre-processing comprised the following steps: calibration, orthorectification, multi-image filtering [73,74,75], and splitting into the S-2 tiling grid that is referenced to the U.S. Military Grid Reference System (MGRS). The SAR data sets are calibrated to gamma nought backscatter and orthorectified to 10 m resolution in UTM projection that was based on the SRTM model at 30 m resolution. A multi-image filter [74,75] is applied to decrease the speckle effect and to enhance the equivalent number of looks (ENL). A 3 × 3 spatial window was chosen, and each image was filtered only with the images acquired before its own acquisition date, in order to simulate NRT conditions. Further details on the pre-processing of the SAR data can be found in chapter 2.2.1 of [47]. 2.4. Main Processing The overall workflow is depicted in Figure 3 and explained in detail in the respective chapters below. 2.4.1. Generating an Initial Benchmark Forest Mask The first step of the forest disturbance monitoring workflow is the generation of an initial benchmark forest/non-forest (FNF) mask (see Figure 3), which accurately represents the forest area right at the beginning of the change detection window. The forest masks were derived with slightly different approaches for Gabon and Peru, due to differences in optical data availability. For the Peru test site, the FNF map is based on all optical data from 03/2015-03/2016, in total 13 images from both Landsat 8 and Sentinel-2 sensors. We started with deriving training data for the FNF classification from a low cloud covered Sentinel-2 scene from 10 March 2016 and from RapidEye and VHR data sets from 2015 and early 2016 (for details see Section 2.4.3). Subsequently, we used the Sentinel-2 scene from 10 March 2016 and the training data (forest and non-forest areas) to train a classifier and derive a classification model with the Orfeo Toolbox (www.orfeo-toolbox.org). We then tested different classification approaches (Maximum Likelihood, Random Forest, Normalized Difference Vegetation Index (NDVI), and Normalized Difference Infrared Index (NDII7) thresholding) and different input data options, and found the Random Forest based machine learning classifier to deliver the best overall accuracies for the benchmark FNF map [76]. We then applied the Random Forest classification model derived from the 10 March 2016 Sentinel-2 scene to all other pre-processed optical images from 03/2015-03/2016, resulting in 13 individual FNF maps with class categories "forest", "non-forest", and "no data" (e.g., clouds). For the final FNF map, we apply a weighted majority approach, where FNF masks that lie near the reference date (10 March 2016) in the temporal domain are attributed the highest weights. This weighting approach increases the overall accuracy of the FNF map by 3% [76]. For Gabon, we calculated the FNF mask for the benchmark year 2013. This reference year was a specification from the related project. In 2013, Sentinel-2 data was not yet available. Therefore, the FNF mask in Gabon is solely based on Landsat 8 data. For the FNF mask classification, we chose the Landsat 8 scene with lowest cloud cover in 2013 (02.08.2013). 
We collected the required training data (forest and non-forest areas) from VHR image chips that were distributed over the test site and then used the Random Forest classifier of OrfeoToolbox to train and classify the Landsat 8 scene while using all Landsat 8 spectral bands. The remaining cloud holes were filled with the classification results of additional scenes with low cloud cover (01.07.2013; 17.07.2013) to guarantee a complete coverage of the final FNF mask. 2.4.2. Disturbance Detection Optical data Both of the study areas are located in the humid tropics, where the seasonality of the spectral signal of forests is much less pronounced than for dry tropical forests or temperate forests. Therefore, the applied change detection method does not need to account for seasonality as in the dry tropics [77], and we can apply straightforward thresholding approaches. Single spectral bands usually show higher variance in time series than indices, which is why indices are commonly used as input data to times series analyses. Our initial tests revealed that NDVI and NDII7 (Normalized Difference Infrared Index with Landsat band 7: (band4-band7) / (band4 + band7)) show the best overall accuracies for forest disturbance detection. However, the Landsat 8 surface reflectance data in Peru and Gabon showed artifacts in the optical bands that, according to the USGS, were caused by the Global Climate Modeling (GCM) grid's aerosol values not being correctly interpolated to the Landsat grid. We use the NDII7 index as input to disturbance mapping with Landsat 8 instead of NDVI to bypass this radiometric issue, as NDII7 does not include the optical bands. The NDVI (for Sentinel-2) and NDII7 (for Landsat 8) are individually calculated for all pre-processed and radiometrically adjusted images for the time window 10.03.2016 to 31.12.2016 at the Peruvian test site and for the time window 15.12.2015 to 08.04.2017 at the Gabonese test site. An additional eight weeks are also processed for the optical data disturbance confirmation. All of the pixels with NDVI and NDII7 values exceeding the thresholds indicate a forest disturbance and are therefore named "disturbance indicator" (see Figure 3). The forest disturbance detection workflow is described in [52] and follows recommendations from other operational approaches [8]. It is based on the temporal behavior of index values and a thresholding approach. From reference data plots, we derive a threshold value for the NDVI and the NDII7, which separates the forested and non-forested areas. When an index value falls below the defined threshold, it is considered to be a preliminary potential forest disturbance detection. At this point, it is still unknown if this detection represents a real forest disturbance, or just remnants of an undetected cloud, a radiometric artifact or simply a false classification in the benchmark FNF map. Therefore, single outlier detection is only considered as a "possible disturbance". We then developed a confirmation tool to analyze these preliminary disturbance detections in time. Subsequent observations are used to either confirm the disturbance detection if their spectral values are also below the defined threshold or to reject the detection if spectral values of subsequent images are above the valid threshold. If three consecutive outliers are detected, the forest disturbance is confirmed and added to the final disturbed forest output file (referred to as "Optical Only" in subsequent text). 
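The confirmation rule just described can be summarized in a few lines of Python. This is a sketch with an illustrative index series and threshold, not the calibrated threshold derived from the reference plots; how exactly clouded observations are treated in the operational tool is not specified in the text, so here they simply neither confirm nor reject a detection.

```python
import numpy as np

# Per-pixel confirmation of a "possible disturbance": an index value below the
# threshold starts a detection, which is confirmed once three consecutive valid
# (cloud-free) observations stay below the threshold, and rejected otherwise.
def confirm_disturbance(index_series, threshold, n_confirm=3):
    """index_series: NDVI/NDII7 values in time order, np.nan = cloud/no data.
    Returns the position of the first observation of a confirmed disturbance,
    or None if no detection is ever confirmed."""
    run_start, run_len = None, 0
    for i, value in enumerate(index_series):
        if np.isnan(value):                 # clouded: neither confirms nor rejects
            continue
        if value < threshold:
            if run_len == 0:
                run_start = i               # first outlier = possible disturbance
            run_len += 1
            if run_len >= n_confirm:
                return run_start            # confirmed disturbance
        else:
            run_start, run_len = None, 0    # rejected: index recovered

    return None

ndvi = np.array([0.85, 0.82, np.nan, 0.45, 0.40, np.nan, 0.38, 0.80])
print(confirm_disturbance(ndvi, threshold=0.6))   # -> 3 (confirmed by obs. 3, 4 and 6)
```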
The final optical data based forest disturbance file is a 10 m raster product that sums all of the confirmed changes within the forest area for the observation period. A minimum mapping unit of 0.04 ha is applied.

SAR data
The near real-time forest disturbance detection method based on SAR data has been described in detail in [47] and was successfully tested in Peru. Classical methods for SAR based forest change detection are based on the hypothesis that the radar backscatter decreases when forest disturbances occur. However, backscatter does not necessarily decrease, because soil moisture and/or trees that remain on the ground can also lead to an increase of the radar backscatter [46]. To overcome this problem, the new method [47] is based on the detection of radar shadowing. Shadowing occurs in radar images because of the particular side-looking viewing geometry of radar systems. A shadow in a radar image is an area that cannot be reached by any radar pulse. Given the incidence angle and the tree height, shadows created by trees at the border between forest and non-forest areas can be observed in high-resolution radar images, depending on the viewing direction (Figure 4). A sudden drop in backscatter in the radar time series characterizes new radar shadows. Thanks to the purely geometrical nature of the shadowing effects, this decrease of backscatter is expected to be persistent over time. New shadows should consequently remain visible for a long time and are easily detectable when dense time series of radar data, such as Sentinel-1 time series, are available. Ascending and descending orbit data are needed to detect new shadows on two sides of a forest disturbance patch. The entire disturbed area is then reconstructed from the newly detected shadows while using a convex envelope boundary operator. For every SAR image of the time series, new SAR shadows are identified and combined with previous/subsequent detections in the complementary orbit direction. If a disturbed area can be reconstructed, it is written to a disturbance detection file for this specific date and the area is removed from the forest area in the FNF mask. The single date detections are then summed over the time window of interest to a final SAR disturbance map ("S1 Only").

Combination of Optical and SAR
SAR and optical data are combined by merging the final forest disturbance maps of both sensor types. For the chosen time window of interest, we processed and confirmed the forest disturbance detections for each sensor type separately (see workflow illustration Figure 3), and we then combined the sensor type specific forest disturbance map results with a simple "Union" process in a Geographic Information System. The union process joins all detections of the two forest disturbance maps, thus increasing the overall detected disturbed forest area.

2.4.3. Evaluation Concept for FNF Mapping
The evaluation of the FNF maps is based on systematic random sampling points [78] that were visually interpreted in VHR image chips (see Figure 5, example of Peru). The VHR image chips were ordered based on a pre-defined systematic sampling grid. Within each grid cell a random sampling was applied. We additionally derived inclusion probabilities and area estimates from the final forest/non-forest classifications since the VHR image chips do not cover the entire mapped area. Table A1 and Table A2 provide the confusion matrices for Peru including all accuracy measures (overall, user, and producer accuracy; confidence levels). 2.4.4.
2.4.4. Evaluation Concept for Disturbance Mapping

We use two different approaches to validate the disturbance maps: a plot-based approach based on digitized disturbance patches and an area-based approach based on stratified random sample points. Both validation approaches are meaningful for different assessments. The plot-based approach is suitable, for example, for an alert system, where it is more important to pinpoint the location where something is happening than to delineate its exact outline. The area-based approach is needed, e.g., in the frame of REDD reporting, where the disturbed forest area is needed as activity data.

For the plot-based approach, we calculate the detected area percentage for each reference plot. A plot is considered to be correctly detected if a user-defined percentage of its area is detected. For Peru and Gabon, we use a 10% threshold: if more than 10% of the plot area is detected by the disturbance mask, the plot is considered correctly identified. For Peru, the 148 reference disturbance plots cover an area of roughly 108 ha. In terms of plot size, only 34 of the 148 plots are larger than 1 ha and 35 are smaller than 0.2 ha. For Gabon, the 362 reference disturbance plots cover an area of 6420 ha. In terms of plot size, 51 plots are larger than 1 ha and 166 are smaller than 0.2 ha. These patches are an arbitrary subset based on available VHR data and they do not necessarily represent the whole test site.
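The plot-based criterion described above (a plot counts as detected when more than 10% of its area is covered by the disturbance mask) can be written compactly. The sketch below assumes rasterized reference plots on the same 10 m grid as the disturbance mask and is only an illustration, not the validation scripts used in the study.

```python
import numpy as np

def plot_detection_rate(plot_ids, disturbance_mask, min_fraction=0.10):
    """plot_ids: integer raster with one ID per reference plot (0 = background);
    disturbance_mask: boolean raster of mapped disturbances on the same grid."""
    ids = np.unique(plot_ids)
    ids = ids[ids != 0]
    detected = 0
    for pid in ids:
        in_plot = plot_ids == pid
        detected_fraction = disturbance_mask[in_plot].mean()  # share of plot area mapped
        if detected_fraction > min_fraction:
            detected += 1
    return detected / len(ids)
```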
The area-based approach allows for the computation of user, producer, and overall accuracy based on the representative areas of reference plots in each stratum (non-forest, undisturbed forest, and disturbed forest). We performed a visual interpretation of HR and VHR (RapidEye, WorldView-2, Spot-6) data in order to obtain reliable ground truth data for this assessment. For Peru, we used RapidEye data from 14.08.2016, ArcGIS Basemap (WorldView-2) data from 20.06.2015, Google Earth imagery from 2015 and 2018, and all Sentinel-2 data from 10.01.2016 to 25.12.2016. The area-based validation was carried out at a sub-region covering 88.33 km² for which RapidEye data was available. For Gabon, we visually interpreted a combination of RapidEye, WorldView and Spot-6 data and the full Sentinel-2 time series for the time window 12/2015–4/2017. The validation was carried out at the full extent. The overall number of plots and the plots per stratum are based on recommendations for land cover accuracy estimation [79,80]. The disturbance result from the "Union" combination was used to estimate the map areas per stratum. For Peru, we sampled a total of 575 reference points within a subset of the map (see Figure 6), with 75 sample points for the stratum "D" (disturbed forest), 114 sample points for stratum "NF" (non-forest) and 384 sample points for stratum "F" (undisturbed forest). The respective area weights are provided in Table A3. For Gabon, we sampled a total of 855 reference points within 14 subset areas of the map for which VHR imagery was also available. The 855 total plots are composed of 112 sample points for stratum "Dlarge" (disturbed forest areas ≥ 25 ha), 81 sample points for stratum "Dsmall" (disturbed forest areas < 25 ha), 121 sample points for stratum "NF" (non-forest) and 541 sample points for stratum "F" (undisturbed forest). Table A9 provides the respective area weights.

For the area-based accuracy assessment, we calculate the confusion matrices for the three (Peru) and four (Gabon) strata and for all sensor-specific disturbance maps, i.e., "S1 Only", "Optical Only", and "Union". The confusion matrices are provided in Table A3, Table A4 and Table A5 for Peru and Table A9, Table A10 and Table A11 for Gabon. We estimate the products' user and producer accuracies from the error matrix using the estimated area proportions and Equations (6)–(8) from [79].
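A minimal sketch of these area-weighted estimators is given below. It follows the stratified estimators of [79] (user's, producer's and overall accuracy plus area-adjusted class proportions); the example counts and weights correspond to the "S1 Only" Peru validation in Table A3, but the code itself is only an illustration, not the scripts used in the study.

```python
import numpy as np

def stratified_accuracy(counts, weights):
    """counts: square matrix of sample counts (rows = map class, cols = reference class);
    weights: mapped area proportion of each map class (summing to 1)."""
    counts = np.asarray(counts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    p = weights[:, None] * counts / counts.sum(axis=1, keepdims=True)  # estimated cell proportions
    users = np.diag(p) / p.sum(axis=1)       # user's accuracy per map class
    producers = np.diag(p) / p.sum(axis=0)   # producer's accuracy per reference class
    overall = np.trace(p)
    adjusted_area = p.sum(axis=0)            # area-adjusted proportion per class
    return users, producers, overall, adjusted_area

# "S1 Only" Peru validation (Table A3): map classes F, D, NF
counts = [[330, 15, 57], [2, 38, 19], [18, 0, 96]]
weights = [0.758, 0.038, 0.205]
u, p, oa, area = stratified_accuracy(counts, weights)   # oa is approximately 0.82, as in Table A3
```

Multiplying the area-adjusted class proportions by the total mapped area yields the adjusted area estimates per class.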
The benchmark forest/non-forest mask is essential a priori information, and errors in forest/non-forest masking strongly influence forest disturbance detection accuracies. We therefore derived a second set of confusion matrices, where all non-forest areas and misclassified sample points related to FNF mask errors are removed from the reference data set and where the area weights of the strata are then recalculated accordingly, in order to evaluate the methods' accuracies independently from forest mask errors. The results are provided in Table A6, Table A7 and Table A8 for Peru and Table A12, Table A13 and Table A14 for Gabon.

3.1. Results of Benchmark Forest/Non-Forest Masking

According to the FNF maps, at the Peru site 81.0% of the area is forest and 19.0% is non-forest; at the Gabon site, 94.7% is forest and 5.3% is non-forest. These values were used for the sampling design and to calculate the respective plot weights. In total, we interpreted 5010 samples for Peru and 6000 samples for Gabon. Table 1 summarizes the main validation results. The user and producer accuracies for forest are very high, while the values for NF are considerably lower. Nevertheless, the overall accuracies are high due to the large share of forest (0.93 for Peru and 0.99 for Gabon). Table A1 and Table A2 provide the individual confusion matrices including all accuracy measures.

3.2. Results of Forest Disturbance Mapping in Peru

The three disturbance maps from the "S1 Only" approach, the "Optical Only" approach, and the "Union" approach differ considerably in terms of overall detected disturbed forest area. This is shown in Figure 7 and Table 2. The "Union" approach detects almost twice the area of the "Optical Only" approach. The overlapping area of the "S1 Only" and "Optical Only" disturbance maps (yellow areas in Figure 7) is only 70.88 ha, which explains why the "Union" disturbance area is significantly larger than in the two single-sensor disturbance maps. We use two different approaches to estimate the final disturbance map accuracies. The first accuracy assessment analyzes detection accuracy by disturbance patch size. Disturbance patch size is a critical issue in forest disturbance mapping but it is seldom addressed in publications on forest monitoring. Many studies only map disturbances larger than 0.5 ha and thus only focus on deforestation and do not include small and patchy forest degradation. In our analysis, a disturbance patch is considered detected when > 10% of the disturbed patch area is mapped. The results for Peru are shown in Table 3; those for Gabon are shown in Table 4. The second accuracy assessment is based on the stratified random sampling approach described in detail in Section 2.4.4. Table 5 and Table 6 provide summaries of the accuracy estimates. They include two analyses per test site. The first analysis also includes the non-forest areas and thus reflects the accuracy of the map. The second analysis is only carried out for plots that are forest at the beginning of the disturbance detection window. Non-forest areas and all misclassifications of the FNF map are removed. This reflects the accuracy of the change detection method. Broadly speaking, the first analysis is a map validation, while the second analysis is a method validation. The validations are based on 575 points (including NF) and 385 points (only forest) for Peru, and 855 points (including NF) and 691 points (only forest) for Gabon. For Peru, we differentiated three classes, where F = undisturbed forest, D = disturbed forest, and NF = non-forest at the beginning of the disturbance detection window. For Gabon, the disturbed class was split into two classes based on the area of the disturbance plots: Dlarge = disturbed forest area ≥ 25 ha; Dsmall = disturbed forest area < 25 ha. Large-area disturbances are mostly related to new palm oil plantations. At the Peru subset, there are no disturbances larger than 25 ha; therefore, no subdivision was made. The mapped area of the strata and their respective weights are derived from the "Union" disturbance map. The detailed confusion matrices, including plot numbers and applied area weights, can be found in Table A3, Table A4, Table A5, Table A6, Table A7 and Table A8 for Peru and Table A9, Table A10 and Table A11 for Gabon. We can calculate adjusted area estimates for each class based on the omission and commission errors and the respective area weights [79]. The above accuracy assessments clearly show that misclassifications in the forest/non-forest map contribute a major component to the overall error of the disturbance maps. When the FNF mask errors are removed, the overall accuracies increase significantly.

Forest/Non-Forest Mask: The benchmark FNF maps show good overall accuracies for both the Peru (93.0%) and the Gabon test site (98.8%), but the omission and commission errors for the non-forest class are larger than 10% at both test sites. The overall accuracies for Gabon are similar to those reported by other studies: 98.1% for the national forest map of year 2000 and 95.9% for the Global Forest Watch dataset of year 2000, evaluated at a 1 ha minimum mapping unit and > 30% tree cover [81]. The accuracies are generally lower at the Peru site, with an omission error of 21.5% for the non-forest class. The lower accuracies in Peru seem to be related to a lower total percentage of forest area and a higher complexity of land use at the test site. Non-forest areas at the Peru test site make up 20% of the total area, which is a much higher percentage than at the Gabon test site with 5.6%. Large parts of the forested area are composed of secondary forests with rapid regrowth rates, and the overall percentage of forest changes is higher than in Gabon. The primary causes of forest loss in the Loreto and San Martin regions of Peru, where the test site is located, are clearing for agriculture and pastures and large-scale industrial oil-palm plantations [2]. Rapid regrowth of shrubs, bushes, and low trees is very common soon after agricultural clearing. Even from visual interpretation of VHR imagery, it is often difficult to decide whether an area is already forest regrowth or still non-forest. Non-forest areas are mostly characterized by different forms of agricultural use, but also by shrublands and grasslands. The situation is different in Gabon, where 88.5% of the entire country is covered by forests and large parts of the forest are still primary forests [81].
Large parts of the test site in Gabon are characterized by homogeneous forests. Non-forest areas are not distributed evenly at the Gabon test site, but are concentrated around villages and oil palm plantations in the south. These site characteristics largely explain the higher FNF map accuracies at the Gabon site. The different methodologies used to derive the FNF maps could also explain some of the observed errors. The Peru FNF map is based on 13 images spanning 12 months. Some changes that occurred towards the end of the time window might not have been accounted for with the applied majority approach, even if a temporal weighting is applied.

Forest Disturbance Detection: When interpreting the forest disturbance results, it has to be considered that forest disturbances are quite different in character at the two test sites. Forest disturbances at the Peru test site are characterized by the logging of mostly secondary forests for subsequent agricultural use, which is in line with other findings [2]. There is almost no selective logging of individual trees, but the size of forest disturbance areas is mostly small (60% are smaller than 0.5 ha). At the Gabon site, we find three different disturbance types: large-area deforestation for oil palm plantations in the south, small-scale logging for subsequent agricultural/urban use near villages, and a large amount of industrial selective logging and logging roads in the forest concession areas. Gabon contributes 4.7 ± 0.9% of total forest loss in the Congo Basin [64]. Selective logging is by far the most prominent forest disturbance type in Gabon, which differs from the other Congo Basin countries, where small-scale clearing for agriculture is the most important driver of forest loss [64]. In the final disturbance map of the combined approach, 85% of the detected forest disturbance areas are smaller than 0.2 ha and 94% are smaller than 0.5 ha. Disturbance patch size is a critical issue in forest disturbance mapping, but it is seldom addressed in publications on forest monitoring. Many studies only map disturbances that are larger than 0.09 ha [8] to 1 ha [81] and, thus, only focus on deforestation and do not include small and patchy forest degradation. In terms of area changes, large-area deforestation for oil-palm plantations contributes approximately 55% of the total forest change area. When comparing the accuracies based on the stratified sampling of the three approaches in Peru ("S1 Only", "Optical Only", and "Union"), we find the highest overall accuracy for the "Union" approach (0.84), but the differences in overall accuracy are small ("S1 Only": 0.82, "Optical Only": 0.79). When comparing SAR and optical results for the forest disturbance class, the "Optical Only" approach shows higher user accuracy and lower producer accuracy than the "S1 Only" approach. However, the producer accuracy is lower than 0.5 for both approaches, showing high omission errors for both data types. For the "Union" approach, the producer accuracy increases to 0.71, suggesting that SAR and optical map results are highly complementary. This is also confirmed by the area statistics. Of the 331.79 ha of disturbance area in the "Union" disturbance map, only 42.70 ha are detected by both the "S1 Only" and the "Optical Only" approach. This is only about one-fifth of the recalculated adjusted disturbed forest area (225.61 ha). The remaining area is only detected by one of the two approaches.
The area statistics also show that the NDVI and NDII7 thresholds used for the optical approach in Peru are slightly too conservative. Only 153.90 ha are detected as disturbed, which is 30% less than the calculated adjusted area. The producer accuracy of the NF class in Peru is quite low (0.6), suggesting that parts of the forest area in the benchmark FNF mask are indeed not tree-covered at the beginning of the change detection window. The accuracies at this subset are lower than the overall accuracy of the FNF mask for the entire Peru test site. This can be explained by the higher complexity of land use at the subset. It is also partly a problem of the different minimum mapping units used for generating and validating the FNF mask and the disturbance detection. Misclassifications in the benchmark FNF map can make up a large part of the overall error in disturbance detection. This error can be more important than the errors related to the change detection methodology, since the overall area percentage of forest disturbances in one year is only very small. This highlights the importance of using accurate and up-to-date benchmark forest masks in operational forest disturbance monitoring. When we remove the NF class in the second validation approach, the overall and the user accuracies improve considerably. For the "S1 Only" approach, the user accuracy of the disturbed forest increases from 0.64 to 0.95, as many of the detected changes turned out to have been non-forest already at the start of the change detection period. These areas are vegetated and often used for agriculture. In the optical images, their spectral signals are quite similar to those of forest; they are thus misclassified as forest, and no change is detected. However, the S1 approach detects a SAR shadow that is related to an existing forest border. In Peru, the S1 disturbance detection starts with the same date as the optical approach. A shadow is detected in the first S1 image of the time series and, thus, a change is written to the change map, even though the shadow was already present before. In Gabon, of the 15,723 ha detected as disturbed forest in the "Union" approach, only 5241 ha are detected by both of the approaches. However, in Gabon the "Optical Only" approach detects a much larger disturbed area than the "S1 Only" approach (13,763 ha vs. 7201 ha). This effect is mostly caused by the large deforestation area for a palm oil plantation. The applied S1 disturbance detection approach is not suitable for such large disturbance areas, as the detected shadows on both sides are too far apart to reconstruct the disturbance area with a convex envelope boundary operator. Very small or very narrow patches are also difficult to detect with the S1 approach used in this study: if the gap is too small, the distinct shadow is lost and detection fails. Thin, elongated disturbance patches stretching in the east–west direction are likewise difficult to detect, as the sensor is side-looking from east or west due to its near-polar orbit. It can also be expected that, due to their smaller pixel size, the S-2 results are better than results based only on Landsat data. Shortcomings of all optical approaches are problems with regrowth and the selection of appropriate thresholds. Regrowth and cloud cover are the main limitations of optical disturbance monitoring in the tropics, as discussed in the Introduction.
A recent study on forest loss monitoring from 2000–2014 in the Congo Basin with Landsat data showed that in Gabon, on average, only 1.1 cloud-free observations were available per year [64]. Regrowth cannot properly be detected with such a sparse time series. Several forest change detection maps and accuracy analyses for both Peru and Gabon have been produced by other research teams in recent years. Most of them are based on Landsat data, and validations are mostly carried out at Landsat pixel level, i.e., 30 × 30 m [8,64]. In Peru, a Landsat-based national analysis of forest cover loss from 2000 to 2010 revealed that the majority of gross forest cover loss (92.2%) was attributed to clearing for agriculture and tree plantations. The rest was due to natural disturbance, mostly represented by flooding and river meandering (6.0%), and fires (1.5%). The classification was based on training data derived primarily from Landsat composites, and forest cover loss was mapped at 30 m Landsat pixel scale. Validation was performed using a point-based sampling design at Landsat pixel level (30 × 30 m) [2]. The pixels were visually analyzed for forest cover loss using Landsat (2000) and RapidEye (2011) data at 5 m spatial resolution. The reported producer accuracy for forest loss is 75.4%. This is significantly higher than our "Optical Only" producer accuracy at the Peru test site of 38.2% and similar to the combined map producer accuracy of 71.0%. We assume that the observed differences are primarily related to sampling unit size (30 × 30 m in [2] vs. 10 × 10 m in our study) and to the complexity of land use at our test site subset. The reference plot area we use for the validation is only 11.11% of that used in [2], and our reference data set includes very small disturbances (0.01 ha) that are neglected in the other study. Peru was also one of the first countries used to demonstrate and validate the near real-time GFW HTFA (Global Forest Watch Humid Tropical Forest Alerts) methodology [8]. Detected forest loss was grouped by patch size: loss detections consisting of a single Landsat pixel (approximately 0.1 ha) totaled only 4% of total detected loss, 33% of detected loss patches were less than one hectare, and 63% were larger than or equal to one hectare in size. The reported user accuracies at a Landsat-pixel level of 30 × 30 m are between 86.5%, when including boundary pixels, and 96.2% with boundary pixels removed [8]. Producer accuracies are not provided in [8], but most of the omitted samples were from tree cover loss patches < 10 ha (i.e., less than about 100 Landsat pixels). For a better comparison between our results and the GFW dataset, we also derived mapping accuracies for the GFW dataset for the exact same validation plots and time frame at the Peru test site. The dataset is based on GFW version 1.6 (https://earthenginepartners.appspot.com/science-2013-global-forest/download_v1.6.html) and includes all changes of 2016 ("lossyear" = 2016) clipped with our FNF mask of March 2016. Thus, the time frame for disturbance detections is identical to the observation period we used in this study, and the results can be directly compared. For geometric congruency, we also performed a geometric adjustment to the S-2 geometry with the same parameters as used for the adjustment of the original Landsat 8 data. Table 7 shows the GFW validation results.
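Deriving the GFW comparison layer described above is essentially a per-pixel selection: keep the pixels whose loss year is 2016 and that lie inside the benchmark forest mask of March 2016. A minimal sketch follows; the array and file names are hypothetical, and both layers are assumed to be co-registered on the common 10 m grid.

```python
import numpy as np

# Hypothetical inputs already resampled/adjusted to the Sentinel-2 grid
lossyear = np.load("gfw_lossyear_peru_subset.npy")   # GFW "lossyear" band; loss in 2016 is encoded as 16
forest_2016_03 = np.load("fnf_mask_2016_03.npy")     # True = forest in the benchmark mask of 03/2016

gfw_changes_2016 = (lossyear == 16) & forest_2016_03  # GFW disturbances of 2016 within the forest mask
area_ha = gfw_changes_2016.sum() * 0.01               # 10 m pixels converted to hectares
```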
The user accuracies of disturbances are quite comparable between our approaches ("S1 Only": 0.95, "Optical Only": 0.963, "Union": 0.941 from Table 5) and GFW (0.969 from Table 7). The "Optical Only" result in particular is very similar, which makes sense, as GFW is also based on optical data. The same is true for the overall accuracy values. The producer accuracy of disturbances for the "Optical Only" result is in a similarly low range as for the GFW result, which is probably due to the very conservative thresholds selected. When comparing the GFW result to the "Union" approach, however, the producer accuracy of "Union" is much higher (0.71, Table 5) than that of the GFW data set (0.377, Table 7). This clearly shows the added value of including the additional S-1 data. Figure 8 provides a visual comparison of the "Union" mapping result and the GFW mapping result. For Gabon, a recent study compares the forest change estimates from 2000 to 2010 between national forest maps and the Global Forest Change (GFC) map from the University of Maryland [81]. The results indicate that net deforestation is not significantly different from 0, which suggests that the loss of forest is counterbalanced by regeneration. The MMU of 1 ha used by GFC in the change analysis [81] strongly differs from that of our study, where we detect and validate forest disturbances at an MMU of 0.04 ha. At this spatial level, we also detect small forest disturbances that are typical of industrial selective logging. Selective logging is the most prominent forest disturbance type in Gabon, accounting for more than 60% of forest disturbances [64]. At a 1 ha MMU, such small forest disturbance features cannot be detected and are therefore not included in the analysis. Despite, or even because of, the aforementioned shortcomings, S-1 and optical data are highly complementary. Combining the two data sets significantly increases the accuracy, especially the producer accuracy of disturbances, and, as other studies have demonstrated, it also allows for the more rapid detection needed for near real-time alerting [3]. The results show that some forest disturbance patches are indeed detected by both sensor types and related mapping approaches. However, different types of disturbances are often captured by only one sensor, which confirms the benefit of sensor combination. The contributions of optical and SAR data to the forest disturbance mapping success are variable, and they depend on the land use and forest characteristics of a specific site. The dependencies are primarily based on the type and typical height of forest (primary, secondary); weather and atmospheric conditions; topography; dominant land use types outside of the forest; change drivers (selective logging, plantations, agriculture, shifting cultivation, fire) and the related size of disturbance plots; data availability (ascending, descending for SAR); density of the time series; regrowth speed; and orientation of the satellite (especially S1) with respect to the disturbance site. Our study confirms that combining optical and SAR data considerably improves the accuracies of forest disturbance maps. The overall accuracies increase by about 3% at both test sites in Peru and Gabon. Producer accuracies of the disturbance class increase by 13% in Gabon and by 25% in Peru when compared to using only one sensor type.
The assessment by disturbance patch size shows that about 30% of very small disturbances (< 0.2 ha) in Gabon can be detected by using both data sets, compared to 17.5%/15.7% when using a single sensor. The detection of forest disturbances of this small size from satellite imagery has not been studied yet, as most studies base their validations on Landsat pixel-size validation plots. A combination of S-1 and optical data at data level instead of at result level could prove to be difficult. The detected forest disturbance patches are often not identical but instead highly complementary. This is something to be tested in future research. We could also show that a combination of optical and SAR data at result level does not yield the same amount of accuracy gain at the two test sites (PA increase of 13% in Gabon and of 25% in Peru). More studies are needed in different parts of the tropics and under different forest conditions to better understand the potential and limitations of sensor-type combinations for tropical forest disturbance detection. Further improvements should focus on developing more advanced methods for deriving forest disturbances from SAR and optical data. Simple thresholding, as applied in this study for the optical data, could be replaced by more advanced time series analysis approaches. However, the applicability of such approaches in tropical areas is limited by frequent cloud cover and the lack of dense time series. Here, Planet imagery could play a major role in the future. Commercial Planet data will be available at a daily repetition rate, and the new generation of Planet imagery will be spectrally congruent with Sentinel-2. A combination of these two data sets to generate a dense optical time series could stimulate the development of more sophisticated time series analysis tools for tropical forest disturbance monitoring. Such tools are already being used in other fields of remote sensing with good results. In the SAR domain, advancements to the proposed method should focus on an improved detection of larger deforestation patches. Alternatively, a combination of SAR backscatter-based approaches, which are able to detect large deforestation, and the shadow detection method used in this study could also improve the detection accuracy. Our study supports the use of high-resolution, dense time series data for improved forest monitoring, specifically for the detection of small areas of forest disturbance. We could clearly show the high complementarity of the SAR and optical data sets for forest disturbance detection and therefore advocate the further development of multi-sensor approaches.

Author Contributions: Conceptualization, M.H.; Data curation, C.S.; Formal analysis, C.S. and A.B.; Funding acquisition, M.S.; Methodology, M.H., C.S., A.B. and S.M.; Project administration, M.H.; Software, A.B. and S.M.; Supervision, M.H. and M.S.; Validation, J.D.; Writing—original draft, M.H. and J.D.; Writing—review & editing, S.M., J.D. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding: Horizon 2020 Framework Programme: 685761. The presented work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 685761 (Project EOMonDis).

Appendix A

This appendix contains the detailed accuracy assessment results as tables. For each table, the upper part is a plot-based confusion matrix; the lower part is a confusion matrix with the respective summed weights of the plots.
Map categories are the rows, reference categories are the columns.

FNF maps:

Table A1. Confusion matrix for 2016 FNF map of Peru (plots and area weights). Map categories are the rows, reference categories are the columns.
Plots | Forest | Non-Forest | Total | Area [ha] | Weight
Forest | 3663 | 216 | 3879 | 489,899 | 0.810
Non-Forest | 151 | 980 | 1131 | 114,603 | 0.190
Total | 3814 | 1196 | 5010
Class | Forest | Non-Forest | Total | User | CI at 95% | Producer | CI at 95% | Overall | CI at 95%
Forest | 0.765 | 0.045 | 0.810 | 0.944 | ±0.0072 | 0.968 | ±0.0046 | 0.930 | ±0.0070
Non-Forest | 0.025 | 0.164 | 0.190 | 0.867 | ±0.0198 | 0.785 | ±0.0222
Total | 0.790 | 0.209 | 1.000

Table A2. Confusion matrix for 2013 FNF map of Gabon (plots and area weights). Map categories are the rows, reference categories are the columns.
Plots | Forest | Non-Forest | Total | Area [ha] | Weight
Forest | 5561 | 37 | 5598 | 648,460 | 0.947
Non-Forest | 43 | 359 | 402 | 36,422 | 0.053
Total | 5604 | 396 | 6000

Disturbance maps, Peru:

Table A3. "S1 Only" Peru disturbance map validation.
Plots | F | D | NF | Total | Map Area [ha] | Weight
F | 330 | 15 | 57 | 402 | 6693.80 | 0.758
D | 2 | 38 | 19 | 59 | 331.79 | 0.038
NF | 18 | 0 | 96 | 114 | 1807.39 | 0.205
Total | 350 | 53 | 172 | 575 | 8832.98 | 1.000
Class | F | D | NF | Total | User | Producer | Overall
F | 0.622 | 0.028 | 0.107 | 0.758 | 0.821 | 0.949 | 0.819
D | 0.001 | 0.024 | 0.012 | 0.038 | 0.644 | 0.461
NF | 0.032 | 0.000 | 0.172 | 0.205 | 0.842 | 0.590
Total | 0.656 | 0.052 | 0.292 | 1.000

Table A4. "Optical Only" Peru disturbance map validation.
D | 1 | 26 | 6 | 33 | 331.79 | 0.038

Table A5. "Union" Peru disturbance map validation.
F | 329 | 5 | 52 | 386 | 6693.80 | 0.758

Table A6. "S1 Only" Peru disturbance method validation (only forest area).
Plots | F | D | Total | Map Area [ha] | Weight
F | 330 | 15 | 345 | 5805.32 | 0.963
D | 2 | 38 | 40 | 225.61 | 0.037
Total | 332 | 53 | 385 | 6030.93 | 1.000
Class | F | D | Total | User | Producer | Overall
F | 0.921 | 0.042 | 0.963 | 0.957 | 0.998 | 0.956
D | 0.002 | 0.036 | 0.037 | 0.950 | 0.459

Table A7. "Optical Only" Peru disturbance method validation (only forest area).

Table A8. "Union" Peru disturbance method validation (only forest area).
F | 329 | 5 | 334 | 5805.32 | 0.963

Disturbance maps, Gabon:

Table A9. "S1 Only" disturbance map validation Gabon.
Plots | F | Dlarge | Dsmall | NF | Total | Map Area [ha] | Weight
F | 530 | 51 | 20 | 41 | 642 | 46,394.94 | 0.884
Dlarge | 0 | 55 | 0 | 1 | 56 | 1732.60 | 0.033
Dsmall | 7 | 0 | 28 | 1 | 36 | 1179.65 | 0.023
NF | 7 | 0 | 0 | 114 | 121 | 3169.56 | 0.060
Total | 544 | 106 | 48 | 157 | 855 | 52,476.75 | 1.000
Class | F | Dlarge | Dsmall | NF | Total | User | Producer | Overall
F | 0.730 | 0.070 | 0.028 | 0.056 | 0.884 | 0.826 | 0.989 | 0.837
Dlarge | 0.000 | 0.032 | 0.000 | 0.001 | 0.033 | 0.982 | 0.316
Dsmall | 0.004 | 0.000 | 0.017 | 0.001 | 0.022 | 0.778 | 0.388
NF | 0.003 | 0.000 | 0.000 | 0.057 | 0.060 | 0.942 | 0.497
Total | 0.738 | 0.103 | 0.045 | 0.115 | 1.000

Table A10. "Optical Only" disturbance map validation Gabon.
F | 516 | 7 | 29 | 21 | 573 | 46,394.94 | 0.884
Dlarge | 4 | 99 | 0 | 4 | 107 | 1732.60 | 0.033
Dsmall | 17 | 0 | 19 | 18 | 54 | 1179.65 | 0.023

Table A11. "Union" disturbance map validation Gabon.
Dlarge | 4 | 104 | 0 | 4 | 112 | 1732.60 | 0.033

Table A12. "S1 Only" disturbance method validation Gabon (only forest area).
Plots | F | Dlarge | Dsmall | Total | Map Area [ha] | Weight
F | 530 | 51 | 20 | 601 | 42,399.24 | 0.896
Dlarge | 0 | 55 | 0 | 55 | 2169.83 | 0.046
Dsmall | 7 | 0 | 28 | 35 | 2763.15 | 0.058
Total | 537 | 106 | 48 | 691 | 47,332.23 | 1.000
Class | F | Dlarge | Dsmall | Total | User | Producer | Overall
Dlarge | 0.000 | 0.046 | 0.000 | 0.046 | 1.000 | 0.376
Dsmall | 0.012 | 0.000 | 0.047 | 0.058 | 0.800 | 0.610

Table A13. "Optical Only" disturbance method validation Gabon (only forest area).
F | 516 | 7 | 29 | 552 | 42,399.24 | 0.896
Dlarge | 4 | 99 | 0 | 103 | 2169.83 | 0.046
Dsmall | 17 | 0 | 19 | 36 | 2763.15 | 0.058

Table A14. "Union" disturbance method validation Gabon (only forest area).
Dlarge | 4 | 104 | 0 | 108 | 2169.83 | 0.046

References

Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 2013, 342, 850–853.
[Google Scholar] [CrossRef][Green Version] Potapov, P.; Dempewolf, J.; Talero, Y.; Hansen, M.; Stehman, S.; Vargas, C.; Rojas, E.; Castillo, D.; Mendoza, E.; Calderón, A.; et al. National satellite-based humid tropical forest change assessment in Peru in support of REDD+ implementation. Environ. Res. Lett. 2014, 9. [Google Scholar] [CrossRef] Reiche, J.; Hamunyela, E.; Verbesselt, J.; Hoekman, D.; Herold, M. Improving near-real time deforestation monitoring in tropical dry forests by combining dense Sentinel-1 time series with Landsat and ALOS-2 PALSAR-2. Remote Sens. Environ. 2018, 204, 147–161. [Google Scholar] [CrossRef] Hansen, M.C.; Potapov, P.V.; Goetz, S.J.; Turubanova, S.; Tyukavina, A.; Krylov, A.; Kommareddy, A.; Egorov, A. Mapping tree height distributions in Sub-Saharan Africa using Landsat 7 and 8 data. Remote Sens. Environ. 2016, 185, 221–232. [Google Scholar] [CrossRef][Green Version] Shimabukuro, Y.; Duarte, V.; Anderson, L.; Valeriano, D.; Arai, E.; Freitas, R.; Rudorff, B.F.; Moreira, M. Near real time detection of deforestation in the Brazilian Amazon using MODIS imagery. Ambiente E Agua Interdiscip. J. Appl. Sci. 2007, 1, 37–47. [Google Scholar] [CrossRef] Timothy, R.H.; Pearson, G.S.; Sandra, B.; Lara, M. Greenhouse gas emissions from tropical forest degradation: An underestimated source. Carbon Balance Manag. 2017, 12. [Google Scholar] [CrossRef][Green Version] Hosonuma, N.; Herold, M.; De Sy, V.; De Fries, R.S.; Brockhaus, M.; Verchot, L.; Angelsen, A.; Romijn, E. An assessment of deforestation and forest degradation drivers in developing countries. Environ. Res. Lett. 2012, 7. [Google Scholar] [CrossRef] Hansen, M.C.; Krylov, A.; Tyukavina, A.; Potapov, P.; Turubanova, S.; Zutta, B.; Ifo, S.; Margono, B.; Stolle, F.; Moore, R. Humid tropical forest disturbance alerts using Landsat data. Environ. Res. Lett. 2016, 11. [Google Scholar] [CrossRef] Wulder, M.A.; White, J.C.; Loveland, T.R.; Woodcock, C.E.; Belward, A.S.; Cohen, W.B.; Fosnight, E.A.; Shaw, J.; Masek, J.G.; Roy, D.P. The global Landsat archive: Status, consolidation, and direction. Remote Sens. Environ. 2016, 185, 271–283. [Google Scholar] [CrossRef][Green Version] Berger, M.; Moreno, J.; Johannessen, J.A.; Levelt, P.F.; Hanssen, R.F. ESA's sentinel missions in support of Earth system science. Remote Sens. Environ. 2012, 120, 84–90. [Google Scholar] [CrossRef] Roy, M.; Ghosh, S.; Ghosh, A. A novel approach for change detection of remotely sensed images using semi-supervised multiple classifier system. Inf. Sci. 2014, 269, 35–47. [Google Scholar] [CrossRef] Kuenzer, C.; Dech, S.; Wagner, W. Remote Sensing Time Series: Revealing Land Surface Dynamics; Kuenzer, C., Dech, S., Wagner, W., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 1–24. ISBN 978-3-319-15967-6. [Google Scholar] Zhu, Z. Change detection using Landsat time series: A review of frequencies, preprocessing, algorithms, and applications. ISPRS J. Photogramm. Remote Sens. 2017, 130, 370–384. [Google Scholar] [CrossRef] Verhegghen, A.; Eva, H.; Ceccherini, G.; Achard, F.; Gond, V.; Gourlet-Fleury, S.; Cerutti, P.O. The Potential of Sentinel Satellites for Burnt Area Mapping and Monitoring in the Congo Basin Forests. Remote Sens. 2016, 8, 986. [Google Scholar] [CrossRef][Green Version] Hansen, M.C.; Loveland, T.R. A review of large area monitoring of land cover change using Landsat data. Remote Sens. Environ. 2012, 122, 66–74. 
[Google Scholar] [CrossRef] Potapov, P.; Turubanova, S.A.; Hansen, M.C.; Adusei, B.; Broich, M.; Altstatt, A.; Mane, L.; Justice, C.O. Quantifying Forest Cover Loss in Democratic Republic of Congo, 2000-2010, with Landsat ETM+ data. Remote Sens. Envion. 2012, 122, 106–116. [Google Scholar] [CrossRef] Shimabukuro, Y.E.; Beuchle, R.; Grecchi, R.C.; Achard, F. Assessment of forest degradation in Brazilian Amazon due to selective logging and fires using time series of fraction images derived from Landsat ETM$\mathplus$ images. Remote Sens. Lett. 2014, 5, 773–782. [Google Scholar] [CrossRef] Bullock, E.L.; Woodcock, C.E.; Olofsson, P. Monitoring tropical forest degradation using spectral unmixing and Landsat time series analysis. Remote Sens. Environ. 2018, 238. [Google Scholar] [CrossRef] GOFC-GOLD. A Sourcebook of Methods and Procedures for Monitoring and Reporting Anthropogenic Greenhouse Gas Emissions and Removals Caused by Deforestation, Gains and Losses of Carbon Stocks in Forests Remaining Forests, and Forestation; GOFC-GOLD Project Office, Natural Resources Canada: Ottawa, AB, Canada, 2014. [Google Scholar] McDowell, N.G.; Coops, N.C.; Beck, P.S.; Chambers, J.Q.; Gangodagamage, C.; Hicke, J.A.; Huang, C.; Kennedy, R.; Krofcheck, D.J.; Litvak, M.; et al. Global satellite monitoring of climate-induced vegetation disturbances. Trends Plant Sci. 2015, 20, 114–123. [Google Scholar] [CrossRef][Green Version] Lunetta, R.S.; Johnson, D.M.; Lyon, J.G.; Crotwell, J. Impacts of imagery temporal frequency on land-cover change detection monitoring. Remote Sens. Environ. 2004, 89, 444–454. [Google Scholar] [CrossRef] Kennedy, R.E.; Cohen, W.B.; Schroeder, T.A. Trajectory-based change detection for automated characterization of forest disturbance dynamics. Remote Sens. Environ. 2007, 110, 370–386. [Google Scholar] [CrossRef] Finer, M.; Novoa, S.; Weisse, M.J.; Petersen, R.; Mascaro, J.; Souto, T.; Stearns, F.; Martinez, R.G. Combating deforestation: From satellite to intervention. Science 2018, 360, 1303–1305. [Google Scholar] [CrossRef] [PubMed] Hirschmugl, M.; Deutscher, J.; Gutjahr, K.-H.; Sobe, C.; Schardt, M. Combined Use of SAR and Optical Time Series Data for Near Real-Time Forest Disturbance Mapping. In Proceedings of the 2017 9th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Bruges, Belgium, 27–29 June 2017; pp. 1–4. [Google Scholar] Huang, C.; Goward, S.N.; Masek, J.G.; Thomas, N.; Zhu, Z.; Vogelmann, J.E. An automated approach for reconstructing recent forest disturbance history using dense Landsat time series stacks. Remote Sens. Envion. 2010, 114, 183–198. [Google Scholar] [CrossRef] Vogelmann, J.E.; Xian, G.; Homer, C.; Tolk, B. Monitoring gradual ecosystem change using Landsat time series analyses: Case studies in selected forest and rangeland ecosystems. Remote Sens. Environ. 2012, 122, 92–105. [Google Scholar] [CrossRef][Green Version] Lehmann, E.A.; Wallace, J.F.; Caccetta, P.A.; Furby, S.L.; Zdunic, K. Forest cover trends from time series Landsat data for the Australian continent. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 453–462. [Google Scholar] [CrossRef] Kennedy, R.E.; Yang, Z.; Cohen, W.B. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr Temporal segmentation algorithms. Remote Sens. Environ. 2010, 114, 2897–2910. [Google Scholar] [CrossRef] Cohen, W.B.; Yang, Z.; Kennedy, R. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 2. 
TimeSync Tools for calibration and validation. Remote Sens. Environ. 2010, 114, 2911–2924. [Google Scholar] [CrossRef] Verbesselt, J.; Hyndman, R.; Newnham, G.; Culvenor, D. Detecting trend and seasonal changes in satellite image time series. Remote Sens. Environ. 2010, 114, 106–115. [Google Scholar] [CrossRef] Zhu, Z.; Woodcock, C.E. Continuous change detection and classification of land cover using all available Landsat data. Remote Sens. Environ. 2014, 144, 152–171. [Google Scholar] [CrossRef][Green Version] Brooks, E.B.; Wynne, R.H.; Thomas, V.A.; Blinn, C.E.; Coulston, J.W. On-the-fly massively multitemporal change detection using statistical quality control charts and Landsat data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3316–3332. [Google Scholar] [CrossRef] Brooks, E.B.; Yang, Z.; Thomas, V.A.; Wynne, R.H. Edyn: Dynamic Signaling of Changes to Forests Using Exponentially Weighted Moving Average Charts. Forests 2017, 8, 304. [Google Scholar] [CrossRef][Green Version] Shimada, M.; Itoh, T.; Motooka, T.; Watanabe, M.; Shiraishi, T.; Thapa, R.; Lucas, R. New global forest/non-forest maps from ALOS PALSAR data (2007–2010). Remote Sens. Environ. 2014, 155, 13–31. [Google Scholar] [CrossRef] Mermoz, S.; Le Toan, T. Forest Disturbances and Regrowth Assessment Using ALOS PALSAR Data from 2007 to 2010 in Vietnam, Cambodia and Lao PDR. Remote Sens. 2016, 8, 217. [Google Scholar] [CrossRef][Green Version] Mermoz, S.; Rejou-Mechain, M.; Villard, L.; Toan, T.L.; Rossi, V.; Gourlet-Fleury, S. Decrease of L-band SAR backscatter with biomass of dense forests. Remote Sens. Environ. 2015, 159, 307–317. [Google Scholar] [CrossRef] LeToan, T.; Quegan, S.; Woodward, I.; Lomas, M.; Delbart, N.; Picard, C. Relating radar remote sensing of biomass to modeling of forest carbon budgets. Clim. Chang. 2004, 76, 379–402. [Google Scholar] [CrossRef] Ryan, C.M.; Hill, T.; Woollen, E.; Ghee, C.; Mitchard, E.; Cassells, G.; Grace, J.; Woodhouse, I.H.; Williams, M. Quantifying small-scale deforestation and forest degradation in African woodlands using radar imagery. Glob. Chang. Biol. 2012, 18, 243–257. [Google Scholar] [CrossRef][Green Version] Perko, R.; Raggam, H.; Deutscher, J.; Karlheinz, G.; Schardt, M. Forest Assessment Using High Resolution SAR Data in X-band. Remote Sens. 2011, 3, 792–815. [Google Scholar] [CrossRef][Green Version] Deutscher, J.; Perko, R.; Gutjahr, K.; Manuela, H.; Schardt, M. Mapping Tropical Rainforest Canopy Disturbances in 3D by COSMO-SkyMed Spotlight InSAR-Stereo Data to Detect Areas of Forest Degradation. Remote Sens. 2013, 5, 648–663. [Google Scholar] [CrossRef][Green Version] Solberg, S.; Riegler, G.; Nonin, P. Estimating forest biomass from TerraSAR-X stripmap radargrammetry. IEEE Trans. Geosci. Remote Sens. 2015, 53, 154–161. [Google Scholar] [CrossRef] Rignot, E.J.; Van Zyl, J.J. Change detection techniques for ERS-1 SAR data. IEEE Trans. Geosci. Remote Sens. 1993, 31, 896–906. [Google Scholar] [CrossRef][Green Version] Deutscher, J.; Gutjahr, K.; Perko, R.; Raggam, H.; Hirschmugl, M.; Schardt, M. Humid tropical forest monitoring with multi-temporal L-, C- and X-Band SAR data. In Proceedings of the 2017 9th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Bruges, Belgium, 27–29 June 2017; pp. 1–4. [Google Scholar] Lohberger, S.; Staengel, M.; Atwood, E.C.; Siegert, F. Spatial evaluation of Indonesia's 2015 fire-affected area and estimated carbon emissions using Sentinel-1. Glob. Chang. Biol. 2018, 24, 644–654. 
[Google Scholar] [CrossRef] Reiche, J.; Verhoeven, R.; Verbesselt, J.; Hamunyela, E.; Wielaard, N.; Herold, M. Characterizing Tropical Forest Cover Loss Using Dense Sentinel-1 Data and Active Fire Alerts. Remote Sens. 2018, 10, 777. [Google Scholar] [CrossRef][Green Version] Ruetschi, M.; Small, D.; Waser, L.T. Rapid Detection of Windthrows Using Sentinel-1 C-Band SAR Data. Remote Sens. 2019, 11, 115. [Google Scholar] [CrossRef][Green Version] Bouvet, A.; Mermoz, S.; Ballère, M.; Koleck, T.; Le Toan, T. Use of the SAR Shadowing Effect for Deforestation Detection with Sentinel-1 Time Series. Remote Sens. 2018, 10, 1250. [Google Scholar] [CrossRef][Green Version] Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Jepsen, M.R.; Kuemmerle, T.; Meyfroidt, P.; Mitchard, E.T.; et al. A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring. Remote Sens. 2016, 8, 70. [Google Scholar] [CrossRef][Green Version] Zhang, J. Multi-source remote sensing data fusion: Status and trends. Int. J. Image Data Fusion 2010, 1, 5–24. [Google Scholar] [CrossRef][Green Version] Reiche, J.; de Bruin, S.; Hoekman, D.; Verbesselt, J.; Herold, M. A Bayesian Approach to Combine Landsat and ALOS PALSAR Time Series for Near Real-Time Deforestation Detection. Remote Sens. 2015, 7, 4973–4996. [Google Scholar] [CrossRef][Green Version] Shimizu, K.; Ota, T.; Mizoue, N. Detecting Forest Changes Using Dense Landsat 8 and Sentinel-1 Time Series Data in Tropical Seasonal Forests. Remote Sens. 2019, 11, 1899. [Google Scholar] [CrossRef][Green Version] Hirschmugl, M.; Sobe, C.; Deutscher, J.; Schardt, M. Combined Use of Optical and Synthetic Aperture Radar Data for REDD+ Applications in Malawi. Land 2018, 7, 116. [Google Scholar] [CrossRef][Green Version] Laurin, G.V.; Liesenberg, V.; Chen, Q.; Guerriero, L.; Frate, F.D.; Bartolini, A.; Coomes, D.; Beccy, W.; Lindsell, J.; Valentini, R. Optical and SAR sensor synergies for forest and land cover mapping in a tropical site in West Africa. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 7–16. [Google Scholar] [CrossRef] Chen, B.; Li, X.; Xiao, X.; Zhao, B.; Dong, J.; Kou, W.; Qin, Y.; Yang, C.; Zhixiang, W.; Sun, R.; et al. Mapping tropical forests and deciduous rubber plantations in Hainan Island, China by integrating PALSAR 25-m and multi-temporal Landsat images. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 117–130. [Google Scholar] [CrossRef] Sirro, L.; Haeme, T.; Rauste, Y.; Kilpi, J.; Haemaelaeinen, J.; Gunia, K.; De Jong, B.; Paz Pellat, F. Potential of Different Optical and SAR Data in Forest and Land Cover Classification to Support REDD+ MRV. Remote Sens. 2018, 10, 942. [Google Scholar] [CrossRef][Green Version] Basuki, T.M.; Skidmore, A.K.; Hussin, Y.A.; Van Duren, I. Estimating tropical forest biomass more accurately by integrating ALOS PALSAR and Landsat-7 ETM+ data. Int. J. Remote Sens. 2013, 34, 4871–4888. [Google Scholar] [CrossRef][Green Version] Berninger, A.; Lohberger, S.; Stängel, M.; Siegert, F. SAR-based estimation of above-ground biomass and its changes in tropical forests of Kalimantan using L-and C-Band. Remote Sens. 2018, 10, 831. [Google Scholar] [CrossRef][Green Version] Bourgoin, C.; Blanc, L.; Bailly, J.-S.; Cornu, G.; Berenguer, E.; Oszwald, J.; Tritsch, I.; Laurent, F.; Hasan, A.F.; Sist, P.; et al. The Potential of Multisource Remote Sensing for Mapping the Biomass of a Degraded Amazonian Forest. Forests 2018, 9, 303. 
[Google Scholar] [CrossRef][Green Version] Reiche, J.; Souza, C.M.; Hoekman, D.H.; Verbesselt, J.; Persaud, H.; Herold, M. Feature Level Fusion of Multi-Temporal ALOS PALSAR and Landsat Data for Mapping and Monitoring of Tropical Deforestation and Forest Degradation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 1–15. [Google Scholar] [CrossRef] Carreiras, J.M.B.; Jones, J.; Lucas, R.M.; Shimabukuro, Y.E. Mapping major land cover types and retrieving the age of secondary forests in the Brazilian Amazon by combining single-date optical and radar remote sensing data. Remote Sens. Environ. 2017, 194, 16–32. [Google Scholar] [CrossRef][Green Version] Nicholaides, J.J., III; Bandy, D.E.; Sanchez, P.A.; Benites, J.R.; Villachica, J.H.; Coutu, A.J.; Valverde, C.S. Agricultural alternatives for the Amazon Basin. Bioscience 1985, 35, 279–285. [Google Scholar] [CrossRef] Palm, C.A.; Alegre, J.C.; Arevalo, L.; Mutuo, P.K.; Mosier, A.R.; Coe, R. Nitrous ox-ide and methane fluxes in six different land use systems in the Peruvian Amazon. Glob. Biogeochem. Cycles 2002. [Google Scholar] [CrossRef] Schoene, D.; Killmann, W.; Luepke, H.V.; LoycheWilkie, M. Forest and Climate Change Working Paper 5: Definitional Issues Related to Reducing Emissions from Deforestation in Developing Countries; Food and Agriculture Organization of the United Nations: Rome, Italy, 2007. [Google Scholar] Tyukavina, A.; Hansen, M.C.; Potapov, P.; Parker, D.; Okpa, C.; Stehman, S.V.; Kommareddy, I.; Turubanova, S. Congo Basin forest loss dominated by increasing smallholder clearing. Sci. Adv. 2018, 4. [Google Scholar] [CrossRef][Green Version] Laporte, N.T.; Stabach, J.A.; Grosch, R.; Lin, T.S.; Goetz, S.J. Expansion of Industrial Logging in Central Africa. Science 2007, 316, 1451. [Google Scholar] [CrossRef][Green Version] Mueller-Wilm, U. Sentinel-2 MSI Level-2A Prototype Processor Installation and User Manual. 2016. Available online: http://step.esa.int/thirdparties/sen2cor/2.2.1/S2PAD-VEGA-SUM-0001-2.2.pdf (accessed on 20 February 2020). Gallaun, H.; Schardt, M.; Linser, S. Remote Sensing Based Forest Map of Austria and Derived Environmental Indicators. In Proceedings of the ForestSat Conference, Montpellier, France, 5–7 November 2007. [Google Scholar] Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94. [Google Scholar] [CrossRef] Zhu, Z.; Qiu, S.; He, B.; Deng, C. Cloud and Cloud Shadow Detection for Landsat Images: The Fundamental Basis for Analyzing Landsat Time Series. In Remote Sensing Time Series Image Processing; CRC Press: Boca Raton, FL, USA, 2018; pp. 3–23. [Google Scholar] Zhu, Z.; Wang, S.; Woodcock, C.E. Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images. Remote Sens. Environ. 2015, 159, 269–277. [Google Scholar] [CrossRef] Perko, R.; Raggam, H.; Gutjahr, K.; Schardt, M. Using worldwide available TerraSAR-X data to calibrate the geo-location accuracy of optical sensors. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium Proceedings, Vancouver, BC, Canada, 24–29 July 2011; pp. 2551–2554. [Google Scholar] Koleck, T.; Ballere, M. A Multipurpose Open Source Processing Chain for Sentinel-1 Time Series. In Proceedings of the in Living Planet Symposium, 13–17 May 2019; pp. 13–17. [Google Scholar] Mermoz, S.; Toan, T.L.; Villard, L.; Réjou-Méchain, M.; Seifert-Granzin, J. 
Biomass assessment in the Cameroon savanna using ALOS PALSAR data. Remote Sens. Environ. 2014, 155, 109–119. [Google Scholar] [CrossRef] Bruniquel, J.; Lopes, A. Multi-variate optimal speckle reduction in SAR imagery. Int. J. Remote Sens. 1997, 18, 603–627. [Google Scholar] [CrossRef] Quegan, S.; Yu, J.J. Filtering of multichannel SAR images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2373–2379. [Google Scholar] [CrossRef] Sobe, C. Combining Optical and Synthetic Aperture Radar Time Series Data to Improve Tropical Forest Monitoring. Master's Thesis, Graz University of Technology, Graz, Austria, 2018. [Google Scholar] Hamunyela, E. Space-Time Monitoring of Tropical Forest Changes Using Observations from Multiple Satellites. Ph.D. Thesis, Wageningen University & Research, Laboratory of Geo-Information Science and Remote Sensing, Wageningen, The Netherlands, 2017. [Google Scholar] Sannier, C.; McRoberts, R.E.; Fichet, L.-V.; Makaga, E.M.K. Using the regression estimator with Landsat data to estimate proportion forest cover and net proportion deforestation in Gabon. Remote Sens. Environ. 2014, 151, 138–148. [Google Scholar] [CrossRef] Olofsson, P.; Foody, G.M.; Stehman, S.V.; Woodcock, C.E. Making better use of accuracy data in land change studies: Estimating accuracy and area and quantifying uncertainty using stratified estimation. Remote Sens. Environ. 2013, 129, 122–131. [Google Scholar] [CrossRef] Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef] Sannier, C.; McRoberts, R.E.; Fichet, L.-V. Suitability of Global Forest Change data to report forest cover estimates at national level in Gabon. Remote Sens. Environ. 2016, 173, 326–338. [Google Scholar] [CrossRef] Figure 1. Test site Peru. Figure 2. Test site Gabon. Figure 3. Overall workflow. Figure 4. Illustration of the S-1 C-band SAR shadowing effect at the border between forests and deforested area. Left: schematic illustration, middle: 10 m C-band S-1 SAR image before forest disturbance, right: 10 m C-band S-1 SAR image after forest disturbance: the arrows point at the new shadows. Figure 5. Forest/non-forest (FNF) evaluation in Peru: example of the systematic random sampling design. Figure 6. Peru validation site. Green areas show the initial optical forest mask as of 03/2016. Red polygons are the disturbances detected by the "Union" approach. The dots are the 575 validation points from stratified random sampling of which 385 are located within forest in 3/2016 (yellow) and 170 are within non-forest areas (blue) based on visual interpretation of VHR and HR imagery. Figure 7. Comparison of forest disturbance detection results in Peru showing the high complementarity of S-1 and optical data. Top left: "S1 Only" map; top right: ""Optical only" map; bottom left: Union of "S1 Only" and "Optical Only" maps (yellow areas are detected by both approaches); bottom right: enlarged area of "Union" map (black rectangle). Figure 8. Comparison of S1 and optical "Union" disturbance detections (left) and Global Forest Watch disturbance detections (right) (March 2016-end of December 2016). Table 1. Forest/Non-Forest mask validation Peru and Gabon. User Accuracy Producer Accuracy Overall Accuracy class Peru Gabon Peru Gabon Peru Gabon NF 0.867 0.893 0.785 0.884 Table 2. Area of detected forest disturbances at the Peru validation site for each mapping approach. 
Mapping Approach | Detected Area of Forest Disturbances [ha]
S1 Only | 248.77
Optical Only | 153.90
S1/Optical Union | 331.79
S1/Optical Intersect Area | 70.88
Adjusted Area Estimate (Reference) | 225.61

Table 3. Detection accuracy per disturbance patch size for Peru.
Patch size | Number of Disturbance Areas | S1 Only | Optical Only | Union
>1 ha | 34 | 0.882 | 0.853 | 0.971
0.5–1 ha | 36 | 0.806 | 0.778 | 0.889
0.2–0.5 ha | 43 | 0.698 | 0.674 | 0.884
<0.2 ha | 35 | 0.486 | 0.657 | 0.800
all sizes | 148 | 0.716 | 0.736 | 0.885

Table 4. Detection accuracy per disturbance patch size for Gabon.
Patch size | Number of Disturbance Areas | S1 Only | Optical Only | Union
0.2–0.5 ha | 101 | 0.248 | 0.277 | 0.416
<0.2 ha | 166 | 0.157 | 0.175 | 0.295

Table 5. Disturbance map validation for Peru.
Peru disturbance map validation including NF class:
Class | User Accuracy (S1 Only / Optical Only / Union) | Producer Accuracy (S1 Only / Optical Only / Union) | Overall Accuracy (S1 Only / Optical Only / Union)
F | 0.821 / 0.773 / 0.852 | 0.949 / 0.946 / 0.950 | 0.819 / 0.788 / 0.842
Peru disturbance map validation only within the forest area (method validation):

Table 6. Disturbance map validation for Gabon.
Gabon disturbance map validation including NF class:
Gabon disturbance map validation only within the forest area (method validation):

Table 7. Global Forest Watch changes of 2016 (version 1.6).

Hirschmugl, M.; Deutscher, J.; Sobe, C.; Bouvet, A.; Mermoz, S.; Schardt, M. Use of SAR and Optical Time Series for Tropical Forest Disturbance Mapping. Remote Sens. 2020, 12, 727. https://doi.org/10.3390/rs12040727
Stochastic Oscillator Definition

What Is A Stochastic Oscillator?

A stochastic oscillator is a momentum indicator comparing a particular closing price of a security to a range of its prices over a certain period of time. The sensitivity of the oscillator to market movements is reducible by adjusting that time period or by taking a moving average of the result. It is used to generate overbought and oversold trading signals, utilizing a 0-100 bounded range of values. A stochastic oscillator is a popular technical indicator for generating overbought and oversold signals. It was developed in the 1950s and is still in wide use to this day. Stochastic oscillators are sensitive to momentum rather than absolute price.

The Formula For The Stochastic Oscillator Is

%K = (C − L14) / (H14 − L14) × 100

where:
C = the most recent closing price
L14 = the lowest price traded of the 14 previous trading sessions
H14 = the highest price traded during the same 14-day period
%K = the current value of the stochastic indicator

%K is sometimes referred to as the fast stochastic indicator. The "slow" stochastic indicator is taken as %D = 3-period moving average of %K. The general theory serving as the foundation for this indicator is that in a market trending upward, prices will close near the high, and in a market trending downward, prices close near the low. Transaction signals are created when the %K crosses through a three-period moving average, which is called the %D.

What Does The Stochastic Oscillator Tell You?

The stochastic oscillator is range-bound, meaning it is always between 0 and 100. This makes it a useful indicator of overbought and oversold conditions. Traditionally, readings over 80 are considered in the overbought range, and readings under 20 are considered oversold. However, these are not always indicative of impending reversal; very strong trends can maintain overbought or oversold conditions for an extended period. Instead, traders should look to changes in the stochastic oscillator for clues about future trend shifts. Stochastic oscillator charting generally consists of two lines: one reflecting the actual value of the oscillator for each session, and one reflecting its three-day simple moving average.
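As a quick numerical check of the formula, a hypothetical one-line helper (not taken from any particular charting package) shows that a close of 108 against a 14-day low of 100 and high of 110 gives %K = 80, i.e., the close sits in the top fifth of its recent range. Charting tools plot the resulting %K series together with its three-period simple moving average %D; these are the two lines referred to in the rest of this section.

```python
def percent_k(close, low_14, high_14):
    """%K = (C - L14) / (H14 - L14) * 100, bounded between 0 and 100."""
    return (close - low_14) / (high_14 - low_14) * 100

print(percent_k(108, 100, 110))  # 80.0
```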
Because price is thought to follow momentum, intersection of these two lines is considered to be a signal that a reversal may be in the works, as it indicates a large shift in momentum from day to day. Divergence between the stochastic oscillator and trending price action is also seen as an important reversal signal. For example, when a bearish trend reaches a new lower low, but the oscillator prints a higher low, it may be an indicator that bears are exhausting their momentum and a bullish reversal is brewing. The stochastic oscillator was developed in the late 1950s by George Lane. As designed by Lane, the stochastic oscillator presents the location of the closing price of a stock in relation to the high and low range of the price of a stock over a period of time, typically a 14-day period. Lane, over the course of numerous interviews, has said that the stochastic oscillator does not follow price or volume or anything similar. He indicates that the oscillator follows the speed or momentum of price. Lane also reveals in interviews that, as a rule, the momentum or speed of the price of a stock changes before the price changes itself. In this way, the stochastic oscillator can be used to foreshadow reversals when the indicator reveals bullish or bearish divergences. This signal is the first, and arguably the most important, trading signal Lane identified. The stochastic oscillator is included in most charting tools and can be easily employed in practice. The standard time period used is 14 days, though this can be adjusted to meet specific analytical needs. The stochastic oscillator is calculated by subtracting the low for the period from the current closing price, dividing by the total range for the period and multiplying by 100. As a hypothetical example, if the 14-day high is $150, the low is $125 and the current close is $145, then the reading for the current session would be: (145-125)/(150-125)*100, or 80. By comparing current price to the range over time, the stochastic oscillator reflects the consistency with which price closes near its recent high or low. A reading of 80 would indicate that the asset is on the verge of being overbought. The relative strength index (RSI) and stochastic oscillator are both price momentum oscillators that are widely used in technical analysis. While often used in tandem, they each have different underlying theories and methods. The stochastic oscillator is predicated on the assumption that closing prices should close near the same direction as the current trend. Meanwhile, the RSI tracks overbought and oversold levels by measuring the velocity of price movements. In other words, the RSI was designed to measure the speed of price movements, while the stochastic oscillator formula works best in consistent trading ranges. In general, the RSI is more useful during trending markets, and stochastics more so in sideways or choppy markets. The primary limitation of the stochastic oscillator is that it has been known to produce false signals. This is when a trading signal is generated by the indicator, yet the price does not actually follow through, which can end up as a losing trade. During volatile market conditions this can happen quite regularly. One way to help with this is to take the price trend as a filter, where signals are only taken if they are in the same direction as the trend. Developed by George C. 
Lane in the late 1950s, the Stochastic Oscillator is a momentum indicator that shows the location of the close relative to the high-low range over a set number of periods. According to an interview with Lane, the Stochastic Oscillator "doesn't follow price, it doesn't follow volume or anything like that. It follows the speed or the momentum of price. As a rule, the momentum changes direction before price." As such, bullish and bearish divergences in the Stochastic Oscillator can be used to foreshadow reversals. This was the first, and most important, signal that Lane identified. Lane also used this oscillator to identify bull and bear set-ups to anticipate a future reversal. As the Stochastic Oscillator is range-bound, it is also useful for identifying overbought and oversold levels. The default setting for the Stochastic Oscillator is 14 periods, which can be days, weeks, months or an intraday timeframe. A 14-period %K would use the most recent close, the highest high over the last 14 periods and the lowest low over the last 14 periods. %D is a 3-day simple moving average of %K. This line is plotted alongside %K to act as a signal or trigger line. The Stochastic Oscillator measures the level of the close relative to the high-low range over a given period of time. Assume that the highest high equals 110, the lowest low equals 100 and the close equals 108. The high-low range is 10, which is the denominator in the %K formula. The close less the lowest low equals 8, which is the numerator. 8 divided by 10 equals .80 or 80%. Multiply this number by 100 to find %K. %K would equal 30 if the close was at 103 (.30 x 100). The Stochastic Oscillator is above 50 when the close is in the upper half of the range and below 50 when the close is in the lower half. Low readings (below 20) indicate that price is near its low for the given time period. High readings (above 80) indicate that price is near its high for the given time period. The IBM example above shows three 14-day ranges (yellow areas) with the closing price at the end of the period (red dotted) line. The Stochastic Oscillator equals 91 when the close was at the top of the range, 15 when it was near the bottom and 57 when it was in the middle of the range. There are three versions of the Stochastic Oscillator available on SharpCharts. The Fast Stochastic Oscillator is based on George Lane's original formulas for %K and %D. In this fast version of the oscillator, %K can appear rather choppy. %D is the 3-day SMA of %K. In fact, Lane used %D to generate buy or sell signals based on bullish and bearish divergences. Lane asserts that a %D divergence is the "only signal which will cause you to buy or sell." Because %D in the Fast Stochastic Oscillator is used for signals, the Slow Stochastic Oscillator was introduced to reflect this emphasis. The Slow Stochastic Oscillator smooths %K with a 3-day SMA, which is exactly what %D is in the Fast Stochastic Oscillator. Notice that %K in the Slow Stochastic Oscillator equals %D in the Fast Stochastic Oscillator (chart 2). Fast Stochastic Oscillator: Slow Stochastic Oscillator: The Full Stochastic Oscillator is a fully customizable version of the Slow Stochastic Oscillator. Users can set the look-back period, the number of periods for slow %K and the number of periods for the %D moving average. The default parameters were used in these examples: Fast Stochastic Oscillator (14,3), Slow Stochastic Oscillator (14,3) and Full Stochastic Oscillator (14,3,3). 
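The relationship between the Fast, Slow and Full versions can be made explicit in code. The sketch below is only an illustration under stated assumptions, not SharpCharts' implementation: a Full Stochastic Oscillator with a look-back period, a smoothing factor for %K and a %D period, where Fast (14,3) corresponds to a smoothing factor of 1 and Slow (14,3) to a smoothing factor of 3. The names prices, full_stochastic and the column labels are assumptions.

    import pandas as pd

    def full_stochastic(prices: pd.DataFrame, lookback: int = 14,
                        k_smoothing: int = 3, d_period: int = 3):
        """Full Stochastic Oscillator; Fast and Slow are special cases of k_smoothing."""
        lowest_low = prices["low"].rolling(window=lookback).min()
        highest_high = prices["high"].rolling(window=lookback).max()
        fast_k = (prices["close"] - lowest_low) / (highest_high - lowest_low) * 100
        full_k = fast_k.rolling(window=k_smoothing).mean()  # equals fast_k when k_smoothing = 1
        full_d = full_k.rolling(window=d_period).mean()     # signal line plotted alongside %K
        return full_k, full_d

Note that full_k with k_smoothing = 3 is exactly the 3-day SMA of fast_k, which is why %K in the Slow Stochastic Oscillator equals %D in the Fast Stochastic Oscillator.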
Full Stochastic Oscillator: As a bound oscillator, the Stochastic Oscillator makes it easy to identify overbought and oversold levels. The oscillator ranges from zero to one hundred. No matter how fast a security advances or declines, the Stochastic Oscillator will always fluctuate within this range. Traditional settings use 80 as the overbought threshold and 20 as the oversold threshold. These levels can be adjusted to suit analytical needs and security characteristics. Readings above 80 for the 20-day Stochastic Oscillator would indicate that the underlying security was trading near the top of its 20-day high-low range. Readings below 20 occur when a security is trading at the low end of its high-low range. Before looking at some chart examples, it is important to note that overbought readings are not necessarily bearish. Securities can become overbought and remain overbought during a strong uptrend. Closing levels that are consistently near the top of the range indicate sustained buying pressure. In a similar vein, oversold readings are not necessarily bullish. Securities can also become oversold and remain oversold during a strong downtrend. Closing levels consistently near the bottom of the range indicate sustained selling pressure. It is, therefore, important to identify the bigger trend and trade in the direction of this trend. Look for occasional oversold readings in an uptrend and ignore frequent overbought readings. Similarly, look for occasional overbought readings in a strong downtrend and ignore frequent oversold readings. Chart 3 shows Yahoo! (YHOO) with the Full Stochastic Oscillator (20,5,5). A longer look-back period (20 days versus 14) and longer moving averages for smoothing (5 versus 3) produce a less sensitive oscillator with fewer signals. Yahoo was trading between 14 and 18 from July 2009 until April 2010. Such trading ranges are well suited for the Stochastic Oscillator. Dips below 20 warn of oversold conditions that could foreshadow a bounce. Moves above 80 warn of overbought conditions that could foreshadow a decline. Notice how the oscillator can move above 80 and remain above 80 (orange highlights). Similarly, the oscillator moved below 20 and sometimes remained below 20. The indicator is both overbought AND strong when above 80. A subsequent move below 80 is needed to signal some sort of reversal or failure at resistance (red dotted lines). Conversely, the oscillator is both oversold and weak when below 20. A move above 20 is needed to show an actual upturn and successful support test (green dotted lines). Chart 4 shows Crown Castle (CCI) with a breakout in July to start an uptrend. The Full Stochastic Oscillator (20,5,5) was used to identify oversold readings. Overbought readings were ignored because the bigger trend was up. Trading in the direction of the bigger trend improves the odds. The Full Stochastic Oscillator moved below 20 in early September and early November. Subsequent moves back above 20 signaled an upturn in prices (green dotted line) and continuation of the bigger uptrend. Chart 5 shows Autozone (AZO) with a support break in May 2009 that started a downtrend. With a downtrend in force, the Full Stochastic Oscillator (10,3,3) was used to identify overbought readings to foreshadow a potential reversal. Oversold readings were ignored because of the bigger downtrend. The shorter look-back period (10 versus 14) increases the sensitivity of the oscillator for more overbought readings.
For reference, the Full Stochastic Oscillator (20,5,5) is also shown. Notice that this less sensitive version did not become overbought in August, September, and October. It is sometimes necessary to increase sensitivity to generate signals. Divergences form when a new high or low in price is not confirmed by the Stochastic Oscillator. A bullish divergence forms when price records a lower low, but the Stochastic Oscillator forms a higher low. This shows less downside momentum that could foreshadow a bullish reversal. A bearish divergence forms when price records a higher high, but the Stochastic Oscillator forms a lower high. This shows less upside momentum that could foreshadow a bearish reversal. Once a divergence takes hold, chartists should look for a confirmation to signal an actual reversal. A bearish divergence can be confirmed with a support break on the price chart or a Stochastic Oscillator break below 50, which is the centerline. A bullish divergence can be confirmed with a resistance break on the price chart or a Stochastic Oscillator break above 50. 50 is an important level to watch. The Stochastic Oscillator moves between zero and one hundred, which makes 50 the centerline. Think of it as the 50-yard line in football. The offense has a higher chance of scoring when it crosses the 50-yard line. The defense has an edge as long as it prevents the offense from crossing the 50-yard line. A Stochastic Oscillator cross above 50 signals that prices are trading in the upper half of their high-low range for the given look-back period. This suggests that the cup is half full. Conversely, a cross below 50 means that prices are trading in the bottom half of the given look-back period. This suggests that the cup is half empty. Chart 6 shows International Gaming Tech (IGT) with a bullish divergence in February-March 2010. Notice how the stock moved to a new low, but the Stochastic Oscillator formed a higher low. There are three steps to confirming this higher low. The first is a signal line cross and/or move back above 20. A signal line cross occurs when %K (black) crosses %D (red). This provides the earliest entry possible. The second is a move above 50, which puts prices in the upper half of the Stochastic range. The third is a resistance breakout on the price chart. Notice how the Stochastic Oscillator moved above 50 in late March and remained above 50 until late May. Chart 7 shows Kohls (KSS) with a bearish divergence in April 2010. The stock moved to higher highs in early and late April, but the Stochastic Oscillator peaked in late March and formed lower highs. The signal line crosses and moves below 80 did not provide good early signals in this case because KSS kept moving higher. The Stochastic Oscillator moved below 50 for the second signal and the stock broke support for the third signal. As KSS shows, early signals are not always clean and simple. Signal line crosses, moves below 80, and moves above 20 are frequent and prone to whipsaw. Even after KSS broke support and the Stochastic Oscillator moved below 50, the stock bounced back above 57 and the Stochastic Oscillator bounced back above 50 before the stock continued sharply lower.
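The divergence rules above lend themselves to a simple check in code. The sketch below is only an illustration under stated assumptions: the positions of the two swing lows are assumed to have been identified beforehand (for example by a separate pivot detector, which is not covered here), and confirmation is taken as the centerline break described in the text. All names are invented for this example.

    import pandas as pd

    def is_bullish_divergence(close: pd.Series, percent_d: pd.Series,
                              first_trough, second_trough) -> bool:
        """Price makes a lower low while the oscillator makes a higher low."""
        lower_low_in_price = close.loc[second_trough] < close.loc[first_trough]
        higher_low_in_oscillator = percent_d.loc[second_trough] > percent_d.loc[first_trough]
        return bool(lower_low_in_price and higher_low_in_oscillator)

    def confirmed_by_centerline(percent_d: pd.Series, second_trough, centerline: float = 50.0) -> bool:
        """Confirmation: the oscillator subsequently breaks above the 50 centerline."""
        after_trough = percent_d.loc[second_trough:].iloc[1:]
        return bool((after_trough > centerline).any())

A bearish divergence is the mirror image: a higher high in price with a lower high in the oscillator, confirmed by a break below 50 or a support break on the price chart.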
Even though the stock could not exceed its prior high, the higher high in the Stochastic Oscillator shows strengthening upside momentum. The next decline is then expected to result in a tradable bottom. Chart 8 shows Network Appliance (NTAP) with a bull set-up in June 2009. The stock formed a lower high as the Stochastic Oscillator forged a higher high. This higher high shows strength in upside momentum. Remember that this is a set-up, not a signal. The set-up foreshadows a tradable low in the near future. NTAP declined below its June low and the Stochastic Oscillator moved below 20 to become oversold. Traders could have acted when the Stochastic Oscillator moved above its signal line, above 20 or above 50, or after NTAP broke resistance with a strong move. A bear set-up occurs when the security forms a higher low, but the Stochastic Oscillator forms a lower low. Even though the stock held above its prior low, the lower low in the Stochastic Oscillator shows increasing downside momentum. The next advance is expected to result in an important peak. Chart 9 shows Motorola (MOT) with a bear set-up in November 2009. The stock formed a higher low in late-November and early December, but the Stochastic Oscillator formed a lower low with a move below 20. This showed strong downside momentum. The subsequent bounce did not last long as the stock quickly peaked. Notice that the Stochastic Oscillator did not make it back above 80 and turned down below its signal line in mid-December. While momentum oscillators are best suited for trading ranges, they can also be used with securities that trend, provided the trend takes on a zigzag format. Pullbacks are part of uptrends that zigzag higher. Bounces are part of downtrends that zigzag lower. In this regard, the Stochastic Oscillator can be used to identify opportunities in harmony with the bigger trend. The indicator can also be used to identify turns near support or resistance. Should a security trade near support with an oversold Stochastic Oscillator, look for a break above 20 to signal an upturn and successful support test. Conversely, should a security trade near resistance with an overbought Stochastic Oscillator, look for a break below 80 to signal a downturn and resistance failure. The settings on the Stochastic Oscillator depend on personal preferences, trading style and timeframe. A shorter look-back period will produce a choppy oscillator with many overbought and oversold readings. A longer look-back period will provide a smoother oscillator with fewer overbought and oversold readings. Like all technical indicators, it is important to use the Stochastic Oscillator in conjunction with other technical analysis tools. Volume, support/resistance and breakouts can be used to confirm or refute signals produced by the Stochastic Oscillator. As noted above, there are three versions of the Stochastic Oscillator available as an indicator on SharpCharts. The default settings are as follows: Fast Stochastic Oscillator (14,3), Slow Stochastic Oscillator (14,3) and Full Stochastic Oscillator (14,3,3). The look-back period (14) is used for the basic %K calculation. Remember, %K in the Fast Stochastic Oscillator is unsmoothed and %K in the Slow Stochastic Oscillator is smoothed with a 3-day SMA. The "3" in the Fast and Slow Stochastic Oscillator settings (14,3) sets the moving average period for %D. 
Chartists looking for maximum flexibility can simply choose the Full Stochastic Oscillator to set the look-back period, the smoothing factor for %K and the moving average for %D. The indicator can be placed above, below or behind the actual price plot. Placing the Stochastic Oscillator behind the price allows users to easily match indicator swings with price swings. Stochastic Oscillator Oversold Upturn: This scan starts with stocks that are trading above their 200-day moving average to focus on those that are in a bigger uptrend. Of these, the scan then looks for stocks with a Stochastic Oscillator that turned up from an oversold level (below 20). Stochastic Oscillator Overbought Downturn: This scan starts with stocks that are trading below their 200-day moving average to focus on those that are in a bigger downtrend. Of these, the scan then looks for stocks with a Stochastic Oscillator that turned down after an overbought reading (above 80). For more details on the syntax to use for Stochastic Oscillator scans, please see our Scanning Indicator Reference in the Support Center. John Murphy's Technical Analysis of the Financial Markets has a chapter devoted to momentum oscillators and their various uses, covering the pros and cons as well as some examples specific to the Stochastic Oscillator. Martin Pring's Technical Analysis Explained explains the basics of momentum indicators by covering divergences, crossovers, and other signals. There are two more chapters covering specific momentum indicators, each containing a number of examples. From Stocks & Commodities magazine, see also The Stochastic Oscillator by Joe Luisi, Nov 1997.
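The two suggested scans above can also be expressed outside SharpCharts. The following pandas sketch is an illustration only, not the SharpCharts scan syntax (which is documented in the Scanning Indicator Reference): it flags the oversold-upturn condition, and the overbought-downturn scan is simply the mirror image. The names prices, full_k and oversold_upturn are assumptions for this example, with full_k taken from a stochastic calculation such as the earlier sketches.

    import pandas as pd

    def oversold_upturn(prices: pd.DataFrame, full_k: pd.Series) -> pd.Series:
        """Price above its 200-day SMA and the oscillator crossing up out of oversold territory."""
        sma_200 = prices["close"].rolling(window=200).mean()
        in_bigger_uptrend = prices["close"] > sma_200
        crossed_up_from_oversold = (full_k.shift(1) < 20) & (full_k >= 20)
        return in_bigger_uptrend & crossed_up_from_oversold

Swapping the moving-average comparison and using a cross back down through 80 gives the overbought-downturn version.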
CommonCrawl
Interference Management with Block Diagonalization for Macro/Femto Coexisting Networks Jang, Uk;Cho, Kee-Seong;Ryu, Won;Lee, Ho-Jin 297 https://doi.org/10.4218/etrij.12.0110.0793 PDF KSCI A femtocell is a small cellular base station, typically designed for use in a home or small business. The random deployment of a femtocell has a critical effect on the performance of a macrocell network due to co-channel interference. Utilizing the advantage of a multiple-input multiple-output system, each femto base station (FBS) is able to form a cluster and generates a precoding matrix, which is a modified version of conventional single-cell block diagonalization, in a cooperative manner. Since interference from clustered-FBSs located at the nearby macro user equipment (MUE) is the dominant interference contributor to the coexisting networks, each cluster generates a precoding matrix considering the effects of interference on nearby MUEs. Through simulation, we verify that the proposed algorithm shows better performance respective to both MUE and femto user equipment, in terms of capacity. Analyzing the Economic Effect of Mobile Network Sharing in Korea Song, Young-Keun;Zo, Hang-Jung;Lee, Sung-Joo 308 As mobile markets in most developed countries are rapidly coming close to saturation, it is increasingly challenging to cover the cost of providing the network, as revenues are not growing. This has driven mobile operators, thus far mostly involved in facility-based competition, to turn their attention to network sharing. There exist various types of mobile network sharing (MNS), from passive to active sharing. In this paper, we propose a model, based on the supply-demand model, for evaluating the economic effects of using six types of MNS. Our study measures the economic effects of employing these six types of MNS, using actual WiBro-related data. Considering lower service price and expenditure reduction, the total economic effect from a year's worth of MNS use is estimated to be between 513 million and 689 million USD, which is equal to three to four percent of the annual revenue of Korean mobile operators. The results of this study will be used to support the establishment of a MNS policy in Korea. In addition, the results can be used as a basic model for developing various network sharing models. Closed-Loop Transmit Diversity Techniques for Small Wireless Terminals and Their Performance Assessment in a Flat Fading Channel Mostafa, Raqibul;Pallat, Ramesh C.;Ringel, Uwe;Tikku, Ashok A.;Reed, Jeffrey H. 319 Closed-loop transmit diversity is considered an important technique for improving the link budget in the third generation and future wireless communication standards. This paper proposes several transmit diversity algorithms suitable for small wireless terminals and presents performance assessment in terms of average signal-to-noise ratio (SNR) and outage improvement, convergence, and complexity of operations. The algorithms presented herein are verified using data from measured indoor channels with variable antenna spacing and the results explained using measured radiation patterns for a two-element array. It is shown that for a two-element array, the best among the proposed techniques provides SNR improvement of about 3 dB in a tightly spaced array (inter-element spacing of 0.1 wavelength at 2 GHz) typical of small wireless devices. 
Additionally, these techniques are shown to perform significantly better than a single antenna device in an indoor channel considering realistic values of latency and propagation errors. Scate: A Scalable Time and Energy Aware Actor Task Allocation Algorithm in Wireless Sensor and Actor Networks Sharifi, Mohsen;Okhovvat, Morteza 330 In many applications of wireless sensor actor networks (WSANs) that often run in harsh environments, the reduction of completion times of tasks is highly desired. We present a new time-aware, energy-aware, and starvation-free algorithm called Scate for assigning tasks to actors while satisfying the scalability and distribution requirements of WSANs with semi-automated architecture. The proposed algorithm allows concurrent executions of any mix of small and large tasks and yet prevents probable starvation of tasks. To achieve this, it estimates the completion times of tasks on each available actor and then takes the remaining energies and the current workloads of these actors into account during task assignment to actors. The results of our experiments with a prototyped implementation of Scate show longer network lifetime, shorter makespan of resulting schedules, and more balanced loads on actors compared to when one of the three well-known task-scheduling algorithms, namely, the max-min, min-min, and opportunistic load balancing algorithms, is used. Multimedia Service Discrimination Based on Fair Resource Allocation Using Bargaining Solutions Shin, Kwang-Sup;Jung, Jae-Yoon;Suh, Doug-Young;Kang, Suk-Ho 341 We deal with a resource allocation problem for multimedia service discrimination in wireless networks. We assume that a service provider allocates network resources to users who can choose and access one of the discriminated services. To express the rational service selection of users, the utility function of users is devised to reflect both service quality and cost. Regarding the utility function of a service provider, total profit and efficiency of resource usage have been considered. The proposed service discrimination framework is composed of two game models. An outer model is a repeated Stackelberg game between a service provider and a user group, while an inner model is a service selection game among users, which is solved by adopting the Kalai-Smorodinsky bargaining solution. Through simulation experiments, we compare the proposed framework with existing resource allocation methods according to user cost sensitivity. The proposed framework performed better than existing frameworks in terms of total profit and fairness. Low-Cost, Low-Power, High-Capacity 3R OEO-Type Reach Extender for a Long-Reach TDMA-PON Kim, Kwang-Ok;Lee, Jie-Hyun;Lee, Sang-Soo;Lee, Jong-Hyun;Jang, Youn-Seon 352 This paper proposes a low-cost, low-power, and high-capacity optical-electrical-optical-type reach extender that can provide 3R frame regeneration and remote management to increase the reach and split ratio with no change to a legacy time division multiple access passive optical network. To provide remote management, the extender gathers information regarding optical transceivers and link status per port and then transmits to a service provider using a simple network management protocol agent. The extender can also apply to an Ethernet passive optical network (E-PON) or a gigabit-capable PON (G-PON) by remote control. 
In a G-PON, in particular, it can provide burst mode signal retiming and burst-to-continuous mode conversion at the upstream path through a G-PON transmission convergence frame adaptor. Our proposed reach extender is based on the quad-port architecture for cost-effective design and can accommodate both the physical reach of 60 km and the 512 split ratios in a G-PON and the physical reach of 80 km and the 256 split ratios in an E-PON. High-Frequency Modeling and Optimization of E/O Response and Reflection Characteristics of 40 Gb/s EML Module for Optical Transmitters Xu, Chengzhi;Xu, Y.Z.;Zhao, Yanli;Lu, Kunzhong;Liu, Weihua;Fan, Shibing;Zou, Hui;Liu, Wen 361 A complete high-frequency small-signal circuit model of a 40 Gb/s butterfly electroabsorption modulator integrated laser module is presented for the first time to analyze and optimize its electro-optic (E/O) response and reflection characteristics. An agreement between measured and simulated results demonstrates the accuracy and validity of the procedures. By optimizing the bonding wire length and the impedance of the coplanar waveguide transmission lines, the E/O response increases approximately 5% to 15% from 20 GHz to 33 GHz, while the signal injection efficiency increases from approximately 15% to 25% over 18 GHz to 35 GHz. Probability Constrained Search Range Determination for Fast Motion Estimation Kang, Hyun-Soo;Lee, Si-Woong;Hosseini, Hamid Gholam 369 In this paper, we propose new adaptive search range motion estimation methods where the search ranges are constrained by the probabilities of motion vector differences and a search point sampling technique is applied to the constrained search ranges. Our new methods are based on our previous work, in which the search ranges were analytically determined by the probabilities. Since the proposed adaptive search range motion estimation methods effectively restrict the search ranges instead of search point sampling patterns, they provide a very flexible and hardware-friendly approach in motion estimation. The proposed methods were evaluated and tested with JM16.2 of the H.264/AVC video coding standard. Experiment results exhibit that with negligible degradation in PSNR, the proposed methods considerably reduce the computational complexity in comparison with the conventional methods. In particular, the combined method provides performance similar to that of the hybrid unsymmetrical-cross multi-hexagon-grid search method and outstanding merits in hardware implementation. Modified RHKF Filter for Improved DR/GPS Navigation against Uncertain Model Dynamics Cho, Seong-Yun;Lee, Hyung-Keun 379 In this paper, an error compensation technique for a dead reckoning (DR) system using a magnetic compass module is proposed. The magnetic compass-based azimuth may include a bias that varies with location due to the surrounding magnetic sources. In this paper, the DR system is integrated with a Global Positioning System (GPS) receiver using a finite impulse response (FIR) filter to reduce errors. This filter can estimate the varying bias more effectively than the conventional Kalman filter, which has an infinite impulse response structure. Moreover, the conventional receding horizon Kalman FIR (RHKF) filter is modified for application in nonlinear systems and to compensate the drawbacks of the RHKF filter. The modified RHKF filter is a novel RHKF filter scheme for nonlinear dynamics. The inverse covariance form of the linearized Kalman filter is combined with a receding horizon FIR strategy. 
This filter is then combined with an extended Kalman filter to enhance the convergence characteristics of the FIR filter. Also, the receding interval is extended to reduce the computational burden. The performance of the proposed DR/GPS integrated system using the modified RHKF filter is evaluated through simulation. Yield Enhancement Techniques for 3D Memories by Redundancy Sharing among All Layers Lee, Joo-Hwan;Park, Ki-Hyun;Kang, Sung-Ho 388 Three-dimensional (3D) memories using through-silicon vias (TSVs) will likely be the first commercial applications of 3D integrated circuit technology. A 3D memory yield can be enhanced by vertical redundancy sharing strategies. The methods used to select memory dies to form 3D memories have a great effect on the 3D memory yield. Since previous die-selection methods share redundancies only between neighboring memory dies, the opportunity to achieve significant yield enhancement is limited. In this paper, a novel die-selection method is proposed for multilayer 3D memories that shares redundancies among all of the memory dies by using additional TSVs. The proposed method uses three selection conditions to form a good multi-layer 3D memory. Furthermore, the proposed method considers memory fault characteristics, newly detected faults after bonding, and multiple memory blocks in each memory die. Simulation results show that the proposed method can significantly improve the multilayer 3D memory yield in a variety of situations. The TSV overhead for the proposed method is almost the same as that for the previous methods. Object Modeling with Color Arrangement for Region-Based Tracking Kim, Dae-Hwan;Jung, Seung-Won;Suryanto, Suryanto;Lee, Seung-Jun;Kim, Hyo-Kak;Ko, Sung-Jea 399 In this paper, we propose a new color histogram model for object tracking. The proposed model incorporates the color arrangement of the target that encodes the relative spatial distribution of the colors inside the object. Using the color arrangement, we can determine which color bin is more reliable for tracking. Based on the proposed color histogram model, we derive a mean shift framework using a modified Bhattacharyya distance. In addition, we present a method of updating an object scale and a target model to cope with changes in the target appearance. Unlike conventional mean shift based methods, our algorithm produces satisfactory results even when the object being tracked shares similar colors with the background. Reversible Watermark Using an Accurate Predictor and Sorter Based on Payload Balancing Kang, Sang-Ug;Hwang, Hee-Joon;Kim, Hyoung-Joong 410 A series of reversible watermarking technologies have been proposed to increase embedding capacity and the quality of the watermarked image simultaneously. The major skills include difference expansion, histogram shifting, and optimizing embedding order. In this paper, an accurate predictor is proposed to enhance the difference expansion. An efficient sorter is also suggested to find a more desirable embedding order. The payload is differently distributed into two sub-images, split like a chessboard pattern, for better watermarked image quality. Simulation results of the accurate prediction and sorter based on the payload balancing method yield generally better performance over previous methods. The gap is wide, in particular, in low payload for natural images. The peak signal-to-noise ratio improvement is around 2 dB in low payload ranges. 
Provably Secure Aggregate Signcryption Scheme Ren, Xun-Yi;Qi, Zheng-Hua;Geng, Yang 421 An aggregate signature scheme is a digital signature scheme that allows aggregation of n distinct signatures by n distinct users on n distinct messages. In this paper, we present an aggregate signcryption scheme (ASC) that is useful for reducing the size of certification chains (by aggregating all signatures in the chain) and for reducing message size in secure routing protocols. The new ASC scheme combines identity-based encryption and the aggregation of signatures in a practical way that can simultaneously satisfy the security requirements for confidentiality and authentication. We formally prove the security of the new scheme in a random oracle model with respect to security properties IND-CCA2, AUTH-CMA2, and EUF-CMA. High-Quality and Robust Reversible Data Hiding by Coefficient Shifting Algorithm Yang, Ching-Yu;Lin, Chih-Hung 429 This study presents two reversible data hiding schemes based on the coefficient shifting (CS) algorithm. The first scheme uses the CS algorithm with a mean predictor in the spatial domain to provide a large payload while minimizing distortion. To guard against manipulations, the second scheme uses a robust version of the CS algorithm with feature embedding implemented in the integer wavelet transform domain. Simulations demonstrate that both the payload and peak signal-to-noise ratio generated by the CS algorithm with a mean predictor are better than those generated by existing techniques. In addition, the marked images generated by the variant of the CS algorithm are robust to various manipulations created by JPEG2000 compression, JPEG compression, noise additions, (edge) sharpening, low-pass filtering, bit truncation, brightness, contrast, (color) quantization, winding, zigzag and poster edge distortion, and inversion. Technological Convergence of IT and BT: Evidence from Patent Analysis Geum, Young-Jung;Kim, Chul-Hyun;Lee, Sung-Joo;Kim, Moon-Soo 439 In recent innovation trends, one notable feature is the merging and overlapping of technologies: in other words, technological convergence. A key technological convergence is the fusion of biotechnology (BT) and information technology (IT). Major IT advances have led to innovative devices that allow us to advance BT. However, the lack of data on IT-BT convergence is a major impediment: relatively little research has analyzed the inter-disciplinary relationship of different industries. We propose a systematic approach to analyzing the technological convergence of BT and IT. Patent analysis, including citation and co-classification analyses, was adopted as a main method to measure the convergence intensity and coverage, and two portfolio matrices were developed to manage the technological convergence. The contribution of this paper is that it provides practical evidences for IT-BT convergence, based on quantitative data and systematic processes. This has managerial implications for each sector of IT and BT. Channel Estimation Scheme for WLAN Systems with Backward Compatibility Kim, Jee-Hoon;Yu, Hee-Jung;Lee, Sok-Kyu 450 IEEE 802.11n standards introduced a mixed-mode format frame structure to achieve higher throughput with multiple antennas while providing backward compatibility with legacy systems. 
Although multi-input multi-output channel estimation was possible only with high-throughput long training fields (HT-LTFs), the proposed scheme utilizes a legacy LTF as well as HT-LTFs in a decision feedback manner to improve the accuracy of the estimates. It was verified through theoretical analysis and simulations that the proposed scheme effectively enhances the mean square error performance. Planar DVB-T Antenna Using a Patterned Helical Line and Matching Circuit Lim, Jong-Hyuk;Yun, Tae-Yeoul 454 A miniaturized planar digital video broadcasting terrestrial (DVB-T) antenna, which is composed of a patterned helical line, an open stub, and an impedance matching circuit on an FR4 (${\varepsilon}_r$=4.4) substrate for portable media player applications, is presented in this letter. The antenna has monopole-like, omni-directional radiation characteristics and a wide impedance bandwidth (VSWR<3) in the DVB-T band from 174 MHz to 230 MHz at the VHF band. ML-Based Estimation Algorithm of Frequency Offset for $2{\times}2$ STBC-OFDM Systems Lei, Ming;Zhao, Minjian;Zhong, Jie;Cai, Yunlong 458 In this letter, we propose a novel frequency offset estimation algorithm for space-time block code (STBC) orthogonal frequency division multiplexing systems. The algorithm mainly exploits the specific construction of STBC so that it does not need any additional pilots or sequences in the data field. The estimator is derived on the basis of the maximum likelihood theory. Simulation results show that this method can provide a significant performance improvement in terms of the estimation accuracy of the frequency offset. A Distributed Sequential Link Schedule Combined with Routing in Wireless Mesh Networks Cha, Jae-Ryong;Kim, Jae-Hyun 462 This letter proposes a new distributed scheduling scheme combined with routing to support the quality of service of real-time applications in wireless mesh networks. Next, this letter drives average end-to-end delay of the proposed scheduling scheme that sequentially schedules the slots on a path. Finally, this letter simulates the time division multiple access network for performance comparison. From the simulation results, when the average number of hops is 2.02, 2.66, 4.1, 4.75, and 6.3, the proposed sequential scheduling scheme reduces the average end-to-end delay by about 28%, 10%, 17%, 27%, and 30%, respectively, compared to the conventional random scheduling scheme. Dual Autostereoscopic Display Platform for Multi-user Collaboration with Natural Interaction Kim, Hye-Mi;Lee, Gun-A.;Yang, Ung-Yeon;Kwak, Tae-Jin;Kim, Ki-Hong 466 In this letter, we propose a dual autostereoscopic display platform employing a natural interaction method, which will be useful for sharing visual data with users. To provide 3D visualization of a model to users who collaborate with each other, a beamsplitter is used with a pair of autostereoscopic displays, providing a visual illusion of a floating 3D image. To interact with the virtual object, we track the user's hands with a depth camera. The gesture recognition technique we use operates without any initialization process, such as specific poses or gestures, and supports several commands to control virtual objects by gesture recognition. Experiment results show that our system performs well in visualizing 3D models in real-time and handling them under unconstrained conditions, such as complicated backgrounds or a user wearing short sleeves. 
Image Independent Driving Power Reduction for High Frame Rate LCD Televisions Nam, Hyoung-Sik;Shim, Jae-Hoon 470 In this letter, the constant driving power reduction ratio has been achieved for column drivers regardless of the input image by incorporating a new static power reduction scheme into the previous dynamic power reduction method. The measured power reduction ratio is around 50% for a 120 Hz liquid crystal display panel in such cases of still input video and fallback. Adaptive TCX Windowing Technology for Unified Structure MPEG-D USAC Lee, Tae-Jin;Beack, Seung-Kwon;Kang, Kyeong-Ok;Kim, Whan-Woo 474 The MPEG-D unified speech and audio coding (USAC) standardization process was initiated by MPEG to develop an audio codec that is able to provide consistent quality for mixed speech and music contents. The current USAC reference model structure consists of frequency domain (FD) and linear prediction domain (LPD) core modules and is controlled using a signal classifier tool. In this letter, we propose an LPD single-mode USAC structure using an adaptive widowing-based transform-coded excitation module. We tested our system using official test items for all mono-evaluation modes. The results of the experiment show that the objective and subjective performances of the proposed single-mode USAC system are better than those of the FD/LPD dual-mode USAC system. A Fast Redundancy Analysis Algorithm in ATE for Repairing Faulty Memories Cho, Hyung-Jun;Kang, Woo-Heon;Kang, Sung-Ho 478 Testing memory and repairing faults have become increasingly important for improving yield. Redundancy analysis (RA) algorithms have been developed to repair memory faults. However, many RA algorithms have low analysis speeds and occupy memory space within automatic test equipment. A fast RA algorithm using simple calculations is proposed in this letter to minimize both the test and repair time. This analysis uses the grouped addresses in the faulty bitmap. Since the fault groups are independent of each other, the time needed to find solutions can be greatly reduced using these fault groups. Also, the proposed algorithm does not need to store searching trees, thereby minimizing the required memory space. Our experiments show that the proposed RA algorithm is very efficient in terms of speed and memory requirements. Cryptanalysis of an Authenticated Key Agreement Protocol for Wireless Mobile Communications He, Debiao 482 With the rapid progress of wireless mobile communications, the authenticated key agreement (AKA) protocol has attracted an increasing amount of attention. However, due to the limitations of bandwidth and storage of the mobile devices, most of the existing AKA protocols are not suitable for wireless mobile communications. Recently, Lo and others presented an efficient AKA protocol based on elliptic curve cryptography and included their protocol in 3GPP2 specifications. However, in this letter, we point out that Lo and others' protocol is vulnerable to an offline password guessing attack. To resist the attack, we also propose an efficient countermeasure.
CommonCrawl
Focus on: All days Oct 23, 2017 Oct 24, 2017 Oct 25, 2017 Oct 26, 2017 Oct 27, 2017 All sessions Invited Talks Parallel Session - MP Parallel Sessions - HEP Parallel Sessions - NAT Parallel Sessions - NINST Parallel Sessions - NUC Plenary Talks Poster Session - HEP Poster Session - MP Poster Session - NAT Poster Session - NINST Poster Session - NUC Hide Contributions America/Havana LASNPA & WONP-NURT 2017 Oct 23, 2017, 8:30 AM → Oct 27, 2017, 4:30 PM America/Havana Colegio Universitario San Gerónimo de La Habana Obispo street, 10200 Old Havana, Havana, Cuba. Ana E. Cabal (CEADEN) , Ibrahin Pinera (University of Antwerp) , Siannah Penaranda Rivas (University of Zaragoza) , Yamiel Abreu (Universiteit Antwerpen) LASNPA & WONP-NURT 2017 will be devoted to the discussion of current problems in various fields of applied and fundamental research. The LASNPA (Latin-American Symposium on Nuclear Physics and Applications) is a traditional symposium which takes place every two years since 1995 in different countries of Latin America. For the next symposium, Cuba was chosen to hold the LASNPA. This series of symposia is attended by the most important nuclear physicists of Latin America and has the participation of several scientists from the USA, Europe and Asia. The XII LASNPA is being organized in cooperation with the International Atomic Energy Agency (IAEA) and the International Union of Pure and Applied Physics (IUPAP). The WONP and the NURT (Workshops on Nuclear Physics and Nuclear Related Techniques) are key Cuban Scientific Meetings in the field of Nuclear and High Energy Particle Physics and peaceful applications of nuclear techniques in the economic and social life. WONP-NURT Symposia has a biannual frequency, hosting delegates and invited scientists from more than 40 countries from all over the world. Since 2009 a pre-conference school is also organized. In this edition the Symposium NURT is celebrating its 20th Anniversary. poster_2017.pdf Adlin López Díaz Ailec Bell Ailier Rivero-Acosta Alberto Andrighetto Alejandro Genaro Cabo Montes de Oca Alejandro Martínez León Alex Manners Alexandra Pabon Alexandrina Petrovici Alexey Guskov Alexey Zhemchugov Alexis Diaz-Torres Alinka Lépine-Szily Anel Hernandez-Garces Ani Aprahamian Annie Ortiz Puentes Antonino Foti Antonio Leyva Fabelo Antonio Torres Arianna Grisel Torres Ramos Ariel García Fleitas Armando Bermudez Martinez Armando José Hernández Arnaldo Núñez Mascaró Arturo Gomez Camacho Aurora Perez Martinez Barbara E. García Moreno Boris Kopeliovich Carlos Ernesto Garrido Salmon Carlos Manuel Cruz Inclán Carlos Manuel Ferras Hernandez Carlos Munoz Camacho Carlos Sandin Carlos Vargas Madrazo Claudia Argote Clayton Souza Clementina Agodi Corina Andreoiu Cruz Duménigo González César García Trápaga Daina Leyva Pernía Dania Consuegra Rodríguez Daniel Abriola Daniel Codorniu Pujals Daniel E. 
Milian Lorenzo Daniel Jose Marin-Lambarri Daniel Phillips Daniel Venencia Daniela Dominguez Damiani Daniela Fabris Daniele Mengoni Danila Kozhevnikov Dante Roa Dario Leon Valido Dario Ramirez David Adame Brooks David Alonso Fernandez David Antonio Brito David Flechas Dayana Castillo Seoane Dayron Ramos López Deijany Rodríguez Denys Yen Arrebato Diana Alvear Terrero Diego Tellez Dimitra Pierroutsakou Doris Rivero Duvier Suarez Fontanella Débora Hernández Torres Eda Sahin Eilen Llanes Veiga Elizabeth Musacchio Gonzalez Elizabeth Rodríguez Querts Enrico Maglione Enrique Minaya Ramirez Ezequiel Gomez Fabiana Gramegna Fabio Happacher Fabio Maltoni Fatima Benrachi Fernando Cristancho Fernando Garcia Yip Fernando Guzman Fernando Montes Fitzgerald Ramírez Moreno Francisco Pérez González Frank Bello Gabriel de la Fuente Rosales Gaia Pupillo Gerardo Herrera Corral Gianluigi Boca Gilmer Valdes Gordon Baym Grazia Cabras Gretel Quintero Angulo Grichar Valdes Santurio Grzeorz Kaminski Guerda Massillon-JL Guido Martin Hernández Haydee Maria Linares Rosales Hector Lubian Iraola Hellen C. Santos Hernan Olaya Hugo Celso Perez Rojas I. Antoniu Popescu Ibrahin Piñera Hernández Ileana Silvestre Ines Quesada Wiemann Iram Rivas-Ortiz Irina Potashnikova Israel Reyes Molina Ivo Van Vulpen Iván Padrón Díaz Ivón Oramas Polo Javier Alejandro Wachter Chamblas Jeppe Brage Christensen Jorge Enrique Portuondo Cisneros Jorge García Ramírez Jorge Luis Acosta Avalo Jorge Luis Dominguez Martinez Jorge Luis Valdes Albuenres Jose L. Rodríguez Jose Luis Alonso Samper Jose Trujillo Josiel de Jesús Barrios Cossio José Alejandro Fragoso Negrín José Alejandro Rubiera Gimeno José Antonio Díaz Merchán Juan José Gamboa-Carballo Juan Pablo Gallardo Fiandor Julian Shorto Julio Nazco Kelly C. C. Pires Kirill Gikal Landy Castro Leandro Gasques Leidys Laura Pérez González Leydis Leal Lidia Ferreira Liliana Caballero Liliana Mou Liset de la Fuente Rosales Lismary de la Caridad Suarez Gonzalez Lisán David Cabrera Gonzalez Liudy Garcia Hernandez Lourdes M. Garcia-Fernandez Luc Beaulieu Luciano Canton Luis Baly Gil Luis Enrique Llanes Montesino Luz Anny Pamela Ochoa Parra M. Saiful Huq Mack Roach III Maikel Diaz Castro Manuel Alejandro Cardosa-Gutierrez Manuel Arsenio Lores-Guevara Manuel I. Vega Hernandez Manuel Rapado Paneque Manuela Cavallaro Marcia A. Rizzutto Marcilei Aparecida Guazzelli da Silveira Maria Isabel Martinez Solarte Maria Tomas Betancourt Mariana Cecilia Betancourt Marlete Assuncao María Laura Haye Michael Dittmar Miguel Vidal Marono Modesto Montoya Moshe Gai Máriel Morales Nadjet LAOUET Nahuel Facundo Martínez Clemente Naila Gómez González Neven Simicevic Nick Van Remortel Nikola Poljak Nilberto Medina Olivier Schalm Oscar Daniel Zambrano Ramirez Oscar Díaz Rizo Oscar Naviliat-Cuncic Osvaldo Brígido Osvaldo C. B. Santos Pablo Aguilera Pablo Ortiz-Ramírez Penelope Rodriguez-Zamora Petr Smolyanskiy Pierfrancesco Mastinu Piet Van Espen Rafael Miller Rafael Sosa Ricardo Raul Argota Perez Rayner Hernández Pérez Renato Padovani René Toledo Acosta Riccardo Orlandi Rodolfo G. 
Figueroa Saavedra Rogelio Manuel Diaz Moreno Rubén Orozco-Morales Sabin Stoica Samantha López Sandro Barlini Segundo Agustín Martínez Ovalle Shalev Gilad Siannah Penaranda-Rivas Silvia Murillo-Morales Stephen Avery Steven Medina Sunay Rodríguez Pérez Tania Valdés González Teresita Cepero Chao Tom Swayne Tommaso Marchi Ulrich Parzefall Victor Bourel Victor Modamio Vinicius Zagatto Vladimir Rosa Febles Walter Vilca Vega Wilfredo Sol Zamora Yaisel Córdova-Chávez Yakdiel Rodriguez-Gallo Yakov Pipman Yamiel Abreu Alfonso Yan Carlos Diaz Yannet Interian Yeline Sola Rodríguez Yoval Aguiar Yudmila Reyes Yuri Aguilera Corrales Ángel Alejandro Pérez Martínez LASNPA & WONP-NURT [email protected] [email protected] Mon, Oct 23 Wed, Oct 25 Fri, Oct 27 Opening Aula Magna Invited Talks Aula Magna Invited Talk (45 min + 10 min) will be held at "Aula Magna", Colegio Universitario San Gerónimo de La Habana. Convener: Alinka Lepine-Szily (University of Sao Paulo, Brazil) Nuclear matter and related systems: from the hottest and the coldest places in the universe to the densest Our knowledge of the states of nuclear matter under extreme conditions has advanced significantly in recent years through developments along numerous interrelated paths. This talk will describe this progress by focusing on new understanding of the densest matter in the universe, that deep inside neutron stars. Recent advances driving a better picture of neutron star interiors include observations of heavy neutron stars with masses just twice that of the sun; ongoing observational simultaneous determinations of neutron star masses and radii; an emerging understanding in QCD of how nuclear matter can turn into deconfined quark matter in the interior; and the creation of new states of quantum matter in the laboratory, including quark-gluon plasmas in ultrarelativistic heavy ion collisions, and new Bose and Fermi superfluids in ultracold trapped atomic clouds. The importance of a better understanding of dense nuclear matter is strongly underlined by the observational discovery of gravitational radiation, since merging of neutron stars with black holes or neutron stars will be a principal source of future gravitational radiation events. Speaker: Prof. Gordon Baym (University of Illinois, USA.) Low lying Oscillations of Deformed Nuclei The 1975 Nobel prize in Physics was awarded to Bohr, Mottelson, and Rainwater for the discovery of the connection between nucleon motion and the emergent collective behavior. They described nuclei geometrically as a shape and the oscillations of the nucleus around that shape. In deformed nuclei, they predicted low-lying quadrupole oscillations of the deformed shape with respect to projections on the symmetry axis as "$\beta$" and "$\gamma$" vibrations. The "$\gamma$" vibration seems to be well characterized and exhibits a systematic behavior across the region of deformed nuclei with typical B(E2; 2+ γ → 0+ g.s. ) values of a few Weisskopf units (W.u.). The discussion on the "$\beta$" vibration however still continues today some forty years later and remains an open challenge to nuclear structure studies to large part to the lack of experimental data on the identification and characterization of 0$^+$ states. This has been changing recently with the discovery of numerous 0$^+$ states well below the pairing gap in several isotopes of Sm, Gd, Dy, Er and Hf in the rare earth region. 
We have been measuring lifetimes of low-lying excited K$^{\pi}$ = 0$^+$, 2$^+$, 4$^+$ states in this region of deformation and will present our results along with expected levels of collectivity. This work was supported by the US National Science Foundation under contract number PHYS-1419765. Speaker: Ani Aprahamian (University of Notre Dame, USA.) Lunch Hotel Florida Parallel Sessions - HEP Room "Fernando Portuondo" Room "Fernando Portuondo" Convener: Nick van Remortel (University of Antwerp) Radiation-Hard Silicon Detectors and the ATLAS HL-LHC-Upgrade The experiments at the Large Hadron Collider (LHC) at CERN are in need of major detector upgrades to cope with the increased luminosity of the High-Luminosity Upgrade of the LHC. In order to cope with the massive increases in track densities, event rates and radiation damage, the entire Inner Tracker of the ATLAS experiment will be replaced. This presentation outlines the huge challenges of this task, and discusses methods to increase the radiation hardness of silicon particle detectors. An overview of radiation-hard silicon detector technologies will be given. The technological choices made for the ATLAS Upgrade will be shown and motivated, and the layout and expected performance of the new ATLAS Inner Tracker will be presented. Speaker: Dr Ulrich Parzefall (University of Freiburg, Germany) New results on Proton Tomography from Jefferson Lab Exclusive processes at high momentum transfer, such as Deeply Virtual Compton Scattering (DVCS), access the Generalized Parton Distributions (GPDs) of the nucleon. GPDs offer the exciting possibility of mapping the 3-D internal structure of protons and neutrons by providing a transverse image of the constituents as a function of their longitudinal momentum. A vigorous experimental program is currently being pursued at Jefferson Lab (JLab) to study GPDs through DVCS and meson production. New results from Hall A will be shown and discussed. Special attention will be devoted to the applicability of the GPD formalism at moderate values of momentum transfer. In addition, we will report on results for L/T separated pi0 electroproduction cross sections off the proton, the neutron and the deuteron. A large transverse response for both the proton and neutron cases is found, pointing to a possible dominance of higher-twist transversity GPD contributions. For the first time, a flavor decomposition of the u and d quark contributions to the cross section will be shown. We will conclude with a brief overview of additional DVCS experiments under analysis and planned with the future Upgrade of JLab to 12 GeV. Speaker: Dr Carlos Munoz Camacho (IPN-Orsay (CNRS/IN2P3, France)) Chiral effects in gauge theories and the chiral chemical potential We report the generation of a pseudovector electric current having imbalanced chirality in an electron-positron strongly magnetized gas in QED. It propagates along the external applied magnetic field B as a chiral magnetic effect in QED. It is triggered by a perturbative electric field parallel to B, associated with a pseudovector longitudinal mode propagating along B. This mode is associated with the chiral charge density in a massive charged medium in the context of QED. We do not introduce a chiral chemical potential; this term, as usually used in the literature, is not well defined because the axial charge is not conserved. However, an electromagnetic chemical potential was introduced, and our results remain valid even when it vanishes.
A nonzero fermion mass was assumed, which is usually taken as vanishing in the literature. In the quantum field theory formalism at finite temperature and density, an anomaly relation for the axial current was found for a medium of massive fermions. It bears some analogy to the Adler-Bell-Jackiw anomaly. We obtain that the pair creation process, due to longitudinal photons (out of the light cone), induces an imbalanced chirality and contributes to the chiral current along B. We also discuss the introduction of a chiral chemical potential in the quantum field theory formalism at finite temperature and density, specifically in the frame of electroweak theory. Speaker: Mr Jorge Luis Acosta Avalo (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC)) Parallel Sessions - NINST Room "Jenaro Artiles" Room "Jenaro Artiles" Convener: Fabiana Gramegna (INFN Laboratori Nazionali di Legnaro) Neutron source based on the $^7$Li(p,n)$^7$Be reaction for Boron Neutron Capture Therapy If Boron Neutron Capture Therapy is to become a practical option, accelerator-based sources of high fluxes of epithermal neutrons are essential. Generation of low-energy neutrons can be achieved through the $^7$Li(p,n)$^7$Be reaction using an accelerator-based neutron source. Much work has been performed on the development of high-flux compact proton accelerators, but a dose-limiting component remains the design of the neutron production target. Specifically, lithium has a low melting point (180ºC) and low thermal conductivity (44 W/mºC). In this study, therapeutic gain and tumor dose per target power, as parameters to evaluate the treatment quality, were calculated. Energies near the reaction threshold for deep-seated brain tumors were employed. These calculations were performed with the Monte Carlo N-Particle (MCNP) code. As a result, a good therapeutic gain was obtained with a simple but effective beam shaping assembly. Also, heat transfer evaluations of the designed lithium target were performed with the ANSYS software. The target design shows that the peak lithium temperature can be held below 150ºC with Indalloy flowing through a copper microchannel plate. Speaker: Elizabeth Musacchio González (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Cuba.) Calculation of the self-shielding factor for neutron activation experiments using Geant4 and MCNP In this work we calculated the self-shielding factor, $G$, as a function of the neutron energy, which is important to consider in precise neutron activation experiments. Twelve samples of pure metallic materials were simulated using the Geant4 Monte Carlo toolkit[1,2] and the MCNP[3] code. The self-shielding factor is defined as the ratio between the neutron flux inside the sample volume and the flux at the surface of the sample, $$ G = \left( \int_{E_1}^{E_2}dE\Phi_V \right) \div \left( \int_{E_1}^{E_2}dE\Phi_S \right). $$ We have simulated the behaviour of the self-shielding factor for neutron energies from 10$^{-5}$ eV to 20 MeV. Results obtained by running 10$^{6}$ neutron events in MCNP6 using the ENDF/B-VII.1, JEFF 3.2 and TENDL2014 neutron cross section libraries show that the self-shielding factor is relevant to include in neutron activation analysis experiments for thermal neutron energies and for sample thicknesses greater than 10$^{-4}$ m, as seen in the recent calculation of the neutron flux at the RECH-1 nuclear reactor[4]. S. Agostinelli, et al., Geant4: A simulation toolkit, Nucl. Instrum. Meth. A, 506 3 (2003) 250-303. J.
Allison, et al., Geant4 developments and applications, IEEE Transactions on Nuclear Science, 53 1 (2006) 270-278. T. Goorley, et al., Initial MCNP6 Release Overview, Nuclear Technology 180 (2012) 298-315. F. Molina, et al., Energy distribution of the neutron flux measurements at the Chilean Reactor RECH-1 using multi-foil neutron activation and the Expectation Maximization unfolding algorithm, Appl. Radiat. Isot. 129 (2017) 28-34. Speaker: Jaime Alfonso Romero Barrientos (CCHEN Comision Chilena de Energia Nuclear, Chile.) Particle Accelerators to Study Radiation Effects in Electronic Devices Electronic devices are strongly influenced by radiation, and the need for radiation-tolerant devices is growing for applications in environments with high radiation dose [1]. The effects of radiation on electronic components are mainly: Total Ionizing Dose (TID), Displacement Damage (DD), and Single Event Effects (SEE). TID is a cumulative effect that changes the characteristics of electronic devices. DD can change the arrangement of the atoms in the lattice, also modifying the component's electrical properties. SEE can be a transient effect in which free charge, generated by heavy ions directly in the device, may provoke data corruption or even a permanent device failure. In order to study TID radiation effects in electronic devices subjected to proton beams, a 1.7 MV 5SDH Pelletron accelerator of São Paulo University, which can produce proton beams with energies up to 3.0 MeV, is used [2]. In order to study SEE at the 8.0 MV São Paulo University Pelletron accelerator, a new beam line was mounted to test electronic devices with heavy-ion beams [2]. The heavy-ion beam characteristics follow the requirements to test electronic devices for SEE recommended by the European Space Agency (ESA). This setup is currently being used to provoke failures in integrated circuits and to test the performance of redundancy and correction algorithms in FPGAs. Johnston, A., Reliability and Radiation Effects in Compound Semiconductors. World Sci. Pub. Co. Pte. Ltd., California Inst. of Tech., USA, 2010. Medina, N.H., et al., Jour. Nucl. Phys., Mat. Sci., Rad. and Appl., v. 4, p. 13-23, 2016. Speaker: N.H. Medina (Instituto de Física da Universidade de São Paulo) Parallel Sessions - NUC Room "Benigno Souza" Room "Benigno Souza" Convener: Prof. Lidia Ferreira (CeFEMA/IST) $^7$Be(p,γ)$^8$B: how EFT and Bayesian analysis can improve a reaction calculation The reaction $^7$Be(p,γ)$^8$B generates most of the high-energy neutrinos emanating from the pp-fusion chain in our Sun. Over the past twenty years there has been a substantial effort to measure its cross section at center-of-mass energies below 500 keV. One goal of this effort was accurate extrapolation of the astrophysical S-factor to solar energies. I will explain our treatment of this problem (Zhang et al., Phys. Lett. B 751, 535 (2015)), which uses an effective field theory (EFT) for $^7$Be(p,γ)$^8$B and Bayesian methods to perform the extrapolation. We find a zero-energy S-factor S(0) = 21.3 $\pm$ 0.7 eV b, an uncertainty smaller by a factor of two than previously recommended. This improvement occurs because the EFT encapsulates all plausible low-energy models of the process, and so model selection for this problem can be accomplished in a rigorous and statistically meaningful way.
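As a rough illustration of the idea of a statistical extrapolation of S(E) to zero energy, a minimal sketch is given below. It is not the EFT-based analysis of this contribution: it replaces the EFT curves with a toy linear model S(E) = S0 + S1*E, uses invented placeholder pseudo-data, and computes the marginal posterior of S(0) on a parameter grid with flat priors and a Gaussian likelihood. All numerical values in it are illustrative assumptions.

```cpp
// Toy Bayesian extrapolation of an astrophysical S-factor to E = 0.
// NOT the EFT analysis of the contribution; the linear model, the grid
// ranges and the pseudo-data below are illustrative placeholders only.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Placeholder pseudo-data: c.m. energy (keV), S (eV b), uncertainty (eV b).
    struct Point { double E, S, dS; };
    std::vector<Point> data = {
        {120.0, 19.5, 0.8}, {190.0, 20.4, 0.7}, {300.0, 21.8, 0.9}, {450.0, 23.0, 1.0}};

    // Toy model S(E) = S0 + S1*E with flat priors on a (S0, S1) grid.
    const int N0 = 200, N1 = 200;
    const double S0min = 15.0, S0max = 25.0;   // eV b
    const double S1min = 0.0,  S1max = 0.02;   // eV b per keV

    std::vector<double> margS0(N0, 0.0);       // marginal posterior of S0 = S(0)
    for (int i = 0; i < N0; ++i) {
        double S0 = S0min + (S0max - S0min) * i / (N0 - 1);
        for (int j = 0; j < N1; ++j) {
            double S1 = S1min + (S1max - S1min) * j / (N1 - 1);
            double chi2 = 0.0;
            for (const auto& p : data) {
                double r = (p.S - (S0 + S1 * p.E)) / p.dS;
                chi2 += r * r;
            }
            margS0[i] += std::exp(-0.5 * chi2); // flat prior: posterior follows likelihood
        }
    }

    // Posterior mean and standard deviation of S(0).
    double norm = 0.0, mean = 0.0, var = 0.0;
    for (int i = 0; i < N0; ++i) norm += margS0[i];
    for (int i = 0; i < N0; ++i)
        mean += (S0min + (S0max - S0min) * i / (N0 - 1)) * margS0[i] / norm;
    for (int i = 0; i < N0; ++i) {
        double S0 = S0min + (S0max - S0min) * i / (N0 - 1);
        var += (S0 - mean) * (S0 - mean) * margS0[i] / norm;
    }
    std::printf("toy extrapolation: S(0) = %.2f +/- %.2f eV b\n", mean, std::sqrt(var));
    return 0;
}
```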
Speaker: Daniel Phillips (Ohio University) Shape coexistence phenomena in A$\sim$70 rp-process nuclei within a beyond-mean-field approach Proton-rich nuclei in the A$\sim$70 mass region relevant to the astrophysical rp-process manifest exotic structure and dynamics induced by shape coexistence and mixing, competition between like-nucleon and neutron-proton pairing correlations, as well as isospin-symmetry-breaking interactions. Recent results [1-4] concerning the interplay between isospin-symmetry-breaking and shape-coexistence effects on the structure and $\beta$-decay properties of N$\sim$Z nuclei obtained within the beyond-mean-field complex Excited Vampir variational model will be presented. Reliable predictions of characteristics of these nuclei that are beyond experimental reach require a realistic description of the experimentally accessible properties. Shape coexistence effects on terrestrial and stellar Fermi and Gamow-Teller $\beta$-decay properties of low-lying states and their influence on the effective half-lives of exotic nuclei at the high temperatures of X-ray bursts will be illustrated. A. Petrovici, Phys. Rev. C 91, 014302 (2015). A. Petrovici and O. Andrei, Eur. Phys. J. A 51, 133 (2015). A. Petrovici and O. Andrei, Phys. Rev. C 92, 064305 (2015). A. Petrovici, Phys. Scr. 92, 064003 (2017). Speaker: Prof. Alexandrina Petrovici (IFIN-HH, Horia Hulubei National Institute for Physics and Nuclear Engineering) $\beta$-decay of $^{77,75}$Ni The evolution of the shell structure when moving away from the stability line is discussed in the present contribution. In particular, the effect of the tensor interaction on the appearance and disappearance of magic numbers is considered, together with the specific case of $^{78}$Ni. To contribute to the understanding of this case, the level structure of $^{75,77}$Cu was studied in a $\beta$-delayed $\gamma$-spectroscopy experiment. The $\beta$-decay experiment was performed at the Radioactive Ion Beam Factory (RIBF) of the RIKEN Nishina Center. A secondary beam of nuclei in the region of $^{78}$Ni was produced by the in-flight fission of $^{238}$U projectiles on a $^{9}$Be target. After being selected and identified in the BigRIPS fragment separator, the nuclei of interest were implanted in the WAS3ABi active stopper, where the $\beta$-decay events were detected. The EURICA array, consisting of 12 germanium cluster detectors, surrounded the active stopper for the detection of the $\gamma$-rays. The level schemes of $^{75,77}$Cu are presented together with the results of new shell model calculations. Speaker: Frank Leonel Bello Garrote (University of Oslo, Norway.) Coffee Break Room "Bens Arrate" Room "Bens Arrate" Convener: Alejandro Cabo (ICIMAF, Havana, Cuba.) Color reconnection studies in underlying event observables at the LHC Studies of the effects of different color reconnection (CR) choices for three different models implemented in the Pythia 8 event generator are shown. Validation plots for the new tunes of the three main Pythia CR models, the MPI-based scheme, the new more QCD-based scheme and the gluon-move model, are shown. Four different Rivet-validated analyses are presented: CMS_2011_S9215166, which investigates the agreement of the tunes in the forward region; CMS_2012_PAS_QCD_11_010, which investigates the agreement of the tunes for strange particles; CMS_2015_I1356998, which investigates the agreement of the tunes for diffractive observables; and ATLAS_2010_S8894728, which investigates the agreement of the tunes for UE observables.
Speaker: Arturo Rodriguez Rodriguez (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Universidad de La Habana, Cuba.) High pt top-jet production at the LHC The production of $t\bar{t}$ pairs at high pT, i.e. the so-called boosted regime, is characterized by two collimated jets which contain most of the particles originating from the top decays. We investigate a scenario with both top quarks decaying hadronically. We attempt a definition of "top jet" by considering the substructure of the selected "fat" jets resulting from the top decay, and we study the contamination from QCD events (the background). Theoretical predictions of the differential cross sections as a function of the azimuthal difference between the two top jets, as well as the pt distribution of the top jets, are presented using the definition of "top jets", in analogy with QCD dijet topologies. Speaker: Daniela Dominguez Damiani (DESY) Convener: Daniel Phillips (Ohio University) Systematic CDCC calculations of total fusion for $^{6}$Li with targets $^{28}$Si, $^{59}$Co, $^{96}$Zr, $^{144}$Sm and $^{209}$Bi. Effect of resonance states CDCC calculations of total fusion cross sections for reactions of the weakly bound $^{6}$Li with targets $^{28}$Si, $^{59}$Co, $^{96}$Zr, $^{144}$Sm and $^{209}$Bi at energies around the Coulomb barrier are presented. In the cluster structure frame of $^{6}$Li$\rightarrow \alpha +d$, short-range absorption potentials are considered for the interactions between the $\alpha$ and $d$ fragments and the targets. The effect of resonance states ($l=2, J^{\pi}=3^{+},2^{+},1^{+}$) and non-resonance states of $^{6}$Li on fusion is studied by i) omitting resonance states from the full discretized breakup space and ii) considering only the resonance discretized space. A systematic analysis of the effect on fusion from resonance breakup couplings is carried out from light to heavy target masses. Speaker: Arturo Gómez Camacho (Departamento de Aceleradores, Instituto Nacional de Investigaciones Nucleares, Mexico.) Microscopic description of low-lying properties in $^{168-178}$Yb nuclei by the pseudo-SU(3) shell model The rare-earth nuclei have well-known collective properties. The theoretical description of these nuclei represents a challenge to nuclear models, due to the enormous dimensions of the valence space, which make the problem unmanageable. This leads us to use symmetry-based models, where it is possible to calculate in a truncation-free environment. In this work we present results for the energy spectrum and the electromagnetic properties in even-even Yb isotopes using the pseudo-SU(3) shell model. The model considers a Nilsson Hamiltonian that additionally includes the quadrupole-quadrupole and pairing interactions, systematically parameterized. The results show that the model considered is a powerful theoretical tool, allowing us to describe the normal parity sector of deformed rare earth nuclei. Speaker: Dr Carlos Ernesto Vargas Madrazo (Facultad de Física, Universidad Veracruzana, Mexico.) Elastic scattering of $^9$Be + $^{80}$Se and the breakup threshold anomaly Full angular distributions for the elastic scattering of the $^9$Be + $^{80}$Se system were measured at eleven bombarding energies around the Coulomb barrier, namely 17, 18, 19, 20, 21, 22, 23, 24, 25, 30 and 32.8 MeV, in order to investigate the influence of the breakup of the weakly bound $^9$Be projectile on the reaction mechanisms.
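For orientation only, the sketch below estimates the height of the Coulomb barrier for $^9$Be + $^{80}$Se with the simple touching-spheres formula; the radius parameter r0 = 1.3 fm is an assumed illustrative value, not one taken from this contribution, and the estimate merely shows that the quoted bombarding energies indeed bracket the barrier.

```cpp
// Rough Coulomb-barrier estimate for 9Be + 80Se (touching-spheres formula).
// The radius parameter r0 is an illustrative assumption.
#include <cmath>
#include <cstdio>

int main() {
    const double e2 = 1.44;              // e^2/(4*pi*eps0) in MeV*fm
    const double r0 = 1.3;               // assumed radius parameter, fm
    const double Z1 = 4.0,  A1 = 9.0;    // 9Be projectile
    const double Z2 = 34.0, A2 = 80.0;   // 80Se target

    double R  = r0 * (std::cbrt(A1) + std::cbrt(A2));  // touching radius, fm
    double Vb = e2 * Z1 * Z2 / R;                      // barrier height, MeV
    std::printf("R ~ %.1f fm, Coulomb barrier ~ %.1f MeV\n", R, Vb);
    return 0;
}
```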
The $^9$Be beams were delivered by the 20 UD tandem accelerator of the TANDAR Laboratory at Buenos Aires. The experimental data have been analyzed in the framework of the optical model using two different potentials: 1) a semi-microscopic double folding potential with normalization factors for the real and imaginary parts, $N_R$ and $N_I$ respectively, as free parameters, and 2) a phenomenological Woods-Saxon potential with six free parameters, namely the depth, radius parameter and diffuseness for the real part ($V$, $r_{0R}$ and $a_R$) and similarly for the imaginary part ($W$, $r_{0I}$ and $a_I$). The coupling to the breakup channel, at energies close to and below the Coulomb barrier, is reflected as an increase of the imaginary part of the potential with decreasing bombarding energy, while the real part decreases in strength. The correlation is explained by the dispersion relation, which connects the real and imaginary parts of the nuclear potential. The analysis of the present experimental data shows that the energy dependence of the two parts of the optical potential indeed presents a behavior consistent with the so-called Breakup Threshold Anomaly (BTA). The uncertainties in the parameters' dependence on energy are evaluated using the covariance-matrix method [2]. As a conclusion, we will show that the BTA is determined unambiguously, independently of the model chosen for the nuclear potential. M. Hussein, et al. Physical Review C76, 019902(E) (2007) D. Abriola, G. V. Marti and J. E. Testoni, International Conference on Nuclear Data for Science and Technology (ND 2016), September 11-16, 2016 Speaker: Dr Daniel Abriola (Comision Nacional de Energia Atomica, Argentina.) Parallel Sessions - HEP Room "Jose L. Franco" Room "Jose L. Franco" Convener: Ivo Van Vulpen (NIKHEF and Universiteit Van Amsterdam, Netherlands) A QCD Lagrangian including renormalizable NJL terms A local and gauge-invariant version of the QCD Lagrangian is introduced. The model includes Nambu-Jona-Lasinio (NJL) terms within its action in a surprisingly renormalizable form. This occurs thanks to the presence of action terms which, at first sight, appear to break power-counting renormalizability. However, those terms also modify the quark propagators, making them decrease faster than the Dirac propagator at large momenta, indicating power-counting renormalizability. The approach can also be interpreted as a generalized renormalization procedure for massless QCD. The free propagator, given by the subtraction between a massive and a massless Dirac propagator, in the Lee-Wick form, suggests that the theory also retains unitarity. The appearance of finite quark masses already in the tree approximation in this scheme is determined by the fact that the new action terms explicitly break chiral invariance. The approach appears able to implement the Fritzsch democratic symmetry-breaking ideas about the quark mass hierarchy. Also, it seems that a link of the theory with the SM can follow after employing Zimmermann's coupling-reduction scheme. The renormalized Feynman diagram expansion of the model is written here and the formula for the degree of divergence of the diagrams is derived. The primitive divergent graphs are identified and those with two gluon legs are evaluated. The result shows the required gauge-invariant transversal structure.
Speaker: Dr Alejandro Genaro Cabo Montes de Oca (Departamento de Fisica Teorica, Instituto de Cibernetica, Matematica y Fisica, La Habana, Cuba) COMPASS experiment at CERN COMPASS is a modern fixed-target experiment at a secondary beam of the Super Proton Synchrotron at CERN. The purpose of the experiment is the study of hadron structure and hadron spectroscopy with muon and hadron beams of high intensity. The COMPASS setup is a multipurpose universal spectrometer based on two analysing magnets and equipped with various tracking detectors, electromagnetic and hadron calorimeters, muon filters and Cherenkov detectors for particle identification. COMPASS has an intensive physics programme which includes the following topics: study of the nucleon spin structure in semi-inclusive deep inelastic scattering and the Drell-Yan process; measurement of the generalized parton distributions of the nucleon in reactions of deeply virtual Compton scattering and deeply virtual meson production; search for new hadronic states and study of their production mechanisms; tests of low-energy QCD models in Primakoff reactions. A physics programme for the period after 2020 is under discussion. Speaker: Alexey Guskov (Joint Institute for Nuclear Research (RU)) Parallel Sessions - NAT Room "Fernando Portuondo" Convener: Michael Dittmar (Institute of Particle Physics, Switzerland) Development of a method for multielemental determination in water by EDXRF with a radioisotopic source of $^{238}$Pu A method for the determination of Cr, Fe, Co, Ni, Cu, Zn, Hg and Pb in waters by Energy Dispersive X-Ray Fluorescence (EDXRF) was implemented, using a radioisotopic source of $^{238}$Pu. For preconcentration, a procedure was employed that included a coprecipitation step with ammonium pyrrolidinedithiocarbamate (APDC) as chelating agent, separation of the phases by filtration, measurement of the filter by EDXRF and quantification by a thin-layer absolute method. Sensitivity curves for the K and L lines were obtained. The sensitivity for most elements was greater by an order of magnitude in the case of measurement with a source of $^{238}$Pu instead of $^{109}$Cd, which means a considerable decrease in measurement times. The influence of the concentration on the precipitation efficiency was evaluated for each element. In all cases the recoveries are close to 100%; for this reason it can be affirmed that the method of determination of the studied elements is quantitative. Metrological parameters of the method such as trueness, precision, detection limit and uncertainty were calculated. A procedure to calculate the uncertainty of the method was elaborated; the most significant source of uncertainty for the thin-layer EDXRF method is associated with the determination of instrumental sensitivities. The error associated with the determination, expressed as expanded uncertainty (in %), varied from 15.4% for low element concentrations (2.5-5 μg/L) to 5.4% for the higher concentration range (20-25 μg/L). Speaker: P. Van Espen (University of Antwerp, Belgium) Implementation and application of a method of separation of rare earth elements by ion exchange and subsequent determination by ICP-AES and ICP-MS The great interest in rare earth elements (REE) is due to the many and valuable applications that these elements have in the science and economy of many countries. On the other hand, highly sensitive methods are generally required for the determination of these elements in many of the matrices of interest, such as geological samples.
An alternative is to apply chemical separation procedures prior to analysis. In the present work, model experiments were performed to study the separation performance of REE from Cr, Fe, Ni and some elements of the alkaline earth and platinum groups by cation exchange in HCl medium. The final determination in the eluates was carried out by Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). The developed method was used for the REE analysis of 100 geological samples supplied by the Institute of Geology and Paleontology of MINEM. In the latter case, the determination of REE was performed by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). An analysis of the obtained results is presented in this work. Speaker: L. Leal (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN)) Elemental analysis of peloids from some Cuban spas using INAA Peloids from some Cuban spas (San Diego, Elguea, Santa Lucía, Cajío and Colony) have been studied using Instrumental Neutron Activation Analysis (INAA). Concentrations of 30 major, minor and trace elements in the peloids are reported, including an important group of REE (La, Ce, Nd, Sm, Eu, Gd, Tb, Tm and Yb). No difference is observed for the metal contents (including REE) determined for raw and maturated peloids from the San Diego spa. Elemental concentrations are compared with other worldwide reported peloids. The iron normalization, using raw (non-matured) mud from the San Diego spa as reference material, shows that an anthropogenic metal input is present in the Elguea, Cajío and Colony spas. The measured REE contents are of the same order of magnitude as those reported for Earth's upper crust average shales and muds, as well as for worldwide reported peloids. However, the behavior of the chondrite-normalized REE shows different patterns for Cuban peloids matured with marine and fresh waters, respectively. Speaker: Oscar Díaz Rizo (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) FAZIA: a new high-performance array for heavy-ion reactions at Fermi energies The FAZIA detector is a new fully digital array based on Silicon(300um)+Silicon(500um)+CsI(Tl)(10cm) telescopes. It is designed to study heavy-ion collisions in the Fermi energy range using fully integrated digital electronics. Some details about the construction of this array and about its remarkable performance in terms of isotopic separation will be given. Finally, some preliminary results about the isospin physics which can be studied with such a powerful tool will be shown. Speaker: Mr Sandro Barlini (University and INFN of Florence) A detector for neutron time-of-flight spectrometry Time-of-flight neutron spectrometry requires high detection efficiency. In this work we present a detector based on a neutron-gamma converter ($^{10}$B disk) followed by a gamma detector (BaF$_2$). The response function to convert the time-of-flight spectrum into an energy spectrum was constructed. An efficiency about four times higher than that of a typically used $^{6}$Li-doped neutron detector of equal thickness is predicted. The well-known $^{7}$Li(p,n)$^{7}$Be reaction is used to compare the calculated detector response with the measured one. The low-energy tail of the neutron spectrum is well reproduced. Speaker: Guido Martin Hernandez (CEADEN, Cuba.)
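To make the time-of-flight-to-energy conversion mentioned in the preceding abstract concrete, a minimal sketch is given below. It covers only the non-relativistic kinematics E = (1/2) m (L/t)^2 and the Jacobian needed to turn a counts-per-time spectrum into a counts-per-energy spectrum; the flight path and the example bins are assumed illustrative values, and the actual detector response (the $^{10}$B converter plus BaF$_2$) is not modelled.

```cpp
// Kinematic part of a neutron time-of-flight -> energy conversion (sketch).
// Flight path L and the example bins are illustrative assumptions.
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const double mnc2 = 939.565;      // neutron rest energy, MeV
    const double c    = 0.299792458;  // speed of light, m/ns
    const double L    = 2.0;          // assumed flight path, m

    // Non-relativistic kinetic energy (MeV) for flight time t (ns).
    auto energy = [&](double t) { double b = (L / t) / c; return 0.5 * mnc2 * b * b; };
    // |dE/dt| (MeV/ns), used to convert dN/dt into dN/dE.
    auto dEdt = [&](double t) { return mnc2 * (L / c) * (L / c) / (t * t * t); };

    // Illustrative time-of-flight spectrum: counts per ns in a few bins.
    std::vector<double> t_ns  = {60, 80, 100, 150, 200};
    std::vector<double> dN_dt = {120, 300, 450, 200, 80};

    for (std::size_t i = 0; i < t_ns.size(); ++i) {
        double E     = energy(t_ns[i]);
        double dN_dE = dN_dt[i] / dEdt(t_ns[i]);  // dN/dE = (dN/dt) / |dE/dt|
        std::printf("t = %5.0f ns -> E = %6.3f MeV, dN/dE = %8.1f counts/MeV\n",
                    t_ns[i], E, dN_dE);
    }
    return 0;
}
```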
The LENOS Project at Laboratori Nazionali di Legnaro of INFN-LNL: a thermal to 70 MeV neutron beam facility The LENOS (Legnaro NeutrOn Source) project at LNL-INFN (Italy) is a neutron irradiation facility for nuclear astrophysics studies and data validation for energy and non-energy applications. It is based on a high-current, low-energy RFQ. The facility will use the 5 MeV, 50 mA proton beam of the RFQ under test at LNL to produce an unprecedented neutron flux, precisely shaped to a Maxwell-Boltzmann energy distribution. A new method has been proposed to obtain the desired neutron spectra at different stellar energies, and a dedicated target, able to sustain a very high specific power, has been developed and tested. We will present the facility, the method used to shape the neutron beam, the results of the high-power test of the microchannel water-cooled target and the preliminary results of a measurement dedicated to the validation of the proposed method. Besides the neutron facility based on the RFQ, a higher-neutron-energy facility, based on the new cyclotron with 35-70 MeV tunable energy and up to 750 µA current, is currently partially financed. The high-energy facility is called NEPHIR and, at the moment, is dedicated to the study of Single Event Effects (SEE) in electronic devices. The 70 MeV proton beam will produce an atmospheric-like neutron spectrum up to 70 MeV using a novel technique and a novel target: the neutron spectrum is constructed by a weighted convolution of neutron spectra coming from different reactions. The target, able to use two different materials and thus two different reactions, is a rotating target. The tunable energy between 35 and 70 MeV will also offer the opportunity to produce quasi-monochromatic neutron beams, whose applications span from fundamental physics to applied physics (within the framework of SEE studies it allows the search for threshold effects, for instance). Finally, in the long term a pulsing system with about 2 ns time width is planned to allow a neutron time-of-flight facility. In this presentation, the novel method to produce atmospheric neutron spectra, the target design, and the calculations for the production of quasi-monoenergetic neutron beams with the p+Li and p+Be reactions will be presented. Speaker: Dr Pierfrancesco Mastinu (Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Legnaro) Convener: Dimitra Pierroutsakou (INFN, Section of Naples) Preliminary results on the $^{64}$Se beta decay experiment at the RIKEN Nishina Center and Mirror Symmetry Spin-isospin excitations can be studied by beta decay and charge exchange reactions in mirror nuclei, shedding light on mirror symmetry; hence we can compare our results on the beta decay of proton-rich nuclei with the results of charge exchange experiments when appropriate targets for the mirror nuclei are available. Accordingly, we have performed experiments at GSI and GANIL to study $T_z$ = -1 and $T_z$ = -2 nuclei, respectively, where it became clear that the study of heavier, more exotic systems demands beam intensities available only at the RIKEN Nishina Center. In this work we present the first experimental observation of the beta-delayed protons in the decay of the $T_z$ = -2 ${}^{64}$Se. We have performed an experiment using the fragmentation of a 345 MeV·A ${}^{78}$Kr beam with a typical intensity of 200 particle nA on a Be target.
The fragments were separated in flight using the BigRIPS separator and implanted in three double-sided Silicon strip detectors (DSSSD) named WASSSABi (60 mm × 40 mm × 1.2 mm, 60 horizontal and 40 vertical strips). The implantation setup was surrounded by the EUROBALL-RIKEN Cluster Array (EURICA). We perform time correlation between the ${}^{64}$Se implantations and beta signals within a single pixel defined as the crossing of one X and one Y strip. In addition, only the strips where the highest energy has been deposited are correlated. The DSSSD detectors were calibrated using an external electron conversion ${}^{207}$Bi source. Due to the fact that the proton emitted is in prompt coincidence with the beta particle, also partially absorbed in the DSSSD detector, shifts in the measured energies are expected. These energy shifts were estimated using the ${}^{57}$Zn and ${}^{61}$Ge beta-delayed proton spectra observed in the same experiment. The ${}^{57}$Zn and ${}^{61}$Ge proton energies were taken from literature. Three proton peaks were observed for ${}^{64}$Se, (1612(10), 2003(13), and 3249(22) keV) and the proton energy errors were estimated from the fit for each proton peak as well as the systematic shift with the proton energies reported in the literature. Speaker: Pablo Aguilera Jorquera (Universidad de Chile) High-resolution gamma-ray spectroscopy at LNL with GALILEO: commissioning campaign and first results The Legnaro National Laboratories have a long-standing tradition in gamma-ray spectroscopy. They hosted the most recent HPGe arrays, from GASP, one of the first Compton-shielded large HPGe array to AGATA, the first operational tracking array worldwide. In this context, a new resident gamma-ray spectrometer GALILEO has been developed. After a 1-y long commissioning campaign, a physics campaign started. In such campaign, GALILEO has been combined with a light-charge particle and a neutron array, EUCLIDES and NEUTRON WALL, respectively, for the investigation of neutron-deficient nuclei. The experiments performed so far aimed mainly at studying the shape coexistence phenomenon in medium-mass and heavy nuclei, the octupole correlations in the Ba region and the isospin symmetry breaking effect in light nuclei. The first results will be reported. Speaker: Mr Daniele Mengoni (INFN - National Institute for Nuclear Physics) The role of pairing in heavy-ion induced transfer reactions An experimental campaign to study heavy-ion induced one- and two-nucleon transfer reactions has been performed at INFN-Laboratori Nazionali del Sud in Catania (Italy). In particular reactions induced by 18O and 20Ne beams at energies above the Coulomb barrier on different target isotopes have been explored with high resolution (both in energy and angle) and in a quite wide angular range including zero degrees. The aim of this study is two-fold. First of all, the experimental observations and the analysis of the reaction mechanism in two-nucleon transfer reactions in a quantum-mechanical description can give interesting information on the role of the pairing force in populating specific excited states and resonances, such as the so called Giant Pairing Vibration. Moreover, the study of multi-nucleon transfer cross-sections is a crucial aspect for recently proposed research projects involving the use of nuclear reactions of double charge exchange in relation with the physics of neutrinoless double beta decay. 
The multi-nucleon transfer mechanism could compete with the double meson exchange mechanism in double charge exchange reactions, and their roles must be understood in order to extract accurate information on the nuclear matrix elements of interest. Speaker: Manuela Cavallaro (INFN - National Institute for Nuclear Physics) Convener: Ulrich Parzefall (Albert Ludwigs Universitaet Freiburg (DE)) The LUCID-2 detector The LUCID-2 detector is the main online and offline luminosity provider of the ATLAS experiment. It provides over 100 different luminosity measurements from different algorithms for each of the 2808 LHC bunches. LUCID was entirely redesigned in preparation for LHC Run 2: both the detector and the electronics were upgraded in order to cope with the challenging conditions expected at the LHC center-of-mass energy of 13 TeV with only 25 ns bunch spacing. While LUCID-1 used gas as the Cherenkov medium, the LUCID-2 detector uses, in a new and unique way, the quartz windows of small photomultipliers as the Cherenkov medium. The main challenge for a luminometer is to keep the efficiency constant during years of data-taking. LUCID-2 uses an innovative calibration system based on radioactive $^{207}$Bi sources deposited on the quartz windows of the readout photomultipliers. This makes it possible to accurately monitor and control the gain of the photomultipliers so that the detector efficiency can be kept stable at the percent level. A description of the detector and its readout electronics will be given, as well as preliminary results on the ATLAS luminosity measurement and related systematic uncertainties. Speaker: Grazia Cabras (Universita e INFN, Bologna (IT)) Kicks of magnetized strange quark stars induced by anisotropic emission of neutrinos Beta decay is studied in the presence of a magnetic field, which imposes a preferential direction on the emission of neutrinos. The possibility that this anisotropy in neutrino emission can account for observed neutron (quark) star velocities (kicks) is explored. The conditions under which the anisotropic emission of neutrinos (due to the magnetic field present in the system) causes a ``kick'' of the compact star are discussed. The matrix element for the beta decay process is computed from first principles taking into account the $W$ boson propagator in the presence of a strong magnetic field. The neutrino emissivity is also computed. Speaker: Prof. Aurora Perez Martinez (ICIMAF) Status and prospects of the SoLid neutrino experiment The SoLid experiment intends to search for active-to-sterile anti-neutrino oscillation at very short baseline from the SCK$\bullet$CEN BR2 research reactor (Mol, Belgium). A novel detector approach to measure reactor anti-neutrinos was developed, based on an innovative sandwich of composite polyvinyl toluene and $^{6}$LiF:ZnS scintillators. The system is highly segmented and read out by a network of wavelength-shifting optical fibers and silicon photomultipliers (SiPMs). This detector will have little passive shielding, relying on its volume segmentation and robust neutron identification capabilities to reject the background components of the experiment and provide a precise measurement. We will describe the principle of detection and the detector design. Results from the first full-scale detector prototype (SM1) measurements will be presented. Particular focus will be placed on the current status and the expected results of the SoLid experiment.
SoLid Phase I is planned to start data taking in fall 2017 and will be able to provide important results to clarify the so-called Reactor Antineutrino Anomaly. Speaker: Yamiel Abreu (University of Antwerp, Belgium.) Convener: Olivier Schalm (University of Antwerp & Antwerp Maritime Academy) Natural Radiation and Environmental Applications Naturally occurring radioactive materials (NORM) are a continuous and unavoidable feature of life on Earth and have been present in its crust since its origin. The irradiation of the human body from external terrestrial sources is mainly due to gamma rays coming from natural radionuclides, such as $^{40}$K and the elements of the $^{238}$U and $^{232}$Th series. These primordial radionuclides have long half-lives and decay towards stability, producing ionizing radiation. It is very important to study the radionuclide distribution in soils to understand the radiological implications of the exposure of the human body to ionizing radiation and to know which components are found in a specific geographic region. Natural background radiation studies are needed to establish reference levels, especially in areas where the risk of radioactive exposure may be higher, and this risk can be worsened through soil mineral extraction, generating Technologically Enhanced Naturally Occurring Radioactive Material (TENORM). On the other hand, the secondary effects of natural radiation are also of extreme importance, since human beings feed on animals and plants, which determine the intake of natural radionuclides. In this work the distribution of natural radiation in southeastern Brazilian beach sands was studied using gamma-ray spectrometry. In most of the samples studied, the dose due to external exposure to gamma rays from natural terrestrial elements is within 0.3 and 1.0 mSv/year, the typical range indicated by the United Nations Scientific Committee on the Effects of Atomic Radiation. The gamma-ray technique was used to evaluate the transfer rate of these radionuclides from soil to plants. Energy-Dispersive X-Ray Spectroscopy (EDS) microanalysis and X-Ray Fluorescence were also used to assist in the sample analysis. The study of natural radiation present in TENORM, plants and food was carried out. Various chemical processes have been applied to TENORM, considering waste samples from the extraction of phosphatic rocks, in order to make the extraction process viable and reduce the amount of radioactive waste. Speaker: Marcilei Aparecida Guazzelli (Centro Universitário FEI) A Regional Oil Extraction and Consumption Model. Part II: Predicting the declines in regional oil consumption In part I of this analysis, the striking similarities of the declining oil production in the North Sea, Indonesia and Mexico were used to model the future maximum possible oil production per annum in all larger countries and regions of the planet from 2015 to 2050. In part II, the oil export and oil consumption patterns that were established in recent decades are combined with the consequences of the forecast declines in regional oil production that were developed in part I of this analysis. The results are quantitative predictions of the maximum possible region-by-region oil consumption during the next 20 years. The predictions indicate that several of the larger oil-consuming and oil-importing countries and regions will be confronted with the economic consequences of the onset of the world's final oil supply crisis as early as 2020.
In particular, during the next few years a reduction of the average per capita oil consumption of about 5%/year is predicted for most OECD countries in Western Europe, and slightly smaller reductions, about 2-3%/year, are predicted for all other oil-importing countries and regions. The consequences of the predicted oil supply crisis are thoroughly at odds with business-as-usual, never-ending-global-growth predictions of oil production and consumption. Speaker: Michael Dittmar (Institute of Particle Physics, Switzerland) Pollution characteristics and human health risks of potentially toxic elements in road dust from the central metropolitan area of Havana city (Cuba) using X-ray fluorescence analysis Our study aimed to investigate road dust from relevant locations in Havana city and the associated health risks of potentially toxic elements (PTEs) to humans using X-ray fluorescence analysis. The Geoaccumulation Index, Enrichment Factor and Integral Pollution Index were used to describe the pollution characteristics of roadside dust in urban areas associated with traffic and child activities (schools, parks, etc.) from the Havana city central metropolitan area (Old Havana, Central Havana, San Miguel del Padrón and Regla municipalities and the Alamar district). Results indicate that industrial roadside dust is contaminated with Pb near high-traffic locations, as well as power and gas station locations, and with Zn and Cu in areas where reconstruction works were performed. The Hazard Quotient (HQ) and Hazard Index (HI) values for all the exposure routes (ingestion, inhalation, and dermal contact) were below the internationally established limits, except for Pb, Zn and Cu in the mentioned areas. The risk of contracting cancer from the studied metals was found to be at safe levels, as the RI (carcinogenic risk) values were below the internationally established limits. Speakers: Oscar Díaz Rizo (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.), María Tomás Betancourt (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) Convener: Guido Martin (CEADEN, Havana, Cuba) A real-time pulse-processing DAQ for the neutron wall modular detector in RIBRAS experiments In order to enhance the experiments with the Brazil Radioactive Ion Beam (RIBRAS) facility, the characteristics of new VME (Versa Module Euro Card) Data Acquisition (DAQ) modules for control, triggering and data acquisition will be described. The DAQ is defined to include the Strip Array and Neutron Wall detectors with maximum readout efficiency, no dead time, data selection and event synchronization. CAEN digitizer modules for VME provide features like zero-suppressed readout and overflow suppression. Zero suppression, once enabled, prevents conversion of values which are lower than a user-defined threshold. Overflow suppression, once enabled, aborts the storage of data which constitute an ADC overflow. Adding FPGAs (field-programmable gate arrays) to the data acquisition provides pre- and post-algorithmic processing of the data. The hardware elements chosen should have features that make the modules easy to program and handle, while the FPGAs should be reprogrammable when required. To simplify the interaction between DAQ elements, provide a standalone working mode for each sub-detector, and allow easy reconfiguration of the active sub-detectors and easy hardware replacement, the DAQ hardware units are functionally subdivided into a hierarchy by logical level along the data stream.
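The zero-suppression and overflow-suppression features described above amount to a simple filter on the digitised values. A minimal sketch of that logic, applied offline, is given below; the threshold, the overflow code and the sample values are illustrative assumptions, and in the real system this selection is performed on board by the digitizer firmware.

```cpp
// Offline sketch of zero suppression and overflow suppression on ADC data.
// Threshold, overflow code and sample values are illustrative assumptions.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Hit { int channel; std::uint16_t adc; };

std::vector<Hit> suppress(const std::vector<Hit>& raw,
                          std::uint16_t threshold, std::uint16_t overflow_code) {
    std::vector<Hit> kept;
    for (const auto& h : raw) {
        if (h.adc < threshold)      continue;  // zero suppression: below threshold
        if (h.adc >= overflow_code) continue;  // overflow suppression: ADC overflow
        kept.push_back(h);
    }
    return kept;
}

int main() {
    // Illustrative event fragment: channel number and raw ADC value.
    std::vector<Hit> raw = {{0, 12}, {1, 845}, {2, 4095}, {3, 37}, {4, 2210}};
    std::vector<Hit> kept = suppress(raw, /*threshold=*/50, /*overflow_code=*/4095);
    for (const auto& h : kept)
        std::printf("kept channel %d, ADC %d\n", h.channel, static_cast<int>(h.adc));
    return 0;
}
```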
Speaker: Ivan Padron Diaz (CEADEN, Cuba.) The new systems in Mexico for nuclear reaction measurements at low energies During the last six years, the infrastructure related to Experimental Nuclear Physics at the Physics Institute, UNAM, has benefited from an unprecedented injection of resources: four new beam lines were mounted at the 5.5 MV CN Van de Graaff accelerator, which originally had just one. Moreover, a modern 1 MV Tandetron AMS system was acquired in order to establish this kind of technique for the first time in Mexico. This AMS system is nowadays a National Laboratory called LEMA. A permanent windowless Supersonic Gas target (SUGAR) was built on one of the lines of the Van de Graaff accelerator; it is being completed with a neutron wall composed of 16 scintillator crystals (MONDE). With this combination it will be possible to measure coincidences between charged particles and neutrons and/or gammas. To complete the AMS system, an additional beam line will be coupled to the accelerator in order to extract intense and pure stable beams at low energy. In addition, the radioactive beams currently used for AMS purposes will be usable with an acceptable intensity to develop experiments related to the scattering and resonances of $^{13,14}$C and $^{26}$Al, using light targets. In this work some of the characteristics of all these new systems are presented: the first data related to the commissioning of the Supersonic Gas Target, where the $^{14}$N(d,$\alpha$)$^{12}$C reaction was studied, achieving a good resolution for the alpha resonances and proving the good qualities of the entire system; in addition, a first approach to the measurement of cross sections for the $^{28}$Si(d,$\alpha$)$^{26}$Al and $^{25}$Mg(p,$\gamma$)$^{26}$Al reactions at low energies is described, using the two accelerators mentioned before: the CN accelerator to produce the reaction, and the AMS system to measure the concentration of $^{26}$Al produced after the irradiation. These $^{26}$Al concentrations are directly related to the cross sections at a particular reaction energy, values that are currently of interest for astrophysics. This research has been partially supported by PAPIIT-DGAPA-UNAM Project IA101616. C. Solís, et al. Nucl. Inst. Meth. Phys. B 331 (2014) 233–237. F. Favela, et al. Phys. Rev. ST-AB, Vol. 18, Pag. 1-10 (2015). P. Santa Rita, et al. J. of Phys. Conf. Ser., Vol. 730, Pag. 1-7 (2016). V. Araujo-Escalona, et al. J. of Phys. Conf. Ser., Vol. 730, Pag. 1-7 (2016). A. Arazi, et al. Phys. Rev. C 74, 025802 (2006). Speaker: Prof. L. Acosta (Instituto de Física, Universidad Nacional Autónoma de México, Mexico.) Advances in applied atomic and nuclear physics at the Universidad Tecnológica Metropolitana of Chile (LIATAN laboratory) Recently, the installation of a Van de Graaff accelerator on the Campus Macul of the Universidad Tecnológica Metropolitana has promoted the formation of the Laboratorio de investigación aplicada con tecnologías atomicas y nucleares (LIATAN). One of the objectives of this scientific laboratory is the development of advanced research, mainly in the fields of materials science and environmental science, using atomic and nuclear technology. This facility is to be considered a complex laboratory, consisting of facilities that once operated in other institutions and that will enhance its operation through a project that our University leads.
An essential component of the LIATAN Laboratory is a Van de Graaff electrostatic accelerator. This equipment includes very complex research instruments which are unique in the country, since it allows different kinds of studies using IBA techniques [1], such as the characterization of surface material samples or the determination of trace elements in aerosol samples. On the other hand, a linear electron accelerator will also be installed at the LIATAN laboratory. This accelerator was donated by the National Cancer Institute and was mainly used for the treatment of tumor diseases and the study of other solid, liquid and gaseous samples. Both facilities aim to promote further applied atomic and nuclear research in our country. This talk will describe the main characteristics of the LIATAN facilities and future research prospects. Acknowledgments: The authors recognize financial support from the Dirección de investigación y desarrollo académico (DIDA) of Universidad Tecnológica Metropolitana. P.A. Miranda, M.A. Chesta, S.A. Cancino, J.R. Morales, M.I. Dinator, J.A. Wachter and C. Tenreiro. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms. Volume 248, Issue 1, 2006, Pages 150-154. Speaker: Dr Javier Wachter (Departamento de Física, Facultad de Ciencias Naturales, Matemática y del Medio Ambiente, Universidad Tecnológica Metropolitana.) Convener: Nilberto Medina (Universidade de São Paulo) The EXOTIC project at INFN-LNL I will present the low-energy light Radioactive Ion Beam (RIB) in-flight facility EXOTIC [1-4], operational at the INFN-Laboratori Nazionali di Legnaro (INFN-LNL), and the associated experimental set up [5] designed for nuclear physics and nuclear astrophysics experiments. I will present the outline of the experimental program carried out employing the produced RIBs and discuss a few selected recent experiments. Finally, I will give the perspectives of the EXOTIC project. V.Z. Maidikov et al., Nucl. Phys. A 746 (2004) 389c; D. Pierroutsakou et al., Eur. Phys. J. Special Topics 150 (2007) 47; F. Farinon et al., Nucl. Instr. and Meth. B 266 (2008) 4097; M. Mazzocco et al., Nucl. Instr. and Meth. B 266 (2008) 4665; Nucl. Instr. and Meth. B 317 (2013) 223; D. Pierroutsakou et al., Nucl. Instr. and Meth. A 834 (2016) 46. Speaker: D. Pierroutsakou (INFN-Sezione di Napoli, via Cintia, 80126 Napoli, Italy) Exotic light nuclei: the structure of $^{12}$C and the mirror states of $^9$Be | $^9$B Despite all the information we have accumulated during the past decades about the structure of $^{12}$C, a number of details have not been resolved; for example, the geometry of its second excited state at 7.65 MeV (0$^+$), the Hoyle state, has not yet been established. Recent results have shown a possible 2$^+$ resonance between 9 and 12 MeV that could be related to a collective rotational or vibrational excitation, while a resonance at 13.3 MeV is a strong candidate for the corresponding 4$^+$ excitation. In particular, a most recent measurement identified a new high-spin $J^{\pi}$ = 5$^-$ resonance at 22.4 MeV which matches the predicted ground-state rotational band of an oblate equilateral triangle with D$_{3h}$ symmetry, a symmetry that was observed for the first time in nuclear physics. Through angular correlations it was possible to characterize the 22.4 MeV state in terms of spin and parity. The structure of A=9 nuclei is relevant in astrophysics and nuclear structure.
The measurement of the low-lying excited states in 9B nucleus through the $^9$Be($^3$He,t)$^9$B reaction, with the K600 spectrometer in conjunction with a segmented silicon detector array was performed at iThemba LABS facility. Of particular interest was the investigation of the first ½$^+$ state in $^9$B in order to address discrepancies that currently exist between theoretical models in describing these nuclei. Speaker: Dr Daniel José Marín-Lámbarri (Physics Institute, UNAM, Mexico.) Study of reactions involving weakly-bound nuclei Recently, an experimental campaign was performed at the LAFN (Open Laboratory of Nuclear Physics) of the University of São Paulo. Various angular distributions were obtained at energies around the Coulomb barrier for the $^7$Li + $^{120}$Sn and $^{10,11}$B + $^{120}$Sn reactions. Besides the elastic scattering, other reaction channels, such as projectile and target-like excitation, and transfer of nucleons, were observed. Details on the experiment, and a theoretical analysis performed within the coupled-channel formalism, will be presented. Speaker: Leandro Gasques (University of São Paulo, Nuclear Physics Department, Brazil.) Convener: Airton Deppman (Universidade de Sao Paulo (BR)) Thermodynamical properties of a neutral vector boson gas in a constant magnetic field We study the thermodynamical properties of a neutral vector boson gas in a constant magnetic field starting from the spectrum obtained by Proca formalism. Bose Einstein Condensation (BEC) and magnetization are obtained, for the three and one dimensional cases, in the limit of low temperatures. In three dimensions the gas undergoes a phase transition to a usual BEC in which the critical temperature depends on the magnetic field. Therefore, the condensation is reached not only decreasing the temperature, but also by increasing the field. For the one dimensional gas a diffuse BEC appears. In both, one and three dimensions, the magnetization is a positive quantity and for densities under a critical value the gas can sustain its own magnetic field. The anisotropic pressures are also considered. The pressure exerted along the field is always positive, but the perpendicular pressure might be negative and the system turns out to be susceptible to suffer, under certain conditions, a transversal magnetic collapse. The above describe phenomenology is manifested for magnetic fields and densities in the order of those typical of compacts objects. In this regard, a brief discussion of astrophysical implications is presented. Speaker: Gretel Quintero Angulo (Facultad de Física, Universidad de La Habana) Azimuthal angular correlations in high transverse momentum dijet scenarios. Measurements of the azimuthal angle correlation between the two jets with the largest transverse momenta (pt) in inclusive 2-jet topologies, close to the back-to-back configuration, are presented for several regions of the leading jet transverse momentum. The features of the different models considered in the comparisons and their physical impact are discussed. Speaker: Armando Bermudez Martinez (CMS-DESY) Convener: Piet Van Espen (University of Antwerp) Simulation with GEANT4 of a new imaging Gamma-ray Compton Backscattering device A novel imaging device is proposed based on the gamma-backscattering technique described in Refs. 
[1,2], developed at GSI (Darmstadt, Germany) and modified by the Nuclear Physics Group at National University in Bogotá, which has been successfully tested by observation of concealed objects behind metallic walls, inspection of metallic structures and localization of buried objects [3,4]. The camera comprises essentially a positron source, a backscattering detector and a position detector that determines the direction of correlated 511 keV gamma-rays used to inspect the object. The backscattered gamma-rays are detected with a Compton Camera, following a design presented in Ref. [5], which provides additional information on the position where the scattering process occurs. In order to evaluate the imaging capabilities of the new camera a simulation was developed using the GEANT4 [6] simulation toolkit. In this work, a description and characterization of the new device is presented. Simulated results suggest already methods to improve the position resolution of the camera, which has been applied to study defects presented in corroded metallic structures. J. Gerl, F. Ameil, I. Kojouharov and A. Surowiec, Nucl. Instr. and Meth. A 525, 328-331 (2004). E. Fajardo, M.F. Nader, F. Cristancho and J. Gerl, AIP Conf. Proc., 1265, 449-450 (2010). D. Flechas, L. Sarmiento, F. Cristancho and E. Fajardo, Int. J. Mod. Phys. Conf. Ser., 27, 1460152 (2014). D. Flechas, L. Sarmiento, F. Cristancho and E. Fajardo, Proceedings of Science (XLASNPA), 058, 1-8 (2014). R.W. Todd, J.M. Nightingale and D.B. Everett, Nature 251, 132 (1974). S. Agostinelli, et al., Nuclear Instruments and Methods in Physics Research A 506, 250-303 (2003). This work was supported in part by Universidad Nacional de Colombia DIB 13440 and Colciencias 110152128824. Speaker: Mr David Camilo Flechas Garcia (Universidad Nacional de Colombia) Structure Determination and Interactions of Protein Desmoplakin C-terminal by Nuclear Magnetic Resonance Spectrometry and Small Angle X-Ray Scattering The tertiary structure and interactions with intermediate filament proteins of the desmoplakin C-terminal, a cytolinker protein related to severe skin diseases and fatal cardiovascular failures, has been determined by the use of two complementary techniques: Nuclear Magnetic Resonance (NMR) and Small Angle X-Ray Scattering (SAXS). NMR spectroscopy provided the atomic structure detail and interactions dynamics information of the desmoplakin linker domain, while SAXS was used to solve the global shape and orientation of the B-linker-C desmoplakin multi-domain. By resolving the ambiguities of the orientations of the individual domains with SAXS it was possible to discriminate between similar structural conformations obtained by NMR. Through the use of these two techniques, we gauged the architecture of the desmoplakin plakin repeat domains B and C in a construct including the linker domain, with the latter offering a pair of basic residues that recognise acidic residues on helical intermediate filament proteins that enhances the desmoplakin binding activity with these proteins. 
Speaker: Penelope Rodriguez-Zamora (Instituto de Fisica, Universidad Nacional Autonoma de Mexico (UNAM)) Convener: Oscar Naviliat-Cuncic (Michigan State University) Alpha-transfer Reaction in Combination with Transient Field Technique and DSAM to Measure Magnetic Moments and Life-Times in $^{110}$Sn and $^{106}$Cd Studies of the magnetic moments and life-times of exotic nuclei have unveiled properties which have led to a deeper understanding of the nature and behavior of the nuclear potential. In recent years, the alpha-transfer technique has been useful for the study of properties of nuclear species which cannot be created at the current radioactive beam facilities. One of these characteristics, the magnetic moment of short-lived spin-states, has always been a major challenge because of several difficulties, such as the alignment of the nuclear spin along a quantization axis. The Transient Field technique allows the measurement of nuclear magnetic moments using the variations of the angular distribution of the gamma-ray radiation emitted from the state of interest, with a resolution of the order of mrad. In addition, the Doppler Shift Attenuation Method (DSAM) allows the life-times of excited nuclear states to be established. In this work the measurement of the magnetic moments of the 2$^+$ and 4$^+$ spin-states of neutron-deficient $^{110}$Sn and of the life-times of the $^{106}$Cd excited spin-states will be presented; the experimental technique makes use of the alpha-transfer reaction in combination with the Transient Field technique and DSAM. Keywords: Alpha transfer, Transient Field, Coulomb excitation, DSAM, life-time, magnetic moments. Speaker: Fitzgerald Ramírez Moreno (Universidad Nacional de Colombia, Bogotá, Colombia) Gamma-ray spectroscopy of the heaviest nuclei at the JAEA-Tokai Tandem laboratory, using $^{249}$Cf and $^{254}$Es targets The spectroscopy of heavy nuclei near the $N$=152 and $N$=162 deformed shell gaps provides valuable information to improve current predictions of long-lived super-heavy elements in the Island of Stability. Due to deformation, in fact, substates of spherical orbits in the island of stability can be found near the Fermi level in these lighter systems. At the JAEA-Tokai Tandem accelerator, the first in-beam $\gamma$-ray spectroscopy of $^{252}$Fm (Z=100, N=152) was attempted. $^{252}$Fm was produced via the multi-nucleon transfer reaction $^{249}$Cf($^{12}$C,$^9$Be) at 75 and 77 MeV. The target radioactivity was nearly 150 kBq. The target chamber was surrounded by a new particle-gamma detection setup, comprising an array of silicon detectors to detect and identify the light reaction ejectiles, and a mixed array of four germanium and four LaBr$_3$(Ce) detectors, with an absolute photopeak efficiency of nearly 30% at 150 keV. $\gamma$-ray transitions from $^{252}$Fm were discriminated by the coincident detection of $^{9}$Be. The analysis revealed candidate peaks for $^{252}$Fm E2 transitions in the ground-state rotational band. The implications of this measurement and future plans using a $^{254}$Es target will be presented. Speaker: Dr R.
Orlandi (ASRC, JAEA, Japan) Poster Session - HEP Room "Bens Arrate" Conveners: Fernando Guzman (InSTEC, Cuba), Nick van Remortel (University of Antwerp), Oscar Naviliat-Cuncic (Michigan State University), Siannah Penaranda-Rivas (Zaragoza University, Spain) Correlated background within the SoLid anti-neutrino detector The SoLid experiment aims to measure the anti-neutrino energy spectrum 6--9 m from the core of the BR2 nuclear reactor at SCK$\bullet$CEN in Mol, Belgium. The main goals are to provide a very sensitive search for short-baseline neutrino oscillations and to resolve the reactor neutrino anomaly. The proposed detector technology will also be very useful for anti-neutrino detection in other settings, such as nuclear safeguards and non-proliferation monitoring of nuclear reactors. The experiment uses a novel, highly segmented composite scintillator detector. The detector unit is based on 5 cm polyvinyl toluene scintillator cubes, thin neutron-sensitive $^6$LiF:ZnS(Ag) sheets and a reflective Tyvek layer wrapping them for light tightness. A first large-scale detector prototype based on this technology was deployed at the BR2 reactor by the end of 2014. Its main purpose was to study the capability of the detector design to discriminate background. Due to the low overburden and the proximity to a nuclear reactor, efficient background reduction is crucial for a successful experiment. This contribution will present and discuss the advantages of the SoLid detector design for background reduction. The background components studied include atmospheric and spallation neutrons induced by cosmic rays and possible natural radioactivity contamination from the decay chains of $^{238}$U and $^{232}$Th. The results are based on the data taken with the prototype detector and on its full-chain GEANT4-based Monte Carlo simulations. Speaker: Ibrahin Piñera-Hernández (University of Antwerp, Belgium.) Analysis of the Higgs field non-minimally coupled to gravity One of the main open problems in physics today is the unknown origin and nature of the acceleration of the cosmic expansion during the present and future stages of the evolution of the Universe. The research carried out includes the study of a model based on the Higgs field non-minimally coupled to gravity. The cosmological implications of the Higgs field have been studied for some years, and they have gained more importance after the discovery of the Higgs boson in 2012 at the Large Hadron Collider in Geneva, Switzerland. Using the dynamical systems technique, we study the possibility that the cosmological Higgs field non-minimally coupled to gravity can lead to the current behavior of the universe. We consider a homogeneous and isotropic flat Friedmann-Robertson-Walker (FRW) metric. We prove that there are two possible late-time attractors corresponding to stable de Sitter solutions. These two solutions correspond to late-time accelerated expansion. Speaker: Ailier Rivero-Acosta (Departamento de Fisica, Universidad Central de Las Villas, 54830 Santa Clara, Cuba) Charmonium: comparison between potential models The heavy quark-antiquark $c\bar{c}$ system has been studied in the non-relativistic framework using interquark potential models of the form of a sum of powers of the interquark distance. The form of the potential is based on phenomenological considerations. The proposed potential was solved numerically using a program written in C++. Mass spectra and the expectation value of the radius have been estimated for different quantum mechanical states of the $c\bar{c}$ system.
The results have been compared with other similar and recent works. The mass spectra obtained are in acceptable agreement with the experimental data for $c\bar{c}$. Speaker: Dario Alberto Ramirez Zaldivar (InSTEC) Color transparency in vector meson photoproduction Employing the Feynman diagram technique, the GBW model for the dipole-nucleon interaction and Regge theory, a new approach for computing vector meson production in a unified way (perturbative and non-perturbative) is presented. The interaction between the color dipole created by the virtual photon and the nucleon is carried out by the exchange of two virtual gluons, which connect the color dipole with the gluonic sea inside the nucleon through a three-gluon vertex. The model was explored in the production of ρ0, ϕ and J/ψ. Speaker: Denys Yen Arrebato Villar (Higher Institute of Technology and Applied Science) Poster Session - NAT Room "Fernando Portuondo" Conveners: Juan Estevez (CEADEN, Cuba), Oscar Diaz-Rizo (InSTEC, Cuba), Prof. Piet Van Espen (University of Antwerp) Ionizing Radiation Effects in Electronic Devices The ionizing radiation absorbed by semiconductor devices can change their properties by modifying the electrical parameters that characterize them and, in the case of memories and processors, can modify the information contained in these devices. Thus, the development of radiation-resistant electronic devices, and the qualification of devices confirming that they are more tolerant to the effects of ionizing radiation, require a qualified workforce with specific knowledge of the physical mechanisms acting on the devices when they are exposed to radiation. It is also necessary to know all the internationally existing standards for the qualification of devices. Digital systems are often used in space applications to process data, implement control logic, or even store data from sensors. These systems are composed of electronic devices, such as transistors, microcontrollers and microprocessors, which are exposed to ionizing radiation. The use of Field-Programmable Gate Arrays (FPGAs) in the aerospace and defense field has become a general consensus among Integrated Circuit (IC) and embedded system designers. Radiation-hardened electronics used in this domain is regulated under important political and commercial treaties. In order to avoid these undesired political and commercial barriers, COTS FPGAs have been considered as a promising alternative to replace such ICs. The development of instrumentation that makes it possible to design new electronic devices, based on new materials and new technologies, as well as knowing how to properly characterize a device, is extremely important for this research area to be self-sufficient in our country. This research project aims to study the effects of ionizing radiation from X-rays, an alpha source, protons and heavy ions on electronic devices. The specific objective of the project is to operationalize a system for radiation testing and a methodology for the qualification of electronic devices and components when subjected to radiation doses induced by heavy ions, particles and X-rays. We intend to study the physical phenomena responsible for the effects of radiation and generate training in this strategic area. The extended source efficiency correction to measure NORM concentrations using an HPGe detector The objective of the experiment is to measure NORM (Naturally Occurring Radioactive Material) in natural samples and calculate their concentrations.
For this purpose, experiments detecting the radiation of several gamma-ray calibration sources located at different positions around an HPGe detector were conducted. The efficiency calibration curve for each position was obtained, a piece of information useful for determining the concentration of radionuclides within an extended source. To validate the results, an IAEA reference standard ($^{40}$K) was placed in different geometries within the volume of a lead shield together with the HPGe detector, and the efficiency correction was considered to determine the concentration of radioactive material. The measured $^{40}$K concentration was compared with the activity concentration values reported in the calibration certificate of the reference standard. The usefulness of this work is the ability to measure NORM in natural samples and calculate their activity concentration without relying on comparison with a reference standard. Speaker: Mrs Luz Anny Pamela Ochoa Parra (Universidad Nacional de Colombia) Energetic and structural properties of fullerenes under irradiation processes On the basis of the atomic displacement energy ($T_d$) calculated using Density Functional Theory with the Tight Binding Approximation (DFTB), the cross sections for electron-induced atomic displacement were obtained as a function of the order of the fullerene. Three types of defects commonly induced by radiation (mono-vacancy, di-vacancy and Stone-Wales) were also analyzed, determining their formation energies and the structural changes they produce in the molecule. The results are consistent with the model for the transformation of polyhedral structures into spherical nano-onions under electron irradiation proposed by Ugarte in 1995. Speaker: Rafael E. Sosa Ricardo (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Universidad de La Habana, Cuba.) Study of CR-39 detectors Every day, the population is exposed to various types of radiation sources. In particular, radon is the most common radiation source present in gaseous form. Radon belongs to the radioactive series of uranium and in its decay emits alpha particles of 5.49 MeV. To detect these alpha particles, a very common and effective method is to use Solid State Nuclear Track Detectors (SSNTDs). This work proposes a comparative study of track diameters for three kinds of CR-39 SSNTDs as a function of the chemical etching time. The measurements have been performed by exposing the plastic sample detectors to a $^{241}$Am radiation source for about 6.5 hours; the detectors were then chemically etched using well-known solutions [2-4]. Two different temperatures (70 and 80 $^{\circ}$C) and different time ranges have been used in this process. The track diameters have been obtained and followed from one chemical etching to the next by taking digitized images with a calibration scale of 50 μm and using the ImageJ code [5]. The track density has been calculated and the comparative results will be presented. [1] S. Bing, Nucl. Tracks Radiat. Meas. 22, (1993) 451. [2] J. N. Corrêa et al., Rad. Phys. and Chem. 104, (2014) 104. [3] F. H. Manocchi et al., J. Radiol. Prot. 34, (2014) 339. [4] S. Guedes et al., Radiation Measurements 31, (1999) 287. [5] ImageJ program: http://imagej.nih.gov/ij/index.html. Speaker: Dr K. C. C.
Pires (Departamento de Física Nuclear, Instituto de Física, Universidade de São Paulo, 05508-090, São Paulo, Brazil) Sentaurus simulation of the 3N163 MOSFET to study heavy-ion effects Semiconductor devices are susceptible to ionizing radiation, and such events can change their behaviour and electrical properties, causing anything from a simple current peak at the circuit output to a logic inversion; each event has its own root cause, which can range from the absorbed radiation dose to the type of radiation source. To deal with these problems it is necessary to understand the mechanisms behind each event and to use this knowledge to build circuits and semiconductor devices that are more robust to radiation. Space programs and military applications use electronic systems composed of microcontrollers, memories and other parts that can be exposed to radiation environments during normal use, and a way to handle unexpected behaviour is needed to avoid accidents. Electronic device simulation is one of the methods used to study the effects of ionizing radiation in semiconductor systems. The capability to isolate regions of the device under test and to control the simulation environment makes it possible to observe the components of an event and to understand the mechanisms behind it. This research project aimed to create a 3D structure to simulate a commercial P-MOSFET (3N163); the effects of a total ionizing dose of X-rays have been widely studied, and the effects of heavy ions are the object of study in this work, in order to observe SEE events. The main purpose of the simulations is to understand the results collected, extract the main data and study secondary effects that were not captured in the field experiment. Speaker: Dr Marcilei Aparecida Guazzelli (Centro Universitário da FEI) Age determination of sediments by the quartz OSL dating technique The ages of sediment layers in regions near the coast of Pinar del Rio province are determined using the quartz optically stimulated luminescence (OSL) method. In some locations the studied layer could be associated with altimetry measurements, allowing future analysis of recent tectonic movements. The methods for sample collection, preparation and measurement are described. The analysis of the results indicates variations in the age of formation of these geological structures. Speaker: I. Quesada (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Cuba.) Determination of the natural dose rate in samples of geological interest The natural dose rate is one of the physical magnitudes calculated during the process of age determination of geological sediments with the luminescence technique. The dose rate is calculated on the basis of the radioactive content present in the sediments. Because of the low content of these radioactive isotopes, a low-background system should be used. In the present work the low-background gamma spectrometric system used in the luminescence dating laboratory at CEADEN is described. The construction of a reference material with a composition similar to that of common sediments is also described. Finally, the radioactive content of quartz-rich sediment from Pinar del Rio province and the respective natural dose rate are presented. Speaker: H. Lubian (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Cuba.)
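Both luminescence-dating contributions above rest on the same basic relation: the burial age of a sediment is the equivalent dose accumulated by the quartz grains divided by the environmental dose rate obtained from the radioactive content of the sediment. The short Python sketch below only illustrates that relation; the function name and the numerical values are illustrative assumptions, not data taken from these abstracts.

    # Minimal sketch of the basic luminescence-dating age equation:
    # age (ka) = equivalent dose De (Gy) / environmental dose rate (Gy/ka).
    # All numbers below are hypothetical, chosen only to show the units involved.

    def luminescence_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
        """Return the burial age in thousands of years (ka)."""
        return equivalent_dose_gy / dose_rate_gy_per_ka

    if __name__ == "__main__":
        de = 12.5        # hypothetical equivalent dose in Gy
        dose_rate = 2.1  # hypothetical environmental dose rate in Gy/ka
        print(f"Estimated age: {luminescence_age_ka(de, dose_rate):.1f} ka")

In practice the dose rate itself is built up from the measured U, Th and K contents of the sediment, which is precisely what the low-background gamma spectrometric system described above is used for.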
Development of a semi-empirical method to determine the efficiency of a gamma radiation detector for point sources Gamma spectrometry is the most popular technique for the determination and quantification of the radioactive nuclei present in a radioactive source. In order to quantify the radioactive material it is necessary to determine the efficiency of the detector for the energies of the gamma photons emitted from the source. In the present work we present the theoretical development and the first tests of a new method to calibrate the efficiency of a gamma radiation detector for point sources. The method consists in determining the detector efficiency experimentally using a mono-energetic gamma source, whose energy will be named the reference energy; using a mono-energetic source is the easiest way to do this. From this value the efficiency can then be extrapolated to the complete energy range using first principles of gamma radiation detection theory. The proposed method is therefore a semi-empirical one. In a first work we showed the application and validation of the method using one reference energy (661.65 keV). In this second work we apply the method using three reference energies along the gamma energy range of interest, because in the first work the agreement between the values obtained by the proposed method and the expected values worsened as the energies moved farther from the reference energy. The reference energies will be 59.54 keV, 661.65 keV and 1460.65 keV, which are associated with $^{241}$Am, $^{137}$Cs and $^{40}$K, respectively. The second part of the method, the extrapolation from the reference energies to the gamma range, will be done over the energies emitted by $^{152}$Eu. All simulations will be done using the FLUKA code. We expect to improve on the results obtained by extrapolation from a single reference energy with this new proposal, which considers a set of reference energies and a local extrapolation from them. Speaker: Mr Pablo Ortiz-Ramírez (University of Chile) Development of alternative XRD pattern evaluation methods and their application to crystalline structure evaluations of ferroelectric PbZr$_{0.53}$Ti$_{0.47}$O$_3$ doped with La (1%) ceramic samples under $^{60}$Co gamma irradiation In previous research, the presence of tetragonal and rhombohedral crystalline phases was identified in ferroelectric ceramic samples of PbZr$_{0.53}$Ti$_{0.47}$O$_3$ doped with La (1%) (PLZT) irradiated with $^{60}$Co gamma rays, with the corresponding XRD patterns showing no conclusive evidence of phase transformations between the two phases assisted by gamma irradiation. The present work concerns the evaluation of the gamma radiation damage of the irradiated PLZT samples, taking into account the reduced atom displacement threshold energies due to the presence of point defects in the studied ceramic samples, as reported in [1]. The XRD reflections of the non-irradiated PLZT samples, as well as those of the irradiated ones, have also been crystallographically evaluated using new research tools of extrinsic and intrinsic nature [2], introduced in the present work according to whether or not they depend on the choice of a particular crystalline vector basis.
In particular, in the case of the extrinsic methods, an important property has been established for cubic, tetragonal, orthorhombic and rhombohedral crystalline structures: the mean square value of the inverse of the interplanar distances, taken over those reflections sharing the same value of the sum of the squares of their Miller indices, is proportional to that sum. On the other hand, new intrinsic evaluation methods were introduced, such as the density of reflection lines and the differences between successive values of the inverse of the interplanar distances. It turns out that all the studied PLZT samples (non-irradiated and gamma-irradiated) comprise two crystalline phases (with tetragonal and rhombohedral structures), and the application of the developed intrinsic and extrinsic methods showed drastic qualitative and quantitative differences between the non-irradiated sample and the irradiated one. The irradiated PLZT XRD patterns showed the presence of two phases with tetragonal and rhombohedral crystalline structures close to the corresponding single-phase reference systems, which tend toward a pseudo-cubic symmetry. E. González, et al. NIM A 865 (2016) 144-147. E. L. Mendoza Caballero, B.Sc. Thesis, InSTEC, July 2017. Speakers: E. L. Mendoza Caballero (Instituto Superior de Tecnologías y Ciencias Aplicadas, Cuba.), C. M. Cruz Inclán (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Cuba.) Irradiated quartz for beta source calibration For luminescence dating to be an accurate absolute dating technique it is very important that we are able to deliver absolutely known radiation doses in the laboratory. This is normally done using a beta source calibrated against an absolutely known reference source, or by using a reference luminescence material that has been irradiated in a radiation calibration facility. Here we describe in detail the preparation and luminescence characteristics of a new quartz reference material of the luminescence dating laboratory at CEADEN. A selected sample of quartz extracted from a quarry in Pinar del Rio province has been treated to extract high-purity quartz grains with diameters ranging from 180 to 250 µm. The resulting material was further treated to sensitize and stabilize the luminescence signal prior to being irradiated to 5.0±0.3 Gy at the secondary calibration gamma source (related to the BIPM) at the CPHR. With this material the dose rate of the beta source of the LF02 automated luminescence reader has been calculated to be 0.034±0.002 Gy/s. Speaker: T. Cepero (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Cuba.) Methodology for labeling silica sand with $^{99m}$Tc for use as a solid radiotracer For decades, the great economic benefit represented by the use of radiotracers has been recognized by international industry; however, this technique is still underutilized. The main cause is the lack of timely availability of the required radiotracer. Previous studies show the possibility of labeling sediments (with a high content of aluminosilicates) with $^{99m}$Tc. This paper aims to develop a methodology for labeling silica sand with $^{99m}$Tc for use as a solid radiotracer. Labeling of pretreated and untreated silica sand was performed using varying concentrations of stannous fluoride and chloride as reducing agents and different labeling times.
The influence of different sand pretreatment parameters on the labeling yields obtained was evaluated, and the effectiveness of the methods used to reduce $^{99m}$TcO$_4^-$ (R$_{red\%}$ > 87%) was verified by ascending paper chromatography. Changes in the composition of the silica sand after its pretreatment could be observed using SEM-EDS techniques. It was possible to establish a methodology for obtaining solid radiotracers labeled with $^{99m}$Tc on a silica sand support, with an estimated preparation time of 4 hours and an R$_{ret\%}$ equal to 74%. Speaker: Luis Enrique Llanes Montesino (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), La Habana, Cuba.) Simulation of a coaxial HPGe detector using the FLUKA code The simulation of spectroscopy systems using Monte Carlo codes is common practice nowadays. The most popular software packages for this purpose are the MCNP and Geant4 codes. In this work we present the simulation of a gamma spectroscopy system based on a coaxial HPGe detector using the FLUKA code. The geometrical characterization of the detector was done from the manufacturer's information and using the spatial FEP efficiency distribution of the detector for 661.65 keV. The latter was used to determine the dimensions of the inner cavity of the detector, which are not provided by the manufacturer. Because of the differences between the real and simulated response functions, we assume that these are proportional. The aim of this work is twofold: on the one hand, to characterize the detector without the need to apply radiography or any other technique that is not directly associated with a nuclear physics laboratory; and, on the other hand, to validate the simulation of a coaxial HPGe detector using the FLUKA code. Synthesis and characterization of pH- and temperature-responsive poly(2-hydroxyethyl methacrylate-co-acrylamide) hydrogels by gamma photon irradiation. Doxorubicin release. 2-Hydroxyethyl methacrylate/acrylamide hydrogels were prepared by simultaneous radiation-induced cross-linking copolymerization of acrylamide (AAm), 2-hydroxyethyl methacrylate (HEMA) and water mixtures at a radiation dose of 10 kGy. The hydrogels were characterized by infrared spectroscopy. The dynamic and equilibrium swelling of the hydrogels in water and in buffer solutions was investigated. They were sensitive to pH and temperature. Swelling was non-Fickian and increased with increasing acrylamide content. The temperature dependence of the equilibrium water uptake of the copolymers exhibited a discontinuity around 35 ºC, resulting from the weakening of the hydrogen bonds between the hydroxyl groups of HEMA and the amide groups of AAm. The thermodynamic and network parameters derived from swelling and mechanical measurements are compared and discussed. They exhibit a strong dependence on the AAm content in the hydrogel. The doxorubicin release was governed by the copolymer composition, the absorbed dose and its solubility in aqueous media. Speaker: Manuel Rapado Paneque (Center of Technological Applications and Nuclear Development, CEADEN, Havana, Cuba.) Synthesis of polymeric nanogels by gamma radiation: influence of total absorbed dose Biomaterials have received considerable attention over the last 30 years as a means of treating diseases and easing suffering.
These materials have found applications in approximately 8000 different kinds of medical devices; even though biomaterials have had a pronounced impact on medical treatment, a need still exists to design and develop better polymer, ceramic and metal systems. Nowadays, the emergence of micro- and nanoscale science and engineering has provided new avenues for engineering materials with macromolecular and even molecular-scale precision, leading to diagnostic and therapeutic technologies that will revolutionize the way health care is administered. In particular, polymer-based micro- and nanosystems, such as nanogels, have attracted great attention in biomedical applications because they have many advantages over conventional systems: they enhance delivery, extend the bioactivity of the drug, show minimal side effects, demonstrate high-performance characteristics and are more economical. Several techniques have been described for the synthesis of nanomaterials from polymers. However, the use of ionizing radiation (γ, e⁻) to obtain polymeric micro- and nanogels is characterized by the possibility of obtaining products with a high degree of purity, which constitutes a technological novelty mainly in biomaterial manufacture and therefore in biomedical applications. The influence of the irradiation dose on nanogel synthesis by gamma radiation in dilute PVP solutions, at doses ranging from 3 to 22 kGy, was studied in this paper. The experiments were performed in the absence of oxygen using aqueous PVP solutions (0.05-0.3%). The crosslinking reactions were carried out at 25 °C in a gamma irradiation chamber with a $^{60}$Co source (ISOGAMMA LLCo). Scanning Electron Microscopy (SEM), Attenuated Total Reflection spectroscopy (ATR), Dynamic Light Scattering (DLS) and viscosimetry were used as characterization techniques. Keywords: polymeric nanogels, gamma irradiation, biomaterial. Speaker: Y. Aguilera (Instituto Superior de Tecnologías y Ciencias Aplicadas, La Habana, Cuba.) Testing the intrinsic spatial efficiency method for homogeneous and matrix-free sources using MC simulation The intrinsic spatial efficiency method is a general and absolute method to determine the efficiency of any extended source. It was experimentally demonstrated and validated only for cylindrical sources and the gamma photons emitted by $^{137}$Cs (661.65 keV). We also carried out research to test the method for different source shapes and sizes. Given the difficulty of preparing sources of arbitrary shape, the simplest way to do this is by simulating the spectroscopy system and the source. We simulated the spectroscopy system and the sources using the FLUKA code. The shapes of the sources were rings, discs, cylindrical shells, spheres and spherical shells. In that work we considered only gamma photons with an energy of 661.65 keV. In this work we present a test of the intrinsic spatial efficiency method for sources with the shapes already mentioned and for energies different from 661.65 keV. The gamma energies considered in this new work will be 59.54 keV ($^{241}$Am), 351.93 keV ($^{214}$Pb), 911.19 keV ($^{228}$Ac) and 1460.65 keV ($^{40}$K). So far we have applied the method by simulation to ring sources emitting 1460.65 keV gamma photons, located coaxially at different positions along the axial axis.
The preliminary results show excellent agreement between the absolute efficiencies determined by the standard relative method (statistical counting) and the intrinsic spatial efficiency method. The relative bias in all cases is less than 1.1%. Speaker: Pablo Ortiz-Ramírez (University of Chile) The Electric Point Charge Model: application boundaries for Electric Field Gradient calculations at nuclear sites in disordered $^{57}$Fe-doped ZnO matrixes Electric Field Gradients (EFG) at Zn and O sites in disordered $^{57}$Fe-doped ZnO matrixes, arising from $^{57}$Mn-irradiated ZnO samples with wurtzite crystal structure, were simulated through DFT atomistic calculation methods. The disordered, defective ZnO crystal structures were simulated by means of over-dimensioned unit cells (supercells) containing Zn vacancies at different concentrations, with one $^{57}$Fe atom per supercell. Previous DFT electron density calculation results for these defective crystalline structures, reported in [1], were applied. The Electric Point Charge Model (PCM) was also applied to calculate the EFG at Zn and O sites in the same $^{57}$Fe-doped ZnO supercells as before, where the Sternheimer antishielding factor γ∞ values were taken from previously reported ones. A new methodology based on algebraic invariants, named I and D, of the secular equations resulting from the EFG was introduced and applied to both the DFT and the PCM EFG data. It was proved that the second-degree invariant I is proportional to the electric quadrupole splitting at the $^{57}$Fe sites. A general agreement between the statistical distributions of the Zn and O DFT- and PCM-EFG invariant data was observed, although the best results were achieved for the supercell with the largest dimension (3x3x2) and the lowest vacancy concentration, where good linear correlations between the DFT and PCM I and D data were found and the resulting effective γ∞ values are very close to the previously reported ones. It was concluded that the PCM approach might be applied as an initial numerical assessment of EFG data for large supercells. Y. Abreu, et al. Solid State Communications, 185 (2014) 25-29. Speaker: C. M. Cruz Inclán (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Cuba.) Verification of the external irradiation process for the LF02 automated luminescence reader In OSL dating of sediments the application of the single aliquot regenerative-dose (SAR) protocol requires the calibration of each aliquot by giving it a known dose using an internal beta source with a dose rate ranging from 30 to 100 mGy/s. In the case of old samples that have received, under natural conditions, doses higher than 70 Gy, the time employed to deliver this dose may be significant. One distinctive feature implemented in the LF02 automated luminescence reader is the possibility of externally irradiating the samples without manipulating them. The goal of this process is to reduce the total irradiation time by simultaneously irradiating all the aliquots in a gamma cell facility. The aim of the present work is to establish the conditions and the dose rate of the external irradiation process when using the gamma irradiation facility at CEADEN. Speaker: L. Baly (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Cuba.)
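To make the motivation for the external irradiation step concrete, the rough estimate below combines the figures quoted in the abstract (regeneration doses above 70 Gy and an internal beta source delivering 30-100 mGy/s) with an assumed number of aliquots; the aliquot count is an illustrative assumption only, not a value from the contribution.

    # Rough time budget for delivering a 70 Gy dose to many aliquots one by one
    # with the internal beta source, which is what simultaneous irradiation in a
    # gamma cell is meant to avoid. Dose and dose-rate range are from the abstract;
    # the number of aliquots is a hypothetical example.

    dose_gy = 70.0
    beta_dose_rates_gy_per_s = [0.030, 0.100]  # lower and upper quoted dose rates
    n_aliquots = 24                            # assumed number of aliquots

    for rate in beta_dose_rates_gy_per_s:
        t_single_s = dose_gy / rate                   # seconds per aliquot
        t_total_h = t_single_s * n_aliquots / 3600.0  # hours for a sequential run
        print(f"{rate * 1000:.0f} mGy/s: {t_single_s / 60:.0f} min per aliquot, "
              f"about {t_total_h:.1f} h for {n_aliquots} aliquots in sequence")

At the lower quoted dose rate a single 70 Gy irradiation already takes close to 40 minutes, so irradiating a full set of aliquots sequentially occupies the reader for many hours, which is the time the external gamma irradiation is intended to save.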
Poster Session - NUC Room "Bens Arrate" Conveners: Alinka Lepine-Szily (University of São Paulo, Brazil), Leandro Gasques (University of São Paulo, Brazil) Electron and gamma radiation damage assessments on composite YBCO - Nano/C samples The present work deals with the scientific problems and the state of the art regarding the physical and structural behavior of composite materials based on superconducting YBa$_2$Cu$_3$O$_7$ (YBCO) and carbon nanostructured compounds (wires and onions), denoted YBCO-Nano/C, as well as their electron and gamma radiation response. Based on the application of radiation transport simulation codes relying on Monte Carlo methods, and on the Monte Carlo assisted Classical Method (MCCM), an assessment of the radiation damage in terms of the spatial distributions of the displacements-per-atom (dpa) rate, as well as of the energy deposition distribution in the YBCO-Nano/C composite, is presented. A simple structural model of the YBCO-Nano/C composite is introduced and justified, and a comparison of the behavior of its different physical properties under the irradiation treatment is discussed. Speakers: Jorge Luis Valdés Albuenres (Instituto Superior de Tecnologías y Ciencias Aplicadas, Cuba.), Carlos M. Cruz Inclán (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Cuba.) Effects of vacancies on atom displacement threshold energy calculations through Molecular Dynamics methods in BaTiO$_3$ Under progressive and intensive radiation damage conditions, atom displacement threshold energy ($T_d$) calculations through Molecular Dynamics methods must take into account, in addition to the framework of single recoil atom excitation in an ideal crystalline lattice, multiple excitations and real crystal structures with point defects, in order to better approach the situations emerging from severe and intensive radiation damage in irradiated materials. In the present work, atom displacement threshold energy evaluations are performed by applying Molecular Dynamics (MD) calculation tools under the assumption that the BaTiO$_3$ tetragonal crystalline structure possesses vacancies. In this context, a 2x2x2 over-dimensioned tetragonal BaTiO$_3$ unit cell was considered, containing several primitive cells and having vacancies on Ba, Ti and O atomic positions under the requirement of electrical charge balance. On this basis, and following previous reports [1,2] on an ideal BaTiO$_3$ tetragonal structure, the present report concerns Ba, Ti and O MD $T_d$ calculations, where the dependence of the corresponding primary knock-on atom (PKA) defect formation probability functions on the initial excitation energies was calculated along the principal crystal directions and compared with previous calculations for an ideal BaTiO$_3$ tetragonal crystal structure. E. Gonzalez, Y. Abreu, C.M. Cruz, I. Piñera, A. Leyva. NIM B 358 (2015) 142-145. E. González, C.M. Cruz, A. Rodríguez, F. Guzmán, Y. Abreu, C.M. Cruz, I. Piñera, A. Leyva. NIM A 865 (2016) 144-147. Fission modes of $^{248}$Cf, $^{254}$Fm and $^{260}$No in the reactions $^{22}$Ne + $^{232}$Th, $^{238}$U; $^{16}$O + $^{238}$U, $^{232}$Th The purpose of this work was to investigate fission modes in the fission of heavy actinides. The experiments have been carried out at the U400 cyclotron at the FLNR JINR (Dubna, Russia) using the double-arm time-of-flight spectrometer CORSET.
To investigate the role of closed proton and neutron shells in the fission of $^{248}$Cf, $^{254}$Fm and $^{260}$No nuclei at an excitation energy of 40-45 MeV, the mass and energy distributions of fission fragments in the reactions $^{16}$O+$^{232}$Th, $^{16}$O+$^{238}$U, $^{22}$Ne + $^{232}$Th and $^{22}$Ne + $^{238}$U have been measured. For the compound nucleus $^{260}$No$^*$ formed in the reaction $^{22}$Ne + $^{238}$U, an increase of fragment yields in the super-asymmetric mass region of 52/208 u, which corresponds to the formation of a pair of doubly magic nuclei, $^{48}$Ca and $^{208}$Pb, was observed. Moreover, for the $^{260}$No$^*$ compound nucleus at an initial excitation energy of 41 MeV, bimodal fission was observed. In this case, at the symmetric mass split, both fission fragments were close to the doubly magic $^{132}$Sn. For the compound nuclei $^{248}$Cf and $^{254}$Fm$^*$ formed in the reactions $^{16}$O+$^{232}$Th, $^{16}$O+$^{238}$U and $^{22}$Ne + $^{232}$Th, an increase of fragment yields in the mass region 70/180 u, which corresponds to the nucleus $^{70}$Ni, was observed. This presents an interesting case because the initial excitation energy of the compound nuclei was around 40-45 MeV, whereas the shell structure typically starts to break down at around 20-30 MeV. Speaker: K. B. Gikal (Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, Dubna, Russia.) Upgrading the algorithm of the Monte Carlo Simulation of Atom Displacements (MCSAD) induced in solids under high-fluence electron and gamma irradiation environments The Monte Carlo Simulation of Atom Displacements (MCSAD) algorithm and code have been developed and applied to solid materials for the assessment of electron and gamma radiation damage. In the code, single primary knock-on atom (PKA) processes are taken into account with regard to atom displacement (AD) occurrences, which gives a well-suited description of radiation damage effects in relatively low particle fluence irradiation environments. In addition, the main properties of the target matrix with regard to electron and gamma quanta transport are assumed to remain constant during the radiation transport; that is, the material-related atom displacement threshold energies and density do not change between different calculation history trials, which presupposes weak radiation damage effects on the target properties. However, under high-brightness and high-fluence irradiation environments, the foregoing MCSAD algorithm and code assumptions are not adequate for describing progressive and intensive radiation damage effects on a given target matrix. $\beta^+$ decay properties of A=100 isotopes The estimation of the spectroscopic properties of neutron-deficient nuclei in the A=100 tin mass region is needed for the understanding of the rp-process path and for the experimental exploration of the nuclear landscape. In order to evaluate some spectroscopic properties of the Gamow-Teller $\beta^+$ decay of neutron-deficient tin isotopes with A=100, we have performed shell model calculations by means of the Oxbash nuclear structure code. The jj45pn valence space used consists of nine proton and neutron orbitals. The calculations included a few valence proton holes and neutron particles in the $\pi$g$_{9/2}$ and $\nu$g$_{7/2}$ orbitals, respectively, relative to the $^{100}$Sn doubly magic core. An effective interaction deduced from the CD-Bonn one is introduced, taking into account the nuclear monopole effect in this mass region. The results are then compared with the available experimental data.
Keywords: Nuclear Structure, $^{100}$Sn core, Monopole Effect, Oxbash nuclear structure code, $\beta^+$ decay, neutron-deficient tin isotopes. Speaker: Fatima Benrachi (Frères Mentouri Constantine 1 University, Physics Department, Constantine, ALGERIA) A hybrid linear-discontinuous spectro-nodal method for one-group unidimensional fixed-source discrete ordinates problems with isotropic source Nowadays, much attention has been given to the problem of obtaining accurate numerical solutions of fixed-source discrete ordinates problems. In this work, we describe and test four different numerical methods to solve one-group unidimensional discrete ordinates problems. First, we derive the Diamond Difference (DD) method; next, the Linear Discontinuous (LD) and Spectral Green's Function (SGF) methods are implemented; and finally, we obtain the hybrid Linear-Discontinuous Spectro-Nodal (LD-SN) method for discrete ordinates problems. These methods are based on the use of the standard balance equations, which hold in each spatial cell and for each discrete ordinates direction, and consider four different auxiliary equations for the cell-average angular flux. Numerical benchmark results are given to illustrate and compare the accuracy of the methods. SGF proved to be the most accurate method, with no spatial truncation errors, followed by LD-SN, LD and DD, respectively. The LD-SN method proved to be better than SGF in terms of computational storage and the number of numerical operations per direction and per iteration. Keywords: fixed source problems, discrete ordinates, hybrid linear-discontinuous spectro-nodal Speaker: Rivas-Ortiz Iram B. (Higher Institute of Technologies and Applied Sciences-University of Havana, La Havana, Cuba) A surface coalescence model The production of heavy clusters in nuclear reactions is a mechanism that is still not well understood, despite its importance and its wide variety of applications: radiation protection, space and engineering design, medical physics, the design of accelerators and others. According to the Cascade Exciton Model (CEM), there are three possible ways of producing high-energy heavy clusters. The first mechanism is through the coalescence of the nucleons produced in the Intra-Nuclear Cascade (INC), the second is through the pre-equilibrium model, and the last one is the so-called Fermi break-up. In this work, a new theoretical approach is developed based on the INC model, which allows the formation of light clusters to be explained through a first-principles coalescence criterion that does not depend directly on experimental parameters and, at the same time, can reproduce the experimental data. A semi-classical Wigner distribution function in phase space is considered. The calculations have been assessed for the production of deuterons in p + $^{197}$Au collisions in the energy range of 2-10 GeV, and the results obtained for the deuteron emission probabilities and the total cross section are shown and compared with the experimental data. Speaker: Ms Ailec Bell Hechavarria (Radio Isotopic Center) Calculation of gamma-induced atom displacements in solids considering non-homogeneous threshold displacement energies An extension to the Monte Carlo assisted Classical Method (MCCM) methodology was developed and implemented. The extension allows the calculation of displacements-per-atom (dpa) profiles considering threshold displacement energies that are spatially inhomogeneous.
Calculation of the dpa profiles was performed using both the standard methodology and the extended one, for gammas with energies of 1.25, 3, 7 and 12 MeV in the BaTiO$_3$ ceramic. Significant differences were found for the oxygen sublattice over the whole energy range, these being the largest ones. In general, the dpa profiles calculated at the same sample depth and energy with the standard MCCM are between one and two orders of magnitude higher than those calculated with the extension. Speaker: Arturo Rodriguez Rodriguez (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Universidad de La Habana, Havana, Cuba.) Contribution of nuclear reactions to the production of heavy elements: analysis in a supernova environment The production of neutron-rich heavy elements takes place via the rapid neutron capture process (r-process). To favour neutron captures over beta decays, the astrophysical environment should be explosive, like the one found in core-collapse supernovae. In this work, we focus on the High Entropy Winds (HEW) in Type II supernovae, which are one of the more promising sites for the r-process. After the neutron capture rates decrease, due to the drop in the neutron density, the remaining nuclides can be highly unstable. After considerable time, these nuclei will decay to stability. The final abundances of the elements will depend, besides neutron capture, on other processes such as beta decay, alpha decay, photodissociation, alpha capture and beta-delayed neutron emission. The present work evaluates the contribution that these processes make to the final abundances of certain nuclides and studies how that contribution depends on the presence of each of the other processes. For our nucleosynthesis calculations, we used rJava 2.0, software that is able to simulate the physical environment of HEW as well as other r-process sites, such as the ejecta of neutron star mergers and the ejecta of quark novae. Speaker: Mr Jose Trujillo (Universidad de los Andes, Colombia.) Cross section measurements for the $^{13}$C(n, γ)$^{14}$C and $^{14}$N(n, p)$^{14}$C nuclear reactions using the accelerator mass spectrometry technique In this work, the cross sections of the nuclear reactions $^{13}$C(n, γ)$^{14}$C and $^{14}$N(n, p)$^{14}$C are measured experimentally using the accelerator mass spectrometry (AMS) technique. To carry out this experiment we use the thermal neutron flux produced by the TRIGA Mark III nuclear reactor located at the National Institute of Nuclear Research (ININ) in Mexico. The concentration of $^{14}$C with respect to that of $^{12}$C in the materials, before and after neutron irradiation, was determined at the Laboratory of Accelerator Mass Spectrometry (LEMA) of the Institute of Physics of the National Autonomous University of Mexico (UNAM). From the measurements of the relative $^{14}$C/$^{12}$C concentration it was possible to determine the corresponding cross sections. This work establishes an experimental protocol for the measurement of cross sections of neutron-induced nuclear reactions at thermal energies. The results obtained for the $^{14}$N(n, p)$^{14}$C reaction are in good agreement with the values previously reported by other authors. On the other hand, for the $^{13}$C(n, γ)$^{14}$C reaction, the obtained values differ from those previously reported, which impels us to continue studying this reaction. Speaker: Jorge Garcia Ramirez (Institute of Physics UNAM, Mexico.)
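The activation relation behind the $^{13}$C(n, γ)$^{14}$C measurement can be stated compactly: in the low-burnup limit the number of $^{14}$C atoms produced equals the number of $^{13}$C target atoms times the cross section times the thermal-neutron fluence, so the cross section follows from the AMS-measured $^{14}$C/$^{12}$C ratio, the natural $^{13}$C/$^{12}$C abundance ratio and the fluence. The Python sketch below only restates that relation; the ratio and fluence values are illustrative assumptions, not results from the contribution.

    # Sketch of the activation relation N(14C) = N(13C) * sigma * fluence,
    # solved for sigma using the AMS-measured (background-subtracted) 14C/12C ratio.
    # The natural 13C/12C ratio follows from standard isotopic abundances;
    # the measured ratio and the fluence below are hypothetical example values.

    R13_TO_12 = 1.1 / 98.9  # natural 13C/12C atom ratio

    def capture_cross_section_barn(ratio_14_to_12, fluence_n_per_cm2):
        """Thermal capture cross section in barn from the net 14C/12C ratio and fluence."""
        sigma_cm2 = ratio_14_to_12 / (R13_TO_12 * fluence_n_per_cm2)
        return sigma_cm2 / 1.0e-24  # 1 barn = 1e-24 cm^2

    if __name__ == "__main__":
        net_ratio = 2.0e-11  # hypothetical net 14C/12C ratio after irradiation
        fluence = 1.0e18     # hypothetical thermal-neutron fluence in n/cm^2
        sigma_mb = capture_cross_section_barn(net_ratio, fluence) * 1000.0
        print(f"sigma ~ {sigma_mb:.2f} mb")

The same bookkeeping, with the nitrogen content of the sample in place of the $^{13}$C abundance, applies to the $^{14}$N(n, p)$^{14}$C channel.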
Gamow-Teller $\beta^+$ decay properties of A=98 isobars near the $^{100}$Sn doubly magic core In this work, we have performed spectroscopic calculations in the framework of the nuclear shell model, in order to estimate the Gamow-Teller (GT) $\beta^+$ decay of A=98 proton-rich isobars in the $^{100}$Sn mass region near the rp-process path. The calculations are carried out by means of the Oxbash nuclear structure code, taking into account the monopole effect in the studied mass region. The obtained results are then compared with the available experimental data. Speaker: Nadjet LAOUET (Frères Mentouri Constantine 1 University, Physics Department, Constantine, ALGERIA) Stopping power cross section of Hf and Pd for protons and alpha particles in the energy range between 0.3 and 2.5 MeV Understanding the mechanism of energy loss of charged particles in matter is very important in atomic and nuclear physics, given its application in fields such as materials and surface science or radiation damage studies. In the last few years, there has been renewed interest in increasing the accuracy of experimental energy loss and straggling values in order to determine some key parameters included in the theory of stopping power. However, there is still a lack of data on straggling and stopping power, even for light ions. In this work we present new data on the stopping power of Hf and Pd for protons and alpha particles in the energy range between 0.3 and 2.5 MeV. These measurements were carried out using the 2.5 MV Van de Graaff accelerator of the Laboratory of Accelerators and Radiation Technologies (LATR) at C2TN/IST (Lisbon, Portugal). A transmission methodology was used to determine the stopping power cross sections by measuring the energy loss of protons and alpha particles passing through very thin foils made of Hf and Pd. So far, good agreement has been found between the obtained results and the values reported in the literature (SRIM-2013, ICRU49). Additionally, modified Bethe-Bloch theory is being used to extract the Pd mean excitation/ionization energy (I) and the Barkas effect parameter (b) from the available experimental data. Acknowledgments: The authors recognize financial support through "Proyecto de Iniciación en I+D y Creación, VRAC código L1-17, Universidad Tecnológica Metropolitana". The authors gratefully acknowledge the invaluable technical support from F. Baptista (C2TN/IST). Speaker: Dr Javier Wachter (Departamento de Física, Facultad de Ciencias Naturales, Matemática y del Medio Ambiente, Universidad Tecnológica Metropolitana, Chile.) Study of light-particle multiplicities in p + non-fissionable nuclei events in the 0.5 - 2 GeV energy range In recent years, the investigation of spallation reactions has caught the attention of the scientific community due to their application in the transmutation of nuclear waste using Accelerator Driven System (ADS) reactors. Due to the experimental difficulties that nuclear reaction research faces, the study of spallation reactions using simulation codes is more suitable for generating a more complete database for different energy ranges. This work aims to study spallation reactions induced by protons at intermediate energies of 0.5 - 2 GeV on non-fissionable nuclei using the Monte Carlo code CRISP (Collaboration Rio-Ilhéus-São Paulo). The target nuclei studied were $^{27}$Al, $^{91}$Zr, $^{184}$W, $^{197}$Au and $^{208}$Pb, focusing on the last one.
The multiplicities of light particles obtained with CRISP were compared with the available experimental data and with other Monte Carlo codes involved in the study of spallation reactions, resulting in quite satisfactory agreement. Speaker: Mrs Dania Consuegra Rodríguez (Jožef Stefan Institute, Slovenia.) Study of the $^8$Li elastic scattering at low energies The study of nuclei away from the line of stability has been one of the main fields of research in low-energy nuclear physics [1]. Light exotic nuclei such as $^6$He, $^{11}$Be, $^{11}$Li, $^8$B and others are produced in the laboratory [2-4] and present new interesting phenomena, such as the Borromean structure and the neutron and proton halos [5]. Nuclei like $^7$Be [6] and $^8$Li are not so exotic; however, they may be of much interest for both nuclear structure and nuclear astrophysics [7]. The synthesis of heavy elements in stars has to overcome the mass gaps at A=5 and A=8, for which there are no stable elements. For A=8 there are two bound nuclides, $^8$Li and $^8$B, which are mirror nuclei and have half-lives of around 800 ms. The presence of these nuclei in stars could affect the nucleosynthesis of elements up to $^{12}$C. In addition, nuclear data on A=8 nuclei are very scarce. This work proposes measurements of $^8$Li elastic scattering on several targets, $^9$Be, $^{27}$Al, $^{58}$Ni and $^{120}$Sn, at laboratory energies from 16 to 32 MeV. Elastic scattering angular distributions at these energies will provide information about the nuclear potential. The interplay between Coulomb and nuclear effects can be investigated as a function of the target mass. In addition, measurements with the light target $^9$Be may provide important spectroscopic information on the proton transfer reaction $^9$Be($^8$Li,$^9$Be). The measurements will be performed at the University of São Paulo, using the $^8$Li beam produced by the RIBRAS (Radioactive Ion Beams in Brasil) facility [8]. N. Keeley, et al. Prog. Part. Nucl. Phys. 63, 396 (2009). P.N. de Faria et al., Phys. Rev. C81, 044605 (2010). K.C.C. Pires et al., Phys. Rev. C83, 064603 (2011). V. Morcelle et al., Phys. Lett. B 732, 228 (2014). I. Tanihata et al., Phys. Rev. Lett. 55 no.24, 2676 (1985). V. Morcelle et al., Phys. Rev. C89, 044611 (2014). S. Mukherjee, et al. Eur. Phys. J. A 45, 23 (2010). A. Lépine-Szily, et al. Eur. Phys. J. A 50, 128 (2014). Speaker: Mr O. C. B. Santos (Departamento de Física Nuclear, Instituto de Física, Universidade de São Paulo) Thermo-hydraulic study using CFD of the core of a Pebble Bed Reactor (HTR-10) Very High Temperature Reactor (VHTR) designs of Generation IV offer promising performance characteristics; they can provide sustainable energy, improved proliferation resistance, inherent safety and high-temperature heat supply. The 10 MW High Temperature Reactor-Test Module (HTR-10) is a pebble bed reactor (PBR). To achieve the commercialization of these reactors in the nuclear industry, it is necessary to take into account very important factors such as safety, and the investigation of their thermo-hydraulic characteristics is a key tool for the design and safe operation of VHTRs. Currently, the use of Computational Fluid Dynamics (CFD) codes for the deterministic safety analysis of nuclear reactors has increased, because they are able to describe in detail the thermal-hydraulic phenomena occurring in the cooling system of the reactor core.
In this paper, the thermal-hydraulic behavior of the HTR-10 reactor core at steady state is described with good accuracy using CFD models (porous and/or realistic), and the results are compared with a benchmark. The maximum temperature values in the porous medium model were reached at the reactor core outlet, specifically in the central zone. Therefore, the realistic simulation was performed in that region, in order to verify the behavior of the maximum temperature reached by the fuel, which does not exceed the allowable limit for this type of nuclear fuel. The results obtained are consistent with the results presented by other authors using other techniques and simulation models. Speaker: Yaisel Córdova Chávez (InSTEC) Towards the measurement of the cross section of the $^{13}$C(d,p)$^{14}$C nuclear reaction using AMS An experimental protocol to study the total cross section of the $^{13}$C(d,p)$^{14}$C nuclear reaction via AMS is being developed for energies in the center-of-mass frame between 100 and 533 keV. We started a series of experiments in which two aluminium cathodes filled with natural graphite (98.9% $^{12}$C, 1.1% $^{13}$C) were irradiated at a deuteron energy of 4 MeV at the 6.0 MV Tandem Van de Graaff Accelerator of the Instituto Nacional de Investigaciones Nucleares (ININ) in Mexico. The number of incident particles was determined using RBS techniques. The relative $^{14}$C/$^{12}$C concentrations were analyzed using AMS at the Laboratorio Nacional de Espectrometría de Masas con Aceleradores (LEMA) of the Universidad Nacional Autónoma de México (UNAM). The relevance of the $^{13}$C(d,p)$^{14}$C reaction for the study of compound nucleus formation as well as for some astrophysical scenarios, and the importance of the development of the AMS technique to measure cross sections of nuclear reactions of astrophysical interest in Mexico, are also discussed. Speaker: Silvia Murillo-Morales (Instituto de Física, UNAM, Mexico.) Plenary Talks Room "Benigno Souza" Plenary Sessions on Wednesday-Friday mornings (25 min + 5 min) Convener: A. Aprahamian (University of Notre Dame, USA.) The SPES Exotic Beam ISOL Facility: Status of the Project, Technical Challenges, Instrumentation, Scientific Program SPES (Selective Production of Exotic Species) is the INFN project for a Nuclear Physics facility with Radioactive Ion Beams (RIBs). It is under advanced construction in Legnaro, with several technological innovations and challenges foreseen, comprising new achievements and improvements. SPES will provide mostly neutron-rich exotic beams derived from the fission fragments (10$^{13}$ fissions/s) produced in the interaction of an intense proton beam (200 μA) on a direct UC$_x$ target. Several other targets will be developed to provide users with a large beam selection. The expected SPES beam intensities, their quality and, eventually, their maximum energies (up to 11 MeV/A for A=130) will permit forefront research in nuclear structure and nuclear dynamics, studying a region of the nuclear chart far from stability. This goal will be reached by coordinating the developments of the accelerator complex with those of up-to-date experimental setups. The schedule of the project is organized so as to provide low-energy beams (1+ species at 40 keV) at the beginning of 2019, while post-accelerated beams up to the maximum energies (around 10-11 MeV/n) are foreseen in 2021, after the installation of the newly developed RFQ injection system for the ALPI post-accelerator.
The technical design and the installation phases will be described, followed by the description of some challenging topics in the Nuclear Physics program to be performed at the Legnaro National Laboratory.
Speaker: Dr Fabiana Gramegna (Legnaro National Laboratory, INFN, Italy.)
First beams at the new RIBs facility at Dubna – ACCULINNA-2
A significant part of the upgrade of the Dubna Radioactive Ion Beams facility is the replacement of the ACCULINNA fragment separator with a new high-acceptance device, ACCULINNA-2. The project of a new in-flight facility for low energy 30-60 AMeV primary beams with 3 ≤ Z ≤ 36 was started in 2011. The new device is destined to add considerably to the studies of drip-line nuclei performed with a variety of direct reactions known to be distinctive of the 15–50 AMeV exotic secondary RIBs. An overview of the design, construction and commissioning studies of the ACCULINNA-2 device will be presented. Secondary beam profiles as well as production rates were measured for a 15N (49.7 AMeV) primary beam and a Be (2 mm) target. Example dE-ToF identification spectra and the calculated beam purity for 6He at 31.5 AMeV and 12Be at 39.4 AMeV, as the main components of the secondary beams, will be demonstrated. The measured isotope yields agree with LISE++ simulations. Future upgrades of the ACCULINNA-2 setup (zero-degree spectrometer, RF-kicker) and prospects for new experiments achievable in the coming years are presented.
Speaker: Dr Grzegorz Kaminski (The H. Niewodniczanski Institute of Nuclear Physics PAN, Poland.)
Rare Ion Beams in Brazil
The "Radioactive Ion Beams in Brasil" (RIBRAS) facility is the first device in the Southern Hemisphere to produce unstable secondary beams. It has been in operation since 2004 and it consists of two superconducting solenoids of maximum magnetic field B = 6.5 T, coupled to the 8UD-Pelletron tandem accelerator installed at the University of São Paulo Physics Institute. The radioactive ions, produced by in-flight transfer reactions of stable projectiles, are selected and focused by the solenoids into a scattering chamber. Low energy (3-5 MeV/u) radioactive beams of $^6$He, $^8$Li, $^{7,10}$Be, $^{8,12}$B are currently produced and used to study elastic, inelastic, and transfer reactions on a variety of light, medium mass and heavy secondary targets. The 2n-halo $^6$He and the 1p-halo $^8$B are particularly interesting for studying the role of the halo on the elastic and total reaction cross sections. Since 2012 RIBRAS can produce purified beams, using both solenoids with a degrader between them. The use of purified beams opens new possibilities, such as resonance scattering studies using inverse kinematics; examples are the (p,p), (p,d), and (p,α) reactions using an $^8$Li beam hitting thick CH$_2$ targets to measure their excitation functions. The spectroscopy of the highly excited states of $^9$Be was studied in this way. Fusion reactions are also possible with purified beams; we are planning to measure the fusion cross section by activation and off-line gamma spectroscopy, and also by on-line particle-gamma coincidence. The interest is to study the influence of the 2n-halo structure of $^6$He on the fusion process. Our Cuban colleague Ivan Padron, who spent a year in São Paulo, put a large (2 m x 2 m) position-sensitive neutron detector, also called a neutron wall, into operating condition. It can be used to measure break-up processes of neutron-rich radioactive beams hitting a secondary target.
The installation of RIBRAS has opened many new possibilities for our Pelletron laboratory, even approaching the frontiers of nuclear physics, as demonstrated by the large number of publications and participations in international conferences.
Speaker: Alinka Lépine-Szily (Instituto de Física - Universidade de São Paulo, C.P.66318, 05389-970 São Paulo, Brazil)
Convener: Prof. Piet Van Espen (University of Antwerp)
Nuclear physics and astronomical observations of compact objects
Overarching questions, such as how and where the heavy elements are synthesized and what the mechanism of stellar explosions like supernovae is, have been the subject of study of nuclear astrophysics for the last decades. These puzzles are closely connected to the behavior of matter under extreme density and temperature conditions. Our current understanding relies on simulations, micro-physics input, observations and the connections among them. In this talk, I shall discuss the influence that the nuclear physics input, e.g. weak processes and the nuclear matter Equation of State, has on the above mentioned astrophysical phenomena.
Speaker: Liliana Caballero (University of Guelph, Canada.)
Multiple techniques for cultural heritage study and the collaboration with the Brazilian museums
Scientific investigations of cultural heritage and objects of art have been routinely performed in Europe and the United States for a few decades; in Brazil we are currently, and increasingly, using atomic and nuclear methods for this purpose. Since 2003 the Group of Applied Physics with accelerators of the Institute of Physics of the University of São Paulo has worked with various methodologies for the characterization and analysis of cultural objects. The analysis methods include imaging processes and techniques for elemental and compositional characterization. Used together, these methods allow a better understanding of the materials and techniques used in the creative process and manufacturing of the objects. The imaging techniques used in the analyses are visible-light photography, infrared reflectography, ultraviolet-induced fluorescence, tangential-light imaging and digitized radiography, which are employed to examine and document the artistic and cultural heritage objects. To determine the characteristic materials of the objects present in the collections, we used analyses of the elements and chemical compounds existing in their surface layers. The techniques involve ion beam analysis such as Particle Induced X-Ray Emission, Rutherford Backscattering and, currently, Ion Luminescence. To further extend the analysis possibilities, X-ray Fluorescence and Raman spectroscopy have been used with portable equipment that can be employed in the museums themselves. The results of these analyses are providing valuable information about the manufacturing process and new information on the objects, and all this has allowed a new collaboration with different São Paulo museums such as the Pinacoteca, the Museum of Contemporary Art (MAC-USP), the Paulista Museum, the Museum of Archaeology and Ethnology (MAE-USP) and the Institute of Brazilian Studies (IEB-USP). Several works and studies are being carried out systematically in these institutions on different artworks in the museums' collections, such as easel paintings, ceramic objects, papers, photos, etc. The information obtained is allowing the formation of a database on the materials, pigments and manufacturing techniques of various artists.
Particularly in the study of easel paintings, the characterization of pigments in parallel with the imaging techniques has made it possible to reveal the artist's creative process and has determined the palette used by the artist in one particular work. The purpose of this systematic study is to produce information useful to historians, curators, conservators and restorers for the expansion of knowledge in art history, but also for determining and defining the technical conditions and preservation of the cultural heritage material. The group of applied physics for the study of historical and artistic heritage objects (NAP-FAEPAH) was formed at the University of São Paulo in collaboration with the different museums and institutions, and several works have been performed and will be presented and discussed.
Speaker: M. A. Rizzutto (Instituto de Física, Universidade de São Paulo, Brazil.)
MA-XRF imaging of a 15th century Sicilian painting by Antonello de Saliba
For the first time, a painting by de Saliba was investigated by means of the analytical technique MA-XRF (Macro X-Ray Fluorescence). The LANDIS-X, a novel mobile scanner, made it possible to obtain the elemental distribution of the pigments, allowing the palette and painting technique to be better elucidated.
Speaker: Hellen C. Santos (São Paulo University)
Convener: Nikola Poljak (Rudjer Boskovic Institute (HR))
On gravastars solutions of the Einstein-Klein-Gordon equations in the sense of Colombeau-Egorov's generalized functions
Spherically symmetric solutions of the equations of motion for a scalar field interacting with gravity (EKG equations) are presented in the Colombeau-Egorov sense. The scalar fields are confined within the interior region and the exterior fields are purely gravitational, coinciding with the Schwarzschild ones. The solution resembles the so-called "gravastars" which have been discussed in the literature. These solutions of the EKG equations open a possibility for the existence of static boson stars. The argumentation is based on defining a one-parameter, ϵ-dependent family of radial dependencies of the metric and the scalar field, which are infinitely differentiable. Afterwards, it is argued that in the limit ϵ → 0 the EKG equations are satisfied in the sense of the generalized functions. The solutions exhibit properties which qualitatively support their physical meaning. For example, close to the boundary in the interior, the scalar field energy density piles up towards the limiting surface. On the other hand, also close to the separation surface but on the outside, the known "no-hair" theorem clearly indicates that any scalar field perturbation also tends to be attracted to the boundary. The work also suggests the possibility of obtaining a regular gravastar, after using the found singular configuration as a first step in an iterative solution of the quantum EKG equations.
Speaker: Mr Duvier Suarez Fontanella (Facultad de Fisica, Universidad de la Habana)
On the effects of magnetic fields and slow rotation in white dwarfs
We use Hartle's formalism to study the effects of rotation on the structure of magnetized white dwarfs within the framework of general relativity. The inner matter is described by means of an equation of state for electrons under the action of a constant magnetic field, which breaks the SO(3) symmetry and introduces a splitting of the pressure into one component parallel and another perpendicular to the magnetic field.
Solutions correspond to typical densities of white dwarfs and values of the magnetic field below 10$^{13}$ G, considering the perpendicular and parallel pressures independently, as if associated with two different equations of state. The rotation effects obtained account for an increase of the maximum mass for both magnetized and non-magnetized stable configurations, up to about 1.5 M$_⊙$. Further effects studied include the deformation of the stars, which become oblate spheroids, and the solutions for other quantities of interest, such as the moment of inertia, quadrupole moment and eccentricity. In all cases, rotation effects are dominant with respect to those of the magnetic field.
Speaker: D. Alvear Terrero (Departamento de Física Teórica, Instituto de Cibernética Matemática y Física, Cuba.)
Survival of heavy flavored hadrons in a hot medium
Attenuation of hadrons with open or hidden heavy flavor, produced in relativistic heavy ion collisions, is described within the color-dipole approach. A charmonium propagating through dense matter can be broken up either by Debye screening of the binding potential (melting), or by color-exchange interactions with the surrounding medium (absorption). These two effects are found to have similar magnitudes and both vanish at high transverse momenta of the charmonium. Although hadrons with open heavy flavor, charm and beauty, have been predicted to have a high survival probability, they were found to be strongly suppressed by final state interactions with the created dense medium. While vacuum radiation of high-$p_T$ heavy quarks ceases on a short time scale, production of heavy flavored hadrons in a dense medium is considerably delayed due to prompt breakup in the medium. This causes a strong suppression of the heavy quark yield, in good accord with available data.
Speaker: Boris Kopeliovich (Universidad Técnica Federico Santa María, Chile.)
Convener: Oscar Diaz-Rizo (InSTEC, Cuba)
X-Ray Photoelectron Spectroscopy (XPS) of Carbon nanostructures obtained by underwater arc discharge of graphite electrodes
Carbon nanostructures, obtained by underwater arc discharge of graphite electrodes, were studied by X-Ray Photoelectron Spectroscopy (XPS). It was observed that the spectra of the samples taken from the floating part of the synthesis products, composed basically of Carbon nano-onions (CNO), present differences with respect to those obtained from the precipitate, which contains a mixture of CNOs and multi-walled Carbon Nanotubes (MWCNT). These differences are related to the presence of carbon atoms located in orbitals with different degrees of hybridization (sp$^2$-sp$^3$), which in turn is related to the varying degree of curvature of the carbon planes in the nanostructures present in the samples. The obtained results indicate that XPS can be an important element in the characterization of the products obtained by the above-mentioned method of synthesis.
Speaker: Daniel Codorniu Pujals (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Universidad de La Habana, Cuba)
Exploring the possibility of radiography in emission mode at higher energies: Improving the visualization of the internal structure of paintings
We demonstrated in previous investigations that the internal structure of paintings can be visualized with conventional radiography in transmission mode when paintings have the proper stratigraphy. Unfortunately, there are many paintings that do not result in useful images. This problem can be solved by using radiography in emission mode.
With this technique, the painting is irradiated with high-energy X-rays originating from an X-ray tube operating at 100 keV – 320 keV, while inside the painting low-energy signals such as photoelectrons or characteristic photons are generated. These signals escape from the top 10 µm of the painting and are able to illuminate the imaging plate. However, this technique also has some disadvantages. One of them is that it is not able to visualize underlying paintings. In this study, we explored the possibility of enhancing the information depth by increasing the energy of the photon source from 100 keV up to 1.3325 MeV (i.e., a $^{60}$Co source). At the same time, we also studied the impact of the energy on the contrast between pigments and on the lateral resolution. For this, we used mathematical simulation of particle transport in matter to understand the relation between the input particle (particle type, such as photon, electron or positron, and its energy), the material being irradiated (elements from which it is composed, thickness) and the output signal (generated particle types, energy, dispersion). Finally, we will show that it is possible to image paintings using a $^{60}$Co source.
Speaker: Dr Olivier Schalm (University of Antwerp & Antwerp Maritime Academy, Belgium.)
The GFNUN's Nuclear Innovation: An Innovation Edge Case from a Developing Country Originated in Basic Research
During the last two decades, the Nuclear Physics Group of the National University (GFNUN) has been leading basic nuclear research, generating an innovation edge based on its own basic research, training researchers, public servants and society, and building technology transfer. The Colombian nuclear context presents a weakness in strategic nuclear programs and policies based on technical knowledge, low technical training of decision makers and regulatory supervisors, and low coordination and unassumed responsibility among different governmental dependencies [1]. Colombia is a developing country whose economy is highly dependent on extractive activities, commodities, agro-industry and tourism. GFNUN is leading three main techniques to be applied not only in the local context but also worldwide: an innovative methodology to evaluate radioactive materials, and nuclear techniques for corrosion detection and for explosives detection for security, defense and humanitarian demining purposes. An interesting example of the technological evolution is the fact that in the sixth year of developing the demining detection technique, GFNUN found derived applications based on it: corrosion diagnosis with gamma backscattering and explosives detection for security and defense. Now, the corrosion detection and explosives detection technologies are starting their third and first year of development, respectively, and the results are promising, but specialized strategies for the technological transfer to the market, potential investors, and a trustworthy royalties system for the researchers are needed, especially in a developing economy context. Isabel Martinez, AIP Conference Proceedings 1753, 090003 (2016).
Speaker: Maria Isabel Martinez Solarte (Universidad Nacional de Colombia)
Convener: Leandro Gasques (University of São Paulo, Brazil.)
Elastic scattering, inelastic excitation and neutron transfer for $^{7}$Li+$^{120}$Sn at energies around the Coulomb barrier
Experimental angular distributions for the $^{7}$Li + $^{120}$Sn elastic and inelastic (projectile and target excitations) scattering, and for the neutron stripping reaction, have been obtained at $E_{{\rm LAB}}=$ 20, 22, 24 and 26 MeV, covering an energy range around the Coulomb barrier ($V_{B}^{{\rm(LAB)}}\approx21.4$ MeV). Coupled channel and coupled reaction channel calculations were performed and both describe the experimental data sets satisfactorily. The inelastic excitation of the $\frac{1}{2}^{-}$ state of $^{7}$Li (using a rotational model), as well as the projectile coupling to the continuum ($\alpha$ plus a triton), play a fundamental role in the proper description of the elastic, inelastic and transfer channels. Couplings to the one-neutron stripping channel do not significantly affect the theoretical elastic scattering angular distributions. The spectroscopic amplitudes of the transfer channel were obtained through a shell model calculation. The theoretical angular distributions for the one-neutron stripping reaction agreed with the experimental data.
Speaker: Dr V.A.B. Zagatto (Instituto de Física da Universidade Federal Fluminense, Niterói, RJ, Brazil.)
Determination of the $^6$He nuclear radius from the total reaction cross section
Nuclear reactions induced by neutron-rich radioactive beams have opened new possibilities for studying nuclei far from the stability line [1]. The increase of the reaction cross sections for neutron-rich nuclei compared to stable nuclei can be attributed to their larger nuclear radii. In order to observe this effect, a systematic investigation of total reaction cross sections from elastic scattering measurements using a $^9$Be target and tightly bound, weakly bound and exotic projectiles has been performed [2]. In particular, the exotic $^6$He+$^9$Be system shows large values of the reaction cross section compared to reactions induced by stable weakly bound projectiles [1,2]. For this light system, the Coulomb interaction is smaller than the nuclear interaction. Thus, the Coulomb breakup of the projectile is expected to have less influence. Another study was carried out to verify the dependence of the observed enhancement as a function of the target mass. The analysis was extended to $^6$He scattering on light, medium and heavy mass targets. The results showed a weak but noticeable enhancement in the total reaction cross section for the $^6$He+$^9$Be system in comparison to $^6$He scattered on heavy targets [2]. From the total reaction cross section values, the $^6$He nuclear interaction radius is extracted using a new method employing a simple geometric relation [3]. A comparison with the radius of $^6$He obtained at higher energies is presented.
E. Benjamin et al., Phys. Lett. B 647, 30 (2007). K.C.C. Pires et al., Phys. Rev. C90, 027605 (2014). A.S. Freitas et al., Braz. J. Phys. 46, 120-128 (2016).
Speaker: Dr K.C.C. Pires (Departamento de Física Nuclear, Instituto de Física, Universidade de São Paulo)
Clustering structure and possible effects in reaction dynamics forming the $^{46}$Ti* nuclear system
Heavy ion nuclear reaction studies are an important tool to observe and disentangle different and competing mechanisms, which may arise in the different energy regimes.
In particular, at relatively low bombarding energy the comparison between pre-equilibrium and thermal emission of light charged particles from hot nuclei is interesting. Indeed, the nuclear structure of the interacting partners and the reaction dynamics may be strongly correlated, especially at energies close to the Coulomb barrier, and this correlation emerges when some nucleons or clusters of nucleons are emitted or captured [1]. In particular, much attention has been devoted in recent years to the possible observation of cluster structure effects in the competing nuclear reaction mechanisms [2], especially when fast processes are involved. For this purpose, the four reactions $^{16}$O+$^{30}$Si at 111 MeV, $^{16}$O+$^{30}$Si at 128 MeV, $^{18}$O+$^{28}$Si at 126 MeV and $^{19}$F+$^{27}$Al at 133 MeV have been measured, to study the onset of pre-equilibrium in an energy range where, for central collisions, complete fusion is expected to be the favored mode. Experimental data were collected at the Legnaro National Laboratories using the GARFIELD+RCo array, fully equipped with digital electronics [3]. Following the particle identification and energy calibration procedures, the complete analysis has been performed on an event-by-event basis. Experimental data are compared to theoretical predictions: in particular, dynamical models based on either the Stochastic Mean Field approach (Twingo [5]) or Anti-symmetrized Molecular Dynamics (AMD [4]), as well as a fully statistical model (Gemini++ [6]), have been considered. Events generated with these codes are filtered through a software replica of the setup, in order to take into consideration any possible distortions of the distributions due to the finite size of the apparatus. Differences between the experimental data and the predicted data, which are based on very different physical assumptions, can evidence possible entrance channel effects, which may be due to the cluster nature of the colliding partners. After a general introduction to the experimental campaign, this contribution will focus on the preliminary results obtained so far.
P.E. Hodgson, E. Běták, Phys. Rep. 374, 1-89 (2003). L. Morelli et al., Journ. of Phys. G 41, 075107 (2014); L. Morelli et al., Journ. of Phys. G 41, 075108 (2014); D. Fabris et al., in PoS (X LASNPA), 2013, p. 061; V.L. Kravchuk et al., EPJ Web of Conferences 2, 10006 (2010); O. V. Fotina et al., Int. Journ. Mod. Phys. E 19, 1134 (2010). F. Gramegna et al., Proceedings of IEEE Nucl. Symposium, 2004, Roma, Italy, 0-7803-8701-5/04; M. Bruno et al., Eur. Phys. Jour. A 49, 128 (2013). A. Ono, Phys. Rev. C59, 853 (1999). M. Colonna et al., Nucl. Phys. A 642, 449 (1998). R. J. Charity, Phys. Rev. C82, 014610 (2010).
Speaker: Dr F. Gramegna (INFN Laboratori Nazionali di Legnaro, Italy.)
Poster Session - NINST Room "Bens Arrate"
Conveners: Fabiana Gramegna (INFN Laboratori Nazionali di Legnaro), Guido Martin (CEADEN, Cuba)
Monte Carlo simulation of radiation transport in the study of hybrid Timepix detectors based on GaAs:Cr
Chromium-compensated gallium arsenide is attaining a relevant position among the materials devoted to the development and fabrication of radiation detectors. This material shows high resistance to radiation damage, high effective Z, relatively low production cost, the possibility to grow large-area films, etc. Some results from the study of Timepix hybrid detectors based on GaAs:Cr, obtained using Monte Carlo modeling of radiation transport, are presented in this contribution.
The MCNPX, GEANT4, SRIM and MCCM code systems were used for this purpose. Depth profiles of the energy deposited by the incident radiation within the sensor active volume are included. Similarly, the shapes and dimensions of the charge-carrier clouds generated by incident photons of different energies, in different geometrical conditions, are also obtained and presented. The ranges in the GaAs:Cr material of $^{22}$Ne ions at two different energies, and the contributions from each energy-loss channel, were also determined for irradiations at the U-400M cyclotron. In addition, the effective displacement cross-sections and the number of displacements per atom produced for each atomic species are presented for the device irradiated with electrons and photons at different energies.
Speaker: Antonio Leyva Fabelo (Joint Institute for Nuclear Research (JINR), Russian Federation & Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Cuba.)
Neutronic anisotropy around a radiotherapy treatment field by passive detectors
Giant-dipole-resonance photoneutrons (GRN) are produced by a LINAC 2300 CX operating in the range of 15 MV. During radiotherapy treatment an unwanted neutron dose is delivered to patients. The Nuclear Track Methodology (NTM) is used to establish the thermal and epithermal photoneutron field during radiotherapy treatments. The well-tested poly-allyl diglycol carbonate detectors (PADC, type CR-39™), on which a thin boron film is deposited to convert neutrons into charged particles by the $^{10}$B(n,α) reaction, are employed. The passive device registers charged particles with good efficiency as damaged volumes, which become visible after chemical etching (6N NaOH, 70 °C) under a transmission light microscope (10 x 40). Tracks of the order of micrometers are visible and their diameters are measured to determine track densities and histograms. These provide information on the neutron intensity and energy groups. Enhancement effects on the absorbed dose are observed due to both scattered photoneutrons and (γ,n) reactions, and a relatively good response is observed for the mixed radiation field. To overcome the difficulty of measuring the photoneutron spectrum produced from the head of the LINAC in the presence of very intense gamma irradiation, this methodology can be used to establish the neutron energies; the neutron spectrum is corroborated with MCNPX, establishing the additional dose attributed to the neutron component.
Speaker: Dr Segundo Agustin Martinez Ovalle (Universidad Pedagógica y Tecnológica de Colombia)
Calculation of kinetic parameters of the RECH–1 research nuclear reactor using MCNP and Serpent 2
In this work we calculated two kinetic parameters of the RECH-$1$ research nuclear reactor, the effective delayed neutron fraction, $\beta_{eff}$, and the mean neutron generation time, $\Lambda$, using the Monte Carlo codes MCNP6 [1] and Serpent 2 [2] and the ENDF/B-VII.1 neutron cross section library. To calculate $\beta_{eff}$ we used the method proposed by Meulekamp and van der Marck [3]. In this method the effective delayed neutron fraction is estimated as $$ \beta_{eff} \sim 1 - \frac{k_p}{k}, $$ where $k_p$ is the prompt effective neutron multiplication factor and $k$ is the total effective neutron multiplication factor. To calculate the mean neutron generation time we used the pulsed neutron source method [4]. In this technique a burst of neutrons is injected into a subcritical system and then the decay of the neutron population is observed as a function of time.
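As a minimal numerical illustration of the Meulekamp–van der Marck estimate quoted above (the multiplication factors used here are hypothetical and are not RECH-1 results):

```python
# Sketch of the estimate beta_eff ~ 1 - k_p/k, with first-order propagation
# of the Monte Carlo statistical uncertainties.  The k values are assumed.
import math

k_total,  sig_k  = 1.00250, 0.00010   # total keff (prompt + delayed), assumed
k_prompt, sig_kp = 0.99490, 0.00010   # prompt-only keff, assumed

beta_eff = 1.0 - k_prompt / k_total
# uncertainty of the ratio k_p / k
sig_beta = (k_prompt / k_total) * math.sqrt(
    (sig_kp / k_prompt) ** 2 + (sig_k / k_total) ** 2
)
print(f"beta_eff = {beta_eff * 1e5:.0f} +/- {sig_beta * 1e5:.0f} pcm")
```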
After the system thermalization and the decay of higher flux modes, the fundamental-mode decay constant $\alpha_0$ can be measured using the point kinetics approximation. The relation between $\alpha_0$ and the reactivity, $\rho$, is obtained from the point kinetics equations: $$ \alpha_0 = \frac{\rho - \beta_{eff}}{\Lambda}. $$ These calculations will be contrasted with the results of a reactor operation experimental campaign next year.
T. Goorley et al., Initial MCNP6 Release Overview, Nuclear Technology 180, 298-315 (2012). J. Leppanen et al., The Serpent Monte Carlo code: Status, development and applications in 2013, Ann. Nucl. Energy 82, 142-150 (2015). R.K. Meulekamp, S.C. van der Marck, Calculating the effective delayed neutron fraction with Monte Carlo, Nucl. Sci. Eng. 152, 142-148 (2006). B.E. Simmons, J.S. King, A pulsed neutron technique for reactivity determination, Nuclear Science and Engineering 3, 595-608 (1958).
Thermomechanical behavior of the TRISO fuel under deep burnup and high temperature in the VHTR
The Generation IV Very High Temperature Reactor (VHTR) is envisioned as an outstanding prototype among the six nuclear systems proposed in Generation IV. The characteristics that highlight this reactor are its low electricity generation costs, short construction periods, proliferation resistance and physical protection. Nevertheless, it presents essential challenges to be deployed as a sustainable energy source, taking into consideration the high temperatures (1000 °C in normal operation and up to 1800 °C in accident conditions) and burnup levels (150–200 GWd/tU) achievable in these reactors. One of these key challenges is nuclear safety, which mainly relies on the quality and integrity of the coated fuel particles (TRISO). In this investigation the thermomechanical behavior of the TRISO fuel is studied as a function of the variation of different parameters, taking into consideration the deep burnup levels planned to be reached in the VHTR. The studies performed in this investigation included the evaluation of key parameters in the TRISO such as: release of fission gases and CO, gas pressure, temperature distributions, kernel migration, maximum stress values, and failure probabilities. In order to achieve this goal, coupled computational modeling is used, combining analytical methods with Monte Carlo and CFD codes such as MCNPX version 2.6e and ANSYS version 14.
Speaker: Dr Daniel Evelio Milian Lorenzo (The Higher Institute of Technologies and Applied Sciences (InSTEC), Cuba.)
Analysis of the radiation effects on some properties of GaAs:Cr and Si sensors exposed to a 22 MeV electron beam
Nowadays, experiments related to High Energy Physics and other fields demand the use of detectors with greater radiation resistance, and the novel material GaAs:Cr has demonstrated excellent radiation hardness compared with other semiconductors. On the basis of the evidence obtained in the JINR experiment with the use of the 22 MeV electron beam generated by the LINAC-800 accelerator, an analysis of the electron radiation effects on GaAs:Cr and Si detectors is presented. The measured I-V characteristics showed a dark current increase with dose, and an asymmetry between the two branches of the behavior for all detectors. From analysis of the MIP spectra and the CCE dose-dependence measurements, a deterioration of the detectors' charge collection capacity with increasing dose was found, although the behaviors are somewhat different according to the detector type.
The detailed explanation of these effects from the microscopic point of view appears in the text; they are generally linked to the generation of atomic displacements, vacancies and other radiation defects, which modify the energy-level structure of the target material. These changes affect the lifetime and concentration of the charge carriers, and other characteristics of the target material.
Speaker: Ms Arianna Grisel Torres Ramos (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC))
Characterization of a Multichannel Analyzer Implemented with an FPGA Board
In this work the characterization of a multichannel analyzer conceived at CEADEN and developed on the basis of an FPGA is presented. The system provides 2048 channels, at most 32 000 counts per channel, a sampling time of 4.5 ns and a dead time of 16 ns. A differential non-linearity (DNL) of +3.62 ± 1.56 and an integral non-linearity (INL) of +0.29 ± 0.05 were obtained; in addition, with the increase of the counting rate the channel of the peak centroid moves on average 64 channels towards the left of the spectrum, while the FWHM of the peaks remains practically constant between 7 and 8 channels. The spectra obtained for radiation sources are in correspondence with the characteristics of each source. In summary, these features ensure that this multichannel analyzer can be used in nuclear spectrometry.
Speaker: Vladimir Rosa Febles (InSTEC)
Computational modeling of aqueous homogeneous reactors for medical isotope production
Nowadays, the use of Aqueous Homogeneous Reactors (AHR) for the production of medical isotopes, mainly 99Mo, is potentially advantageous because of their low cost, small critical mass, inherent passive safety, and simplified fuel handling, processing and purification characteristics. However, it faces some challenges to be successfully deployed for the production of medical isotopes. This paper summarizes the computational modeling efforts carried out by our research group in order to solve some of the identified challenges. The studies carried out included the neutronic and thermal-hydraulic modeling of a 75 kWth AHR based on the LEU configuration of the ARGUS reactor. In addition, benchmarking exercises are presented and discussed that include neutronic and thermal-hydraulic results for two solution reactors, the SUPO and ARGUS reactors. The computational platform utilized for the neutronic and thermal-hydraulic studies included the MCNPX version 2.6e and ANSYS CFX 14 computational codes and two computational clusters in Cuba and Brazil, the InSTEC-IRL cluster and the UFPE-DEN-GER cluster. The neutronic studies included the determination of parameters such as the critical height and the production of 99Mo and other medical isotopes. The thermal-hydraulic studies were focused on demonstrating that sufficient cooling capacity exists to prevent fuel overheating. Our group's studies and the results obtained contribute to demonstrating the feasibility of using AHRs for the production of medical isotopes; however, additional studies are necessary to confirm these results and to contribute to the development and demonstration of their technical, safety, and economic viability.
Speaker: Dr Daniel E. Milian Lorenzo (InSTEC)
Design of a preamplifier card for the photomultiplier tubes of a Gamma Camera
The service provided by Gamma Cameras (GC) in nuclear medicine departments fails because of their breakdown, generally due to the associated electronics and not to the physical detection components.
For this reason, it was decided to develop an electronic system that allows the recovery and optimization of disused GCs, starting with the design of the preamplifier for each photomultiplier tube (PMT). The circuit was designed and simulated, the list of components necessary for the construction of the preamplifier was generated, and the printed circuit board for its assembly was designed. In the simulations the preamplifier worked in linear mode, which means that the amplitude of the output signal is proportional to the amount of charge delivered by the detector. This card allows an automatic adjustment of the signals of the PMTs, as modern GCs do. Besides, the circuit was designed and simulated for 37 and 75 PMTs, and the printed circuit board was designed for both cases.
Speaker: Jorge Luis Domínguez Martínez (InSTEC)
Energy calibration of hybrid GaAs:Cr-based Timepix detector with alpha particles
The advanced GaAs:Cr material for radiation detection is in the scope of many scientific and technological institutions around the world, as a consequence of its proven superior properties and economic advantages. Experiments made at the JINR Dzhelepov Laboratory of Nuclear Problems for the energy calibration of a hybrid GaAs:Cr-based Timepix detector with alpha particles reaffirm that this device is able to register these particles in the energy range from 3140 keV to 7687 keV. Mathematical simulation was used to calculate the transmitted energy, making possible the experimental calibration with the use of Mylar as an absorber. By calibrating the detector with characteristic X-rays of some target materials and using a two-step fitting procedure, the relationship between the photon energies and the TOT counts registered by the detector was determined. The energy calibration with alpha particles was performed according to the linear function y = 362.08 + 2.41 x, with R$^2$ = 0.99, and verified with the measurement of the $^{218}$Po line of radon in air.
Speaker: Mr Dayron Ramos López (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Cuba.)
Gamma alarm for radiological security
This paper describes the development of an instrument for Radiological Monitoring and Gamma Alarm (GAMAL01). It monitors the burst of radiation produced during a radiological accident and triggers an alarm for evacuation in case the radiation exceeds an established threshold. The instrument consists of two sections, analog and digital. Key words: Geiger Müller counters, data acquisition systems, gamma radiation, radiological protection, dosimetry.
Speaker: René Toledo Acosta (CEADEN)
MCP-PMT timing at low light intensities with a DRS4 Evaluation Board
Positron emission tomography (PET) is one of the most important diagnostic tools in medicine, allowing three-dimensional imaging of functional processes in the body. It is based on the detection of two gamma rays with an energy of 511 keV originating from the point of annihilation of the positron emitted by a radio-labeled agent. By measuring the difference of the arrival times of both annihilation photons it is possible to localize the tracer inside the body. The gamma rays are normally detected by a scintillation detector, whose timing accuracy is limited by the photomultiplier and the scintillator. By replacing the photosensor with a microchannel plate PMT (MCP-PMT) and the scintillator with a Cherenkov radiator, it is possible to localize the interaction position to the cm level.
In a pioneering experimental study with Cherenkov detectors using $PbF_2$ crystals and microchannel plate photomultiplier tubes (MCP-PMTs), a time resolution better than 100 ps was achieved. In this work a DRS4 domino ring sampler chip was used to read out single-photon output signals from two different MCP-PMTs (Hamamatsu R3809 and Burle 85001) with a sampling rate of 5$\times$10$^9$ samples/s. The digitized waveforms were analyzed and a comparison between the timing responses of the two detectors was made. The best time resolutions achieved were 161 and 224 ps FWHM for the Hamamatsu and the Burle MCP-PMT, respectively.
Speaker: Mrs Dania Consuegra Rodríguez (Jožef Stefan Institute, Ljubljana, Slovenia.)
Properties of GaAs:Cr-based pixel detectors
High-resistivity gallium arsenide compensated by chromium has demonstrated good suitability as a sensor material for hybrid pixel detectors used in X-ray imaging systems with photon energies up to 60 keV. The material is available with thicknesses up to 1 mm and, thanks to its Z number and the fully active volume of the sensor, high absorption efficiency in this energy region is provided. Although GaAs:Cr-based detectors are mostly designed for X-ray applications, they can also be used for charged particle track registration, as will be shown. In this work the results of tests of GaAs:Cr-based Timepix detectors with various particle sources are reported. The energy and spatial resolution and the mu-tau distribution over the sensor area have been determined. By scanning the detector with a pencil photon beam generated by a synchrotron facility, the geometrical mapping of the pixel sensitivity is obtained as well as the energy resolution of a single pixel. The long-term stability of the detector has been evaluated based on measurements performed over one year. It is well known that the main performance limitation of thick GaAs:Cr-based detectors read out with relatively small pixels is caused by the charge sharing effect. Nevertheless, by optimizing the bias voltage it was possible to achieve an FWHM of 2.5 keV at 25 keV in single photon counting mode. The radiation hardness of GaAs:Cr sensors was also investigated by means of irradiation with 20 MeV electrons, and some results will be presented.
Speaker: Mr Petr Smolyanskiy (Joint Institute for Nuclear Research (JINR))
Spectral CT with a Timepix detector and a GaAs:Cr sensor: material decomposition
In recent years, CT has proven itself as a method of nondestructive research in biology, geology, industry, and other fields. However, until recently, the detectors used in CT recorded only the intensity of the radiation, losing information about the energy. The method of dual-energy CT partially corrects this deficiency. With the advent of hybrid matrix detectors with single photon counting, it became possible to take into account the energy of the radiation by comparison with one or more energy thresholds. The ability to use high-Z semiconductors as a sensor makes it possible to increase the efficiency of the detector. This work is devoted to the development of a method of spectral CT with a Timepix detector and a GaAs:Cr sensor. The possibility of material decomposition based on the dependence of the linear attenuation coefficient on energy is demonstrated.
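As an illustration of the decomposition idea (the basis materials, attenuation coefficients and thicknesses below are hypothetical and do not reproduce the authors' algorithm or data), with two energy bins the attenuation measured along a ray can be inverted for two basis-material thicknesses:

```python
# Two-energy material decomposition sketch for photon-counting spectral CT.
# Given measured attenuation ln(I0/I) in two energy bins and known linear
# attenuation coefficients of two basis materials in those bins, the material
# thicknesses follow from a 2x2 linear system.  All numbers are illustrative.
import numpy as np

# Assumed linear attenuation coefficients (1/cm) of the basis materials
# in the two threshold-defined energy bins (hypothetical values).
mu = np.array([[0.40, 4.00],    # bin 1 (low E):  [water-like, contrast-like]
               [0.25, 1.20]])   # bin 2 (high E): [water-like, contrast-like]

true_t = np.array([10.0, 0.05])       # cm of each material along the ray
attenuation = mu @ true_t             # "measured" ln(I0/I) per bin

recovered_t = np.linalg.solve(mu, attenuation)
print("recovered thicknesses (cm):", recovered_t)   # ~[10.0, 0.05]
```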
Speaker: Mr Danila Kozhevnikov (Joint Institute for Nuclear Research (JINR))
Study of the charge sharing effect and the energy resolution in a hybrid GaAs:Cr-based Timepix detector
Among the most modern ionizing radiation detectors, the hybrid GaAs:Cr-based Timepix stands out as one of the most competitive due to its characteristics, such as its high Z and its strong resistance to radiation damage. In addition to their use in high energy physics research, these detectors have been effectively employed in medical visualization, space technologies, geological prospecting, and other advanced fields. The target of this work is a 900 μm GaAs:Cr detector with Timepix read-out technology. Some detector characteristics were measured and studied for three experimental conditions, using the X-rays generated by a synchrotron and by an X-ray tube provided with different materials for obtaining the corresponding fluorescence photons. A composite function was used to decompose the differential spectra into the most important contributions involved. As an additional tool in the investigation, mathematical modeling of the movement of the radiation-generated charge carriers within the detector active volume was used. The results of the charge-sharing studies showed a noticeable prevalence of this phenomenon in the detector, with its contribution changing according to the characteristics of the experiment. The detector was calibrated for all the planned experiments and the energy resolution of the system was calculated. From the analysis of all the results obtained and their comparison with data previously reported in the literature, it was confirmed that the detector has a marked charge-sharing effect between neighboring pixels, its performance being more affected as the energy of the incident photons grows.
Speaker: Lisan David Cabrera Gonzalez (University of Pinar del Rio, Cuba.)
Thermal-hydraulic study using CFD codes of new nuclear fuel alternatives (UN and UC) for the HPLWR
The High Performance Light Water Reactor (HPLWR) is one of the most promising concepts among the Fourth-Generation reactors. Uranium mononitride (UN) and uranium monocarbide (UC), as nuclear fuel alternatives for the HPLWR, offer the advantage of high thermal conductivity as compared to UO$_2$. The use of coatings can solve the problems posed by the reactive nature of UN and UC, which arise when these fuels are used in light water thermal reactors. In this paper, a thermal-hydraulic study of the HPLWR fuel assembly, for UO$_2$, UN and UC, using Computational Fluid Dynamics (CFD) codes was carried out. The use of UN coated with ZrC layers and UC coated with TiN layers, and the changes of the fuel thermal conductivity with the porosity, were also studied. The radial and axial temperature distributions in the fuels were obtained for all the cases. The maximum temperature values obtained using UN and UC (coated and uncoated) were lower than those obtained with UO$_2$. The fuel porosity changes have little influence on the maximum fuel temperature when using UN and UC, while with UO$_2$ the maximum temperature increases by 511 K with a 0.2 % porosity increase.
Speaker: Landy Castro (Instituto Superior de Tecnologías y Ciencias Aplicadas, Cuba.)
Convener: Alexis Diaz-Torres (University of Surrey)
Double-Folding Model Analysis of Fusion Reactions
Fusion reactions play a critical role in stellar evolution and nucleosynthesis [1].
However, the important fusion reactions still carry large uncertainties in the Gamow region due to the low cross sections and the limited theoretical understanding of the mechanisms related to the fluctuations that occur in the cross section [2]. Recently, we investigated the $^{12}$C+$^{12}$C system by making use of the so-called multi-channel folding model [3]. In our formulation, the nucleon-nucleon interaction is described by the DDM3Y density-dependent potential [4], and the approach allows the inclusion of elastic and inelastic channels and of the fusion cross section. Therefore, from the coupled-channel system, elastic and fusion cross sections are simultaneously calculated. With the explicit inclusion of inelastic channels, the imaginary part of the optical potential is only a short-range absorption contribution [5]. The $^{12}$C+$^{12}$C results show that the inclusion of inelastic channels and the presence of the Hoyle state improve the agreement with experimental data [3]. Our model has been applied to the $^{16}$O+$^{16}$O and $^{12}$C+$^{16}$O systems, but the inclusion of the inelastic channels did not show a strong contribution to the determination of the astrophysical factor in the region of astrophysical interest.
M. Wiescher et al., Annu. Rev. Astron. Astrophys. 50, 165 (2012). H. Esbensen et al., Phys. Rev. C 84, 064613 (2011). M. Assunção, P. Descouvemont, Phys. Lett. B 723, 355 (2013). G. R. Satchler, W. G. Love, Phys. Rep. 55C, 183 (1979). A. Kobos et al., Nucl. Phys. A 384, 65 (1982).
Speaker: Dr Marlete Assuncao (Universidade Federal de Sao Paulo, Brazil.)
Possible effects of clustering structure in the competition between fast emission processes and compound nucleus decay
In the last decades, renewed attention to clustering in nuclei has emerged due to the study of weakly bound nuclei at the drip lines [1]. Clusters in nuclear systems can be related to their dynamical formation or their structural presence (pre-formation) in nuclei. While for light nuclei several links between cluster emission and nuclear structure and dynamics have been pointed out [1,2], this is less obvious when moving towards heavier systems, where the determination of pre-formed clusters within nuclear matter is more complicated and there is still a lack of experimental evidence of such structure effects. An interesting way to investigate the structural properties of medium mass systems is to study, in central collisions, the competition between evaporation and pre-equilibrium light particle emission as a function of entrance channel parameters [2]. An experimental campaign has started at the Legnaro National Laboratories using the GARFIELD + RCo multi-detector system [3] with the aim of confirming alpha clusterization in nuclei by comparing pre-equilibrium emission, in terms of energy spectra and multiplicities, for different entrance channel parameters such as beam velocity, mass asymmetry and structure of the reacting partners. In particular, the two systems $^{16}$O + $^{65}$Cu and $^{19}$F + $^{62}$Ni, leading to the same compound system $^{81}$Rb*, have been studied at the same beam velocity (16 AMeV). Angular distributions and the light charged particle emission spectra in coincidence with evaporation residues have been measured up to very forward angles. The experimental data have first been compared with the predictions of the Moscow Pre-equilibrium Model (MPM) [4] and then with the statistical model GEMINI++ [5].
A comparison with the dynamical models SMF [6] and AMD [7] has also been made. Recent results of the data analysis will be presented. The analysis is still in progress.
Phys. Rep. 432, 43-113 (2006) and references therein. Phys. Rep. 374 (2003) 1-89. EPJA 49 (2013) 12. Int. Jour. Mod. Phys. E 19 (2010) 1134. Phys. Rev. C 82 (2010) 014610. Nucl. Phys. A 642 (1998) 449. Phys. Rev. C 59 (1999) 853.
Speaker: D. Fabris (INFN sezione di Padova)
Conveners: Juan Estevez (CEADEN, Cuba), Oscar Diaz-Rizo (InSTEC, Cuba), Prof. Piet Van Espen (University of Antwerp)
Assessment of radiological monitoring network well location through back trajectories
Air quality models are an important tool to assess the responsibility for existing air pollution levels through the evaluation of source-receptor relationships. Back trajectories have frequently been used to identify potential source areas of air pollutants and to clarify their respective contribution at receptor sites. The CALMET/CALPUFF modelling system is well known, and several validation tests have been performed to date. This system has a Back Trajectory Analysis Module that creates plots of back trajectories corresponding to user-specified air quality events and locations. Each path is initiated for a particular location and starting time. Then the path of an air parcel that impacts that location at that time is mapped back in time to identify potential transport patterns and source regions associated with a given air quality event. In this work, the Back Trajectory Analysis Module of the CALMET/CALPUFF system is applied in order to assess the siting of a hypothetical radiological monitoring network around the location of the major potential source of radioactive pollutants, to guarantee the detection of a possible accidental release. In this case study Havana Airport was considered the source. Eight radiological monitoring stations were placed 20 km from the source, arranged along eight directions (N, NE, E, SE, S, SW, W, NW). The study was conducted during a typical dry season. As a result, it was obtained that detection is unlikely at two stations (N and S) due to the distance between source and receptor. The remoteness of the source causes three stations (NE, E and SE) to remain blind during the complete season. Only three stations (NW, W and SW) will probably detect radioactivity during the season selected. The nearness of the radiological monitoring station to the potential source of radioactive pollutants should be taken into account to guarantee the detection of a possible accidental release regardless of the period studied.
Speaker: Prof. Anel Hernández-Garces (CUJAE)
New stage on the neutronics and thermal hydraulics analysis of a Small Modular Reactor core
Building on the success of the large nuclear plants, SMRs offer the potential to expand the use of clean, reliable nuclear energy to a broad range of customers and energy applications. In this work, a model to describe the neutronic parameters of an SMR core that can produce up to 530 MW of thermal power was developed. Using this model, several configurations of fuel enrichment were studied in order to obtain the most homogeneous distributions of the power inside the fuel assemblies during the whole core lifetime. Temperature reactivity coefficients and the mass variation of the principal isotopes for the optimized core were calculated.
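For orientation, an isothermal temperature reactivity coefficient can be estimated from two criticality calculations at different temperatures; the $k_{eff}$ values in the sketch below are hypothetical and unrelated to the SMR model discussed here.

```python
# Estimating an isothermal temperature reactivity coefficient from two
# criticality calculations at different temperatures (hypothetical keff values).
k_cold, T_cold = 1.03150, 300.0   # keff at 300 K (assumed)
k_hot,  T_hot  = 1.02910, 600.0   # keff at 600 K (assumed)

# reactivity difference: rho_hot - rho_cold = (k_hot - k_cold) / (k_hot * k_cold)
d_rho = (k_hot - k_cold) / (k_hot * k_cold)
alpha_T = d_rho / (T_hot - T_cold)            # per kelvin
print(f"alpha_T = {alpha_T * 1e5:.2f} pcm/K")  # negative -> self-stabilizing feedback
```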
Finally, thermal-hydraulic studies of the highest-temperature section of the core were performed to obtain the temperature distributions in the fuel and in the moderator, and the radial temperature distribution inside the hottest pin of the fuel assembly.
Speaker: Mariana Cecilia Betancourt (InSTEC)
Neutronics Analysis of a Small PWR using TRISO fuel particles with MCNPX
Tri-structural Isotropic (TRISO) based fuel with a SiC matrix can be used in Light Water Reactors (LWRs) to enhance their safety features in extreme situations. Besides the simplification of the design, the utilization of TRISO fuel particles in PWR technology enhances its integrity by confining the radioactive fission products within the fuel itself during reactor operation. In this work, the preliminary conceptual design of a small PWR core using TRISO fuel was carried out. The neutronic simulation of the core was carried out using the MCNPX program, version 2.6e. This reactor produces 25 MW of thermal power with approximately 4 effective full-power years. A multifactorial statistical study of the influence of three parameters on the effective multiplication factor was carried out: particle size, fuel enrichment and packing fraction. The core was optimized in order to obtain in the first load the excess reactivity necessary to reach the cycle duration. The power distributions in the first load and at the end of the cycle were obtained, with a total maximum power peaking factor of 2.55. The integral effectiveness of the absorber rods and the energy spectrum in the most important cycle states were calculated.
Speaker: Annie Ortiz Puentes (InSTEC)
Chemical and mineralogical characterization of the cupola slag generated in the ``9 de Abril" smelter of Sagua la Grande
Slags are final by-products of the iron and steelmaking process. The general tendency in the foundry workshops in Cuba is to consider slag as a waste material and deposit it in the yards of the workshops or in municipal landfills, affecting the environment. Its composition and characteristics depend on, and vary widely according to, the raw material used, the technology employed and the cooling rate once extracted from the furnace. The chemical and mineralogical characterization of the slag generated in the cupola furnace of the ``9 de Abril" smelter of Sagua la Grande has been performed using techniques such as EDX, XRD and FTIR.
Speaker: Leidys Laura Pérez González (Universidad Central "Marta Abreu" de Las Villas, Cuba.)
Chemical composition of PM10 collected indoors at workshop areas in a factory located in Santa Clara
The objective of this work was to evaluate the PM10 exposure levels to which workers are exposed during the working day. Sampling was carried out in two workplace areas of a factory located in Santa Clara city: the iron casting workshop and the unmolding workshop. The factory utilizes as raw materials iron scrap, ferroalloys, coke, and materials from its own process, such as pig iron and return sand, which are important sources of the pollutants present in the particulate matter. The concentrations of PM10 in the air were determined by gravimetric analysis. The samples were analyzed by Energy Dispersive X-ray Fluorescence (EDXRF) and Ion Chromatography (IC) to determine the elemental composition and some ions of interest. X-ray Diffraction (XRD) analysis was applied to selected samples in order to identify possible phases of compounds commonly found in this type of industry.
The results were compared with those reported in the Cuban Standard NC 872:2011, which regulates harmful substances in the air of the working zone and evaluates occupational exposure.
Speaker: Yennier Cruz Bermúdez (Universidad Central "Marta Abreu" de Las Villas, Cuba.)
Determination of the soil water content using the Neutron Backscattering Technique for explosive device location
One of the nuclear techniques being investigated by different groups in the field of explosives detection and demining is the thermal Neutron Backscattering Technique (NBT). NBT is based on the fact that the buried target is hydrogen-rich; therefore, if it is in a medium with a different hydrogen content and is exposed to a fast neutron source, the number of backscattered thermal neutrons produced by the moderation process will give a signal from which the presence of the hydrogen-rich target can be inferred. NBT is used to locate buried hydrogen-rich objects using a fast neutron source and two $^3$He neutron detectors. Special problems need to be understood, and it is necessary to study the advantages, disadvantages and limits of the technique in the Colombian case. One of the most important issues that has to be investigated is the soil moisture. The backscattered neutron intensity was related to the soil water content for farming soil, with and without hydrogen-rich objects in the soil. The experimental results are presented with the purpose of comparing the soil water content as measured by NBT and by the gravimetric method. The variation of the NBT intensity signal with both the presence of the object and the soil water content, and the best performance in the data analysis, are discussed.
Speaker: D Bautista-Sánchez (Universidad Nacional de Colombia)
Oil samples analysis using X-ray fluorescence
In this work, several oil samples from different Brazilian basins were investigated using the Energy Dispersive X-Ray Fluorescence (ED-XRF) spectroscopy technique, aiming to obtain qualitative information about their chemical composition [1,2]. The set of analyzed samples was composed of twelve oil samples from five distinct oil fields belonging to the Campos Basin of Espírito Santo and to the Santos Basin, Brazil. The samples were trickled on a thick Kym foil (99.9%) 2 cm in diameter. The ED-XRF measurements were carried out using a portable device (Amptek XR-100SDD model) [3,4] and the spectrum analysis was performed using the WinQxas software. Besides, to enhance the detection of some elements at lower concentrations, tungsten and aluminum filters were placed at the exit of the X-ray tube. The preliminary results indicate the presence of the chemical elements S, Cl and K; with the use of the Al and W filters, the identification of Br and Sr at lower concentrations could also be achieved in four oil samples from the Marlim, Pampo and Jubarte fields of the Campos Basin. In addition, the data obtained so far raise the question of how these different chemical elements are correlated with the different oil basins, especially for those elements present at low concentrations. [1] D.A. Skoog, D.M. West, F.J. Holler, Fundamentals of Analytical Chemistry. New York: Saunders College Publishing, 2006. [2] P. Tasch, F. Damiani, Técnicas de análise e caracterização de materiais: XRF - X-Rays Fluorescence Spectroscopy. Campinas: Unicamp, 2000. [3] Amptek - http://amptek.com/, accessed in April 2017. [4] C.B. Zamboni, S. Metairon, L. Kovacs, D.V. Macedo, M.A. Rizzutto, Journ. Radioanal. Nucl. Chem. 307, 1641 (2015).
Speaker: Dr K.C.C.
Pires (Instituto de Física, Universidade de São Paulo) Procedure to improve the Quality Management System of the Chemical and Materials Analysis laboratories of CEADEN The present work presents a procedure designed for the improvement of the Quality Management System (QMS) implemented in the Chemical Analysis and Materials laboratories of CEADEN according to NC ISO/IEC 17025:2006, in order to keep their status as accredited laboratories stable over time. In these laboratories, nuclear and complementary techniques are used in research and in scientific and technical services related to the environment, food safety, nuclear safety and the security of other technological facilities, among others. Following a diagnosis of the QMS and identification of the main deficiencies of the last accreditation period, an improvement procedure based on the Plan-Do-Check-Act (PDCA) cycle is applied; this is a methodological tool that can be applied to other testing laboratories of this type that hold accredited status or aim to obtain it. Different techniques and tools are used, such as the expert method, cause-effect diagrams, interrelation diagrams, interviews and brainstorming, among others. Speaker: Débora Hernández Torres (Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN)) Radioactivity levels in peloids used in main Cuban spas Radioactivity levels in peloids from some Cuban spas (San Diego, Elguea, Santa Lucía and Cajío) have been studied. The radionuclide concentrations (in Bq.kg$^{-1}$ dry weight) varied as follows: $^{226}$Ra = 6 – 1800, $^{137}$Cs = <2 – 5, $^{232}$Th = 6 – 38 and $^{40}$K = 47 – 365, being comparable to concentrations reported for therapeutic peloids used worldwide. Considering the short exposure times associated with the usual therapeutic practices in Cuban spas, the estimated annual equivalent dose from $^{226}$Ra-enriched peloids is well below the accepted equivalent dose values for the skin of users of peloid-based treatments. Therefore, the radioactivity levels present in peloids from the studied Cuban spas do not represent an impediment to their use for therapeutic purposes. Speakers: Oscar Díaz Rizo (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) , Katia D´Alessandro Rodríguez (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) Radiocarbon marine reservoir effect. Regional offset for the Northwest coast of Cuba The regional offset correction ΔR of the Marine Reservoir Effect (MRE), crucial for the calibration of $^{14}$C ages of marine-influenced samples, was determined for the Cuban Northwest coast. Fifteen different locations were studied by $^{14}$C dating of pre-bomb, known-age marine shell specimens of bivalves and gastropods from the Felipe Poey Museum collection. The distribution of results indicates ΔR values from -46 ± 38 to 140 ± 52 $^{14}$C yr and a possible pattern related to the position along the coast and to ocean dynamics. We present both mean values for each region and a general ΔR of 28 ± 13 $^{14}$C yr for the northwestern coast of Cuba. Speaker: M. Diaz (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Cuba & Universidade Federal Fluminense, Brazil.) Radiolabeling of an anti-tumoral peptide for in vivo studies in animal models The application of the radioactive indicator method in research and development of drugs is of great importance for the pharmaceutical industry.
Many non-clinical and clinical trials rely on the radiolabelling of the molecules being tested as potential drugs. In the present investigation, a preliminary computational model was developed to obtain the possible structures of the CIGB-300 peptide with and without radiolabelling. The radiolabelling with $^{99m}$Tc of the antitumor peptide CIGB-300 in its cyclic and non-cyclic forms was studied considering the influence of the peptide mass on the reaction, as well as the influence of the amount of the reducing system. The stability of the radiolabelling was evaluated by means of cysteine and DTPA challenges and by dilution in serum albumin and fresh human serum. Subsequently, a pharmacokinetic evaluation of the radiolabeled peptide was performed to verify its tumor uptake in an experimental model in mice and to observe its tissue distribution. The sampling schedule followed a sparse data design and the obtained information was processed using the Monolix Suite. Population and individual pharmacokinetic parameters were obtained after selecting the best-fit model. The study was supplemented with scintigraphic images. The results indicate that with only 100 μg of the non-cyclic peptide a radiochemical purity of 98% is obtained and the labelling is stable, with adequate tumor uptake in the experimental model. Speaker: Naila Gómez González (Centro de Isótopos (CENTIS), Cuba.) Spatial distribution and contamination assessment of heavy metals in street dust from Camagüey city (Cuba) using X-ray fluorescence Concentrations of various chemical elements in street dusts from Camagüey city were studied by X-ray fluorescence analysis. The mean Cr, Co, Ni, Cu, Zn and Pb contents (in mg.kg$^{-1}$ dry weight) in the urban dust samples were compared with mean concentrations in other cities around the world. Spatial distribution maps indicated the same behaviour for Cr–Ni and Pb–Zn–Cu, whereas the spatial distribution of Co differs from that of the other heavy metals. The metal-to-iron normalization, using Cuban average metal soil contents as background, showed that street dusts from Camagüey city are moderately or significantly enriched with Zn-Pb in those areas associated with heavy traffic density and with the location of the metallurgical plant. However, the calculation of the potential ecological risk index shows that the metal contents do not represent a risk for the city's population. Speakers: Oscar Díaz Rizo (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) , César García Trápaga (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) The new version of the Sequence-Toolkit software package As part of the instrumentation development program of the luminescence dating laboratory at CEADEN, a package of open-source software supporting the luminescence measuring process is being developed. The package, released under the name Sequence-Toolkit, is routinely used to create measuring sequences for the automated luminescence reader LF02 and to analyze their results. Here we describe the upgrade and the new features introduced in the new version of this package. The modifications are based on the recommendations arising after two years of exploitation of the initial version, especially those concerning the speed of data manipulation, and are also related to the migration to new versions of the Python and Qt programming software. Speaker: C. M. Ferras (Universidad de Ciencias Informáticas (UCI), Cuba.)
The role of aluminium chloride in the Fischer-Hafner synthesis of technetium and rhenium bisarenes Rhenium bis-arenes can be obtained by heating the corresponding potassium perrhenate (KReO$_4$) salt with an arene as a solvent and in the presence of aluminium chloride (AlCl$_3$) and zinc. Variations of this method, originally proposed by Fischer and Hafner, demonstrated that, in some cases, the reaction occurs without using Zn as a reducing agent, but also that the presence of the Lewis acid is essential. The aim of this work is to study the interactions of AlCl$_3$ with benzene, as well as the reactivity of the system formed afterwards, in order to understand its role in the reaction pathway. Calculations at the DFT/M06-2X and MP2 levels using 6-311G(2df,2pd) showed that the association between AlCl$_3$ and benzene leads to the formation of a charge transfer adduct where the AlCl$_3$ is placed over one of the carbons of benzene. The charge is transferred from the aromatic $\pi$ system to the Al atom. The analysis of this complex through local reactivity descriptors allowed us to locate the areas of benzene's $\pi$ system most susceptible to reaction with Rhenium via electrophilic attack. The meta and ortho positions are particularly reactive to that class of attack. Therefore, the initial association with rhenium should occur at these sites. Thus, the results show that the formation of the AlCl$_3$-benzene adduct could be an important intermediate in the formation of the [Re($\eta^6$-benzene)$_2$]$^+$ complex. Speaker: Manuel Alejandro Cardosa-Gutiérrez (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Universidad de La Habana, La Habana, CP 10600, Cuba) Theoretical evaluation of losartan derivatives as $^{18}$F-labeled radiopharmaceutical candidates for cancer diagnosis Losartan and its fluoro n-ethoxy-methyl-triazole derivatives (FEM, n=0-3) were studied as potential $^{18}$F-labeled radiopharmaceutical candidates. Each derivative is obtained by the reaction of a terminal alkyne with a substituted azide in the presence of a Cu(I) catalyst. The stability and association energy of the four FEM derivatives with the AT1 receptor were evaluated. Density Functional Theory with the 6-31G(2d,2p) basis set was used to evaluate the stability of the radiotracers. In order to determine the functional that provides the best description of the FEM derivatives, the experimental X-ray diffraction structure of losartan potassium was compared with structures calculated using different functionals, with M06-2X being the most suitable. The vibrational frequencies of the FEM structures and the bond dissociation energies (BDE) were also calculated. In both vacuum and water calculations, the stability of the compounds decreased in the order FTEMT (n = 3) > FDEMTL (n = 2) > FMTL (n = 0) > FEMTL (n = 1). When water was considered as an implicit solvent in the model, the difference in BDE was only 6 kJ/mol. The electron density analysis of atoms in molecules was performed in order to characterize the intramolecular interactions in each FEM derivative. There was an increase in van der Waals type interactions with increasing chain length, FDEMTL being the only one with two hydrogen bonds. A molecular docking study was performed to evaluate the interactions of the four FEM derivatives with the receptor. All evaluated derivatives have interaction energies with the receptor similar to that of losartan. The FMTL derivative can be considered the best candidate as a radiotracer. W. Nakanishi, S. Hayashi, and K. Narahara, J. Phys. Chem., 2008. 112: p. 13593-13599. Tsipis A.
C., Coordination Chemistry Reviews, 2014. 272: p. 1–29. Álvarez-Ginarte Y. M., et al., Journal of Steroid Biochemistry and Molecular Biology, 2013. 138: p. 348–358. Speaker: A. García Fleitas (Instituto Superior de Tecnologías y Ciencias Aplicadas (InSTEC), Universidad de La Habana, Cuba.) X-ray fluorescence analysis of sediments from Mampostón reservoir (Mayabeque, Cuba) The objective of the present study was to evaluate the distribution and accumulation of some heavy metals in surface sediments from the Mamposton reservoir (Mayabeque province, Cuba) and its main tributaries (the Ganuza and Mamposton rivers and the Pedroso dam). Furthermore, it was of great interest to assess their potential risk with regard to human health. Concentrations of Fe, Ni, Cu, Zn and Pb were determined by X-ray Fluorescence Analysis in samples collected in the dry and rainy seasons. The results show a significant difference in metal contents between the seasons. The values determined for the contamination factor and the modified degree of contamination indicate a very low to high degree of contamination for the studied sediments. The comparison with sediment quality guidelines reveals that the average concentration of Cu, Zn and Pb in Ganuza river sediments is higher than the established Threshold Effect Levels (TEL) [1]. On the other hand, the Ni content exceeds the established Probable Effect Levels (PEL) [1] in almost every studied location. Based on PEL-Q calculations [2] for Ni, Cu, Zn and Pb, the samples classify as sediments with a medium-low level of contamination at all the studied stations, except for one in the Ganuza River, which classifies as having a medium-high level of contamination. Refs: 1. Long ER, MacDonald DD. Human Ecological Risk Assessment. 1998; 4:1019–1039. 2. McCready S, Birch GF, Long ER. Environment International, 2006; 32(4):455-465. Speakers: Rayner Hernández Pérez (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) , Oscar Díaz Rizo (Instituto de Tecnologías y Ciencias Aplicadas, Universidad de La Habana (InsTEC-UH), La Habana, Cuba.) Recent Higgs results by the ATLAS experiment and future prospects Five years ago, particle physicists announced the discovery of the Higgs boson, the last missing ingredient in the Standard Model. Since then, the enormous wealth of data collected by the ATLAS experiment has allowed us to zoom in on the properties of this fundamental scalar, which is linked to electroweak symmetry breaking, a key ingredient in the model that describes the elementary particles. I will present the latest results on its properties, such as the mass, width, observation of different decay channels and coupling structure, and discuss their implications in the context of the Standard Model. Because of the special role of the Higgs boson, the precision measurements can be used to look for physics beyond the Standard Model that is expected to show up at the TeV energies the LHC can probe, by looking for inconsistencies between the predicted and observed properties. I will discuss our strategy, the impact current limits have on these models, and describe what new Higgs boson decay channels and properties we hope to be able to observe in the current LHC run(s). Speaker: Dr Ivo Van Vulpen (NIKHEF and Universiteit Van Amsterdam, Netherlands) Recent results from the ALICE detector at the LHC The ALICE experiment at CERN investigates the properties of strongly interacting matter at high temperatures.
This talk highlights some of the recent results from the collaboration, presenting key constraints on the properties of QCD matter. A brief description of the ALICE upgrade program is also given. Speaker: Nikola Poljak (ALICE collaboration) Highlights from the CMS Experiment In Run 2 of the Large Hadron Collider, CMS is recording an impressive amount of proton-proton collision data at a center of mass energy of 13 TeV. In this talk, we highlight the CMS status in 2017 and some of the latest physics results. Speaker: Miguel Vidal Marono (Universite Catholique de Louvain (UCL) (BE)) Convener: Gordon Baym (University of Illinois) The Mu2e experiment at Fermilab The Mu2e Experiment at Fermilab will search for coherent, neutrino-less conversion of negative muons into electrons in the field of an Aluminum nucleus. The dynamics of such a charged lepton flavour violating (CLFV) process is well modelled by a two-body decay, resulting in a mono-energetic electron with an energy slightly below the muon rest mass. If no events are observed in three years of running, Mu2e will set an upper limit on the ratio between the conversion and capture rates of $\leq 6\ \times\ 10^{-17}$ (at 90$\%$ C.L.). This will improve the current limit by four orders of magnitude with respect to the previous best experiment. Mu2e complements and extends the current search for $\mu \to e \gamma$ decay at MEG as well as the direct searches for new physics at the LHC. This CLFV process probes new physics at a scale inaccessible to direct searches at either present or planned high energy colliders. Observation of a signal would be clear evidence for new physics beyond the Standard Model. To search for the muon conversion process, a very intense pulsed beam of negative muons ($\sim 10^{10}\ \mu$/sec) is stopped on an Aluminum target inside a very long solenoid where the detector is also located. The Mu2e detector is composed of a straw tube tracker and a CsI crystal electromagnetic calorimeter. An external veto for cosmic rays surrounds the detector solenoid. In 2016, Mu2e passed the final approval stage from DOE and started its construction phase. Data collection is planned for the end of 2021. An overview of the physics motivations for Mu2e, the current status of the experiment and the required performance and design details of the calorimeter are presented. Speaker: Fabio Happacher (INFN) The results from the Q-weak experiment measurement of the parity-violating electron-proton scattering The results of the Q-weak experiment at the Thomas Jefferson National Accelerator Facility are presented. The experiment performed the most precise measurement of the parity-violating electron-proton scattering asymmetry at low momentum transfer, resulting in the first direct determination of the weak charge of the proton (a weak-force analog of the electric charge of the electromagnetic force) and the most precise value of the weak mixing (Weinberg) angle, measured for the first time in a semi-leptonic reaction. Since the weak mixing angle is precisely predicted by the Standard Model, these results provide new constraints on classes of physics beyond the Standard Model and are complementary to direct searches in high energy measurements.
The requirements of this precise measurement posed technical challenges resulting in the design of a custom apparatus consisting of a triple collimator system, a resistive copper-coil toroidal magnet and eight fused silica Cherenkov detectors, the world's highest power liquid hydrogen target, precision control of helicity-correlated beam properties and beam polarimetry. The detector array absorbed a scattered electron rate of about 7 GHz, read out in integrating mode by custom-built modules. Dedicated low-current measurements were undertaken to determine the momentum transfer using sets of drift chambers before and after the toroidal magnet. The technical aspects of the Q-weak experiment will also be presented. Speaker: Neven Simicevic (Louisiana Tech University) Decay Spectroscopy Experiments Using the GRIFFIN Spectrometer at TRIUMF GRIFFIN [1], the Gamma-Ray Infrastructure For Fundamental Investigations of Nuclei, is the new decay spectroscopy array located at TRIUMF, Canada's National Laboratory for Nuclear and Particle Physics. GRIFFIN consists of 16 large-volume hyper-pure germanium clover detectors assisted by a custom-built digital data acquisition system, providing 10% efficiency for detecting gamma-rays at 1.3 MeV. A suite of ancillary detector systems can be coupled to GRIFFIN for comprehensive decay spectroscopy experiments with radioactive beams delivered by the TRIUMF-ISAC facility: SCEPTAR [2], an array of plastic scintillators for beta-particle tagging; PACES [2], an array of five cooled lithium-drifted silicon detectors for high-resolution internal conversion-electron spectroscopy; eight lanthanum bromide scintillators for fast gamma-ray timing measurements [2]; and DESCANT [3], a neutron detector array for the detection of beta-delayed neutron-emitting nuclei. This versatile experimental set-up allows for the identification of weak branching ratios and firm spin and parity assignments of excited states through angular correlation measurements. Results obtained with the GRIFFIN spectrometer near and far from stability using beta decay of beams of 128−130Cd [4], 46,47K [5,6], 32Na [7], and 118,132In [8,9] will be presented along with a discussion of future opportunities, including the addition of the Compton and background suppression shields in 2018. The GRIFFIN spectrometer is funded by the Canada Foundation for Innovation, TRIUMF, and the University of Guelph with matching contributions from the British Columbia Knowledge and Development Fund and the Ontario Ministry of Research and Industry. TRIUMF receives federal funding via a contribution agreement through the National Research Council of Canada. This research is supported by the Natural Sciences and Engineering Research Council of Canada. C.E. Svensson and A.B. Garnsworthy, Hyperfine Interactions 225, 127 (2014). A.B. Garnsworthy and P.E. Garrett, Hyperfine Interactions 225, 121 (2014). P.E. Garrett, Hyperfine Interactions 225, 137 (2014). R. Dunlop et al., Phys. Rev. C 93, 062801(R) (2016). J.L. Pore et al., to be published. J. Smith et al., to be published. F. Sarazin et al., to be published. K. Ortner et al., to be published. K. Whitmore et al., to be published. Speaker: Prof.
Corina Andreoiu (Simon Fraser University) Parallel Sessions - NAT Room "Fernando Portuondo" Convener: Daniel Codorniu (InSTEC, Cuba) Origin and residence time of water in the Lima aquifer Lima, the capital of Peru, and Callao, an urban region, form one of the largest urban centers in South America, with a total population of 10.9 million as of 2017. Three rivers supply water to the capital; however, water for municipal use is mainly taken from the Rimac and Chillon Rivers. The estimated water availability is 148 m$^3$/person/year, well below the 1000 m$^3$/person/year limit that Falkenmark establishes for extreme scarcity. Therefore, the identification of water sources for the future is of great importance. An experimental study was conducted to analyze Lima's groundwater origin. Water levels in a well located in Miraflores, near the Pacific Ocean, were observed between 2003 and 2009. A maximum water level was registered three years after the occurrence of a peak in a hydrologic station in Chosica, located approximately 35 km away. Therefore, it was estimated that the residence time, that is, the time that passes from when water infiltrates the ground during a rainfall event until it reaches the Miraflores well, is three years. Water samples extracted from this well were analyzed. The estimated residence time is in agreement with the 3H contents, which also indicate a residence time of three years. In addition, the results obtained in the Miraflores well are also in agreement with the permeability of the valley, which is on the order of 1x10$^{-3}$ m/s. The permeability in the alluvial fan, on which Lima is located, is on the order of 10$^{-4}$ m/s. The storage coefficient is 5% in the valley and 0.2% in the coastal area (INGEMMET, 1988). On the other hand, the relative abundances of 2H and 18O in wells in the Lima aquifer are in agreement with the hypothesis that the aquifer is recharged with water from rainfall events that occur in the highlands at an average elevation of 3000 m. These results are useful to improve the management of water resources for the 10.9 million inhabitants of the coastal city of Lima, whose water supply depends mainly on the flows of the Rimac River and on water wells in the Lima aquifer. Optimization of water use is crucial in the present times, characterized by climate change. In 2017, for example, due to a critical year of extreme floods, water supply from the Rimac River was interrupted and Lima had to rely only on water wells for municipal use. Keywords: Coastal aquifer, Lima, water residence time. INGEMMET. 1988. Estudio Geodinámico de la Cuenca del Río Rímac. (Boletín Nº 8b Serie C). Instituto Geológico Minero y Metalúrgico: Lima; 263. Speaker: Julio Kuroiwa (Universidad Nacional de Ingeniería, Av. Túpac Amaru 210, Rímac, Lima, Peru.) Analytical approximations for double Compton backscattering Gamma-ray Compton backscattering has proven to be of interest for technical applications, one example being an imaging device [1] capable of obtaining the shape of objects behind material obstacles. A great deal of work has been performed in order to characterize the process, for which, given the huge mathematical difficulties of an intrinsically random, non-linear process, numerical simulation is preferred [2]. However, since having an analytical approximation is a powerful prediction and evaluation tool, the analytical path also deserves exploration.
An attempt to understand the difference in the capacity of different materials to backscatter photons, and, for the same material, of different thicknesses and densities, produced an analytical expression for the single-backscattered intensity [3]. Obtaining a collective expression for higher scattering orders is a daunting task. A rather ambitious method in that direction uses transport theory to obtain both the intensity and the spectral shape of the scattered radiation [4]. This method, although offering completeness in the solution, is rather difficult to implement in practical cases, and simplified approaches might be needed. One example in that direction is the mixed analytical-numerical algorithm in Ref. [5]. Following this approach of incremental theoretical improvement, we present a method and several theoretical and numerical results for only-double Compton scattering. These results may prove useful in practice since, according to numerical simulations, double Compton backscattering events may be responsible in some materials for more than $30\%$ of the total number of multiple scattering events, whereas single and double scattering together add up to more than $60\%$ of the total backscattered intensity. J. Gerl, Nucl. Phys. A752, 688c (2005). D. Flechas et al., Int. J. Mod. Phys. Conf. Ser. 27, 1470152 (2013). D. Flechas et al., AIP Conf. Proc. 1529, 40 (2013). J. E. Fernandez and V. G. Molinari, Adv. Nucl. Sc. Tech. 22, 45 (1991). G. Das et al., phys. stat. sol. (b) 149, 365 (1988). Speaker: Fernando Cristancho (Universidad Nacional de Colombia) Hydrochemical and isotopic characterization of the Mampostón-Jaruco basin This work presents the results obtained during the hydrochemical and isotopic characterization of the waters of the Mampostón - Jaruco basin in the province of Mayabeque, Cuba. Two sampling campaigns were carried out in different wells drilled in the area, comprising the collection of hydrochemical data in the field, the analysis of macrocomponents and physico-chemical parameters in the laboratory, isotopic analyses ($^2$H and $^{18}$O) and the evaluation of the quality of the analytical data. The interpretation of the results showed that the waters are generally classified as calcium bicarbonate waters with very similar characteristics, suggesting a single origin (recharge). Possible recharge mechanisms involving surface sources in the region were identified. Understanding the origin of the elements using rare isotopes in the laboratory Nucleosynthetic processes in supernovae and X-ray bursts often involve unstable ions and reactions that are difficult to produce at the relevant energies in rare beam facilities. Recent progress in astronomical observations and in the chemical evolution of the Galaxy needs to be accompanied by similar progress in understanding the relevant properties of rare isotopes through nuclear physics experiments. I will review the important role that rare isotopes play in understanding stellar explosions, show some examples of recent nuclear physics measurements and give a (very abbreviated) outlook on future nuclear astrophysics studies. Speaker: Fernando Montes (Michigan State University, USA.)
High-precision mass measurement programme around the N=82 shell closure with the Penning trap mass spectrometer MLLTRAP at ALTO The international ISOL facility ALTO, located at Orsay in France, provides stable ion beams based on a 15 MV tandem accelerator and neutron-rich radioactive ion beams from the interaction of a γ-flux induced by a 50 MeV 10 µA electron beam in a uranium carbide target. New setups are under preparation to extend the range of fundamental properties of ground and excited states of exotic nuclei measured at ALTO, for example high-precision mass measurements for an accurate determination of the nuclear binding energy. To perform those measurements, two devices will be hosted at ALTO: a radiofrequency quadrupole to cool and bunch the continuous radioactive beam, and the double Penning trap mass spectrometer MLLTRAP, commissioned off-line at the Maier-Leibnitz Laboratory (MLL) in Munich, Germany. The unique ion production at the ALTO facility allows mass measurements in a neutron-rich area of major interest around 132Sn. In this context, it is proposed to use neutron-rich silver isotopes (Z = 47, A > 121) to explore the possible weakening of the shell gap for Z < 50 and its impact on the A = 130 r-process elemental abundances. The already well-measured masses (A < 121) in the silver isotopic chain will be used for the on-line commissioning. In addition, the development, started at MLL, of a novel detector-trap for in-trap decay spectroscopy will be carried out at ALTO. It will provide background-free spectra via direct in-situ spectroscopy of stored ions. The status and timeline of the novel setup will be presented. Speaker: E. Minaya Ramirez (Institut de Physique nucléaire Orsay, 91406 Orsay, France) The NUMEN project: Heavy-Ion Double Charge Exchange reactions towards the 0νββ NME determination NUMEN proposes cross section measurements of Heavy-Ion double charge exchange reactions as an innovative tool to access the nuclear matrix elements entering the expression of the lifetime of neutrinoless double beta decay (0νββ). If detected, such a process would give direct evidence of the Majorana nature of neutrinos, opening a window to physics beyond the Standard Model. A key aspect of the project is the use at INFN-Laboratori Nazionali del Sud (LNS) of the Superconducting Cyclotron (CS) for the acceleration of the required high-resolution and low-emittance heavy-ion beams and of the MAGNEX large-acceptance magnetic spectrometer for the detection of the ejectiles. However, the main limitations on the beam current delivered by the accelerator and on the maximum rate accepted by the MAGNEX focal plane detector must be substantially overcome in order to systematically provide accurate numbers to the neutrino physics community in all the studied cases. The upgrade of the LNS facilities, in this view, is part of this project. First experimental results, obtained at the INFN-Laboratori Nazionali del Sud in Catania using the MAGNEX magnetic spectrometer, for the $^{40}$Ca($^{18}$O,$^{18}$Ne)$^{40}$Ar reaction at 270 MeV are shown. The data give an encouraging indication of the capability to access quantitative information towards the determination of the Nuclear Matrix Elements for 0νββ decay. Preliminary results, in particular of the reaction $^{116}$Cd($^{20}$Ne,$^{20}$O)$^{116}$Sn at 15 MeV/u, performed at INFN LNS, are reported.
Speaker: Clementina Agodi (INFN-LNS) COFFEE-BREAK at ROOM "Bens Arrate" Convener: Daniel Abriola (Comisión Nacional de Energía Atómica) Influence of prompt neutron emission on the distribution of charge as a function of the final mass and kinetic energy of fragments from the reaction $^{233}$U(nth, f) Concerning thermal neutron induced fission of uranium 233, using a Monte Carlo simulation, we show how prompt neutron emission from the fragments distorts the distribution of charge as a function of final mass and kinetic energy compared with the primary distribution. Since the discovery of fission, the yield of primary charge (Z), mass (A) and kinetic energy (E) has been one of the objectives of measurements of fragments [1]. However, only the yield of final fragment charge (z), mass (m) and kinetic energy (e), after prompt neutron emission, defined as Y(z,m,e), is accessible in low energy fission [1, 2]. In our Monte Carlo simulation, for each m, we calculate the average charge as a function of e. As input for the simulation, we assume that: i) the prompt neutron number (n) decreases linearly with kinetic energy (E); it is equal to the neutron multiplicity (ν) when E is equal to the corresponding average, and it falls to zero at the corresponding maximal value ($E_{max}$) [3]; and ii) the average primary fragment charge is proportional to the primary fragment mass (A) and to the charge/mass ratio of the fissioning nucleus (92/234). As output of the simulation, we obtain the distribution of final fragment mass (m = A - n), kinetic energy (e = E(1 - n/A)) and charge (z = Z). As a result, for a fixed final mass in the region m = 80 - 100, prompt neutron emission produces a negative slope in the curve of average charge as a function of e (see the numerical sketch below). This result is in agreement with experimental data obtained by Quade et al [1]. Surprisingly, in the heavy fragment region, contrary to what happens in the mass region m = 80 - 100, the curve of average charge as a function of e has a positive slope. In order to compare these results in that heavy mass region, data from experiments with new technologies are expected [2]. U. Quade et al., Nucl. Phys. A 487 (1988) 1-36 P. Grabitz et al., Journal of Low Temperature Physics 184 (2016) 944–951 M. Montoya, J. Rojas and I. Lobato, Rev. mex. fis. 54 (2008) 440–445 Speaker: Dr Modesto Montoya Zavaleta (Universidad Nacional de Ingeniería, Peru.) Exploring the shell structure of exotic Sn isotopes with an Active Target at SPES: the MagicTin project With the aim of studying the evolution of nuclear shells in the region of neutron-rich Sn isotopes, the MagicTin project (EU-MSCA 661777) will exploit the capabilities of the ACTAR TPC detector for measuring direct reactions in the Z≥50, A≥132 mass region. In preparation for these experimental campaigns, to be carried out at forthcoming radioactive ion beam facilities such as SPES, HIE-ISOLDE or SPIRAL2, several preparatory steps are needed. Indeed, the detection of heavy-ion (Z≥50) induced reactions using an active target represents a challenge in itself. After providing an overview of the use of Active Targets in experiments with Radioactive Ion beams, I will focus on the status of the MagicTin project and its future perspectives. Speaker: Dr Tommaso Marchi (IKS - KU Leuven, Belgium.)
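The Monte Carlo procedure outlined in the $^{233}$U(nth, f) abstract above can be illustrated with a minimal numerical sketch. All inputs below (the fragment mass and energy distributions, the multiplicity and the energy endpoints) are hypothetical placeholders rather than the values used by the authors; the sketch only encodes the two stated assumptions (a linear decrease of the prompt neutron number with E, and a mean primary charge proportional to A times 92/234), and it treats the neutron number as a continuous average for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primary-fragment sample: mass A and kinetic energy E (MeV).
# A real analysis would draw these from measured yield distributions.
A = rng.integers(80, 101, size=100_000)
E = rng.normal(loc=100.0, scale=6.0, size=A.size)

NU_BAR, E_MEAN, E_MAX = 1.5, 100.0, 115.0   # placeholder multiplicity and energy scales

# Assumption (i): prompt neutron number decreases linearly with E,
# equals nu_bar at E_MEAN and vanishes at E_MAX (clipped at zero).
n = np.clip(NU_BAR * (E_MAX - E) / (E_MAX - E_MEAN), 0.0, None)

# Assumption (ii): average primary charge proportional to A and to Z/A of the fissioning nucleus.
Z = A * (92.0 / 234.0)

# Final (post-emission) observables, as defined in the abstract.
m = A - n                 # final mass
e = E * (1.0 - n / A)     # final kinetic energy
z = Z                     # charge is unchanged by neutron emission

# Slope of the mean charge versus final kinetic energy at a fixed final mass (here m ~ 95).
sel = np.abs(m - 95) < 0.5
slope = np.polyfit(e[sel], z[sel], 1)[0]
print(f"d<z>/de at m ~ 95: {slope:.3f}")
```

With these placeholder inputs the fitted slope should come out negative, qualitatively consistent with the behaviour the abstract reports for the light-fragment region m = 80 - 100.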
Methodology trends in gamma and electron radiation damage simulation studies in solids under high fluence irradiation environments The present work deals with the numerical simulation of gamma and electron radiation damage processes under high brightness and high particle fluence, with regard to two new radiation-induced atom displacement processes. It concerns both the Monte Carlo based numerical simulation of the occurrence of atom displacement processes as a result of gamma and electron interactions and transport in a solid matrix, and the calculation of atom displacement threshold energies based on Molecular Dynamics methodologies. The two new radiation damage processes considered here, in the framework of high brightness and high particle fluence irradiation conditions, are: 1) radiation-induced atom displacement processes due to a single primary knockout atom excitation in a defective target crystal matrix, whose defect concentrations (vacancies, interstitials and Frenkel pairs) increase as a result of severe and progressive material radiation damage, and 2) the occurrence of atom displacements related to multiple primary knockout atom excitations of the same or different atomic species in a perfect target crystal matrix, due to subsequent electron elastic atomic scattering in the same atomic neighborhood within a crystal lattice relaxation time. In the present work, a review of numerical simulation attempts at these two new radiation damage processes is presented, starting from the previously developed algorithms and codes for Monte Carlo simulation of atom displacements induced by electrons and gammas in irradiated materials and, in addition, the Molecular Dynamics calculations of atom displacement threshold energies for defective crystalline materials as well as for the cases of multiple primary knockout atomic excitations. Convener: Prof. Corina Andreoiu (Simon Fraser University) Theoretical challenges in double beta decay Double beta decay (DBD) is the nuclear process with the longest lifetime measured to date, and its study presents great interest. Indeed, its possible neutrinoless double beta (0νββ) decay mode is a beyond Standard Model (BSM) process whose discovery would clarify whether lepton number is conserved, decide on the character of neutrinos (are they Dirac or Majorana particles?) and give a hint of the scale of their absolute masses [1]. Theoretically, the study of 0νββ decay involves the accurate computation of the nuclear matrix elements (NME) and phase space factors (PSF), two key quantities entering the lifetimes of this process. In my talk I will first give a short review of the current challenges in calculating the NME and PSF for DBD [2]-[4]. Then, I will show how, from the study of 0νββ decay, one can constrain BSM parameters related to the neutrino mass and to Lorentz violation in weak decays. 1. Vergados, J.D., Ejiri, H., and Simkovic, F., Rep. Prog. Phys., 72, 106201 (2012). 2. M. Horoi and S. Stoica, Phys. Rev. C 81, 024321 (2010). 3. S. Stoica and M. Mirea, Phys. Rev. C 88, 037303 (2013). 4. A. Neacsu and S. Stoica, J. Phys. G 41, 015201 (2014). 5. S. Stoica and A. Neacsu, AHEP2014, 2014, article ID 745082. 6. S. Stoica, INPC2016, 12-16 September, 2016, Adelaide (oral presentation). 7. S. Stoica, MEDEX'17, May 29 – June 2, 2017, Prague (invited lecture).
Speaker: Sabin Stoica (International Centre for Advanced Training and Research in Physics, Horia Hulubei National Institute of Physics and Nuclear Engineering) Molecular structures in slow nuclear collisions I will report on a quantitative study of the sub-Coulomb fusion of astrophysically important heavy-ion collisions, such as $^{16}$O + $^{16}$O and $^{12}$C + $^{12}$C. It is carried out using wave-packet dynamics. The low-energy collision is described in the rotating center-of-mass frame within a nuclear molecular picture [1]. A collective Hamiltonian drives the time propagation of the wave-packet through the collective potential-energy landscape that is calculated with a realistic two-center shell model [2-4]. Among other preliminary results, the theoretical sub-Coulomb fusion resonances for $^{12}$C + $^{12}$C seem to correspond well with observations. The method appears to be useful for expanding the cross-section predictions towards stellar energies. W. Greiner, J.Y. Park & W. Scheid, in Nuclear Molecules, World Scientific Pub, Singapore, 1995. A. Diaz-Torres & W. Scheid, Nucl. Phys. A 757 (2005) 373. A. Diaz-Torres, L.R. Gasques & M. Wiescher, Phys. Lett. B 652 (2007) 255. A. Diaz-Torres, Phys. Rev. Lett. 101 (2008) 122501. Speaker: Dr Alexis Diaz-Torres (University of Surrey, United Kingdom) Importance of proton drip-line nuclei to nuclear astrophysics Nuclear structure far from stability plays a crucial role in the processes that lead to the formation of the elements. In the specific case of the proton drip-line, its location constrains the path of nucleosynthesis in explosive astrophysical scenarios such as supernovae and X-ray bursters. In such scenarios, the density and temperature are so high that rapid proton capture can occur, and unstable nuclei will be generated up to and beyond the proton drip-line. The path for these reactions depends on the level structure and the existence of resonances in proton-rich nuclei. In order to achieve a theoretical understanding of the rapid proton capture (rp) process, the separation energies of proton drip-line nuclei are needed as input to the network calculations. Direct experiments with unstable nuclei are still challenging, creating an obstacle to our understanding of their structure. However, the observation of proton emission and its theoretical interpretation have made it possible to access nuclear structure properties in the neutron-deficient region of the nuclear chart, for nuclei with charges between 50 and 81 [1,2]. It has also provided an indirect way to determine separation energies. Proton radioactivity from nuclei with Z<50 is also of particular interest to estimate the time scale of the (rp) capture path, controlled by the properties of the waiting-point isotopes, such as the nucleus 72Kr, whose properties have not yet been constrained by direct measurements. Knowledge of the proton separation energies and the half-lives of the neighbouring nuclei would allow us to establish the most probable path through 72Kr. This can be achieved by analysing the decay properties of Rb isotopes, recently produced at RIKEN [3]. It is the purpose of this talk to discuss recent developments in the field, and to deduce constraints on the astrophysical processes. L. S. Ferreira, E. Maglione, P. Ring, Phys Lett. B753 (2016) 237; P. Arumugam, L. S. Ferreira, E. Maglione, Phys. Lett. B680 (2009) 443. M. Taylor, D. M. Cullen, M. G. Procter, A. J. Smith, et al. Phys. Rev. C 91 (2015) 044322. H. Suzuhi, et al. to be published. Speaker: Prof.
Lidia Ferreira (CeFEMA/IST) Precision measurements of β-energy spectra in nuclear decays Measurements in nuclear β decay played a crucial role in the development of the (V-A) theory of weak interactions, which is embedded in the standard electroweak model (SM). Experiments in β decay today offer a sensitive tool to search for physics beyond the SM, complementary to direct searches performed at high energies. It has recently been observed that, in searches for new interactions and under very general assumptions, the determination of the so-called "Fierz interference term" in nuclear and neutron decays can potentially compete with searches at the LHC, provided the sensitivity reaches a level below 10$^{−3}$. This is because the Fierz term depends linearly on the exotic couplings, whereas the cross sections for the production of new bosons depend quadratically. In nuclear and neutron decays, the most direct and sensitive observable from which to extract the Fierz term is the shape of the β energy spectrum. This contribution presents recent precision measurements of β energy spectra in $^{6}$He and $^{20}$F decays performed at the National Superconducting Cyclotron Laboratory. A new technique that eliminates the very critical instrumental effect of back-scattering of electrons on detectors has been explored. The technique is being tested through the determination of the weak-magnetism contribution, which can be accurately predicted in well-selected transitions using the principle of Conservation of the Vector Current. Speaker: Prof. Oscar Naviliat-Cuncic (National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy) The Interaction of Neutrons With $^7$Be: Lack of Standard Nuclear Physics Solution to the "Primordial $^7$Li Problem" The accurate measurement of the baryon density by WMAP renders Big Bang Nucleosynthesis (BBN) a parameter-free theory whose only inputs are measurements of the relevant (12 canonical) nuclear reactions. BBN predicts with high accuracy the measured abundances of deuterium, helion and helium relative to hydrogen, but it over-predicts the abundance of 7Li relative to hydrogen by a factor of approximately three, a difference of more than three sigma from the observed value. This discrepancy was observed early on (more than thirty years ago) and is known as the "Primordial 7Li Problem". Several attempts to reconcile this discrepancy by destroying 7Be with deuterons and helions, or via a conjectured d + 7Be resonance, were ruled out as solutions of the 7Li problem. But the interaction of 7Be with neutrons, which are also prevalent during the epoch of BBN, has not been directly measured thus far in the BBN window. Also, a hitherto unknown narrow n + 7Be resonance in 8Be at energies relevant for the BBN window has not yet been ruled out. A worldwide effort to measure the interaction of neutrons with 7Be is currently underway. We will discuss a measurement in the new neutron facility at the Soreq Applied Research Accelerator Facility (SARAF) in Israel, which covers the "BBN energy window" with T = 0.5 - 0.8 GK and kT = 43 - 72 keV. We measured a very small upper limit on the 7Be(n,a) reaction and made the first measurement of the 7Be(n,g1)8Be*(3.03 MeV) reaction (Ea = 1.5 MeV). Our measurements allow us to re-evaluate the so-designated "7Be(n,a) reaction rate", first derived by Wagoner in 1969 and still used in BBN calculations.
Our newly evaluated rate demonstrates that the last possible avenue (the n + 7Be interaction) for a standard nuclear physics solution of the 7Li problem does not solve the problem. We conclude that there is no standard nuclear physics solution to the "Primordial 7Li problem". Work supported by the U.S.-Israel Bi National Science Foundation, Award Number 2012098, and the U.S. Department of Energy, Award Number DE-FG02-94ER40870. Speaker: Prof. Moshe Gai (University of Connecticut) Study of Nucleon-Nucleon Short-Range Correlations, the EMC Effect, and their relation using Backwards-recoiling Protons The EMC effect is the observation that the structure of nucleons in the nuclear medium is modified from that in free space. Over 1000 papers have been written about the effect, but no explanation is commonly accepted. A linear correlation has recently been observed between the slopes of the EMC universal curve for 0.3 < x$_B$ < 0.7 in deep-inelastic (DIS) lepton scattering, d[F$_2$(A)/F$_2$(d)]/dx$_B$, and a$_2$(A/d), the per-nucleon inclusive quasi-elastic electron scattering cross-section ratio from SRC in nucleus A to that from deuterium for 1.4 < x$_B$ < 2. This correlation is surprising because of the vastly different energy and distance scales of the EMC effect and short-range nucleon-nucleon correlations (SRC). A recent explanation of this correlation is that the modification of F$_2$(A), the nucleon structure function in the nuclear medium, depends on the virtuality of the nucleons, which is high for short-range correlated nucleons, such that the EMC effect, to a large extent, is related to DIS from highly virtual, short-range correlated nucleons. We study this hypothesis by detecting EMC events "tagged" by high-momentum (k$_p$ > k$_F$) protons recoiling backward to the momentum transfer, $q$, which have been shown to be spectators from scattering off their short-range correlated partners. We shall present and discuss results of inclusive SRC, a$_2$(A/d), and "normal" EMC F$_2$(A)/F$_2$(d), as well as semi-exclusive SRC ("tagged" SRC) and semi-inclusive DIS ("tagged" EMC), {$\sigma$[A(e,e'p$_{recoil}$)X]/A}/{$\sigma$[d(e,e'p$_{recoil}$)X/2]}, for their respective x$_B$ ranges. Speaker: Shalev Gilad (Massachusetts Institute of Technology, Cambridge MA, 02139, USA.) Convener: Prof. Moshe Gai (University of Connecticut) Intermediate-energy Coulomb excitation of $^{72}$Ni Transition strengths in the Ni isotopes between N=40 and N=50 have recently been the subject of extensive experimental and theoretical investigations, aiming to understand whether the tensor forces act to reduce the Z=28 shell closure as the neutron g9/2 orbit is filled towards 78Ni. The effect of the Z=28 shell gap quenching and its evolution from 68Ni towards 78Ni would be reflected as an enhancement in the quadrupole transition strengths, compared with the seniority-scheme predictions for the neutron g9/2 subshell. In 70Ni, the large B(E2) value for the first 2+ excited state obtained by Coulomb excitation was interpreted as evidence of a large neutron-induced polarization of the proton core. This interpretation was reinforced by a later inelastic proton scattering experiment on 74Ni, in which a large deformation parameter was found, pointing to an enhanced quadrupole collectivity. In the last year, however, a much lower B(E2) value was deduced for 74Ni in a Coulomb excitation experiment.
In that work, both the experimental results and shell-model calculations using the residual LNPS interaction restore the normal core polarization picture in the neutron-rich Ni isotopic chain and suggest that the B(E2) strength predominantly corresponds to neutron excitation. The experimental transition strengths known from Coulomb excitation are limited to 70Ni and 74Ni so far. We report on preliminary results from the Coulomb excitation of 72Ni performed at the Radioactive Isotope Beam Factory at RIKEN, Japan. The BigRIPS fragment separator was used to select and purify a secondary beam of 72Ni at 183 MeV/u. Coulomb excitation of 72Ni was induced by impinging the beam on a 950 mg/cm2 Au target. In order to identify the reaction products after the target, the ZeroDegree spectrometer was used, while the gamma-rays were detected with DALI2, an array of 186 NaI(Tl) detectors. Speaker: Victor Modamio (University of Oslo, Norway.) Intermediate-energy Coulomb excitation of $^{77}$Cu The present experimental study on $^{77}$Cu has been carried out at the Radioactive Ion Beam Factory of the RIKEN Nishina Center. It will complement the previous study of $^{77}$Cu via beta decay of $^{77}$Ni, where the low-lying states in $^{77}$Cu were identified as particle-core excitations through comparison with large-scale Monte Carlo Shell Model calculations (E. Sahin et al., Phys. Rev. Lett. 118, 242502 (2017)). An almost unique way to characterize the states predicted as collective in the calculations is to measure the transition probabilities, i.e. the B(E2) strengths. Hence, the present Coulomb excitation experiment was performed to study the collective properties of low-lying states in $^{77}$Cu. The characterization of such states, and in particular the mixing of collective and single-particle configurations, will provide significant information on the shell structure close to $^{78}$Ni. A Coulomb excitation measurement of the states due to proton-core excitations in the case of the $^{77}$Cu nucleus will also provide an estimate of the collectivity of the 2$^+$ state in the even-even $^{76}$Ni "core". Exotic secondary beam particles were produced by induced fission of the $^{238}$U beam on a 3 mm thick $^{9}$Be target. The uranium beam was accelerated to an energy of 345 MeV/nucleon with an average beam intensity of 20 pnA. Fission products were selected and transported by the BigRIPS fragment separator. Coulomb excitation of the fragments was performed on a 900 mg/cm$^2$ thick $^{197}$Au target, mounted in front of the Zero Degree Spectrometer. The DALI2 NaI array was used to detect de-excitation gamma rays measured in coincidence with beam-like particles identified in the Zero Degree Spectrometer (ZDS). The experimental techniques and results will be discussed in the present contribution. Speaker: Dr Eda Sahin (University of Oslo, Norway.) The modeling of reaction cross sections in the production of theranostic radionuclides Recently, a new high-energy (up to 70 MeV) and high-intensity cyclotron has been installed at the INFN-LNL National Laboratories of Legnaro (Padova, Italy). This facility will soon be put into operation and one of its research goals will focus on the production of radioisotopes for medicine and, in particular, theranostics, in the context of the INFN LARAMED initiative. As a research group, we are presently involved in the measurement and modeling of proton-induced nuclear reactions for the production of theranostic isotopes such as 67Cu and 47Sc.
A series of measurements has already been performed thanks to a collaboration with the Arronax facility (Nantes, France) and is reported in this very same Symposium by G. Pupillo, L. Mou, et al. Here we review the theoretical reaction models in a study performed with various codes, with the aim of guiding, interpreting, and supporting the experiments on proton-induced reaction measurements. The understanding of reaction cross sections at low-intermediate energies is crucial in this context and requires knowledge of the nuclear models available in different codes, analytical or Monte Carlo, such as EMPIRE, TALYS, FLUKA and others. The use of nuclear reaction codes is very important to interpret the measured production cross-sections and to complete the measurements with estimates of the production of contaminants and/or stable isotopes that are difficult to measure, particularly if the measurements have to rely heavily on radiochemical techniques. We will present a general study of different model calculations to simulate isotope production, useful in measurements of proton-induced production reactions of the two theranostic radioisotopes 67Cu and 47Sc. Speaker: Luciano Canton (INFN - Sezione di Padova, Italy.)
Bayesian Gaussian regression analysis of malnutrition for children under five years of age in Ethiopia, EMDHS 2014 Seid Mohammed1 & Zeytu G. Asfaw2 Archives of Public Health volume 76, Article number: 21 (2018) The term malnutrition generally refers to both under-nutrition and over-nutrition, but this study uses the term to refer solely to a deficiency of nutrition. In Ethiopia, child malnutrition is one of the most serious public health problems, and its prevalence is among the highest in the world. The purpose of the present study was to identify the high-risk factors of malnutrition, to test different statistical models for childhood malnutrition and, thereafter, to select the preferable model through model comparison criteria. A Bayesian Gaussian regression model was used to analyze the effect of selected socioeconomic, demographic, health and environmental covariates on malnutrition among children under five years old. Inference was made using a Bayesian approach based on Markov Chain Monte Carlo (MCMC) simulation techniques in BayesX. The study found that variables such as sex of the child, preceding birth interval, age of the child, father's education level, source of water, mother's body mass index, sex of the head of household, mother's age at birth, wealth index, birth order, diarrhea, child's size at birth and duration of breast feeding showed significant effects on children's malnutrition in Ethiopia. The age of the child, mother's age at birth and mother's body mass index could also be important factors, with non-linear effects on child malnutrition in Ethiopia. Thus, the present study emphasizes special attention to variables such as sex of the child, preceding birth interval, father's education level, source of water, sex of the head of household, wealth index, birth order, diarrhea, child's size at birth, duration of breast feeding, age of the child, mother's age at birth and mother's body mass index in order to combat childhood malnutrition in developing countries. Malnutrition remains one of the most common causes of morbidity and mortality among children under five years of age throughout the world [1]. Worldwide, over 10 million children under the age of 5 years die every year from preventable and treatable illnesses despite effective health interventions. At least half of these deaths are caused by malnutrition. The 2011 Ethiopian DHS report shows that 29% of children under age five are underweight (have low weight-for-age), and 9% are severely underweight. The term "malnutrition" is sometimes also used synonymously with undernutrition. However, strictly speaking, malnutrition includes both undernutrition and overnutrition (Fig. 1). Undernutrition may be defined as insufficient intake of energy and nutrients to meet an individual's needs to maintain good health [2, 3]. Undernutrition is classified into type I and type II nutrient deficiencies [4]. In this paper, we are concerned with type II nutrient deficiencies. Type II nutrients include protein, energy, zinc, magnesium, potassium and sodium. When there is a deficiency in one of the type II or growth nutrients, the person stops growing [5]. Fig. 1: General framework for the study on malnutrition of children under five years old, EMDHS 2014 There are three kinds of type II undernutrition in children: stunting, underweight and wasting [6]. In nutrition, anthropometric data collected in the Ethiopian Mini Demographic and Health Survey (EMDHS) are used to calculate three indices of nutritional status: height-for-age, weight-for-age and weight-for-height.
These three indices are measured through Z-scores. A Z-score represents the number of standard deviations by which an individual child's anthropometric index differs from the median of the World Health Organization international growth reference population [7]. Weight-for-age (underweight) is a composite index of height-for-age (stunting) and weight-for-height (wasting). A child can be underweight for his/her age because he or she is stunted, wasted, or both. Weight-for-age is an overall indicator of a population's nutritional health. Children with weight-for-age Z-scores below minus two standard deviations from the median of the reference population are considered underweight. Furthermore, children with Z-scores below minus three standard deviations from the median of the reference population are considered to be severely underweight, while children with Z-scores between minus three and minus two standard deviations are considered to be moderately underweight [8]. The weight-for-age value for a child i is determined using a Z-score ($Z_i$), which is defined as: $$\begin{array}{*{20}l} Z_{i}= \frac{{AI}_{i} - MAI}{\sigma} \end{array} $$ where $AI_i$ represents the child's anthropometric indicator (weight at a certain age) for the ith child, i = 1, 2, ..., n, MAI is the median of the reference population and σ is the standard deviation (SD) of the reference population. The authors are interested in modeling the various possible factors and their contribution to the high prevalence of malnutrition problems. To expand the authors' understanding of the most common and consistent factors affecting the risk of childhood malnutrition, it is necessary to consider expected determinants of malnutrition using a Bayesian approach. Thus, the present study focuses on identifying the high-risk factors of malnutrition, testing different statistical models for childhood malnutrition and, thereafter, selecting the preferable model through model comparison criteria. Study sample and setting The data sets used in the present study were obtained from the Ethiopian Mini Demographic and Health Survey, EMDHS (2014). The survey drew a representative sample of women of reproductive age (15-49), by administering a questionnaire and making an anthropometric assessment of women and their children born within the previous five years [9]. For the 2014 EMDHS, the representative sample was approximately 4893 children aged less than 59 months with complete anthropometric measurements of underweight [8]. In the present study, data are presented for 3115 of these children, after excluding records with missing values for malnutrition (underweight) or its determinants. The causes of child malnutrition are multiple. Our analysis started with a large number of covariates, including a set of socio-economic, demographic, health and environmental characteristics that are considered the most important determinants of children's malnutrition, as suggested by previous studies ([10–12]). Response variable In our application, malnutrition (underweight) was considered as the response variable. The Z-score (in a standardized form) was used as a continuous variable to maximize the amount of information available in the data set. We have considered both continuous and categorical variables as expected determinants of child malnutrition.
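Before turning to the candidate determinants, the underweight classification implied by the Z-score definition above can be illustrated with a short sketch in Python. The reference median and standard deviation below are made-up placeholders for illustration only, not WHO reference values.

```python
# Illustrative only: reference median and SD are hypothetical placeholders.
def weight_for_age_z(weight_kg, ref_median_kg, ref_sd_kg):
    """Z-score: child's weight relative to the reference population at that age/sex."""
    return (weight_kg - ref_median_kg) / ref_sd_kg

def underweight_category(z):
    """Classification used in the text: < -3 severe, -3 to -2 moderate, otherwise not underweight."""
    if z < -3:
        return "severely underweight"
    if z < -2:
        return "moderately underweight"
    return "not underweight"

z = weight_for_age_z(weight_kg=9.0, ref_median_kg=11.5, ref_sd_kg=1.2)
print(round(z, 2), underweight_category(z))
```

A child whose weight-for-age Z-score falls below minus two is counted as underweight, and below minus three as severely underweight, exactly as in the classification applied to the EMDHS data.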
Continuous covariates:
Child's age in months (Chag)
Mother's age at birth (MAB)
Mother's body mass index (BMI)
Categorical covariates (factor coded):
Sex of child (Chsex: female or male)
Mother's current work status (MWsts: no or yes)
Mother's education level (MED: no formal education, primary, or secondary and above)
Father's education level (FED: no formal education, primary, or secondary and above)
Locality where the child lives (Residence: rural or urban)
Wealth index (Welnx: poor, medium or rich)
Duration of breast feeding (Brstfdg: never breast fed, breast fed but not currently breast feeding, or still breast feeding)
Sex of household head (HHsex: female or male)
Age of household head in years (HHage: 15-38, 39-63 or above 63)
Birth order (Border: 1-4, 5-9 or 10 and above)
Preceding birth interval in months (PresBint: less than 24, 24-47 or 48 and above)
Child's size at birth (Chsize: small, average or large)
Source of drinking water (Water: not improved or improved)
Toilet facility (Toilet: no facility or has facility)
Had diarrhea recently (Diarhea: no or yes)
Ever had vaccination (Vacination: no or yes)
Whether the mother took a drug for intestinal parasites during pregnancy (Drug: no or yes)
The statistical analysis employed in the present study is based on Bayesian approaches, which provide a flexible framework for realistically complex models. They allow the usual linear effects of categorical covariates and the nonlinear effects of continuous covariates to be analyzed within a unified semi-parametric Bayesian framework for modelling and inference. We first fit a Gaussian linear regression model to identify the variables with linear effects on children's malnutrition, then extend it to an additive Gaussian regression model to find the variables with nonlinear effects, and finally consider a semi-parametric regression model that accommodates both kinds of effect. Model comparison criteria are then used to choose the preferable model for the data analysis.
Gaussian linear regression model
Consider the normal linear regression model in which a response variable y is related to one or more explanatory variables. For a random sample of n individuals, the model becomes:
$$ \eta_{i} = W_{i}^{\prime}\nu + V_{i}^{\prime}\gamma $$
Here, $W_{i}=(w_{i1},\ldots,w_{ip})$ is a vector of continuous covariates, $\nu=(\nu_{1},\ldots,\nu_{p})$ is the vector of regression coefficients for the continuous covariates, $V_{i}=(v_{i1},\ldots,v_{ik})$ is a vector of categorical covariates, and $\gamma=(\gamma_{0},\gamma_{1},\ldots,\gamma_{k})$ is the vector of regression coefficients for the categorical covariates, with $p=3$, $k=17$ and $i=1,2,\ldots,3115$. This model can also be written as:
$$\eta_{i}=X_{i}\beta $$
where $X_{i}=(W_{i},V_{i})$ and $\beta=(\nu,\gamma)$.
Gaussian semi-parametric regression model
The assumption of a parametric linear predictor for assessing the influence of covariates on the response is rigid and restrictive in practical applications and in many statistically complex situations, since the functional forms cannot be predetermined a priori. Moreover, practical experience has shown that continuous covariates often have nonlinear effects. In our data set, the assumption of a strictly linear effect of the continuous covariates on the predictor may not be appropriate, i.e. some effects may be of unknown nonlinear form (for example, mother's age and mother's BMI), as suggested by Khaled [12] and Mohammed [13].
Hence, it is necessary to seek for a more flexible approach for estimating the continuous covariates by relaxing the parametric linear assumptions. This in turn allows continuous covariates to follow their true functional form. This can be done using an approach referred to as nonparametric regression model. To specify a non parametric regression model, an appropriate smooth function that contains the unknown regression function needs to be chosen. The semi-parametric regression model is obtained by extending model (1) as follows: $$ \eta_{i} = f_{1}(w_{i1})+....+f_{p}(w_{ip}) + V_{i}^{'}\gamma $$ Here, i=1,2,...,n and p=3f i (w i ) are smooth functions of the continuous covariates and \(V_{i}^{'}\gamma \) represents the strictly linear part of the predictor. It is based on the posterior distribution. Basic statistics like mean, mode, median, variance and quartiles are used to characterize the posterior distribution. The joint conjugate prior for (β,σ2) has the structure [14]: $$p\left(\beta,\sigma^{2}\right)= p\left(\beta|\sigma^{2}\right) p\left(\sigma^{2}\right) $$ Then, the posterior distribution is given by: $$ p\left(\beta,\sigma^{2}|y\right) \propto p\left(y|\beta,\sigma^{2}\right) p\left(\beta|\sigma^{2}\right)p\left(\sigma^{2}\right) $$ where the conditional prior for the parameter vector β is the multivariate Gaussian distribution with mean \(\hat {\beta }\) and covariance matrix σ2V β [14]: $$\beta|\sigma^{2} \sim N_{p}\left(\hat{\beta}, \sigma^{2} V_{\beta}\right) $$ and to obtain the prior for σ2, now we integrate β out of the joint posterior to get the marginal posterior for σ2 [14]: $$\pi\left(\sigma^{2}\right)= \int \pi\left(\beta,\sigma^{2}\right) d\beta $$ Then, the marginal posterior distribution of σ2 becomes inverted gamma, which is clearly $$IG(a, b) $$ In Bayesian approach, the vector of unknown parameters to be estimated is θ=(β,σ2). Therefore, we need to choose prior distributions for these parameters. If prior information is scarce, a large value for the variance parameter should be chosen, so that the prior distribution is flat. This type of prior is called non informative prior. On the other hand, if the analyst has considerable information about the coefficient β, he/she should choose a small value for the variance parameter. For our specific application in model (1), due to the absence of any prior knowledge we use a noncommittal or vague priors π(ν)∝constant and π(γ)∝constant for the parameters of fixed (linear) effects. For each regression coefficient, the prior distribution is a very broad normal distribution, with a mean of zero and a standard deviation that is extremely large relative to the scale of the data. The same assumption is made for the prior on the intercept. Finally, the prior on the standard deviation of the predicted value is merely a uniform distribution extending from zero to an extremely large value far beyond any realistic value for the scale of the data. In the specific analysis demonstrated in this section of our article, the data were standardized so that the prior would be broad regardless of the original scale of the data. The results were then simply algebraically transformed back to the original scale. For the standardized data, the prior on the intercept and regression coefficients was a normal distribution with mean at zero and large standard deviation (example; 1000). This normal distribution is virtually flat over the range of possible intercepts and regression coefficients for standardized data. 
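As a rough illustration of the vague-prior setup described above (broad zero-mean normal priors on the regression coefficients of standardized data together with a conjugate inverse-gamma prior on $\sigma^2$), the sketch below computes the closed-form posterior for $(\beta,\sigma^{2})$ in a Gaussian linear model. The covariates and simulated data are invented for the example; the actual analysis in the paper was carried out in BayesX on the EMDHS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy data standing in for the EMDHS design matrix (invented) ---
n = 500
age = rng.uniform(0, 59, n)                      # child age in months
bmi = rng.normal(21, 3, n)                       # mother's BMI
male = rng.integers(0, 2, n)                     # dummy-coded sex of child
X = np.column_stack([np.ones(n), age, bmi, male])
beta_true = np.array([-1.0, -0.01, 0.05, 0.1])
y = X @ beta_true + rng.normal(0, 1.0, n)        # Z-score response

# --- conjugate prior: beta | s2 ~ N(m0, s2*V0),  s2 ~ IG(a0, b0) ---
p = X.shape[1]
m0 = np.zeros(p)
V0 = np.eye(p) * 1000.0**2      # very broad, essentially flat normal prior
a0, b0 = 0.001, 0.001           # vague inverse-gamma hyperparameters

# --- closed-form Normal-Inverse-Gamma posterior update ---
V0_inv = np.linalg.inv(V0)
Vn = np.linalg.inv(V0_inv + X.T @ X)
mn = Vn @ (V0_inv @ m0 + X.T @ y)
an = a0 + n / 2.0
bn = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - mn @ (V0_inv + X.T @ X) @ mn)

sigma2_mean = bn / (an - 1.0)   # posterior mean of sigma^2 (valid for an > 1)
print("posterior mean of beta:", np.round(mn, 3))
print("posterior mean of sigma^2:", round(float(sigma2_mean), 3))
```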
To begin, we will choose a non-informative (vague) prior [14]. But in model (2), the parameters of interest f j is considered as random variables and have to be supplemented with appropriate prior assumptions. Several alternatives are available as smoothness priors for the unknown functions f j (w j ). Among the others, random walk priors [14], Bayesian Penalized-Splines [15], Bayesian smoothing splines [16] are the most commonly used. In the present study, the Bayesian smoothing spline was used by taking cubic P-spline with second order random walk priors [17, 18]. By defining an additional hyperprior for the variance parameters the amount of smoothness can be estimated simultaneously with the regression coefficients. We assign the conjugate prior for \(\tau ^{2}_{j}\) which is an inverse gamma prior with hyper parameters a j and b j , i.e \(\tau ^{2}_{j} \sim IG(a_{j}, b_{j})\). Common choices for a j and b j are a j = 1 and b j small, e.g. b=0.005orb j =0.0005. Alternatively we may set a j =b j , e.g. a j =b j =0.001. Based on experience from extensive simulation studies the researcher use a j =b j =0.001 as the standard choice. Since the results may considerably depend on the choice of a j and b j some sort of sensitivity analysis is strongly recommended. For instance, the models under consideration could be re-estimated with (a small) number of different choices for a j and b j . Model comparison and selection Model selection is the task of selecting the best model from a set of candidate models based on the performance of each model. The next question is why should we consider model selection? There are several reasons. First, people tend to believe or can understand simpler models with fewer predictors and less complicated structure. Second, one can certainly add more and more features into the model without screening and get better and better fit, till perfect fit, but the problem is over fitting. Note that the authors want to find the best-predicting model not the best fitting model. Model comparison is required for a diversity of activities, including variable selection in regression, determination of the number of components or the choice of parametric family. In frequentest approach, we can also perform the familiar statistical test via the anova function. As with frequentest analogues, Bayesian model comparison will not inform about which model is true, but rather about the preference for the model given the data and other information [14]. The models proposed in the present study are quite general and the model building process can be quite challenging. Currently, an automated procedure for Bayesian model selection is not available. However, a few recommendations are possible: Users should try to incorporate everything that is theoretically possible. Different Bayesian models could be compared via the Deviance Information Criterion (DIC) [19]. In the present study, AIC (Akaike Information Criterion) is used to compare the linear frequent and the linear Bayesian approach. Then, we compared the additive frequent and the Bayesian approach by using the GCV (Generalized Cross-Validation) score. The classical approach to model comparison involves a trade-off between how well the model fits the data and the level of complexity. Spiegelhalter et al. [19] devised a selection criterion which was based on Bayesian measures of model complexity and how good a fit the model is for the data. The measure of complexity which we adopted in this work is suggested by [19]. 
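The complexity-plus-fit criterion of Spiegelhalter et al. [19] referenced above is the deviance information criterion, $DIC = \bar{D} + p_D$ with $p_D = \bar{D} - D(\bar{\theta})$, discussed further below. The sketch shows how it can be computed from MCMC output for a Gaussian model; the "posterior samples" here are random placeholders, not draws from the actual BayesX fit.

```python
import numpy as np

def gaussian_deviance(y, mu, sigma2):
    """Deviance D(theta) = -2 * log-likelihood for a Gaussian model."""
    n = y.size
    return n * np.log(2 * np.pi * sigma2) + np.sum((y - mu) ** 2) / sigma2

def dic(y, X, beta_samples, sigma2_samples):
    """DIC = Dbar + pD, where pD = Dbar - D(posterior means)."""
    devs = np.array([
        gaussian_deviance(y, X @ b, s2)
        for b, s2 in zip(beta_samples, sigma2_samples)
    ])
    d_bar = devs.mean()
    d_hat = gaussian_deviance(y, X @ beta_samples.mean(axis=0),
                              sigma2_samples.mean())
    p_d = d_bar - d_hat
    return d_bar + p_d, p_d

# Placeholder data and "posterior samples" purely for demonstration.
rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(0, 1, n)
beta_samples = np.array([0.5, -0.3, 0.2]) + 0.05 * rng.normal(size=(2000, p))
sigma2_samples = 1.0 + 0.05 * rng.normal(size=2000)

dic_value, p_d = dic(y, X, beta_samples, sigma2_samples)
print(f"DIC = {dic_value:.1f}, effective number of parameters pD = {p_d:.2f}")
```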
A widely used statistic for comparing models in a Bayesian framework is the DIC. DIC is a hierarchical modeling generalization of the AIC (Akaike information criterion) and BIC (Bayesian information criterion). It is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been obtained by Markov chain Monte Carlo (MCMC) simulation. The idea is that models with smaller DIC should be preferred to models with large DIC, Fig. 2. Chart that approximating the Posterior marginal distribution through BayesX Descriptive analysis In the present study, the response variable malnutrition seems reasonable to assume at least approximately Gaussian (normal) distributed since it has a continuous Z-score value. Then it can be reasonably approximated by a Gaussian distribution that can be observed from the histogram plot in Fig. 3 and Additional file 1. In Fig. 4, the scatter plot of malnutrition vs each continuous covariates such as child age in months, mother's age at birth and mother's body mass index showed that there is no definite pattern of relationships respectively. To overcome this problem, we deployed a non parametric method to explore relationships among covariates (see Fig. 5). Histogram for underweight showing a normal distribution in under five years old children malnutrition, EMDHS 2014 Scatter plots that represent the relationship between each continuous covariates with under five years old children malnutrition, EMDHS 2014 The Non Linear Effects of Continuous Variables on under five years old children malnutrition, EMDHS 2014 The main purpose of the present descriptive analysis was to describe the variation among the categorical explanatory variables with regard to children malnutrition in Ethiopia through percentage value. Table 1 showed that the proportion of children's malnutrition decreases as the age of head of household, child's birth order and father's as well as mother's education level increases. The proportion of underweight children is approximately nine times higher for those born to uneducated father's than for those whose father's have more than secondary education (59.3% versus 6.7%). Children born from mothers in the poorest wealth quintile are more than twice as likely to be malnourished as children born from mothers in the richest wealth quintile (57.3% compared with 26.1%). The proportion of children malnutrition, as can be seen in Table 1, differs by type of place of residence: urban and rural. From Table 1 we observed that children reside in rural areas were more likely to be malnourished. On the other hand, children ever had vaccination were apparently more often affected by malnutrition than those never got vaccination but there was no consistent trend in the pattern of malnutrition with respect to children got vaccination. With regards to underweight children, female children are slightly more likely to be malnourished than male children (52% versus 48%). Table 1 Distribution of categorical variables vs under five years old children malnutrition, EMDHS 2014 Regarding Child's birth interval in month, the lowest prevalence of all child's underweight status was observed among children whose birth interval is less than 24 months (21%), Table 1. As opposed to the highest prevalence of all child's underweight status was recorded from children whose birth interval is between 24 and 47 (54.9%). 
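The kind of percentage breakdown reported in Table 1 can be reproduced with a simple cross-tabulation. The sketch below uses pandas on a small made-up data frame; the column names mimic the EMDHS variables but the records are not the survey data.

```python
import pandas as pd

# Invented example records; 'underweight' is 1 if the WAZ is below -2.
df = pd.DataFrame({
    "wealth_index": ["poor", "poor", "medium", "rich", "rich", "poor",
                     "medium", "rich", "poor", "medium"],
    "residence":    ["rural", "rural", "rural", "urban", "urban", "rural",
                     "urban", "urban", "rural", "rural"],
    "underweight":  [1, 1, 0, 0, 0, 1, 0, 0, 1, 1],
})

# Percentage of underweight children within each wealth category.
prevalence = (df.groupby("wealth_index")["underweight"]
                .mean()
                .mul(100)
                .round(1))
print(prevalence)

# Row-percentage cross-tabulation of residence vs. underweight status.
print(pd.crosstab(df["residence"], df["underweight"], normalize="index")
        .mul(100).round(1))
```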
Also, children reported as small or average at birth are much more likely to be malnourished (34.8% and 38.5%, respectively) than those reported as large at birth (26.7%). Inferential analysis In this section, the statistical procedure was used in combination with the BayesX stepwise selection method. This enabled us to select different covariates which contribute to malnutrition. Table 2 gives results for the fixed effects on the malnutrition of children under five years old in Ethiopia. The output gives posterior means, posterior median along with their standard deviations and 95% credible intervals. Table 2 Results of fixed effects estimation results of parametric coefficients Since the 95% credible interval do not include zero, father's education level, place of residence (rural), sex of the head of household (male), child's sex (female), source of water (not improved), diarrhea (had diarrhea), drug (never took drug for intestinal parasites during pregnancy), children wealth index, birth order, preceding birth interval, duration of breast feeding and size of child at birth were found statistically significant at 5% significance level. But, age of household was found statistically insignificant. Figure 5 displays nonlinear effects and estimated functions of mother's age at birth in year, child's age in month and mother's body mass index for under five years old child data. The shaded region represents twice the point wise asymptotic standard errors of the estimated curve. The panels in Fig. 5 show an interval marked as HDI, which stands for highest density interval. Points inside an HDI have higher probability density (credibility) than points outside the HDI, and the points inside the 95% HDI include 95% of the distribution. Thus, the 95% HDI includes the most credible values of the parameter. The 95% HDI is useful both as a summary of the distribution and as a decision tool. Specifically, the 95% HDI can be used to help decide which parameter values should be deemed not credible, that is, rejected. This decision process goes beyond probabilistic Bayesian inference, which generates the complete posterior distribution, not a discrete decision regarding which values can be accepted or rejected. One simple decision rule is that any value outside the 95% HDI is rejected. In particular, if we want to decide whether the regression coefficients are nonzero, we consider whether zero is included in the 95% HDI. In the present study, all continuous variables shows significant effect on underweight status of children under age of five years old. Here we can see in Fig. 5, the positive and negative linear effects on malnutrition at lower level of mother's body mass index and age of child respectively. And in addition, mother's age at birth seems have a slight positive linear effect on the malnutrition of children. Figure 5 showed the nonlinear effects of child's age in month shows that the children face a risk of suffering from malnutrition during the first 30 months of their life, and then it is slight thereafter. We can use Akaike Information Criterion (AIC), Generalized Cross-Validation (GCV) and Deviance Information Criterion (DIC) as a comparative measure to choose among different models, with lower being better [14]. The core point here is to select the better model with respect to their AIC value. Based on Table 3, it is evident that the Bayesian linear regression model has smaller AIC value than the frequent linear model. 
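The 95% HDI decision rule described above can be evaluated directly from posterior samples: find the narrowest interval containing 95% of the draws and check whether it contains zero. A minimal sketch follows, using simulated draws in place of the actual BayesX output.

```python
import numpy as np

def hdi(samples, cred_mass=0.95):
    """Narrowest interval containing `cred_mass` of the samples (unimodal case)."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    m = int(np.ceil(cred_mass * n))          # points inside the interval
    widths = s[m - 1:] - s[: n - m + 1]      # width of every candidate interval
    i = int(np.argmin(widths))
    return s[i], s[i + m - 1]

rng = np.random.default_rng(2)
# Stand-in posterior draws for one regression coefficient.
draws = rng.normal(loc=-0.15, scale=0.05, size=20_000)

lo, hi = hdi(draws, 0.95)
print(f"95% HDI = ({lo:.3f}, {hi:.3f})")
print("zero inside HDI -> coefficient not credibly different from 0"
      if lo <= 0 <= hi else
      "zero outside HDI -> coefficient credibly non-zero")
```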
Table 3 Cumulative information for all models
As the GCV values for the semi-parametric regression models in Table 3 illustrate, the Bayesian approach has a smaller value than the frequentist approach and is therefore the one to be selected. Next, we focused on the comparison of model 1 with model 3 and of model 2 with model 4, within the frequentist and Bayesian approaches respectively, since model 1 and model 3 both belong to the frequentist approach. Because the ANOVA function provides an automated comparison within the frequentist framework, we used it to compare model 1 and model 3, and model 3 was found to have the better fit. Since DIC is the criterion used for comparing models within the Bayesian approach, model 2 and model 4 can be compared using the DIC values reported in Table 3; models with smaller DIC values fit better and are preferable. Based on its performance, model 2 was chosen as the suitable model for identifying the main determinants of childhood malnutrition.
The study aimed at examining the major influential factors behind malnutrition in children under five years of age. The status of child malnutrition in the country was measured as underweight. Of the 3115 children studied, 31.7% were affected by malnutrition. For our study, the best-fitting model, the Bayesian Gaussian linear regression model, was chosen to identify the determinants of childhood malnutrition in the Ethiopian context. The findings revealed that covariates such as sex of the child, preceding birth interval, age of the child, father's education level, source of water (availability of an improved water source), mother's body mass index, sex of the household head, mother's age at birth, wealth index, birth order, diarrhea, child's size at birth and duration of breast feeding were statistically significant, whereas the age of the household head was statistically insignificant. The results also indicated that access to health care, represented here by whether the mother took a drug for intestinal parasites during pregnancy, had a significant effect on children's malnutrition status; taking the drug during pregnancy appears to protect against underweight in children. It is well known that breast feeding strongly influences the growth of a child, and this is confirmed by our study. Furthermore, our study revealed that diarrhea and the duration of breast feeding also contributed significantly to children's malnutrition, in line with the results recorded for Ethiopian Beta-Israel children [20]. Living conditions, together with the area of residence (urban or rural), can determine a child's malnutrition status. Problems such as poor access to health care, lack of adequate toilet facilities, lack of modern energy sources such as stoves and gas cylinders, and lack of awareness about treating the available water before using it for personal hygiene have been assumed to be risk factors for malnutrition [21]. Our study indicated that rural residence had a significant association with malnutrition (underweight), which matches the findings of earlier studies [22, 23]. The educational attainment of fathers was also significantly associated with malnutrition in our findings; similarly, a previous study [24] concluded that this factor is associated with childhood malnutrition.
A household's source of drinking water has been shown to be associated with malnutrition of a child in Nigeria (weight-for-age) in separate analysis [12], and that this study has also emphasized the significant of this factor of risk of malnutrition. More, it is associated with malnutrition of a child in that it impacted a risk of childhood diseases such as diarrhea, and is affective indirectly as a 'measure of wealth' and availability of water. This result quite consistent with some studies [11, 23, 25] but not persistent with other finding [26, 27]. Malnutrition in women is assessed using BMI. Parents with low BMI values are malnourished and are therefore likely to have undernourished and weak children. At the same time, very high BMI values indicate poor quality of the food and hence, may also imply weakness of the children [12]. The patterns of mother's body mass index (top of Fig. 5) showed that the higher impact of BMI through the interval between 15-25, indicates that there was poor quality of food for mothers. When the BMI of non pregnant women falls below the suggested cut-off point, which is less than 18.5\(\frac {kg}{m^{2}}\), malnutrition is indicated. Women who are underweight may have complications during childbirth and may deliver a child who can be underweight [6]. Our study finding indicated that there exist an association between the BMI of the mother and child's acquiring of malnutrition. This finding is of not surprise and it correspondence with the results found by others on studies analyzing the childhood malnutrition like [23–25]. Determinants that explain the cause of malnutrition in Ethiopian children community have been explored using different General additive models and Bayesian approaches. By using model comparison criteria, Gaussian linear model in Bayesian approaches was the suitable best fitted model. The findings of the present analysis indicated that sex of child, preceding birth interval, father's education level, source of water, head of household's sex, wealth index, birth order, diarrhea, child's size at birth and duration of breast feeding are important determinants of childhood malnutrition. The age of child, mother's age at birth and mother's body mass index could also be important factors with a non linear effect for the child's malnutrition in Ethiopia. Thus, a special emphasis need to be given on these factors to combat childhood malnutrition in developing countries. AIC: Akaike information criterion DIC: Deviance information criterion EMDHS: Ethiopian mini demographic health survey GCV: Generalized cross-validation MCMC: Markov chain Monte Carlo The State of the World's Children. A UNICEF REPORT. In: Childhood Under Threat: 2005. Maleta K. Epidemiology of Undernutrition in Malawi, chapter 8 in the Epidemiology of Malawi; 2006. Helen Keller International. The Nutritional Surveillance Project in Bangladesh in 1999 towards the Goals of the 1990 World Summit for Children. Dhaka: Helen Keller International, p. 2001. Golden MHN. Specific Deficiencies Versus Growth Failure, Type I and Type II Nutrients. J Nutr Environ Med. 1996; 6(3):301/308. Comrie-Thomson L, Davis J, Renzaho A, Toole M. Published by the Office of Development Effectiveness. Canberra: Australian Government Department of Foreign Affairs and Trade; March 2014. Children's UnitedNationsFund. Tracking Progress on Child and Maternal Nutrition, A survival and Development Priority. New York: UNICEF; 2009. World Health Organization. The world health report 2006, working together for health. 
In: WHO: 2006. Central Statistical Agency [Ethiopia]. Ethiopia Mini Demographic and Health Survey. Ethiopia: Addis Ababa; 2014. Muller O, Krawinkel M. Malnutrition and Health in Developing Countries; 2005. Belete A. Undernutritional Status of Children in Ethiopia. Unpublished Thesis. 2014. Dereje D. Statistical Analysis of Determinants of Nutritional Status of Children Under Age Five : A Case Study of Hawassa Zuria Wereda in Sidama Zone, SNNPR, Ethiopia. Unpublished Thesis. 2011. Khaled K. Analysis of Childhood Diseases and Malnutrition in Developing Countries of Africa. Verlag-Munich: Dr. Hut; 2007. Mohammad A. Gender Differentials in Mortality and Undernutrition in Pakistan. Peshawar (Pakistan); 2008. Congdon P. Bayesian Statistical Modelling. England: Wiley; 2001. de Onis M, Frongillo EA, Blössner M. Is Malnutrition Declining, An Analysis of Changes in Levels of Child Malnutrition since 1980. Bull World Health Organ. 2000; 78:1222–33. Hastie T, Tibshirani R. Generalized Additive Models. London: Chapman and Hall; 2000. Belitz C, Brezger A, Kneib T, Stefan L. BayesX Software for Bayesian Inference in Structured Additive Regression: Department of Statistics, Ludwig Maximilians University Munich; 2009, p. 1. Version, 2.0, https://www.unigoettingen.de/de/document/.../reisensburg2007.pdf. Fahrmeir L, Lang S. Bayesian Inference for Generalized Additive Mixed Models Based on Markov Random Field Priors' Applied Statistics (JRSS C). Roy Stat Soc. 2001; 50:201–20. Spiegelhalter D, Best N, Carlin B, Van der Line A. (2002); Bayesian Measures of Models Complexity. J R Stat Soc. 2002; 64:1–34. Asres G, Eidelman AI. Nutritional Assessment of Ethiopian Beta-Israel Children, a Cross Sectional Survey. Breastfeed Med. 2011; 6:171–6. National Rural Health Association. What's different about rural health care? 2012. retrived from http://www.ruralhealthweb.org/go/left/about-rural-health. Mulugeta A, Hagos F, Kruseman G, Linderhof V, Stoecker B, et al. Factors Contributing to Child Malnutrition in Tigray. Northern Ethiopia; 2005. https://www.downloads.hindawi.com/journals/jeph/2017/6373595.xml. Tesfaye M. Bayesian Approach to Identify Predictors of Children Nutritional Status in Ethiopia. 2009. https://www.etd.aau.edu.et/bitstream/123456789/11149/1/Tesfaye%20Mesele.pdf. Siddiqi NA, Haque N, Goni MA. Malnutrition of Under-Five Children: Evidence from Bangladesh. Asian J Med Sci. 2011; 2:113–8. USAID. Nutritional Status and Its Determinants in Southern Sudan. 2007. Khaled K. Child Malnutrition in Egypt Using Geoadditive Gaussian and Latent Variable Models; 2010. Sapkota VP, Gurung CK. Prevalence and Predictors of Underweight, Stunting and Wasting in Under-Five Children. Nepal Health Res Counc. 2009; 7:120–6. Authors acknowledge Ethiopian Central Statistical Agency (Addis Ababa) and School of Mathematical and Statistical Modeling, Hawassa University. This work was financially supported by the School of Mathematical and Statistical Modeling, Hawassa University. Availability of data and material The analysis in this study is based on data available from the Ethiopian Demographic and Health Survey. Ethics approval and consent to participant Department of Statistics, Aksum University, Aksum, Ethiopia Seid Mohammed School of Mathematical and Statistical Sciences, College of Natural and Computational Sciences, Hawassa University, Hawassa, Ethiopia Zeytu G. Asfaw Both authors SM and ZGA generated the idea, the corresponding author SM contributed in the data analysis and interpretation, ZGA contributed as an advisory. 
Both authors read and approved the final manuscript. Correspondence to Seid Mohammed. Additional file 1 Histogram from Z-score value for underweight showing a normal distribution in under five years old children malnutrition, EMDHS 2014. (DOCX 35.9 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Mohammed, S., Asfaw, Z. Bayesian Gaussian regression analysis of malnutrition for children under five years of age in Ethiopia, EMDHS 2014. Arch Public Health 76, 21 (2018). https://doi.org/10.1186/s13690-018-0264-6 Accepted: 14 February 2018 Gaussian linear model BayesX
CommonCrawl
ISI B.Stat & B.Math 2014 Objective Paper| Problems & Solutions Here, you will find all the questions of ISI Entrance Paper 2014 from Indian Statistical Institute's B.Stat Entrance. You will also get the solutions soon of all the previous year problems. The system of inequalities a-b^{2} \geq \frac{1}{4}, b-c^{2} \geq \frac{1}{4}, c-d^{2} \geq \frac{1}{4}, d-a^{2} \geq \frac{1}{4} \quad \text { has } (A) no solutions (B) exactly one solution (C) exactly two solutions (D) infinitely many solutions. Let $\log _{12} 18=a$. Then $\log {24} 16$ is equal to (A) $\frac{8-4 a}{5-a}$ (B) $\frac{1}{3+a}$ (C) $\frac{4 a-1}{2+3 a}$ (D) $\frac{8-4 a}{5+a}$ The number of solutions of the equation $\tan x+\sec x=2 \cos x,$ where $0 \leq x \leq \pi$ is (A) $0$; (B) $1$; (C) $2$; (D) $3$. Using only the digits $2,3$ and $9,$ how many six digit numbers can be formed which are divisible by $6$ ? (A) $41$; (B) $80$; (C) $81$; (D) $161$. What is the value of the following integral? \int_{\frac{1}{2014}}^{2014} \frac{\tan ^{-1} x}{x} d x (A) $\frac{\pi}{4} \log 2014$; (B) $\frac{\pi}{2} \log 2014$; (C) $\pi \log 2014$; (D) $\frac{1}{2} \log 2014$. A light ray travelling along the line $y=1,$ is refiected by a mirror placed along the line $x=2 y .$ The reflected ray travels along the line (A) $4 x-3 y=5$; (B) $3 x-4 y=2$; (C) $x-y=1$; (D) $2 x-3 y=1$. For a real number $x$, let $[x]$ denote the greatest integer less than or equal to $x$. Then the number of real solutions of $|2 x-[x]|=4$ is (D) $4$ . What is the ratio of the areas of the regular pentagons inscribed inside and circumscribed around a given circle? (A) $\cos 36^{\circ}$; (B) $\cos ^{2} 36^{\circ}$; (C) $\cos ^{2} 54^{\circ}$; (D) $\cos ^{2} 72^{\circ}$. Let $z_{1}, z_{2}$ be nonzero complex numbers satisfying $\left|z_{1}+z_{2}\right|=\left|z_{1}-z_{2}\right| .$ The circumcentre of the triangle with the points $z_{1}, z_{2},$ and the origin as its vertices is given by (A) $\frac{1}{2}\left(z_{1}-z_{2}\right)$; (B) $\frac{1}{3}\left(z_{1}+z_{2}\right)$; (C) $\frac{1}{2}\left(z_{1}+z_{2}\right)$; (D) $\frac{1}{3}\left(z_{1}-z_{2}\right)$. In how many ways can 20 identical chocolates be distributed among 8 students so that each student gets at least one chocolate and exactly two students get at least two chocolates each? (A) $308$; (B) $364$; (C) $616$; (D) $\left(\begin{array}{c} 8\\2 \end{array} \right) \left(\begin{array}{c} 17 \\ 7 \end{array}\right)$. Two vertices of a square lie on a circle of radius $r,$ and the other two vertices lie on a tangent to this circle. Then, each side of the square is (A) $\frac{3 r}{2}$; (B) $\frac{4 r}{3}$; (C) $\frac{6 r}{5}$; (D) $\frac{8 r}{5}$. Let $P$ be the set of all numbers obtained by multiplying five distinct integers between 1 and $100 .$ What is the largest integer $n$ such that $2^{n}$ divides at least one element of $P ?$ (D) $25$. Consider the function $f(x)=a x^{3}+b x^{2}+c x+d,$ where $a, b, c$ and $d$ are real numbers with $a>0$. If $f$ is strictly increasing, then the function $g(x)=$ $f^{\prime}(x)-f^{\prime \prime}(x)+f^{\prime \prime \prime}(x)$ is (A) zero for some $x \in \mathbb{R}$; (B) positive for all $x \in \mathbb{R}$; (C) negative for all $x \in \mathbb{R}$; (D) strictly increasing. Let $A$ be the set of all points $(h, k)$ such that the area of the triangle formed by $(h, k),(5,6)$ and (3,2) is 12 square units. 
What is the least possible length of a line segment joining (0,0) to a point in $A ?$ (A) $\frac{4}{\sqrt{5}}$; (B) $\frac{8}{\sqrt{5}}$; (C) $\frac{12}{\sqrt{5}}$; (D) $\frac{16}{\sqrt{5}}$. Let $P$=$\{a b c: a, b, c \text{ positive integers }, a^{2}+b^{2}=c^{2},\text { and }3 \text { divides } c\}$ . What is the largest integer $n$ such that $3^{n}$ divides every element of $P$? Let $A_{0}=\emptyset$ (the empty set). For each $i=1,2,3, \ldots,$ define the set $A_{i}=$ $A_{i-1} \cup\{A_{i-1}\} .$ The set $A_{3}$ is (A) $\emptyset$; (B) $\{\emptyset\}$; (C) ${\emptyset,\{\emptyset}\}$; (D) $\{\emptyset,\{\emptyset\},\{\emptyset,\{\emptyset\}\}\}$. Let $f(x)=\frac{1}{x-2} \cdot$ The graphs of the functions $f$ and $f^{-1}$ intersect at (A) $(1+\sqrt{2}, 1+\sqrt{2})$ and $(1-\sqrt{2}, 1-\sqrt{2})$; (B) $(1+\sqrt{2}, 1+\sqrt{2})$ and $\left(\sqrt{2},-1-\frac{1}{\sqrt{2}}\right)$; (C) $(1-\sqrt{2}, 1-\sqrt{2})$ and $\left(-\sqrt{2},-1+\frac{1}{\sqrt{2}}\right)$; (D) $\left(\sqrt{2},-1-\frac{1}{\sqrt{2}}\right)$ and $\left(-\sqrt{2},-1+\frac{1}{\sqrt{2}}\right)$. Let $N$ be a number such that whenever you take $N$ consecutive positive integers, at least one of them is coprime to $374 .$ What is the smallest possible value of $N ?$ Let $A_{1}, A_{2}, \ldots, A_{18}$ be the vertices of a regular polygon with 18 sides. How many of the triangles $\Delta A_{i} A_{j} A_{k}, 1 \leq i<j<k \leq 18,$ are isosceles but not equilateral? The limit $\lim _{x \rightarrow 0} \frac{\sin ^{\alpha} x}{x}$ exists only when (A) $\alpha \geq 1$; (B) $\alpha=1$; (C) $|\alpha| \leq 1$; (D) $\alpha$ is a positive integer. Consider the region $R=\{(x, y): x^{2}+y^{2} \leq 100, \sin (x+y)>0\} .$ What is the area of $R ?$ (A) $25 \pi$; (B) $50 \pi$; (D) $100 \pi-50$. Considcr a cyclic trapezium whose circumcentre is on one of the sides. If the ratio of the two parallel sides is $1: 4,$ what is the ratio of the sum of the two oblique sides to the longer parallel side? (A) $\sqrt{3}: \sqrt{2}$; (B) $3: 2$; (C) $\sqrt{2}: 1$; (D) $\sqrt{5}: \sqrt{3}$. Consider the function $f(x)=\{\log _{e}\left(\frac{4+\sqrt{2 x}}{x}\right)\}^{2}$ for $x>0 .$ Then (A) $f$ decreases upto some point and increases after that; (B) $f$ increases upto some point and decreases after that; (C) $f$ increases initially, then decreases and then again increases; (D) $f$ decreases initially, then increases and then again decreases. What is the number of ordered triplets $(a, b, c),$ where $a, b, c$ are positive integers (not necessarily distinct), such that $a b c=1000 ?$ Let $f:(0, \infty) \rightarrow(0, \infty)$ be a function differentiable at $3,$ and satisfying $f(3)=$ $3 f^{\prime}(3)>0 .$ Then the limit \lim _{x \rightarrow \infty}\left(\frac{f\left(3+\frac{3}{x}\right)}{f(3)}\right)^{x} (A) exists and is equal to $3$; (B) exists and is equal to $e$; (C) exists and is always equal to $f(3)$; (D) need not always exist. Let $z$ be a non-zero complex number such that $\left|z-\frac{1}{z}\right|=2$. What is the maximum value of $|z| ?$ (B) $\sqrt{2}$; (D) $1+\sqrt{2}$. The minimum value of |\sin x+\cos x+\tan x+cosec x+\sec x+\cot x| \text { is } (B) $2 \sqrt{2}-1$; (C) $2 \sqrt{2}+1$; For any function $f: X \rightarrow Y$ and any subset $A$ of $Y$, define f^{-1}(A)={x \in X: f(x) \in A} Let $A^{c}$ denote the complement of $A$ in $Y$. 
For subsets $A_{1}, A_{2}$ of $Y$, consider the following statements: (i) $f^{-1}\left(A_{1}^{c} \cap A_{2}^{c}\right)=\left(f^{-1}\left(A_{1}\right)\right)^{c} \cup\left(f^{-1}\left(A_{2}\right)\right)^{c}$ (ii) If $f^{-1}\left(A_{1}\right)=f^{-1}\left(A_{2}\right)$ then $A_{1}=A_{2}$. (A) both (i) and (ii) are always true; (B) (i) is always true, but (ii) may not always be true; (C) (ii) is always true, but (i) may not always be true; (D) neither (i) nor (ii) is always true. Let $f$ be a function such that $f^{\prime \prime}(x)$ exists, and $f^{\prime \prime}(x)>0$ for all $x \in[a, b] .$ For any point $c \in[a, b],$ let $A(c)$ denote the area of the region bounded by $y=f(x)$ the tangent to the graph of $f$ at $x=c$ and the lines $x=a$ and $x=b .$ Then (A) $A(c)$ attains its minimum at $c=\frac{1}{2}(a+b)$ for any such $f$; (B) $A(c)$ attains its maximum at $c=\frac{1}{2}(a+b)$ for any such $f$; (C) $A(c)$ attains its minimum at both $c=a$ and $c=b$ for any such $f$; (D) the points $c$ where $A(c)$ attains its minimum depend on $f$. In $\triangle A B C,$ the lines $B P, B Q$ trisect $\angle A B C$ and the lines $C M, C N$ trisect $\angle A C B .$ Let $B P$ and $C M$ intersect at $X$ and $B Q$ and $C N$ intersect at $Y .$ If $\angle A B C=45^{\circ}$ and $\angle A C B=75^{\circ},$ then $\angle B X Y$ is (A) $45^{\circ}$; (B) $47 \frac{1^{\circ}}{2}$; (C) $50^{\circ}$; (D) $55^{\circ}$. Some useful link ISI B.stat & B.math 2015 problems & solutions Our ISI & CMI Entrance program Barycenter I.S.I 2014 Problem 2- watch & learn
CommonCrawl
March 2019, 24(3): 1229-1242. doi: 10.3934/dcdsb.2019013
Attractors of multivalued semi-flows generated by solutions of optimal control problems
Olexiy V. Kapustyan 1, Pavlo O. Kasyanov 2, José Valero 3 and Mikhail Z. Zgurovsky 4
Taras Shevchenko National University of Kyiv, Kyiv, Ukraine
Institute for Applied System Analysis, National Technical University "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv, Ukraine
Centro de Investigación Operativa, Universidad Miguel Hernández de Elche, 03202-Elche, Alicante, Spain
National Technical University "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv, Ukraine
To Professor Valery Melnik, in Memoriam
Received February 2018 Revised June 2018 Published January 2019
Fund Project: The first two authors were partially supported by the State Fund for Fundamental Research of Ukraine under grants GP/F66/14921, GP/F78/187 and by the Grant of the National Academy of Sciences of Ukraine 2290/2018. The third author was partially supported by Spanish Ministry of Economy and Competitiveness and FEDER, projects MTM2015-63723-P and MTM2016-74921-P, and by Junta de Andalucía (Spain), project P12-FQM-1492
In this paper we study the dynamical system generated by the solutions of optimal control problems. We obtain suitable conditions under which such systems generate multivalued semiprocesses. We prove the existence of uniform attractors for the multivalued semiprocess generated by the solutions of controlled reaction-diffusion equations and study its properties.
Keywords: Multivalued semiflow, multivalued semiprocess, global attractor, optimal control, reaction-diffusion equations.
Mathematics Subject Classification: 35B40, 35B41, 35K55, 35Q30, 35Q35, 37B25, 58C06.
Citation: Olexiy V. Kapustyan, Pavlo O. Kasyanov, José Valero, Mikhail Z. Zgurovsky. Attractors of multivalued semi-flows generated by solutions of optimal control problems. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1229-1242. doi: 10.3934/dcdsb.2019013
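For readers unfamiliar with the setting, the following is a schematic numerical illustration (not from the paper) of a scalar controlled reaction-diffusion equation on an interval: different admissible controls produce different trajectories, which is why the solution map is multivalued, yet all trajectories eventually enter a common bounded set, the kind of behaviour an attractor describes. The nonlinearity, the control form and every parameter below are invented for the illustration.

```python
import numpy as np

# Explicit finite differences for u_t = nu*u_xx + u - u**3 + f(x)
# on (0, 1) with zero Dirichlet boundary conditions.
nu, nx, dt, t_end = 0.01, 65, 0.005, 50.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
rng = np.random.default_rng(3)

def simulate(u0, control_amplitude):
    u = u0.copy()
    # A crude "admissible control": a fixed bounded forcing profile.
    f = control_amplitude * np.sin(2 * np.pi * x)
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (nu * lap + u - u**3 + f)
        u[0] = u[-1] = 0.0          # Dirichlet boundary conditions
    return u

# Different initial data and different controls give different solutions,
# but all of them end up in a bounded region of L^2.
for amp in (0.0, 0.5, 1.0):
    u0 = rng.normal(0, 2.0, nx)
    u0[0] = u0[-1] = 0.0
    u_final = simulate(u0, amp)
    l2 = np.sqrt(dx * np.sum(u_final**2))
    print(f"control amplitude {amp:.1f}: ||u(T)||_L2 = {l2:.3f}")
```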
CommonCrawl
Fluid-mediated sources of granular temperature at finite Reynolds numbers Aaron M. Lattanzi, Vahid Tavanashad, Shankar Subramaniam, Jesse Capecelatro Journal: Journal of Fluid Mechanics / Volume 942 / 10 July 2022 Published online by Cambridge University Press: 13 May 2022, A7 We derive analytical solutions for hydrodynamic sources and sinks to granular temperature in moderately dense suspensions of elastic particles at finite Reynolds numbers. Modelling the neighbour-induced drag disturbances with a Langevin equation allows an exact solution for the joint fluctuating acceleration–velocity distribution function $P(v^{\prime },a^{\prime };t)$. Quadrant-conditioned covariance integrals of $P(v^{\prime },a^{\prime };t)$ yield the hydrodynamic source and sink that dictate the evolution of granular temperature that can be used in Eulerian two-fluid models. Analytical predictions agree with benchmark data from particle-resolved direct numerical simulations and show promise as a general theory from gas–solid to bubbly flows. Stochastic models for capturing dispersion in particle-laden flows Journal: Journal of Fluid Mechanics / Volume 903 / 25 November 2020 Published online by Cambridge University Press: 18 September 2020, A7 This study provides a detailed account of stochastic approaches that may be utilized in Eulerian–Lagrangian simulations to account for neighbour-induced drag force fluctuations. The frameworks examined here correspond to Langevin equations for the particle position (PL), particle velocity (VL) and fluctuating drag force (FL). Rigorous derivations of the particle velocity variance (granular temperature) and dispersion resulting from each method are presented. The solutions derived herein provide a basis for comparison with particle-resolved direct numerical simulation. The FL method allows for the most complex behaviour, enabling control of both the granular temperature and dispersion. A Stokes number $St_F$ is defined for the fluctuating force that relates the integral time scale of the force to the Stokes response time. Formal convergence of the FL scheme to the VL scheme is shown for $St_F \gg 1$. In the opposite limit, $St_F \ll 1$, the fluctuating drag forces are highly inertial and the FL scheme departs significantly from the VL scheme.
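To make the fluctuating-force (FL-type) construction concrete, here is a minimal one-dimensional sketch in which the fluctuating acceleration is an Ornstein-Uhlenbeck process with integral time scale tau_F, the velocity relaxes on the Stokes time tau_p, and St_F is taken as tau_F/tau_p. The model form, coefficients and the analytic stationary variance quoted in the code are generic results for this linear system, not the specific closures derived in either paper above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Generic FL-type model, 1-D, dimensionless units:
#   dv/dt = -v/tau_p + a                            (Stokes drag + fluctuating acceleration)
#   da    = -a/tau_F dt + sqrt(2*sig_a**2/tau_F) dW (OU fluctuating force)
tau_p = 1.0            # Stokes response time
sig_a = 0.5            # r.m.s. fluctuating acceleration
dt, n_steps, n_particles = 1e-3, 60_000, 1000

for St_F in (0.1, 1.0, 10.0):              # St_F = tau_F / tau_p
    tau_F = St_F * tau_p
    v = np.zeros(n_particles)
    a = sig_a * rng.normal(size=n_particles)    # start from the stationary OU state
    for _ in range(n_steps):
        dW = rng.normal(size=n_particles) * np.sqrt(dt)
        v += dt * (-v / tau_p + a)
        a += -a / tau_F * dt + np.sqrt(2 * sig_a**2 / tau_F) * dW
    T_num = v.var()                         # "granular temperature" (1-D velocity variance)
    # Stationary variance of this linear system (Lyapunov equation):
    T_exact = sig_a**2 * tau_p**2 * tau_F / (tau_p + tau_F)
    print(f"St_F={St_F:5.1f}  T_sim={T_num:.4f}  T_analytic={T_exact:.4f}")
```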
A thermal self-similarity condition on the local excess temperature developed by Tenneti et al. (Intl J. Heat Mass Transfer, vol. 58, 2013, pp. 471–479) is used to guarantee thermally fully developed flow. The average gas–solid heat transfer rate for this flow has been reported elsewhere by Sun et al. (Intl J. Heat Mass Transfer, vol. 86, 2015, pp. 898–913). Although the mean velocity field is homogeneous, the mean temperature field in this thermally fully developed flow is inhomogeneous in the streamwise coordinate. An exponential decay model for the average bulk fluid temperature is proposed. The pseudo-turbulent heat flux that is usually neglected in two-fluid models of the average fluid temperature equation is computed using PR-DNS data. It is found that the transport term in the average fluid temperature equation corresponding to the pseudo-turbulent heat flux is significant when compared to the average gas–solid heat transfer over a significant range of solid volume fraction and mean slip Reynolds number that was simulated. For this flow set-up a gradient-diffusion model for the pseudo-turbulent heat flux is found to perform well. The Péclet number dependence of the effective thermal diffusivity implied by this model is explained using a scaling analysis. Axial conduction in the fluid phase, which is often neglected in existing one-dimensional models, is also quantified. As expected, it is found to be important only for low Péclet number flows. Using the exponential decay model for the average bulk fluid temperature, a model for average axial conduction is developed that verifies standard assumptions in the literature. These models can be used in two-fluid simulations of heat transfer in fixed beds. A budget analysis of the mean fluid temperature equation provides insight into the variation of the relative magnitude of the various terms over the parameter space. Stochastic Lagrangian model for hydrodynamic acceleration of inertial particles in gas–solid suspensions Sudheer Tenneti, Mohammad Mehrabadi, Shankar Subramaniam Journal: Journal of Fluid Mechanics / Volume 788 / 10 February 2016 Published online by Cambridge University Press: 12 January 2016, pp. 695-729 The acceleration of an inertial particle in a gas–solid flow arises from the particle's interaction with the gas and from interparticle interactions such as collisions. Analytical treatments to derive a particle acceleration model are difficult outside the Stokes flow regime, but for moderate Reynolds numbers (based on the mean slip velocity between gas and particles) particle-resolved direct numerical simulation (PR-DNS) is a viable tool for model development. In this study, PR-DNS of freely-evolving gas–solid suspensions are performed using the particle-resolved uncontaminated-fluid reconcilable immersed-boundary method (PUReIBM) that has been extensively validated in previous studies. Analysis of the particle velocity variance (granular temperature) equation in statistically homogeneous gas–solid flow shows that a straightforward extension of a class of mean particle acceleration models (drag laws) to their corresponding instantaneous versions, by replacing the mean particle velocity with the instantaneous particle velocity, predicts a granular temperature that decays to zero, which is at variance with the steady particle granular temperature that is obtained from PR-DNS. 
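The abstract above mentions an exponential decay model for the average bulk fluid temperature. A minimal sketch of fitting such a model to streamwise temperature data is shown below; the "data" are synthetic and the decay length, noise level and units are invented, so this only illustrates the fitting step, not the paper's actual model or coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "bulk excess temperature" profile theta(x) = exp(-x/L) with noise,
# standing in for what PR-DNS would provide.
L_true = 2.5                            # decay length (e.g. in particle diameters)
x = np.linspace(0.0, 10.0, 40)          # streamwise positions
theta = np.exp(-x / L_true) * (1.0 + 0.03 * rng.normal(size=x.size))

# Fit theta(x) = exp(-x/L) by least squares on log(theta), which is linear in x.
slope, intercept = np.polyfit(x, np.log(theta), 1)
L_fit = -1.0 / slope
print(f"fitted decay length L = {L_fit:.2f} (true value {L_true})")
```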
Fluctuations in particle velocity and particle acceleration (and their correlation) are important because the particle acceleration–velocity covariance governs the evolution of the particle velocity variance (characterized by the particle granular temperature), which plays an important role in the prediction of the core annular structure in riser flows. The acceleration–velocity covariance arising from hydrodynamic forces can be decomposed into source and dissipation terms that appear in the granular temperature evolution equation, and these have already been quantified in the Stokes flow regime using a combination of kinetic theory closure and multipole expansion simulations. From PR-DNS data we show that the fluctuations in the particle acceleration that are aligned with fluctuations in the particle velocity give rise to a source term in the granular temperature evolution equation. This approach is used to quantify the hydrodynamic source and dissipation terms of granular temperature from PR-DNS results for freely-evolving gas–solid suspensions that are performed over a wide range of solid volume fraction ( $0.1\leqslant {\it\phi}\leqslant 0.4$), Reynolds number based on the slip velocity between the solid and the fluid phase ( $10\leqslant \mathit{Re}_{m}\leqslant 100$) and solid-to-fluid density ratio ( $100\leqslant {\it\rho}_{p}/{\it\rho}_{f}\leqslant 2000$). The straightforward extension of drag law models does not give rise to any source in the granular temperature due to hydrodynamic effects. This motivates the development of better Lagrangian particle acceleration models that can be used in Lagrangian–Eulerian formulations of gas–solid flow. It is found that a Langevin equation for the increment in the particle velocity reproduces PR-DNS results for the stationary particle velocity autocorrelation in freely-evolving suspensions. Based on the data obtained from the simulations, the functional dependence of the Langevin model coefficients on solid volume fraction, Reynolds number and solid-to-fluid density ratio is obtained. This new Lagrangian particle acceleration model reproduces the correct steady granular temperature and can also be adapted to gas–solid flow computations using Eulerian moment equations. Two-way coupled stochastic model for dispersion of inertial particles in turbulence Madhusudan G. Pai, Shankar Subramaniam Journal: Journal of Fluid Mechanics / Volume 700 / 10 June 2012 Published online by Cambridge University Press: 18 April 2012, pp. 29-62 Turbulent two-phase flows are characterized by the presence of multiple time and length scales. Of particular interest in flows with non-negligible interphase momentum coupling are the time scales associated with interphase turbulent kinetic energy transfer (TKE) and inertial particle dispersion. Point-particle direct numerical simulations (DNS) of homogeneous turbulent flows laden with sub-Kolmogorov size particles report that the time scale associated with the interphase TKE transfer behaves differently with Stokes number than the time scale associated with particle dispersion. Here, the Stokes number is defined as the ratio of the particle momentum response time scale to the Kolmogorov time scale of turbulence. In this study, we propose a two-way coupled stochastic model (CSM), which is a system of two coupled Langevin equations for the fluctuating velocities in each phase. 
The basis for the model is the Eulerian–Eulerian probability density function formalism for two-phase flows that was established in Pai & Subramaniam (J. Fluid Mech., vol. 628, 2009, pp. 181–228). This new model possesses the unique capability of simultaneously capturing the disparate dependence of the time scales associated with interphase TKE transfer and particle dispersion on Stokes number. This is ascertained by comparing predicted trends of statistics of turbulent kinetic energy and particle dispersion in both phases from CSM, for varying Stokes number and mass loading, with point-particle DNS datasets of homogeneous particle-laden flows. A comprehensive probability density function formalism for multiphase flows A theoretical foundation for two widely used statistical representations of multiphase flows, namely the Eulerian–Eulerian (EE) and Lagrangian–Eulerian (LE) representations, is established in the framework of the probability density function (p.d.f.) formalism. Consistency relationships between fundamental statistical quantities in the EE and LE representations are rigorously established. It is shown that fundamental quantities in the two statistical representations bear an exact relationship to each other only under conditions of spatial homogeneity. Transport equations for the probability densities in each statistical representation are derived. Exact governing equations for the mean mass, mean momentum and second moment of velocity corresponding to the two statistical representations are derived from these transport equations. In particular, for the EE representation, the p.d.f. formalism is shown to naturally lead to the widely used ensemble-averaged equations for two-phase flows. Galilean-invariant combinations of unclosed terms in the governing equations that need to be modelled are clearly identified. The correspondence between unclosed terms in each statistical representation is established. Hybrid EE–LE computations can benefit from this correspondence, which serves in consistently transferring information from one representation to the other. Advantages and limitations of each statistical representation are identified. The results of this work can also serve as a guiding framework for direct numerical simulations of two-phase flows, which can now be exploited to precisely quantify unclosed terms in the governing equations in the two statistical representations. Protein structure determination using a database of interatomic distance probabilities MICHAEL E. WALL, SHANKAR SUBRAMANIAM, GEORGE N. PHILLIPS Journal: Protein Science / Volume 8 / Issue 12 / December 1999 Published online by Cambridge University Press: 01 December 1999, pp. 2720-2727 Print publication: December 1999 The accelerated pace of genomic sequencing has increased the demand for structural models of gene products. Improved quantitative methods are needed to study the many systems (e.g., macromolecular assemblies) for which data are scarce. Here, we describe a new molecular dynamics method for protein structure determination and molecular modeling. An energy function, or database potential, is derived from distributions of interatomic distances obtained from a database of known structures. X-ray crystal structures are refined by molecular dynamics with the new energy function replacing the Van der Waals potential. 
Compared to standard methods, this method improved the atomic positions, interatomic distances, and side-chain dihedral angles of structures randomized to mimic the early stages of refinement. The greatest enhancement in side-chain placement was observed for groups that are characteristically buried. More accurate calculated model phases will follow from improved interatomic distances. Details usually seen only in high-resolution refinements were improved, as is shown by an R-factor analysis. The improvements were greatest when refinements were carried out using X-ray data truncated at 3.5 Å. The database potential should therefore be a valuable tool for determining X-ray structures, especially when only low-resolution data are available.
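The Journal of Fluid Mechanics abstracts listed above repeatedly invoke Langevin equations for neighbour-induced drag-force fluctuations and their effect on the particle velocity variance (granular temperature). As a generic illustration only, not the specific closures derived in those papers, the following Python sketch integrates an Ornstein–Uhlenbeck fluctuating acceleration driving a linearly damped particle velocity; all parameter names and values are arbitrary choices for the sketch, and the ratio convention for the force Stokes number is assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters (not values from the cited papers)
tau_p = 0.05     # particle (Stokes) response time [s]
tau_f = 0.01     # integral time scale of the fluctuating drag force [s]
sigma_a = 2.0    # rms fluctuating acceleration [m/s^2]
dt = 1.0e-4      # time step [s]
n_steps = 50_000
n_particles = 500

St_F = tau_f / tau_p          # force Stokes number (ratio convention assumed here)
v = np.zeros(n_particles)     # particle velocity fluctuations
a = np.zeros(n_particles)     # Ornstein-Uhlenbeck fluctuating acceleration
var_accum, n_samples = 0.0, 0

for step in range(n_steps):
    # Euler-Maruyama update of the OU fluctuating acceleration
    a += -a * dt / tau_f + sigma_a * np.sqrt(2.0 * dt / tau_f) * rng.standard_normal(n_particles)
    # Linear drag relaxation of the velocity, driven by the fluctuating acceleration
    v += (-v / tau_p + a) * dt
    if step > n_steps // 2:   # accumulate statistics after the initial transient
        var_accum += np.mean(v * v)
        n_samples += 1

granular_T = var_accum / n_samples
# For this linear model the stationary variance is sigma_a^2*tau_f*tau_p^2/(tau_f+tau_p) ~ 1.7e-3 m^2/s^2
print(f"St_F = {St_F:.2f}, simulated velocity variance = {granular_T:.2e} m^2/s^2")
```

For this linear model the stationary velocity variance has the closed form \(\sigma_a^2 \tau_f \tau_p^2/(\tau_f+\tau_p)\), so the printed value can be checked directly; varying \(\tau_f\) relative to \(\tau_p\) changes \(St_F\) and hence how inertial the fluctuating force is, which is the regime distinction drawn in the abstracts above.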
Full paper | Open | Published: 23 August 2018 Amphibole–melt disequilibrium in silicic melt of the Aso-4 caldera-forming eruption at Aso Volcano, SW Japan Hidemi Ishibashi1, Yukiko Suwa1, Masaya Miyoshi2, Atsushi Yasuda3 & Natsumi Hokanishi3 Earth, Planets and Space, volume 70, Article number: 137 (2018) The most recent and largest caldera-forming eruption occurred at ~ 90 ka at Aso Volcano, SW Japan, and is known as the "Aso-4 eruption." We performed chemical analyses of amphibole phenocrysts from Aso-4 pyroclasts collected from the initial and largest pyroclastic unit (4I-1) of the eruption to infer the composition–temperature–pressure conditions of the melt that crystallized amphibole phenocrysts. Each amphibole phenocryst is largely chemically homogeneous, but inter-grain chemical variation is observed. Geothermometry, geobarometry, and melt–SiO2 relationships based on amphibole single-phase compositions reveal that most amphibole phenocrysts were in equilibrium with hydrous melt comprising ~ 63–69 wt% SiO2 (\({\text{SiO}}_{2}^{\text{melt}}\)) at 910–950 °C, although several grains were crystallized from more mafic and higher-temperature melts (~ 57–60.5 wt% SiO2 and 965–980 °C). The amphibole temperatures are comparable with those previously estimated from two-pyroxene geothermometry, but are much higher than temperatures previously estimated from Fe–Ti oxide geothermometry. The estimated \({\text{SiO}}_{2}^{\text{melt}}\) contents are lower than that of the host melt in the 4I-1 pyroclasts. Chemical and thermal disequilibrium between the amphibole rims and the host melt, as well as intra-grain homogeneity and inter-grain heterogeneity of amphibole compositions, suggests that these amphiboles were incorporated into the host melt immediately prior to the caldera-forming eruption. Our results suggest that the amphibole phenocrysts, and perhaps some of the pyroxene and plagioclase phenocrysts, were derived from a chemically and thermally zoned crystal mush layer that had accumulated beneath the chamber of the host 4I-1 melt. Amphibole geobarometry indicates a crystallization depth of ~ 13.9 ± 3.5 km, which is consistent with the present-day magma chamber depth beneath the volcano as inferred from geophysical observations. The results suggest that the depth of the post-caldera magma plumbing system is strongly influenced by a relic magma reservoir related to a previous caldera-forming eruption. Caldera-forming eruptions are the most violent and catastrophic volcanic phenomena and have a significant influence on the surface environment on a global scale (e.g., Miller and Wark 2008; Druitt et al. 2012). Therefore, it is critical to understand the causes and processes of caldera-forming eruptions. To address this issue, quantifying the pre-eruptive physicochemical conditions of magma is imperative because this has a significant control on magma ascent processes and eruption dynamics (e.g., Wark et al. 2007; Ruprecht and Bachmann 2010). Textural and chemical analyses of phenocryst phases are valuable methods for deciphering pre-eruptive magmatic conditions (e.g., Ginibre et al. 2007; Cooper 2017). Igneous amphibole has been the focus of many recent studies (e.g., De Angelis et al. 2013; Shane and Smith 2013; Erdmann et al. 2014; Kiss et al. 2014).
Empirical equations have been proposed that reliably relate single-phase compositions of amphibole with the temperature (T) and pressure (P) conditions of crystallization and the SiO2 content of coexisting silicate melt (\({\text{SiO}}_{2}^{\text{melt}}\)) (Ridolfi et al. 2010; Ridolfi and Renzulli 2012; Putirka 2016). By using empirical equations, the T, P, and \({\text{SiO}}_{2}^{\text{melt}}\) related to amphibole crystallization can be estimated without assuming equilibrium between phases. In addition, the empirical equations enable recognition of disequilibrium between amphibole and other phases (Putirka 2016). Geothermobarometric methods based on multiple phases require a priori assumptions of equilibria between coexisting phases. Although disequilibrium between coexisting phases causes large errors in thermobarometric results, it is not easy to evaluate equilibrium/disequilibrium between coexisting phases. Empirical thermobarometric and chemometric equations based on a single-phase composition of amphibole are particularly valuable because they do not assume inter-phase equilibrium (Putirka 2016) and are hence useful in detecting phase disequilibrium between amphibole and other phases. The detection of phase disequilibrium in magma is essential in reconstructing the pre-eruptive processes of caldera-forming eruptions based on petrological information recorded in pyroclasts. Aso Volcano, the second-largest caldera volcano in Japan (25 × 18 km), was formed by four caldera-forming eruptions (Aso-1 to Aso-4) that occurred during the period 266–89 ka (Ono and Watanabe 1985). The Aso-4 eruption represents the most recent caldera-forming eruptive cycle and produced voluminous pyroclastic products (> 600 km3, VEI 7; Committee for Catalog of Quaternary Volcanoes in Japan 1999). The Aso-4 pyroclastic flow deposits are deposited throughout the central and northern parts of Kyushu Island and crossed the sea to reach the western part of Honshu Island (Ono and Watanabe 1985). The Aso-4 co-ignimbrite ash fall deposit (15 cm thick) is found at Abashiri in eastern Hokkaido, ~ 1700 km from Aso Volcano (Machida and Arai 2003). Among the four caldera-forming cycles, only the Aso-4 pyroclastic products contain amphibole phenocrysts (Watanabe 2001; Kaneko et al. 2007). On the basis of petrological and isotopic data, Hunter (1998) and Kaneko et al. (2007) showed that the Aso-4 magma is compositionally diverse (basaltic to rhyolitic) and was generated by several intra-crustal processes (magma mixing, crustal assimilation, and fractional crystallization) within a large-zoned magma chamber. However, geothermobarometry studies of the Aso-4 magma are limited, and hence, the detailed pre-eruptive magmatic processes and conditions remain debated. Application of amphibole single-phase-based thermobarometric and chemometric equations to Aso-4 amphibole phenocrysts provides important constraints on the magmatic processes and conditions immediately prior to the largest caldera-forming eruption at Aso Volcano. We investigated the chemical compositions of amphibole phenocrysts in the Aso-4 pyroclastic products and applied the amphibole-based empirical thermobarometer and melt–SiO2 meter to quantify the P–T–\({\text{SiO}}_{2}^{\text{melt}}\) conditions of amphibole crystallization. Our results indicate disequilibrium between the amphibole phenocrysts and the host groundmass melt of the Aso-4 silicic magma. 
Geology of Aso Volcano and the Aso-4 eruption Aso Volcano is located in the volcanic front on Kyushu Island, southwest Japan–Ryukyu arc (Fig. 1). The Aso Caldera was formed by four pyroclastic flow eruptions: Aso-1 (266 ± 14 ka), Aso-2 (141 ± 5 ka), Aso-3 (123 ± 6 ka), and Aso-4 (89 ± 7 ka) (Ono and Watanabe 1985; K–Ar ages from Matsumoto et al. 1991). The total volumes of these four pyroclastic flow deposits (including co-ignimbrite ash fall deposits) are estimated as 50, 50, > 150, and > 600 km3, respectively (Committee for Catalog of Quaternary Volcanoes in Japan 1999). The caldera walls consist of pre-Aso volcanic rocks that pre-date the caldera-forming Aso-1 eruption (Ono and Watanabe 1985; Watanabe et al. 1989). The Aso-1 to Aso-4 pyroclastic flow deposits were widely deposited over the preexisting pre-Aso volcanic rocks. Post-caldera volcanism commenced soon after the last caldera-forming eruption at 89 ka (Ono and Watanabe 1985; Miyoshi et al. 2012). Distribution of Aso-4 pyroclastic deposits and sampling sites (after Ono and Watanabe 1985; Kaneko et al. 2007). Solid lines show the distribution areas of Aso-4 pyroclastic flow units. Dashed lines show the estimated distribution areas of the units (Kaneko et al. 2007) The Aso-4 pyroclastic eruption is divided into two sub-cycles that are characterized by a progression from silicic to mafic magma (Ono and Watanabe 1983; Kaneko et al. 2007). On the basis of the lithological and petrological features of the pyroclastic flow deposits, the first (I) and second (II) sub-cycles of the Aso-4 eruption are divided into the following units (in stratigraphic order): 4I-1, 4I-2, and 4I-3; and 4II-1, 4II-2, and 4II-3 (Kaneko et al. 2007). The volumes of these pyroclastic flow units are estimated as follows: 4I-1 > 60 km3; 4I-2 > 4 km3; 4I-3 = ~ 0.5 km3; 4II-1 = ~ 10 km3; 4II-2 = 5 km3; and 4II-3 = ~ 0.2 km3 (Ono et al. 1977; Watanabe 1978; Kamata 1997; Kaneko et al. 2007). Kaneko et al. (2007) argued that the magma chambers that fed the Aso-4 sub-cycles were compositionally zoned from mafic (lower) to silicic (upper). Based on mineralogical and geochemical data, they suggested that the andesitic–dacitic products (the pyroclastic units other than 4I-1) are hybrids of the upper and lower magmas (Kaneko et al. 2007). The pre-eruptive depth of the Aso-4 silicic magma was estimated to be deeper than 3 km on the basis of the water content of the melt and the H2O solubility equation (Kaneko et al. 2007). Analyzed samples Two rock samples were collected from Unit 4I-1 of the Aso-4 pyroclastic deposits. Unit 4I-1 was formed from the initial silicic pyroclastic flow (~ 69 wt% bulk SiO2 and ~ 72 wt% melt SiO2) and was the most voluminous unit erupted during the Aso-4 cycle (> 60 km3; Kaneko et al. 2007). Total sum of the 4I-1 deposit and the co-ignimbrite ash fall deposit (> 400 km3) is almost 75% of the total volume of Aso-4 cycle. The dense rock equivalent volume of silicic magma which existed before the 4I-1 eruption is estimated to be more than 200 km3 (Kaneko et al. 2007). The samples were selected to investigate the magmatic processes and conditions in the upper part of the magma chamber immediately before the first sub-cycle of Aso-4. Sample 4I-1p is a pumice clast collected from a non-welded pumice flow deposit at Site-1 (Oyatsu, ~ 23 km west of the center of Aso Caldera). Sample 4I-1w was collected from the densely welded tuff at Site-2 (Takachiho-kyo, ~ 30 km southeast of the center of Aso Caldera; Fig. 1). 
The amphibole phenocrysts in these samples were fresh enough for chemical analysis. Polished thin sections were made for backscattered electron (BSE) imaging and analysis by electron probe microanalyzer (EPMA). BSE imaging of amphibole phenocrysts from the Aso-4 pyroclast samples was performed using a field emission EPMA (JEOL JXA-8530FPlus) at the Earthquake Research Institute, University of Tokyo, Japan (ERI). Major-element compositions were measured using an EPMA (JEOL-8800R) at ERI. For compositional analyses, the accelerating voltage, analytical current, and beam size conditions were 15 kV, 12 nA, and 1 μm, respectively. The stoichiometric calculations of the amphibole are based on the 13-cation model of Leake et al. (1997). The 1σ relative errors of the element measurements are < 0.7 rel.% for Si; < 1 rel.% for Al, Fe, Mg, and Ca; < 5 rel.% for Ti and Na; < 10 rel.% for K; and < 20 rel.% for Mn. The 1σ relative error for Cl is < 10 rel.% for Cl contents of ~ 0.1 wt% (Nagasaki et al. 2017). In addition, we used an SEM (Hitachi S-3400N) equipped with EDS (Oxford Instruments X-MAX50) at Shizuoka University for phase identification of mineral inclusions in amphiboles. Amphibole thermobarometry and chemometry Empirical equations have been proposed to quantitatively relate the physical and chemical conditions of a silicate melt in equilibrium with calcic amphibole (including T, P, and \({\text{SiO}}_{2}^{\text{melt}}\)) with the single-phase composition of amphibole (Ridolfi et al. 2010; Ridolfi and Renzulli 2012; Putirka 2016). The empirical equations were formulated based on high-P–T equilibrium experiments using natural volcanic rocks as starting materials. In the present study, the P–T–\({\text{SiO}}_{2}^{\text{melt}}\) conditions of silicate melts in equilibrium with amphiboles were estimated using the empirical equations based on amphibole composition. These conditions can be estimated using these equations without assuming equilibrium between amphibole and other phases, which enables us to recognize disequilibrium between these phases. Although several geothermobarometers have been proposed based on element partitioning between amphibole and plagioclase (e.g., Blundy and Holland 1990; Holland and Blundy 1994; Molina et al. 2015) and between amphibole and silicate melt (e.g., Putirka 2016), they all require the assumption of equilibrium between amphibole and other phases. These models therefore have increased errors associated with P–T estimates if amphibole and other phases are in disequilibrium. Therefore, we calculated the P–T–\({\text{SiO}}_{2}^{\text{melt}}\) conditions of silicate melts in equilibrium with amphiboles using the empirical equations based on amphibole composition. In this study, we used the pressure-independent thermometric equation proposed by Putirka (2016) (Eq. 1) to estimate amphibole crystallization temperatures:

$$T\,(^\circ\mathrm{C}) = 1781 - 132.74\left[\mathrm{Si^{Amp}}\right] + 116.6\left[\mathrm{Ti^{Amp}}\right] - 69.41\left[\mathrm{Fe}_{t}^{\mathrm{Amp}}\right] + 101.62\left[\mathrm{Na^{Amp}}\right],$$

where \(\mathrm{Si^{Amp}}\), \(\mathrm{Ti^{Amp}}\), \(\mathrm{Fe}_{t}^{\mathrm{Amp}}\), and \(\mathrm{Na^{Amp}}\) are the numbers of cations in amphibole calculated on the basis of 23 O atoms and \(\mathrm{Fe}_{t}^{\mathrm{Amp}}\) is the total number of Fe cations expressed as Fe2+.
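To make Eq. 1 concrete, the short Python sketch below evaluates the thermometer directly from amphibole cation numbers. The cation values in the example call are hypothetical, broadly magnesiohornblende-like numbers chosen for illustration only; they are not measurements from this study.

```python
def amphibole_temperature_putirka2016(si, ti, fe_t, na):
    """Pressure-independent amphibole thermometer of Putirka (2016), Eq. 1.

    Arguments are cation numbers calculated on the basis of 23 O atoms;
    fe_t is total Fe expressed as Fe2+. Returns temperature in degrees C
    (about +/- 30 degC at 1 sigma, as quoted in the text)."""
    return 1781.0 - 132.74 * si + 116.6 * ti - 69.41 * fe_t + 101.62 * na


# Hypothetical, magnesiohornblende-like cation numbers (illustration only)
print(amphibole_temperature_putirka2016(si=6.3, ti=0.30, fe_t=1.3, na=0.45))
# -> roughly 935 degC, i.e. within the range reported for most Aso-4 amphiboles
```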
Although Putirka (2016) proposed pressure-dependent and pressure-independent models, the temperature estimation errors are similar for the two models; i.e., the mean and standard deviation of the difference between estimated and experimental temperatures are 3.2 °C and 28 °C for the P-dependent model, respectively, and 0.4 °C and 30 °C for the P-independent model, respectively. Putirka (2016) argued that these models are the most accurate of the existing amphibole geothermometers. In this study, we use the P-independent model to estimate the crystallization temperatures of the analyzed amphiboles. Ridolfi and Renzulli (2012) and Putirka (2016) proposed empirical equations to calculate the SiO2 content of a hydrous silicate melt in equilibrium with amphibole (\({\text{SiO}}_{2}^{\text{melt}}\)) from the amphibole composition and temperature. The two models yield consistent results and reproduce \({\text{SiO}}_{2}^{\text{melt}}\) within ± 3.6 wt% (Putirka 2016). In this study, we used the model of Putirka (2016):

$$\mathrm{SiO}_{2}^{\mathrm{melt}} = 751.95 - 0.4\,T\,(^\circ\mathrm{C}) - 278{,}000/T\,(^\circ\mathrm{C}) - 9.184\left[\mathrm{Al}^{T\text{-}\mathrm{Amp}}\right],$$

where \(\mathrm{Al}^{T\text{-}\mathrm{Amp}}\) is the amount of Al in amphibole when calculated on the basis of 23 O atoms and \(\mathrm{SiO}_{2}^{\mathrm{melt}}\) is the SiO2 content (wt%) in hydrous silicate melt including the total mass of H2O. Ridolfi et al. (2010) and Ridolfi and Renzulli (2012) proposed empirical barometric equations based on amphibole composition. The model of Ridolfi et al. (2010) is a function of the Al content in amphibole, whereas the models of Ridolfi and Renzulli (2012) depend on both the Al content and the concentration of other elements in amphibole, as shown in the following equations (at pressure conditions of 130 MPa < P < 500 MPa):

$$\ln P\,(\mathrm{MPa}) = 38.723 - 2.6957\,\mathrm{Si} - 2.3565\,\mathrm{Ti} - 1.3006\,\mathrm{Al} - 2.7780\,\mathrm{Fe} - 2.4838\,\mathrm{Mg} - 0.6614\,\mathrm{Ca} - 0.2705\,\mathrm{Na} + 0.1117\,\mathrm{K}$$

$$P\,(\mathrm{MPa}) = 24023 - 1925.3\,\mathrm{Si} - 1720.6\,\mathrm{Ti} - 1478.5\,\mathrm{Al} - 1843.2\,\mathrm{Fe} - 1746.9\,\mathrm{Mg} - 158.28\,\mathrm{Ca} - 40.444\,\mathrm{Na} + 14.389\,\mathrm{K}.$$

In these equations, the cation numbers in amphibole are calculated based on the 13-cation model of Leake et al. (1997).
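Equations 2–4 can be evaluated in the same way. In the sketch below the coefficients are exactly those given above (with the final term of Eq. 3 read as the K cation, matching Eq. 4); the inputs are again hypothetical, broadly magnesiohornblende-like cation numbers, and the temperature is of the kind returned by the thermometer sketch. Note that Eq. 2 uses cations per 23 O atoms whereas Eqs. 3 and 4 use the 13-cation normalization, so a real application must normalize each input set accordingly.

```python
import math

def sio2_melt_putirka2016(temp_c, al_total):
    """Eq. 2: SiO2 (wt%) of the hydrous melt in equilibrium with amphibole,
    from temperature (degC) and total Al per 23 O atoms (+/- ~3.6 wt%)."""
    return 751.95 - 0.4 * temp_c - 278000.0 / temp_c - 9.184 * al_total

def pressure_ridolfi_renzulli2012(si, ti, al, fe, mg, ca, na, k):
    """Eqs. 3 and 4: two pressure estimates (MPa) from amphibole cations on
    the 13-cation basis of Leake et al. (1997); returns both and their mean."""
    p_ln = math.exp(38.723 - 2.6957 * si - 2.3565 * ti - 1.3006 * al
                    - 2.7780 * fe - 2.4838 * mg - 0.6614 * ca
                    - 0.2705 * na + 0.1117 * k)
    p_lin = (24023 - 1925.3 * si - 1720.6 * ti - 1478.5 * al - 1843.2 * fe
             - 1746.9 * mg - 158.28 * ca - 40.444 * na + 14.389 * k)
    return p_ln, p_lin, 0.5 * (p_ln + p_lin)

# Hypothetical cation numbers (illustration only, not data from this study)
sio2 = sio2_melt_putirka2016(temp_c=935.0, al_total=1.9)
p3, p4, p_avg = pressure_ridolfi_renzulli2012(si=6.3, ti=0.25, al=1.9, fe=1.3,
                                              mg=3.2, ca=1.8, na=0.45, k=0.2)
print(f"SiO2(melt) ~ {sio2:.1f} wt%; P ~ {p3:.0f} / {p4:.0f} MPa (mean {p_avg:.0f} MPa)")
# -> roughly 63 wt% SiO2 and pressures near 340-370 MPa for these illustrative inputs
```

For these illustrative inputs the two barometers agree to within a few tens of MPa, which is why averaging them, as described below, is a reasonable choice.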
The reliability of amphibole barometry is still debated. Erdmann et al. (2014) and Putirka (2016) quantitatively examined the reliability of the models of Ridolfi et al. (2010) and Ridolfi and Renzulli (2012) and proposed that the pressures estimated using the amphibole barometers were artifacts reflecting the bulk magma compositions and concluded that the models are untenable. However, Nagasaki et al. (2017) re-examined the reliabilities of Eqs. 3 and 4 using the compiled dataset of high-P–T equilibrium experiments selected by Putirka (2016) and showed that these equations can be used to roughly estimate the pressure within a 1σ error of ± 85 MPa under the limited conditions of \({\text{SiO}}_{2}^{\text{melt}}\) > 60 wt% and P = 150–500 MPa. Equations 3 and 4 yield similar pressure values, and we used the average value of both pressures, similar to the approach of Erdmann et al. (2014). We compared the glass and whole-rock compositions of Aso-4 pyroclasts (Kaneko et al. 2007) with experimental melts used for calibrations of Eqs. 1 and 2 by Putirka (2016) and for reliability evaluations of Eqs. 3 and 4 by Nagasaki et al. (2017) and confirmed that the compositions of Aso-4 magmas are within the range of the experimental melts (Additional file 2: Fig. S1); therefore, we concluded that applications of Eqs. 1–4 to Aso-4 pyroclasts are valid. Texture and compositions of amphibole phenocrysts Aso-4 ejecta are crystal-poor with a modal abundance of phenocrysts of 5–10 vol%, with the exception of scoriae with 10–37 vol% of phenocrysts (Kaneko et al. 2007). The studied 4I-1 pumice and welded tuff samples contain ~ 0.3 vol% and ~ 2.4 vol% amphibole phenocrysts, respectively. The pumice sample also contains phenocrysts of plagioclase (~ 2.9 vol%), pyroxene (~ 0.3 vol%), and Fe–Ti oxides (~ 0.3 vol%) and trace apatite. Fe–Ti oxides do not show chemical zoning within crystal rims (Additional file 2: Fig. S2). Crystal clots composed of amphibole and other phenocryst phases such as plagioclase, Fe–Ti oxides and apatite were observed. In addition, some amphiboles host these phases as inclusions. However, we did not find any glomeroporphyritic aggregates of pyroxene and amphibole crystals. Amphibole phenocrysts are euhedral with rare breakdown rims (Fig. 2a–c). The groundmass of the pumice sample is composed of vesicular glass and is microlite-free, indicating that crystallization did not occur during magma ascent within the conduit. BSE images of amphibole phenocrysts in samples: a 4I-1w and b, c 4I-1p. Abbreviations are as follows: amphibole (amp), plagioclase (plg), Fe–Ti oxide (ox), and apatite (ap) BSE images (Fig. 2) and EPMA analyses show that each amphibole phenocryst is almost chemically homogeneous, as described by Kaneko et al. (2007), although inter-grain chemical variation is observed (Fig. 3 and Additional file 1: Table S1 and Additional file 2: Fig. S3). Based on the 13-cation model of Leake et al. (1997), Ca and Ti contents are > 1.5 and < 0.5 apfu, respectively, and the [6]Al content is lower than the Fe3+ content in all analyzed amphiboles. In addition, the Si content, Mg# [= Mg/(Mg + Fe2+)], and (Na + K)A are 6.1–6.6, 0.75–0.9, and 0.4–0.5, respectively, where (Na + K)A is the total alkali content in the A site. According to the classification of Leake et al. (1997), the analyzed amphiboles are mostly magnesiohornblende or magnesiohastingsite, although rare actinolite and edenite amphiboles were also identified (Fig. 3). Mg# [= Mg/(Mg + Fe2+) on a molar basis] plotted against Si (atoms per formula unit) for amphibole phenocrysts. Diamonds and circles indicate amphibole phenocrysts in samples 4I-1w and 4I-1p, respectively. Black and gray symbols indicate the cores and rims of phenocrysts, respectively. Classification is based on Leake et al. (1997) [4]Al ranges from ~ 1.4 to 1.9 apfu (Additional file 2: Fig. S3), which covers the full range of [4]Al contents previously reported in amphiboles in Aso-4 pyroclasts (Kaneko et al. 2007).
Most amphiboles are characterized by a decrease in [6]Al content with a slight increase in [4]Al content, whereas amphiboles with [4]Al > ~ 1.7 apfu are enriched in [6]Al (Additional file 2: Fig. S3a). No obvious correlation is found between [4]Al content and (Na + K)A, Ti, and Ca contents (Additional file 2: Fig. S3b–d). In addition, [6]Al content shows a weak negative correlation with Ti and Fe3+ contents. These observations suggest that (1) the slight negative correlation between [4]Al and [6]Al is due to Ti-Tschermak and Al–Fe3+ substitutions; (2) the Al-enriched nature of amphiboles with [4]Al > ~ 1.7 apfu is due to Tschermak substitution; and (3) edenite and plagioclase substitutions are insignificant. Tschermak substitution is sensitive to both T and P conditions, whereas edenite and Ti-Tschermak substitutions are mostly temperature dependent (e.g., Anderson and Smith 1995; Shane and Smith 2013). Therefore, the crystallization P–T conditions of the Al-enriched amphiboles may be distinct from those of other amphiboles. The Al-enriched amphiboles are rare in the 4I-1 pyroclasts, whereas other pyroclastic units of the Aso-4 eruption, which were hybrid of the 4I-1 magma and the mafic magma erupted as the 4I-3 and 4II-3 Units, commonly include them; on the other hand, the other amphiboles (with no Al enrichment) are common in the 4I-1 magma (Kaneko et al. 2007). T–\({\text{SiO}}_{2}^{\text{melt}}\) conditions of amphibole crystallization Figure 4 and Additional file 2: Fig. S4 show the T–\({\text{SiO}}_{2}^{\text{melt}}\) conditions of amphibole crystallization, as estimated from amphibole compositions. The estimated T–\({\text{SiO}}_{2}^{\text{melt}}\) conditions are homogenous from core to rim of each amphibole phenocryst, although inter-grain variation is observed (Additional file 2: Fig. S4). The estimated \({\text{SiO}}_{2}^{\text{melt}}\) contents are ~ 57–69 wt%, with a gap at ~ 60.5–62.5 wt%. The lower \({\text{SiO}}_{2}^{\text{melt}}\) contents are estimated from Al-enriched amphiboles. The temperatures estimated from amphibole phenocrysts vary from ~ 900 to 980 °C, and the Al-enriched amphiboles yield higher temperatures (~ >960 °C). \({\text{SiO}}_{2}^{\text{melt}}\) contents increase as temperature decreases (Fig. 4). The T–\({\text{SiO}}_{2}^{\text{melt}}\) gap between Al-enriched and other amphiboles suggests that the two groups were crystallized under different conditions. We infer that the Al-enriched amphiboles were crystallized in andesitic melt formed by mixing between the 4I-1 melt and the mafic magma (erupted during the later stage of the Aso-4 eruption; Units 4I-3 and 4II-3) because similar Al-rich amphiboles are common in the pyroclastic units other than the 4I-1 (Kaneko et al. 2007). The other amphiboles (with no Al enrichment), commonly found in the 4I-1 magma, were not related to the mafic magma and were crystallized and accumulated within the main Aso-4 magma reservoir. Temperature conditions and SiO2 contents of hydrous melt in equilibrium with amphibole phenocrysts, as estimated using the methods of Putirka (2016). Diamonds and circles indicate amphibole phenocrysts in samples 4I-1w and 4I-1p, respectively. Error bars indicate 1σ errors for estimations of temperature (±30 °C) and \({\text{SiO}}_{2}^{\text{melt}}\) (± 3 wt%). Gray area shows the T–\({\text{SiO}}_{2}^{\text{melt}}\) conditions of the groundmass melt of the 4I-1 magma; the temperature is based on Fe–Ti oxide thermometry (from Kaneko et al. 2007). 
The double-headed arrow indicates the range estimated using two-pyroxene thermometry (from Kaneko et al. 2007). The calculated amphibole temperatures are much higher than those estimated from Fe–Ti oxide thermometry (810 °C–850 °C; Kaneko et al. 2007). Fe–Ti oxide thermometry is thought to reliably represent the pre-eruptive temperature of the 4I-1 melt (Kaneko et al. 2007) because the diffusion coefficient of Ti in Fe–Ti oxides is large (~ 10⁻¹⁵ m2/s at 830 °C; Freer and Hauptman 1978) and chemical zoning is absent at the rims of Fe–Ti oxide phenocrysts (Additional file 2: Fig. S2). The absence of chemical zoning in Fe–Ti oxide rims indicates that crystallization/re-equilibration did not occur during magma ascent, and hence, the Fe–Ti oxides record the pre-eruptive temperature of the magma. The large discrepancy between amphibole and Fe–Ti oxide temperatures, as well as the intra-grain homogeneity of amphibole phenocrysts, suggests that the amphiboles were in disequilibrium with the 4I-1 melt. Under hydrous conditions, the 4I-1 melt is characterized by ~ 70 wt% \({\text{SiO}}_{2}^{\text{melt}}\) and ~ 4.1–5.7 wt% H2O (Kaneko et al. 2007). Most of the amphiboles indicate \({\text{SiO}}_{2}^{\text{melt}}\) contents lower than that of the 4I-1 melt, suggesting that they were in disequilibrium with the 4I-1 melt. This is consistent with the implications of the temperature estimations. Although a small number of amphiboles yield \({\text{SiO}}_{2}^{\text{melt}}\) contents of coexisting melts within error of the 4I-1 melt, they show higher temperatures than that of the 4I-1 melt, suggesting that they were in disequilibrium with the melt. The interpretation of disequilibrium between the amphiboles and the host melt is consistent with the results of high-P–T equilibrium experiments on the Aso-4 magma (Ushioda et al. 2017). No amphibole with the composition of the 4I-1 melt crystallized in these experiments, even under a wide range of P–T–H2O conditions (Ushioda et al. 2017). In addition, a similar case of host melt–amphibole disequilibrium is reported from the 2015 rhyolitic magma from Cotopaxi, Ecuador (Martel et al. 2018). Origin of amphiboles Euhedral crystal shapes and the absence of overgrowth/re-equilibration rims around amphibole phenocrysts suggest that they were entrained into the 4I-1 melt immediately before the Aso-4 eruption. Previous experiments showed that several days are required for the formation of amphibole breakdown rims during decompression without overheating (Rutherford and Hill 1993; Brown and Gardner 2006); the experiments of Brown and Gardner (2006) were performed at 840 °C, which is similar to the Fe–Ti oxide temperature estimated for the 4I-1 magma (810–850 °C; Kaneko et al. 2007). If an amphibole crystal is overheated, less than an hour is required to form a breakdown rim (De Angelis et al. 2015). Therefore, we conclude that the amphiboles were rapidly transported (possibly less than several days) to the surface after entrainment into the host 4I-1 melt. Plagioclase, Fe–Ti oxide minerals, and apatite are often found in crystal clots with amphiboles and also as inclusions in amphiboles (Fig. 2). This suggests that some crystals of plagioclase, Fe–Ti oxide minerals, and apatite likely had the same origin as the amphiboles and were entrained in the 4I-1 melt immediately before the eruption.
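As a rough order-of-magnitude check on the diffusion arguments used here and in the closure-temperature discussion that follows, the characteristic diffusion distance of Ti in magnetite over day-scale times can be estimated as \(x \sim 2\sqrt{Dt}\) (the factor of 2 is a conventional choice for this kind of estimate, not a value taken from the cited studies):

$$x \sim 2\sqrt{Dt} \approx 2\sqrt{10^{-15}\ \mathrm{m^{2}\,s^{-1}} \times (0.9\text{–}8.6)\times 10^{5}\ \mathrm{s}} \approx 19\text{–}59\ \mathrm{\mu m} \qquad (t = 1\text{–}10\ \mathrm{days}),$$

i.e., several tens of micrometres, which is why day-scale cooling would be expected to leave resolvable Ti zoning at magnetite rims.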
The calculated amphibole temperatures (~ 910–950 °C) are similar to previous temperature estimates from two-pyroxene thermometry (900–960 °C; Kaneko et al. 2007), which are higher than Fe–Ti oxide temperatures (810–850 °C; Kaneko et al. 2007). The two-pyroxene temperatures are based on the compositions of ortho- and clinopyroxene pairs, and Kaneko et al. (2007) confirmed a state of quasi-equilibrium between paired pyroxenes based on Fe–Mg partitioning. Kaneko et al. (2007) interpreted that the discrepancy between temperatures estimated using the Fe–Ti oxide and two-pyroxene thermometers is due to differences in closure temperatures between the two methods. If this interpretation is valid, the host melt and Fe–Ti oxides must have cooled rapidly to inhibit the formation of amphibole overgrowth/re-equilibrated rims prior to magma ascent. High-P–T equilibrium experiments showed that crystallization of amphibole with crystal sizes >10 μm in silicic melt requires ~ 10 days or more (e.g., Holtz et al. 2005). Considering the diffusion coefficient of Ti in magnetite (~ 10⁻¹⁵ m2/s at 830 °C; Freer and Hauptman 1978), diffusion profiles of Ti with several tens of microns thickness should be observed in magnetite if the host melt was cooled during ~ 1–10 days. However, such chemical zoning is not observed. Therefore, the discrepancy between temperatures estimated using the Fe–Ti oxide and two-pyroxene thermometers cannot be attributed to differences in closure temperatures. Alternatively, we infer that some pyroxenes were also entrained into the 4I-1 melt immediately before eruption, as is the case for amphiboles, and the two-pyroxene temperatures record their thermal state before entrainment into the 4I-1 melt. However, the similar temperature estimates from amphibole and pyroxene thermometry (Fig. 5) do not necessarily indicate co-crystallization of these phases in the same melt. This is because no pyroxene crystals are found in contact with amphibole, although other mineral phases are found as amphibole-hosted inclusions in the 4I-1 pyroclasts (Fig. 2), implying that amphibole and pyroxene did not coexist. High-P–T equilibrium experiments for andesitic–dacitic systems (e.g., Costa et al. 2004; Scaillet and Evans 1999; Moore and Carmichael 1998) show that amphibole crystallizes without pyroxene under relatively H2O-rich conditions, whereas amphibole is unstable and two pyroxenes crystallize under H2O-poor conditions at temperatures of ca. 900–950 °C. Under intermediate conditions, amphibole and pyroxene can coexist. In addition, the thermodynamic stability of amphibole also depends on melt composition (e.g., Putirka 2016). Therefore, the absence of touching amphibole–pyroxene pairs, as well as the similar temperatures of amphibole and pyroxenes, suggests that amphibole and pyroxenes crystallized from discrete melts of different H2O contents and/or compositions, and both crystals were entrained in the 4I-1 melt immediately before the eruption. Schematic diagrams showing a the state before the Aso-4 event started. Amphibole- and Cpx-free crystal mush underlying the 4I-1 melt chamber and overlying two-pyroxene-bearing crystal mush with a local amphibole-bearing part; b the mafic magma started intrusion into the 4I-1 melt chamber. The initially intruded mafic magma was mixed with the 4I-1 melt to form a small volume of hybrid magma, which crystallized Al-enriched amphiboles.
In addition, the intrusion induced partial collapse of the crystal mush; c the hybrid magma and crystals derived from the collapsed mush were incorporated into and mixed with the 4I-1 melt, and the magma erupted to form the 4I-1 pyroclastic flow; d a large volume of the mafic magma intruded into and mixed with the residual 4I-1 melt to form abundant hybrid magma, which erupted in the later stage of the Aso-4 eruption; e after the Aso-4 eruption ended, low-density breccias and residual crystal mush acted as a density barrier for ascending mafic magma, resulting in the formation of a mafic magma reservoir at a depth similar to that of the Aso-4 magma reservoir; f in the post-caldera stage, mafic magmas derived from the new mafic magma reservoir have erupted. Previous studies have indicated that some phenocrysts in caldera-forming magma are derived from crystal mush surrounding the magma chamber (e.g., Cooper 2017; Ellis et al. 2014). This is also the case for the Aso-4 magma, as (1) amphibole, plagioclase, and Fe–Ti oxide minerals sometimes occur as crystal clots, (2) both amphiboles and some pyroxenes were in disequilibrium with the 4I-1 melt, (3) amphiboles and pyroxenes in the 4I-1 pyroclasts were crystallized in discrete melts of different H2O contents and/or compositions, and (4) amphiboles not enriched in Al are common in the 4I-1 magma. These lines of evidence indicate that they were incorporated into the 4I-1 melt from crystal mush that accumulated beneath the 4I-1 magma chamber. The crystal mush was formed by the accumulation of mineral phases that crystallized within the Aso-4 magma reservoir. The accumulated crystal mush formed a thermal conduction layer at the base of the reservoir that inhibited cooling from the base. As a result, the magma reservoir cooled only from the roof and the underlying crystal mush layer had a higher temperature than the overlying magma chamber (e.g., Nishimura 2012). Therefore, the temperatures of the underlying crystal-mush-derived amphiboles are higher than those of the overlying 4I-1 melt. We suggest that the 4I-1 magma chamber and the underlying amphibole-bearing crystal mush part were separated by a lower-T amphibole-free crystal mush layer. The interstitial melt in the amphibole-free crystal mush may have filled the thermal–compositional gap between the 4I-1 melt and the melts in equilibrium with amphiboles. In addition, the lower-T amphibole-free crystal mush layer was also clinopyroxene-free, because no ortho- and clinopyroxene pairs record temperatures < 900 °C (Kaneko et al. 2007). This is supported by the results of high-P–T equilibrium experiments for a hydrous dacitic system (Costa et al. 2004); the results show that (1) ortho- and clinopyroxenes coexist at relatively high temperatures (> 900 °C), (2) orthopyroxene crystallizes without both clinopyroxene and amphibole at relatively low temperatures (< 900 °C) and intermediate–low H2O contents (< ca. 4.5 wt% H2O in melt), and (3) amphibole crystallizes even at lower temperatures (< 900 °C) under H2O-rich conditions (> ca. 4.5 wt% H2O in melt), although the boundary T–H2O conditions may shift slightly with the change in melt composition. The absence of amphibole with T < 910 °C indicates that the amphibole-free mush layer was not H2O-rich. This is consistent with the estimation of the H2O content in the 4I-1 melt (ca. 4.1–5.7 wt%) based on plagioclase–melt element partitioning (Kaneko et al. 2007), which is near the lower limit of the amphibole stability field.
Interstitial melt in the underlying hotter crystal mush is inferred to be less evolved and less hydrous than the 4I-1 melt. Consequently, we think that the crystal mush underlying the 4I-1 melt chamber was under intermediate–low H2O content condition and included orthopyroxene at lower T and two-pyroxene assemblage was limited to higher-T conditions. In the crystal mush, H2O-rich part including amphibole without pyroxenes locally existed (Fig. 5a). Similar model of heterogeneous mushy reservoir was suggested for Central Snake River Plain ignimbrites (Ellis et al. 2014). In the case of the Aso-4 eruption, partial collapse of the crystal mush may have occurred immediately prior to eruption. This is because the 4I-1 pyroclasts include both amphiboles and pyroxenes stored in separated parts of the underlying crystal mush, and residence times of amphiboles in the 4I-1 melt were short. If we assume all phenocrysts were derived from the crystal mush and the crystal mush contained > 60 vol% of crystals, the volume ratio of collapsed crystal mush to the 4I-1 melt is estimated to be less than ~ 0.15. Therefore, the collapsed mush was limited to the part only near the 4I-1 melt chamber. We attribute this collapse to the injection of mafic magma prior to eruption, as inferred from the Al-enriched amphiboles of the 4I-1 pyroclasts, which are in equilibrium with higher-T andesitic melt (Fig. 4). Among the four caldera-forming ignimbrites, only the Aso-4 pyroclasts contain amphibole phenocrysts. We attribute this to the following: (1) The crystal mush had an amphibole-bearing part locally only before Aso-4 eruption and/or (2) because of the larger magnitude of the Aso-4 eruption compared to other ignimbrite eruptions, the collapsed mush was large volume enough to include a locally distributed amphibole-bearing part. Kaneko et al. (2007) suggested that following the 4I-1 eruption, the 4I-1 melt and the underlying mafic melt mixed to form less-silicic hybrid magma, which then erupted during the later Stages 4I-2 and 4II of the Aso-4 eruption (4II of Kaneko et al. 2007). We infer that the mafic magma was derived from another magma pocket located in the bottom part of the crystal mush or underlying Aso-4 magma reservoir (Fig. 5a). Initial intrusion of the mafic magma induced partial collapse of the crystal mush underlying the 4I-1 melt chamber. Furthermore, the intruding mafic magma rapidly mixed with the 4I-1 melt to form andesitic melt which crystallized the Al-rich amphiboles (Fig. 5b). Experimental study of Kouchi and Sunagawa (1985) showed that when basaltic and silicic melts are mixed, basaltic melt rapidly changes to andesitic composition by mixing with silicic melt, whereas silicic melt decreases its mass without significant compositional change. Therefore, at this time, abundance of the hybrid andesitic melt was very small relative to the 4I-1 melt and exerted negligible effect on the 4I-1 melt composition. Then, the small volume of the Al-enriched amphibole-bearing andesitic melt was mixed with the more voluminous 4I-1 melt and the crystal-mush-derived crystals; the magma erupted and formed the 4I-1 pyroclastic flow deposit (Fig. 5c). In the later stage of Aso-4 eruption, more abundant mafic magma was intruded and mixed with the 4I-1 melt to form the hybrid dacitic melts, which erupted as the 4I-2 and 4II Units as Kaneko et al. (2007) discussed (Fig. 5d). 
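A back-of-the-envelope reading of the volume-ratio estimate above (this is an assumption about how the number was obtained, not a calculation reproduced from the original study) combines the ~ 5–10 vol% phenocryst content of the erupted magma with the assumed ≥ 60 vol% crystallinity of the mush:

$$\frac{V_{\mathrm{mush}}}{V_{\mathrm{melt}}} \sim \frac{\phi_{\mathrm{phenocrysts}}}{\phi_{\mathrm{mush}}} \lesssim \frac{0.05\text{–}0.10}{0.60} \approx 0.08\text{–}0.17,$$

which is of the same order as the quoted upper bound of ~ 0.15.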
Pressure conditions of amphibole crystallization The amphibole barometer of Ridolfi and Renzulli (2012) indicates crystallization pressures of ~ 260–380 MPa for most amphiboles. This is within error of the pressures calculated for the Al-enriched amphiboles (~ 340–420 MPa; Fig. 6; based on the ±85 MPa (1σ) error for the barometer; Nagasaki et al. 2017). Pressure–temperature conditions estimated from amphibole compositions. Symbols are as shown in Fig. 4. Depth is calculated from pressure under an assumption of crust density ~ 2500 kg/m3 Assuming a lithostatic pressure–depth relation and a crust density of 2500 kg/m3, the pressure of ~ 340 ± 85 MPa corresponds to ~ 13.9 ± 3.5 km depth. The crust density is consistent with the reported rock types (granodiorite and psammitic schist) of borehole core samples from the basement of Aso caldera (Miyoshi et al. 2011a). The lithostatic pressure–depth is consistent with a previous estimation of the pre-eruptive depth of the Aso-4 silicic magma at > 3 km (based on the water content of the melt and the H2O solubility equation; Kaneko et al. 2007). The geophysical study of Abe et al. (2016) reported a low-velocity zone at a depth of 8–15 km beneath the eastern flank of the post-caldera central cones, a swarm of deep low-frequency earthquakes at 15–25 km depth, and a sill-like deformation source at 15.5 km depth. These results suggest that a magma reservoir exists at depths of 8–15 km, which is consistent with our amphibole barometry results (Fig. 6). This indicates that the Aso-4 magma reservoir was located at a depth similar to that of the present-day magma reservoir beneath Aso Volcano. Most of the magmas that erupted after the last caldera-forming eruption have basaltic–andesitic compositions (Miyoshi et al. 2012), and the present-day magma chamber is thought to contain mafic magma (Unglert et al. 2011). This suggests that the present magma plumbing system was formed after the Aso-4 eruption. The consistency between the depths of the Aso-4 and the present-day magma reservoirs may imply that the depth of the present-day magma reservoir is strongly influenced by the structure of the Aso-4 magma reservoir. We suggest that the relic of the Aso-4 magma reservoir created a density barrier that inhibited the ascent of newly supplied post-caldera mafic magmas, meaning that they resided at the depth (Fig. 5e, f). After caldera collapse, the collapsed magma reservoir was filled by low-density breccia (e.g., Lipman 1984), which is less dense than mafic magma and hence acted as a density barrier. The crystal mush layer that remained beneath the collapsed magma reservoir further inhibited the ascent of post-caldera mafic magmas. The Aso-4 ejecta are crystal-poor (~ 5–10 vol%; Kaneko et al. 2007; with the exception of scoriae); consequently, a large part of the crystal mush layer is thought to have remained. The crystal mush consisted of andesitic–dacitic melt, plagioclase, and mafic minerals, with density similar to, or lower than, that of mafic melt. As a result, sills of dense mafic magma formed at a similar depth to that of the 4I-1 magma chamber. The post-caldera mafic magmas may have chemically interacted with the remaining crystal mush, although the Aso-4 and post-caldera basaltic–andesitic magmas have similar isotopic compositions (Miyoshi et al. 2011b); consequently, their interaction is difficult to detect using isotopic data. The formation and differentiation of mafic sills at depth also created additional barriers for the ascending mafic magma. 
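The depth conversion quoted at the start of this section follows directly from the lithostatic relation \(P = \rho g z\); as a quick check with the stated crust density and \(g \approx 9.8\ \mathrm{m\,s^{-2}}\):

$$z = \frac{P}{\rho g} = \frac{340 \times 10^{6}\ \mathrm{Pa}}{2500\ \mathrm{kg\,m^{-3}} \times 9.8\ \mathrm{m\,s^{-2}}} \approx 1.39 \times 10^{4}\ \mathrm{m} \approx 13.9\ \mathrm{km},$$

and the ± 85 MPa (1σ) barometer uncertainty maps onto roughly ± 3.5 km in the same way.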
Our results indicate that the post-caldera magma plumbing system was strongly influenced by the collapsed silicic magma reservoir that remained after the Aso-4 caldera-forming eruption. Chemical compositions of amphibole phenocrysts in Aso-4I pyroclasts were investigated to clarify their crystallization conditions and pre-eruptive magmatic processes. Our results suggest the following: (1) Most of the amphibole phenocrysts coexisted with silicate melt with 66–72 wt% SiO2 and temperatures of 910–950 °C, whereas some amphiboles crystallized from more mafic and higher-T melt; (2) the amphibole phenocrysts are in thermal and chemical disequilibrium with the host 4I-1 melt, indicating that they were incorporated into the melt immediately prior to eruption; and (3) amphibole phenocrysts crystallized at a depth of ~ 13.9 ± 3.5 km, which coincides with the depth of the present low-velocity zone beneath the volcano, implying that the depth of the post-caldera magma plumbing system is strongly influenced by a relic collapsed magma reservoir related to the most recent caldera-forming eruption. In future study, compositional analyses of minerals and melt inclusions coexisting with amphiboles in the 4I-1 and other units of Aso-4 eruption should be required to evaluate and brush up the model of Aso-4 magma reservoir process. Abe Y, Ohkura T, Shibutani T, Hirahara K, Yoshikawam S, Inoue H (2016) Low-velocity zones in the crust beneath Aso caldera, Kyushu, Japan, derived from receiver function analyses. J Geophys Res Solid Earth. https://doi.org/10.1002/2016JB013686 Anderson JL, Smith DR (1995) The effects of temperature and fO2 on the Al-in-hornblend barometer. Am Miner 80:549–559 Blundy JD, Holland TJB (1990) Calcic amphibole equilibria and a new amphibole-plagioclase geothermometer. Contrib Miner Petrol 104(2):208–224 Brown LB, Gardner JE (2006) The influence of magma ascent path on the texture, mineralogy, and formation of hornblende reaction rims. Earth Planet Sci Lett 246:161–176 Committee for Catalog of Quaternary Volcanoes in Japan (1999) Catalog of quaternary volcanoes in Japan. Ver. 1.0. The Volcanological Society of Japan, CD-ROM Cooper KM (2017) What does a magma reservoir look like? The "Crystal's-Eye" view. Elements 13:23–28 Costa F, Scaillet B, Pichavant M (2004) Petrological and experimental constraints on the pre-eruption conditions of Holocene dacite from Volcán San Pedro (36°S, Chilean Andes) and the importance of sulphur in silicic subduction-related magmas. J Petrol 45:855–881 De Angelis SH, Larsen J, Coombs M (2013) Pre-eruptive magmatic conditions at Augustine volcano, Alaska, 2006: evidence from amphibole geochemistry and textures. J Petrol 54:1939–1961 De Angelis SH, Larsen J, Coombs M, Dunn A, Hayden L (2015) Amphibole reaction rims as a record of pre-eruptive magmatic heating: an experimental approach. Earth Planet Sci Lett 426:235–245 Druitt TH, Costa F, Deloule E, Dungan M, Scaillet B (2012) Decadal to monthly timescales of magma transfer and reservoir growth at a caldera volcano. Nature. https://doi.org/10.1038/nature10706 Ellis BS, Bachmann O, Wolff JA (2014) Cumulate fragments in silicic ignimbrites: the case of the Snake River Plain. Geology 42:431–433 Erdmann S, Martel C, Pichavant M, Kushnir A (2014) Amphibole as an archivist of magmatic crystallization conditions: problems, potential, and implications for inferring magma storage prior to the paroxysmal 2010 eruption of Mount Merapi, Indonesia. 
Contrib Mineral Petrol 167:1016 Freer R, Hauptman Z (1978) An experimental study of magnetite-ilmenite interdiffusion. Phys Earth Planet Inter 16:223–231 Ginibre C, Wörner G, Kronz A (2007) Crystal zoning as an archive for magma evolution. Elements 3:261–266 Holland T, Blundy J (1994) Non-ideal interactions in calcic amphiboles and their bearing on amphiboleplagioclase thermometry. Contrib Miner Petrol 116(4):433–447 Holtz F, Sato H, Lewis J, Behrens H, Nakada S (2005) Experimental petrology of the 1991-1995 Unzen dacite, Japan. Part I: phase relations, phase composition and pre-eruptive conditions. J Petrol 46:319–337 Hunter AG (1998) Intracrustal controls on the coexistence of tholeiitic and calc-alkaline magma series at Aso Volcano, SW Japan. J Petrol 39:1255–1284 Kamata H (1997) Geology of the Miyanoharu district. With geological sheet map at 1:50000, Geological Survey of Japan, 127p (in Japanese with English abstract 5p) Kaneko K, Kamata H, Koyaguchi T, Yoshikawa M, Furukawa K (2007) Repeated large-scale eruptions from a single compositionally stratified magma chamber: an example from Aso volcano, Southwest Japan. J Volcanol Geotherm Res 167:160–180 Kiss B, Harangi S, Ntaflos T, Mason PRD, Pál-Molnár E (2014) amphibole perspective to unravel pre-eruptive processes and conditions in volcanic plumbing systems beneath intermediate arc volcanoes: a case study from Ciomadul volcano (SE Carpathians). Contrib Mineral Petrol 167:986 Kouchi A, Sunagawa I (1985) A model for mixing basaltic and dacitic magmas as deduced from experimental data. Contrib Mineral Petrol 89:17–23 Leake BE, Woolley AR, Arps CES, Birch WD, Gilbert MC, Grice JD, Hawthorne FC, Kato A, Kisch HJ, Krivovichev VG et al (1997) Nomenclature of amphiboles: report of the subcommittee on amphiboles of the International Mineralogical Association, Commission on New Minerals and Mineral Names. Can Mineral 35:219–246 Lipman P (1984) The roots of ash flow calderas in western north America: windows into the tops of granitic batholiths. J Geophys Res 89:8801–8841 Machida H, Arai F (2003) Atlas of tephra in and around Japan. University of Tokyo Press, Tokyo, p 336 (in Japanese) Martel C, Andújar J, Mothes P, Scaillet B, Pichavant M, Molina I (2018) Storage conditions of the mafic and silicic magmas at Cotopaxi, Ecuador. J Volcanol Geotherm Res 354:74–86 Matsumoto A, Uto K, Ono K, Watanabe K (1991) K-Ar age determinations for Aso volcanic rocks-concordance with volcanostratigraphy and application to pyroclastic flows. Program Abstr Volcanol Soc Jpn 2:73 (in Japanese) Miller CF, Wark DA (2008) Supervolcanoes and their explosive supereruptions. Elements 4:11–15 Miyoshi M, Yuguchi T, Shinmura T, Mori Y, Arakawa Y, Toyohara F (2011a) Petrological characteristics and K-Ar age of borehole core samples of basement rocks from the northwestern caldera floor of Aso, central Kyushu. J Geol Soc Jpn 117:585–590 Miyoshi M, Shibata T, Yoshikawa M, Sano T, Shinmura T, Hasenaka T (2011b) Genetic relationship between post-caldera and caldera-forming magmas from Aso volcano, SW Japan: constraints from Sr isotope and trace element compositions. J Mineral Petrol Sci 106:114–119 Miyoshi M, Sumino H, Myabuchi Y, Sinmura T, Mori Y, Hasenaka T, Furukawa K, Uno K, Nagao K (2012) K-Ar ages determined for post-caldera volcanic products from Aso volcano, central Kyushu, Japan. 
J Volcanol Geotherm Res 229–230:64–73 Molina JF, Moreno JA, Castro A, Rodríguez C, Fershtater GB (2015) Calcic amphibole thermobarometry in etamorphic and igneous rocks: new calibrations based on plagioclase/amphibole Al-Si partitioning and amphibole/liquid Mg partitioning. Lithos 232:286–305 Moore G, Carmichael ISE (1998) The hydrous phase equilibria (to 3 kbar) of an andesite and basaltic andesite from western Mexico: constraints on water content and conditions of phenocryst growth. Contrib Mineral Petrol 130:304–319 Nagasaki S, Ishibashi H, Suwa Y, Yasuda A, Hokanishi N, Ohkura T, Takemura K (2017) Magma reservoir conditions beneath Tsurumi volcano, SW Japan: evidence from amphibole thermobarometry and seismicity. Lithos 278–281:153–165 Nishimura K (2012) Acceleration and deceleration of crystal settling in convecting silicic magma chambers. J Toyo Univ Nat Sci 56:19–30 (in Japanese with English abstract) Ono K, Watanabe K (1983) Aso caldera. Earth Mon 5:73–82 (in Japanese) Ono K, Watanabe K (1985) Geological map of Aso volcano (1:50,000). Geological map of volcanoes 4, Geological Survey of Japan (in Japanese with English abstract) Ono K, Matsumoto Y, Miyahisa M, Teraoka Y, Kambe N (1977) Geology of the Taketa district. With geological sheet map at 1:50000, Geological Survey of Japan, p 157 (in Japanese with English abstract 8p) Putirka K (2016) Amphibole thermometers and barometers for igneous systems and some implications for eruption mechanisms of felsic magmas at arc volcanoes. Am Mineral 101:851–858 Ridolfi F, Renzulli A (2012) Calcic amphiboles in calc-alkaline and alkaline magmas: thermobarometric and chemometric empirical equations valid up to 1,130 °C and 22 GPa. Contrib Mineral Petrol 163:877–895 Ridolfi F, Renzulli A, Puerini M (2010) Stability and chemical equilibrium of amphibole in calc-alkaline magmas: an overview, new thermbarometric formulations and application to subduction-related volcanoes. Contrib Mineral Petrol 160:45–66 Ruprecht P, Bachmann O (2010) Pre-eruptive reheating during magma mixing at Quizapu volcano and the implications for the explosiveness of silicic arc volcanoes. Geology 38:919–922 Rutherford MJ, Hill PM (1993) Magma ascent rates from amphibole breakdown: an experimental study applied to the 1980–1986 Mount St. Helens eruptions. J Geophys Res 98:19667–19685 Scaillet B, Ebans BW (1999) The 15 June 1991 eruption of Mount Pinatubo. I. Phase equilibria and pre-eruptive P-T-fO2-fH2O conditions of the dacite magma. J Petrol 40:381–411 Shane P, Smith VC (2013) Using amphibole crystals to reconstruct magma storage temperatures and pressures for the post-caldera collapse volcanism at Okataina volcano. Lithos 156–159:159–170 Unglert K, Savage MK, Fournier N, Ohkura T, Abe Y (2011) Shear wave splitting, vp/vs, and GPS during a time of enhanced activity at Aso caldera, Kyushu. J Geophys Res 116:B11203. https://doi.org/10.1029/2011JB008520 Ushioda M, Miyagi I, Suzuki T, Takahashi E (2017) Pre-eruptive P–T conditions and H2O concentration of Aso-4 silicic magma based on high pressure experiments. IAVCEI 2017 abstract Wark DA, Hildreth W, Spear FS, Cherniak DJ, Watson EB (2007) Pre-eruptive recharge of the Bishop magma system. Geology 35:235–238 Watanabe K (1978) Studies on the Aso pyroclastic flow deposits in the region to the west of Aso caldera, southwest Japan, I: geology. Memorial of the Faculty of Education, Kumamoto University. Nat Sci 27:97–120 Watanabe K (2001) History and activity of Aso volcano (Japanese title "Aso kazan no oitachi"). 
Ichinomiya-cho, Japan (in Japanese)
Watanabe K, Itaya T, Ono K, Takada H (1989) K-Ar ages of dike rocks in the Southwestern region of Aso Caldera, Kyushu, Japan. KAZAN 34:189–195

HI participated in the design of the study, carried out field surveys for collecting the studied samples, microscopic observation and EMP analyses of the studied samples, geothermobarometric and chemometric data analyses, and drafted the manuscript. YS carried out sample preparations for SEM observations and EMP analyses and carried out geothermobarometric and chemometric data analyses. MM participated in the design of the study, carried out field surveys for collecting the studied samples, and drafted the manuscript. AY participated in the design of the study, carried out EMP analyses of the studied samples, and drafted the manuscript. NH carried out SEM observations and EMP analyses of the studied samples. All authors read and approved the final manuscript.

We are grateful to the editor Dr. A. Yokoo and two reviewers, Dr. K. Kaneko and an anonymous reviewer, who provided thoughtful comments that have improved the manuscript to an enormous degree. We thank Ms. Y. Kakihata for her helpful support of SEM observation at Shizuoka University. Please contact author for data requests. This study was supported by grants from JSPS KAKENHI (JP16K05606), the Earthquake Research Institute Cooperative Program (2017-G-03), and the Integrated Program for Next Generation Research and Human Resource Development.

Department of Geoscience, Faculty of Science, Shizuoka University, Ohya 836, Suruga-ku, Shizuoka, 422-8529, Japan: Hidemi Ishibashi & Yukiko Suwa
Faculty of Education, University of Fukui, 3-9-1 Bunkyo, Fukui-shi, Fukui, 910-8507, Japan: Masaya Miyoshi
Earthquake Research Institute, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 111-0032, Japan: Atsushi Yasuda & Natsumi Hokanishi
Correspondence to Hidemi Ishibashi.

Additional file 1: Table S1. Major-element compositions of amphibole phenocrysts in this study.
Additional file 2: Fig. S1. Whole-rock and melt compositions of Aso-4 pyroclasts are compared with those of melts of equilibrium experiments used for calibrations of the amphibole-based thermometer and melt-SiO2 meter of Putirka (2016) and for reliability evaluation of the amphibole-based barometer of Ridolfi and Renzulli (2012) by Nagasaki et al. (2017). Filled circles and open triangles indicate glass and whole-rock compositions of Aso-4 pyroclasts (Kaneko et al. 2007), and gray diamonds are experimental melts (DS-1 dataset compiled by Putirka 2016; see Putirka 2016 and references therein). Compositions of Aso-4 pyroclastic materials are within the range of experimental melts. Fig. S2. BSE images of magnetite phenocrysts in the studied pumice sample. Fig. S3. Amphibole compositions in terms of the relationship between [4]Al and (a) [6]Al, (b) (Na + K)A, (c) Ti, and (d) Ca. Symbols are the same as those in Fig. 3. Fig. S4. Comparisons of (a) temperatures and (b) SiO2 melt contents estimated from the core and rim compositions of amphiboles. Diamonds and circles indicate amphibole phenocrysts in samples 4I-1w and 4I-1p, respectively.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords: Aso Volcano, Crystal mush, Thermobarometry
Category Archives: Diamond
Highly specialized; only a subset of mathematicians who specialize in the area will have the background to follow these posts.

The Springer Correspondence, Part I: The Flag Variety
Posted on January 11, 2015 by Maria Gillespie

In prior posts, we've seen that the irreducible representations of the symmetric group $S_n$ are in one-to-one correspondence with the partitions of $n$, and the Schur functions give an elegant encoding of their characters as symmetric polynomials. Now we can dive a bit deeper: a geometric construction known as the Springer resolution allows us to obtain all the irreducible representations of $S_n$ geometrically, and as a side bonus give natural graded representations that will allow us to define a $q$-analog of the Schur functions known as the Hall-Littlewood polynomials.

Quite a mouthful of terminology. Let's start at the beginning.

The Classical Flag Variety

When you think of a flag, you might imagine something like a flag on a flagpole. Roughly speaking, such a flag consists of: a point (the bulbous part at the top of the pole), a line passing through that point (the pole), a plane passing through that line (the plane containing the flag), and space to put it in. Mathematically, this is the data of a complete flag in three dimensions. However, higher-dimensional beings would require more complicated flags. So in general, a complete flag in $n$-dimensional space $\mathbb{C}^n$ is a chain of vector spaces of each dimension from $0$ to $n$, each containing the previous: $$0=V_0\subset V_1 \subset V_2 \subset \cdots \subset V_n=\mathbb{C}^n$$ with $\dim V_i=i$ for all $i$. (Our higher-dimensional flag-waving imaginary friends are living in a world of complex numbers because $\mathbb{C}$ is algebraically closed and therefore easier to work with. However, one could define the flag variety similarly over any field $k$.)

Variety Structure

Now that we've defined our flags, let's see what happens when we wave them around continuously in space. It turns out we get a smooth algebraic variety! Indeed, the set of all possible flags in $\mathbb{C}^n$ forms an algebraic variety of dimension $n(n-1)$ (over $\mathbb{R}$), covered by open sets similar to the Schubert cells of the Grassmannian. In particular, given a flag $\{V_i\}_{i=1}^n$, we can choose $n$ vectors $v_1,\ldots,v_n$ such that the span of $v_1,\ldots,v_i$ is $V_i$ for each $i$, and list the vectors $v_i$ as row vectors of an $n\times n$ matrix. We can then perform certain row reduction operations to form a different basis $v_1^\prime,\ldots,v_n^\prime$ that also span the subspaces of the flag, but whose matrix is in the following canonical form: it has $1$'s in a permutation matrix shape, $0$'s to the left and below each $1$, and arbitrary complex numbers in all other entries.

For instance, say we start with the flag in three dimensions generated by the vectors $\langle 0,2,3\rangle$, $\langle 1, 1, 4\rangle$, and $\langle 1, 2, -3\rangle$. The corresponding matrix is $$\left(\begin{array}{ccc} 0 & 2 & 3 \\ 1 & 1 & 4 \\ 1 & 2 & -3\end{array}\right).$$ We start by finding the leftmost nonzero element in the first row and scale that row so that this element is $1$.
Then subtract multiples of this row from the rows below it so that all the entries below that $1$ are $0$. Continue the process on all further rows: $$\left(\begin{array}{ccc} 0 & 2 & 3 \\ 1 & 1 & 4 \\ 1 & 2 & -3\end{array}\right) \to \left(\begin{array}{ccc} 0 & 1 & 1.5 \\ 1 & 0 & 2.5 \\ 1 & 0 & -6\end{array}\right)\to \left(\begin{array}{ccc} 0 & 1 & 1.5 \\ 1 & 0 & 2.5 \\ 0 & 0 & 1\end{array}\right)$$ It is easy to see that this process does not change the flag formed by the partial row spans, and that any two matrices in canonical form define different flags. So, the flag variety is covered by $n!$ open sets given by choosing a permutation and forming the corresponding canonical form. For instance, one such open set in the $5$-dimensional flag variety is the open set given by all matrices of the form $$\left(\begin{array}{ccccc} 0 & 1 & \ast & \ast & \ast \\ 1 & 0 & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & \ast & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array}\right)$$ We call this open set $X_{45132}$ because it corresponds to the permutation matrix formed by placing a $1$ in the $4$th column from the right in the first row, in the $5$th from the right in the second row, and so on.

The maximum number of $\ast$'s we can have in such a matrix is when the permutation is $w_0=n(n-1)\cdots 3 2 1$, in which case the dimension of the open set $X_{12\cdots n}$ is $n(n-1)/2$ over $\mathbb{C}$ — or $n(n-1)$ over $\mathbb{R}$, since $\mathbb{C}$ is two-dimensional over $\mathbb{R}$. In general, the number of $\ast$'s in the set $X_w$ is the inversion number $\operatorname{inv}(w)$, the number of pairs of entries of $w$ which are out of order.

Finally, in order to paste these disjoint open sets together to form a smooth manifold, we consider the closures of the sets $X_w$ as a disjoint union of other $X_w$'s. The partial ordering in which $\overline{X_w}=\sqcup_{v\le w} X_v$ is called the Bruhat order, a famous partial ordering on permutations. (For a nice introduction to Bruhat order, one place to start is Yufei Zhao's expository paper on the subject.)

Intersection Cohomology

Now suppose we wish to answer incidence questions about our flags: which flags satisfy certain given constraints? As in the case of the Grassmannians, this boils down to understanding how the Schubert cells $X_w$ intersect. This question is equivalent to studying the cohomology ring of the flag variety $F\ell_n$ over $\mathbb{Z}$, where we consider the Schubert cells as forming a cell complex structure on the flag variety. The cohomology ring $H^\ast(F\ell_n)$, as it turns out, is the coinvariant ring that we discussed in the last post! For full details I will refer the interested reader to Fulton's book on Young tableaux. To give the reader the general idea here, the Schubert cell $X_w$ can be thought of as a cohomology class in $H^{2i}(F\ell_n)$ where $i=\operatorname{inv}(w)$. We call this cohomology class $\sigma_w$, and note that for the transpositions $s_i$ formed by swapping $i$ and $i+1$, we have $\sigma_{s_i}\in H^2(F\ell_n)$. It turns out that setting $x_i=\sigma_i-\sigma_{i+1}$ for $i\le n-1$ and $x_n=-\sigma_{s_{n-1}}$ gives a set of generators for the cohomology ring, and in fact $$H^\ast(F\ell_n)=\mathbb{Z}[x_1,\ldots,x_n]/(e_1,\ldots,e_n)$$ where $e_1,\ldots,e_n$ are the elementary symmetric polynomials in $x_1,\ldots,x_n$.
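The cell decomposition described above is easy to test by brute force: since each cell $X_w$ contributes $\operatorname{inv}(w)$ free parameters, summing $q^{\operatorname{inv}(w)}$ over all permutations should reproduce the $q$-factorial $(n)_q!=\prod_{i=1}^{n}(1+q+\cdots+q^{i-1})$, the Hilbert series of the coinvariant ring mentioned in the next post. The following minimal Python sketch (not part of the post's argument) checks this for $n=5$:

```python
from itertools import permutations
from collections import Counter

def inv(w):
    """Number of inversions (pairs of entries out of order) of a permutation w."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def poly_mul(a, b):
    """Multiply two polynomials in q given as {exponent: coefficient} counters."""
    out = Counter()
    for i, c in a.items():
        for j, d in b.items():
            out[i + j] += c * d
    return out

n = 5

# Left side: one Schubert cell of dimension inv(w) for each permutation w.
cells = Counter(inv(w) for w in permutations(range(1, n + 1)))

# Right side: the q-factorial (n)_q! = prod_{i=1}^{n} (1 + q + ... + q^{i-1}).
qfact = Counter({0: 1})
for i in range(1, n + 1):
    qfact = poly_mul(qfact, Counter({e: 1 for e in range(i)}))

assert cells == qfact
print(sorted(cells.items()))  # distribution of inv over S_5, i.e. the coefficients of (5)_q!
```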
Posted in Diamond, Gemstones | 1 Reply Digging deeper: The isotypic components Posted on July 27, 2014 by Maria Gillespie In last week's post, we made use of the coinvariant ring $$\mathbb{C}[x_1,\ldots,x_n]/I$$ where $I=(p_1,\ldots,p_n)$ is the ideal generated by the positive-degree homogeneous $S_n$-invariants (symmetric polynomials). We saw that this was an $S_n$-module with Hilbert series $(n)_q!$, and claimed that it was the regular representation. Let's see why that is, and see if we can understand where the irreducible components occur. More precisely, our goal is to understand the series $$\sum_{d} H_{\chi^\mu}(d)q^d$$ where $H_{\chi^\mu}(d)$ is the number of copies of the $\mu$th irreducible representation of $S_n$ occurring in the $d$th degree component of $\mathbb{C}[x_1,\ldots,x_n]/I$. In Stanley's paper on invariants of finite groups, he states without proof the answer as the following "unpublished result of Lusztig": Let $G$ be the group of all $m \times m$ permutation matrices, and let $\chi$ be the irreducible character of $G$ corresponding to the partition $\mu$ of $m$. Then $H_{\chi}(n)$ is equal to the number of standard Young tableaux $T$ of shape $\mu$ such that $n$ is equal to the sum of those entries $i$ of $T$ for which $i$ appears in a column to the left of $i+1$. To prove this, let's start with the identity we showed last time, using boldface $\mathbf{x}$ to denote $x_1,\ldots,x_n$ as a shorthand: $$\mathbb{C}[\mathbf{x}]=\Lambda_{\mathbb{C}}(\mathbf{x})\otimes_{\mathbb{C}[S_n]}\mathbb{C}[\mathbf{x}]/I$$ Since $\Lambda_{\mathbb{C}}(\mathbf{x})$, the ring of symmetric functions, consists entirely of copies of the trivial representation of $S_n$, we see that the irreducible components of type $\chi^\mu$ in degree $d$ on the right hand side come from those of that type in $\mathbb{C}[\mathbf{x}]/I$ of degree $d-k$, tensored with the trivial representations in degree $k$ in $\Lambda$, for some $k$. Moreover, there are $p_n(d)$ copies of the trivial representation in the $d$th degree in $\Lambda$ for all $d$, where $p_n(d)$ is the number of partitions of $d$ into parts of size at most $n$. (One can use the elementary or power sum symmetric function bases to see this.) From this we obtain the following series identity: $$\left(\sum \left\langle (\mathbb{C}[\mathbf{x}])_d,\chi^\mu \right\rangle q^d\right)= \left(\sum p_n(d)q^d\right)\cdot \left(\sum H_{\chi^\mu}(d) q^d\right)$$ To simplify the left hand side, we can use the generalized version of Molien's theorem for isotypic components (see here.) This gives us $$\sum \left\langle (\mathbb{C}[\mathbf{x}])_d,\chi^\mu \right\rangle q^d=\frac{1}{n!}\sum_{\pi\in S_n}\frac{\overline{\chi^\mu}(\pi)}{\prod (1-q^{c_i(\pi)})}$$ where the $c_i(\pi)$'s are the cycle lengths of $\pi$. (See this post for details on the above simplification in the case of the trivial character. The case of a general $\chi^\mu$ is analogous.) If we group the permutations $\pi$ in the sum above according to cycle type (i.e. by conjugacy class), and use the fact that characters of $S_n$ are integers and hence $\overline{\chi^\mu}=\chi^\mu$, we have $$\sum \left\langle (\mathbb{C}[\mathbf{x}])_d,\chi^\mu \right\rangle q^d=\frac{1}{n!}\sum_{\lambda\vdash n} \frac{n!}{z_\lambda}\frac{\chi^\mu(\lambda)}{\prod_i (1-q^{\lambda_i})}.$$ Here $z_\lambda$ are the numbers such that $n!/z_\lambda$ is the size of the conjugacy class corresponding to the partition $\lambda$. 
It is not hard to see that this simplifies to a specialization of the power sum symmetric functions: $$\sum \frac{\chi^\mu(\lambda)}{z_\lambda} p_\lambda(1,q,q^2,\ldots)$$ Finally, by the representation-theoretic definition of the Schur functions, we see that this is simply $$s_\mu(1,q,q^2,\ldots).$$ Substituting for the left hand side of our original equation, we now have $$s_\lambda(1,q,q^2,\ldots)=\left(\sum p_n(d) q^d\right)\cdot \left(\sum H_{\chi^\mu}(d) q^d\right).$$ We can simplify this further by using the series identity $$\sum p_n(d) q^d=\frac{1}{(1-q)(1-q^2)\cdots(1-q^n)}.$$ In addition, there is a well-known identity (see also Enumerative Combinatorics vol. 2, Proposition 7.19.11) $$s_\mu(1,q,q^2,\ldots)=\frac{\sum_T q^{\operatorname{maj} T}}{(1-q)(1-q^2)\cdots (1-q^n)}$$ where the sum ranges over all standard Young tableaux $T$ of shape $\mu$, and where $\operatorname{maj} T$ denotes the sum of all entries $i$ of $T$ that occur in a higher row than $i+1$ (written in English notation). This does it: putting everything together and solving for $\sum H_{\chi^\mu}(d) q^d$, we obtain $$\sum H_{\chi^\mu}(d) q^d=\sum_{T}q^{\operatorname{maj} T},$$ which is just about equivalent to Lusztig's claim. (The only difference is whether we are looking at the rows or the columns that $i$ and $i+1$ are in. There must have been a typo, because the two are not the same $q$-series for the shape $(3,1)$. Replacing "column to the left of" by "row above" or replacing $\mu$ by its conjugate would fix the theorem statement above.) One final consequence of the formulas above is that it is easy to deduce that the ring of coinvariants, $\mathbb{C}[\mathbf{x}]/I$, is isomorphic to the regular representation of $S_n$. Indeed, setting $q=1$ we see that the total number of copies of the irreducible representation corresponding to $\mu$ is equal to the number of standard Young tableaux of shape $\mu$, giving us the regular representation. Acknowledgments: The above techniques were shown to me by Vic Reiner at a recent conference. Thanks also to Federico Castillo for many discussions about the ring of coinvariants. Posted in Diamond | 2 Replies Molien's Theorem and symmetric functions Posted on August 21, 2013 by Maria Gillespie My colleague David Harden recently pointed me to Molien's theorem, a neat little fact about the invariant polynomials under the action by a finite group. It turns out that this has a nice interpretation in the case of the symmetric group $S_n$ that brings in some nice combinatorial and group-theoretic arguments. The general version of Molien's theorem can be stated thus: Suppose we have a finite subgroup $G$ of the general linear group $GL_n(\mathbb{C})$. Then $G$ acts on the polynomial ring $R=\mathbb{C}[x_1,\ldots,x_n]$ in the natural way, that is, by replacing the column vector of variables $(x_1,\ldots,x_n)$ with their image under left matrix multiplication by $G$. Let $R^G$ be the invariant space under this action. Then $R$ is graded by degree; that is, for each nonnegative integer $k$, the space $R_k^G$ of $G$-invariant homogeneous polynomials of degree $k$ are a finite dimensional subspace of $R^G$, and $R^G$ is the direct sum of these homogeneous components. What is the size (dimension) of these homogeneous components? If $d_k$ denotes the dimension of the $k$th piece, then Molien's theorem states that the generating function for the $d_k$'s is given by $$\sum_{k\ge 0} d_k t^k=\frac{1}{|G|}\sum_{M\in G} \frac{1}{\det(I-tM)}$$ where $I$ is the $n\times n$ identity matrix. 
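The parenthetical remark about the quoted statement can be checked directly. The small Python sketch below (not part of the post's argument) lists the three standard Young tableaux of shape $(3,1)$ and compares the "row above" statistic used in the derivation with the "column to the left" statistic of the quoted statement; the two $q$-series indeed differ:

```python
from collections import Counter

# The three standard Young tableaux of shape (3,1), as lists of rows
# (English notation, top row first).
tableaux = [
    [[1, 2, 3], [4]],
    [[1, 2, 4], [3]],
    [[1, 3, 4], [2]],
]

def position(T, k):
    """(row, column) of the entry k in tableau T, 0-indexed."""
    for r, row in enumerate(T):
        if k in row:
            return r, row.index(k)
    raise ValueError(k)

def maj_rows(T, n=4):
    """Sum of entries i such that i+1 sits in a lower row than i."""
    return sum(i for i in range(1, n) if position(T, i + 1)[0] > position(T, i)[0])

def stat_columns(T, n=4):
    """Sum of entries i such that i appears in a column to the left of i+1."""
    return sum(i for i in range(1, n) if position(T, i)[1] < position(T, i + 1)[1])

print(Counter(maj_rows(T) for T in tableaux))      # values 1, 2, 3 -> q + q^2 + q^3
print(Counter(stat_columns(T) for T in tableaux))  # values 3, 4, 5 -> q^3 + q^4 + q^5
```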
There is a nice exposition of the proof (in slightly more generality) in this paper of Richard Stanley, which makes use of some basic facts from representation theory. Rather than go into the proof, let's look at the special case $G=S_n$, namely the set of all permutation matrices in $GL_n$. Specialization at $G=S_n$ In this case, the invariant space $R^{S_n}$ is simply the space of symmetric polynomials in $x_1,\ldots,x_n$, and the $k$th graded piece consists of the degree-$k$ homogeneous symmetric polynomials. But we know exactly how many linearly independent homogeneous symmetric polynomials of degree $k$ there can be – as shown in my previous post, the monomial symmetric polynomials $m_\lambda$, where $\lambda$ is any partition of $k$, form a basis of this space in the case that we have infinitely many variables. Since we only have $n$ variables, however, some of these are now zero, namely those for which $\lambda$ has more than $n$ parts. The nonzero $m_\lambda$'s are still linearly independent, so the dimension of the $k$th graded piece in this case is $p(k,n)$, the number of partitions of $k$ into at most $n$ parts. Notice that by considering the conjugate of each partition, we see that the number of partitions of $k$ into at most $n$ parts is equal to the number of partitions of $k$ that use parts of size at most $n$. It is not hard to see that the generating function for $p(k,n)$ is therefore $$\sum_{k\ge 0}p(k,n)t^k=\frac{1}{(1-t)(1-t^2)\cdots (1-t^n)}.$$ Molien's theorem says that this generating function should also be equal to $$\frac{1}{n!}\sum_{M\in S_n}\frac{1}{\det(I-tM)}$$ where we use the somewhat sloppy notation $M\in S_n$ to indicate that $M$ is an $n\times n$ permutation matrix. What are these determinants? Well, suppose $M$ corresponds to a permutation with cycle type $\lambda$, that is, when we decompose the permutation into cycles the lengths of the cycles are $\lambda_1,\ldots,\lambda_r$ in nonincreasing order. Then notice that, up to simultaneous reordering of the rows and columns, $I-tM$ is a block matrix with blocks of sizes $\lambda_1,\ldots,\lambda_r$. The determinant of a block of size $\lambda_i$ is easily seen to be $1-t^{\lambda_i}$. For instance $$\det \left(\begin{array}{ccc} 1 & -t & 0 \\ 0& 1 & -t \\ -t & 0 & 1\end{array}\right)=1-t^3,$$ and in general, the determinant of such a block will have contributions only from the product of the 1's down the diagonal and from the product of the off-diagonal $-t$'s; all other permutations have a $0$ among the corresponding matrix entries. The sign on the product of $t$'s is always negative since either $\lambda_i$ is odd, in which case the cyclic permutation of length $\lambda_i$ is even, or $\lambda_i$ is even, in which case the permutation is odd. Hence, the determinant of each block is $1-t^{\lambda_i}$, and the entire determinant is $$\det (I-tM)=\prod_i (1-t^{\lambda_i}).$$ So, our summation becomes $$\frac{1}{n!}\sum_{\pi\in S_n} \frac{1}{\prod_{\lambda_i\in c(\pi)} (1-t^{\lambda_i})}$$ where $c(\pi)$ denotes the cycle type of a permutation $\pi$. Already we have an interesting identity; we now know this series is equal to $$\frac{1}{(1-t)(1-t^2)\cdots (1-t^n)}.$$ But can we prove it directly? It turns out that the equality of these two series can be viewed as a consequence of Burnside's Lemma. In particular, consider the action of the symmetric group on the set $X$ of weak compositions of $k$ having $n$ parts, that is, an ordered $n$-tuple of nonnegative integers (possibly $0$) that add up to $k$. 
Then Burnside's lemma states that the number of orbits under this action, which correspond to the partitions of $k$ having at most $n$ parts, is equal to $$\frac{1}{n!}\sum_{\pi \in S_n} |X^\pi|$$ where $X^\pi$ is the collection of weak compositions which are fixed under permuting the entries by $\pi$. We claim that this is the coefficient of $t^k$ in $$\frac{1}{n!}\sum_{\pi\in S_n} \frac{1}{\prod_{\lambda_i\in c(\pi)} (1-t^{\lambda_i})}$$ hence showing that the two generating functions are equal. To see this, note that if $\pi\in S_n$ has cycle type $\lambda$, then $X^\pi$ consists of the weak compositions which have $\lambda_1$ of their parts equal to each other, $\lambda_2$ other parts equal to each other, and so on. Say WLOG that the first $\lambda_1$ parts are all equal, and the second $\lambda_2$ are equal, and so on. Then the first $\lambda_1$ parts total to some multiple of $\lambda_1$, and the next $\lambda_2$ total to some multiple of $\lambda_2$, and so on, and so the total number of such compositions of $k$ is the coefficient of $t^k$ in the product $$\frac{1}{\prod_{\lambda_i\in c(\pi)} (1-t^{\lambda_i})}.$$ Averaging over all $\pi\in S_n$ yields the result.

Posted in Diamond | Leave a reply

A q-analog of the decomposition of the regular representation of the symmetric group
Posted on March 11, 2013 by Maria Gillespie

It is a well-known fact of representation theory that, if the irreducible representations of a finite group $G$ are $V_1,\ldots,V_m$, and $R$ is the regular representation formed by $G$ acting on itself by left multiplication, then $$R=\bigoplus_{i=1}^{m} (\dim V_i) \cdot V_i$$ is its decomposition into irreducibles. I've recently discovered a $q$-analog of this fact for $G=S_n$ that is a simple consequence of some known results in symmetric function theory.

In Enumerative Combinatorics, Stanley defines a generalization of the major index on permutations to standard tableaux. For a permutation $w=w_1,\ldots,w_n$ of $1,\ldots,n$, a descent is a position $i$ such that $w_i>w_{i+1}$. For instance, $52413$ has two descents, in positions $1$ and $3$. The major index of $w$, denoted $\operatorname{maj}(w)$, is the sum of the positions of the descents, in this case $$\operatorname{maj}(52413)=1+3=4.$$

To generalize this to standard Young tableaux, notice that $i$ is a descent of $w$ if and only if the location of $i$ occurs after $i+1$ in the inverse permutation $w^{-1}$. With this as an alternative notion of descent, we define a descent of a tableau $T$ to be a number $i$ for which $i+1$ occurs in a lower row than $i$. In fact, this is precisely a descent of the inverse of the reading word of $T$, the word formed by reading the rows of $T$ from left to right, starting from the bottom row. As an example, the tableau $T$ with rows $1\,2\,4$, $3\,6\,7$, and $5$ (from top to bottom) has two descents, $2$ and $4$, since $3$ and $5$ occur in lower rows than $2$ and $4$ respectively. So $\operatorname{maj}(T)=2+4=6$. Note that its reading word is $5367124$, and the inverse permutation is $5627134$, which correspondingly has descents in positions $2$ and $4$. (This is a slightly different approach to the major index than taken by Stanley, who used a reading word that read the columns from bottom to top, starting at the leftmost column. The descents remain the same in either case, since both reading words Schensted insert to give the same standard Young tableau.)

Now, the major index for tableaux gives a remarkable specialization of the Schur functions $s_\lambda$.
As shown in Stanley's book, we have $$s_\lambda(1,q,q^2,q^3,\ldots)=\frac{\sum_{T} q^{\maj(T)}}{(1-q)(1-q^2)\cdots(1-q^n)}$$ where the sum is over all standard Young tableaux $T$ of shape $\lambda$. When I came across this fact, I was reminded of a similar specialization of the power sum symmetric functions. It is easy to see that $$p_\lambda(1,q,q^2,q^3,\ldots)=\prod_{i}\frac{1}{1-q^{\lambda_i}},$$ an identity that comes up in defining a $q$-analog of the Hall inner product in the theory of Hall-Littlewood symmetric functions. In any case, the power sum symmetric functions are related to the Schur functions via the irreducible characters $\chi_\mu$ of the symmetric group $S_n$, and so we get \begin{eqnarray*} p_\lambda(1,q,q^2,\ldots) &=& \sum_{|\mu|=n} \chi_{\mu}(\lambda) s_{\mu}(1,q,q^2,\ldots) \\ \prod_{i} \frac{1}{1-q^{\lambda_i}} &=& \sum_{\mu} \chi_{\mu}(\lambda) \frac{\sum_{T\text{ shape }\mu} q^{\maj(T)}}{(1-q)(1-q^2)\cdots(1-q^n)} \\ \end{eqnarray*} This can be simplified to the equation: \begin{equation} \sum_{|T|=n} \chi_{\sh(T)}(\lambda)q^{\maj(T)} = \frac{(1-q)(1-q^2)\cdots (1-q^n)}{(1-q^{\lambda_1})(1-q^{\lambda_2})\cdots(1-q^{\lambda_k})} \end{equation} where $\sh(T)$ denotes the shape of the tableau $T$. Notice that when we take $q\to 1$ above, the right hand side is $0$ unless $\lambda=(1^n)$ is the partition of $n$ into all $1$'s. If $\lambda$ is not this partition, setting $q=1$ yields $$\sum \chi_\mu(\lambda)\cdot f^{\mu}=0$$ where $f^\mu$ is the number of standard Young tableaux of shape $\mu$. Otherwise if $\lambda=(1^n)$, we obtain $$\sum \chi_\mu(\lambda)\cdot f^{\mu}=n!.$$ Recall also that $f^\mu$ (see e.g. Stanley or Sagan) is equal to the dimension of the irreducible representation $V_\mu$ of $S_n$. Thus, these two equations together are equivalent to the fact that, if $R$ is the regular representation, $$\chi_R=\sum_\mu (\dim V_\mu) \cdot \chi_{\mu}$$ which is in turn equivalent to the decomposition of $R$ into irreducibles. Therefore, Equation (1) is a $q$-analog of the decomposition of the regular representation. I'm not sure this is known, and I find it's a rather pretty consequence of the Schur function specialization at powers of $q$. EDIT: It is known, as Steven Sam pointed out in the comments below, and it gives a formula for a graded character of a graded version of the regular representation. Summary: Symmetric functions transition table Posted on August 4, 2012 by Maria Gillespie Over the last few weeks I've been writing about several little gemstones that I have seen in symmetric function theory. But one of the main overarching beauties of the entire area is that there are at least five natural bases with which to express any symmetric functions: the monomial ($m_\lambda$), elementary ($e_\lambda$), power sum ($p_\lambda$), complete homogeneous ($h_\lambda$), and Schur ($s_\lambda$) bases. As a quick reminder, here is an example of each, in three variables $x,y,z$: $m_{(3,2,2)}=x^3y^2z^2+y^3x^2z^2+z^3y^2x^2$ $e_{(3,2,2)}=e_3e_2e_2=xyz(xy+yz+zx)^2$ $p_{(3,2,2)}=p_3p_2p_2=(x^3+y^3+z^3)(x^2+y^2+z^2)^2$ $h_{(2,1)}=h_2h_1=(x^2+y^2+z^2+xy+yz+zx)(x+y+z)$ $s_{(3,1)}=m_{(3,1)}+m_{(2,2)}+2m_{(2,1,1)}$ Since we can usually transition between the bases fairly easily, this gives us lots of flexibility in attacking problems involving symmetric functions; it's sometimes just a matter of picking the right basis. So, to wrap up my recent streak on symmetric function theory, I've posted below a list of rules for transitioning between the bases. 
(The only ones I have not mentioned are how to express a polynomial written in the monomial symmetric functions $m_\lambda$ in terms of the others; this is rarely needed and also rather difficult.)

Elementary to monomial: $$e_\lambda=\sum M_{\lambda\mu} m_\mu$$ where $M_{\lambda\mu}$ is the number of $0,1$-matrices with row sums $\lambda_i$ and column sums $\mu_j$.

Elementary to homogeneous: $$e_n=\det \left(\begin{array}{cccccc} h_1 & 1 & 0 & 0 &\cdots & 0 \\ h_2 & h_1 & 1 & 0 & \cdots & 0 \\ h_3 & h_2 & h_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ h_{n-1} & h_{n-2} & h_{n-3} & h_{n-4} & \cdots & 1 \\ h_n & h_{n-1} & h_{n-2} & h_{n-3} & \cdots & h_1 \end{array}\right)$$

Elementary to power sum: $$e_n=\frac{1}{n!}\det\left(\begin{array}{cccccc} p_1 & 1 & 0 & 0 &\cdots & 0 \\ p_2 & p_1 & 2 & 0 & \cdots & 0 \\ p_3 & p_2 & p_1 & 3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ p_{n-1} & p_{n-2} & p_{n-3} & p_{n-4} & \cdots & n-1 \\ p_n & p_{n-1} & p_{n-2} & p_{n-3} & \cdots & p_1 \end{array}\right)$$

Elementary to Schur: $$e_\lambda=\sum_{\mu} K_{\mu'\lambda}s_\mu$$ where $K_{\lambda\mu}$ is the number of semistandard Young tableaux of shape $\lambda$ and content $\mu$.

Homogeneous to monomial: $$h_\lambda=\sum N_{\lambda\mu} m_\mu$$ where $N_{\lambda\mu}$ is the number of matrices with nonnegative integer entries with row sums $\lambda_i$ and column sums $\mu_j$.

Homogeneous to elementary: $$h_n=\det\left(\begin{array}{cccccc} e_1 & 1 & 0 & 0 &\cdots & 0 \\ e_2 & e_1 & 1 & 0 & \cdots & 0 \\ e_3 & e_2 & e_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ e_{n-1} & e_{n-2} & e_{n-3} & e_{n-4} & \cdots & 1 \\ e_n & e_{n-1} & e_{n-2} & e_{n-3} & \cdots & e_1 \end{array}\right)$$

Homogeneous to power sum: $$h_n=\frac{1}{n!}\det\left(\begin{array}{cccccc} p_1 & -1 & 0 & 0 &\cdots & 0 \\ p_2 & p_1 & -2 & 0 & \cdots & 0 \\ p_3 & p_2 & p_1 & -3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ p_{n-1} & p_{n-2} & p_{n-3} & p_{n-4} & \cdots & -(n-1) \\ p_n & p_{n-1} & p_{n-2} & p_{n-3} & \cdots & p_1 \end{array}\right)$$

Homogeneous to Schur: $$h_{\lambda}=\sum_\mu K_{\mu\lambda}s_\mu$$

Power sum to monomial: $$p_\lambda=\sum_{\mu} R_{\lambda\mu}m_\mu$$ where $R_{\lambda\mu}$ is the number of ways of sorting the parts of $\lambda$ into a number of ordered blocks in such a way that the sum of the parts in the $j$th block is $\mu_j$.

Power sum to elementary: (Newton-Girard identities.) $$p_n=\det\left(\begin{array}{cccccc} e_1 & 1 & 0 & 0 & \cdots & 0 \\ 2e_2 & e_1 & 1 & 0 & \cdots & 0 \\ 3e_3 & e_2 & e_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ (n-1)e_{n-1} & e_{n-2} & e_{n-3} & e_{n-4} & \cdots & 1 \\ ne_n & e_{n-1} & e_{n-2} & e_{n-3} & \cdots & e_1 \end{array}\right)$$

Power sum to homogeneous: $$p_n=(-1)^{n-1}\det\left(\begin{array}{cccccc} h_1 & 1 & 0 & 0 & \cdots & 0 \\ 2h_2 & h_1 & 1 & 0 & \cdots & 0 \\ 3h_3 & h_2 & h_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ (n-1)h_{n-1} & h_{n-2} & h_{n-3} & h_{n-4} & \cdots & 1 \\ nh_n & h_{n-1} & h_{n-2} & h_{n-3} & \cdots & h_1 \end{array}\right)$$

Power sum to Schur: Let $\chi^\lambda$ be the $\lambda$th character of the symmetric group $S_n$ where $n=|\lambda|$, and write $\chi^\lambda(\mu)$ to denote the value of $\chi^{\lambda}$ at any permutation with cycle type $\mu$. Then for any partition $\mu\vdash n$, we have: $$p_\mu=\sum_{\lambda\vdash n} \chi^\lambda(\mu) s_\lambda$$ Alternatively: $$p_n=s_{(n)}-s_{(n-1,1)}+s_{(n-2,1,1)}-s_{(n-3,1,1,1)}+\cdots+(-1)^{n-1}s_{(1,1,\ldots,1)}$$

Schur to monomial: $$s_{\lambda}=\sum_{\mu\vdash |\lambda|} K_{\lambda \mu}m_\mu$$

Schur to elementary: (Dual Jacobi-Trudi Identity.) $$s_{\lambda/\mu} = \det \left(e_{\lambda'_i-\mu'_j-i+j}\right)_{i,j=1}^n$$

Schur to homogeneous: (Jacobi-Trudi Identity.)
$$s_{\lambda/\mu} = \det \left(h_{\lambda_i-\mu_j-i+j}\right)_{i,j=1}^n$$

Schur to power sum: $$s_\lambda=\sum_{\nu\vdash |\lambda|} z_\nu^{-1} \chi^{\lambda}(\nu) p_\nu$$
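A quick numeric spot-check of one of these rules (a small sketch, not part of the post): the Jacobi-Trudi identity for $\lambda=(3,1)$ gives $s_{(3,1)}=h_3h_1-h_4$, which can be compared with the monomial expansion $s_{(3,1)}=m_{(3,1)}+m_{(2,2)}+2m_{(2,1,1)}$ quoted earlier in this post, evaluated at an arbitrary point in three variables:

```python
from itertools import combinations_with_replacement, permutations
from math import prod

x = [2.0, 3.0, 5.0]  # any sample point in three variables

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    if k < 0:
        return 0.0
    if k == 0:
        return 1.0
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def m(exponents, xs):
    """Monomial symmetric polynomial m_lambda evaluated at xs."""
    pads = tuple(list(exponents) + [0] * (len(xs) - len(exponents)))
    return sum(prod(v ** e for v, e in zip(xs, p)) for p in set(permutations(pads)))

# Jacobi-Trudi for lambda = (3,1):  s_(3,1) = det [[h3, h4], [h0, h1]] = h3*h1 - h4.
jacobi_trudi = h(3, x) * h(1, x) - h(4, x)

# Monomial expansion quoted above:  s_(3,1) = m_(3,1) + m_(2,2) + 2*m_(2,1,1).
monomial = m((3, 1), x) + m((2, 2), x) + 2 * m((2, 1, 1), x)

assert abs(jacobi_trudi - monomial) < 1e-9
print(jacobi_trudi, monomial)  # both 1839.0 at the sample point (2, 3, 5)
```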
Modern problems of aircraft aerodynamics Sergey L. Chernyshev1, Sergey V. Lyapunov1 & Andrey V. Wolkov ORCID: orcid.org/0000-0002-4830-61351 The article represents the discussion of several separate directions of investigations, which are performed by TsAGI flight vehicles aerodynamics specialists at the time. There are some major trends of classical layout of route aircraft and also peculiarities of some prospective flight vehicles. Also there are some hypersonic vehicles aerodynamics questions examined along with problems of creation of civil supersonic transport aircraft. There is a description given for well-known and some newer methods of flow control for drag reduction. The latest successes of aviation science and technology in fuel efficiency increase could be observed in Fig. 1. There is a significant reduction of fuel consumption for passenger per kilometer. But not only has the fuel consumption indicated aviation science development. The flight safety and ecological impact (decrease of noise and environment pollution level) of aviation transport became the prime tasks of development. Change of route aircraft fuel efficiency with time At 2014 there was document prepared by representatives of Russia leading scientific organizations (TsAGI, CIAM, VIAM, GosNIIAS etc...) named "Foresight of aviation science and technology development", which defines the long-term forecast of scientific and technological development of Russian Federation in area of aviation industry. This document specifies the ambitious task indicators (see Table 1) of creating of backlog in the area of civil aviation development, which couldn't be achieved without reconsideration of existing technologies of aviation science. Table 1 Prognosis on changes in main aircraft aimed perfection indicators The tasks of aerodynamic science are defined by the necessity of improving of these indicators. The Breguet flight range formula $$ L\sim \frac{K\cdotp M}{C_E}\ln \frac{G_1}{G_0} $$ allows to define the key aerodynamic parameters, which need to be improved. First of all it is an increase of cruise lift-to-drag level (K), cruise Mach number (M) and decrease of specific fuel consumption and minimization of structural weight (G1,G0 - masses of aircraft at the beginning and at the end of flight). In turn, the maximal lift-to-drag ratio could be achieved as follows: $$ {K}_{\mathrm{max}}=\frac{1}{2}\sqrt{\frac{\pi \kern0.5em \lambda }{C_f{\overline{S}}_{wetted}}} $$ where λ – the effective wing aspect ratio, Cf – skin friction coefficient, \( {\overline{S}}_{wetted} \) – the wetted surface area of aircraft divided by wing area. Thus, there are another three directions of flying vehicle economical characteristics improvement, related to aerodynamics: aspect ratio increase, friction drag reduction and aircraft relative wetted area decrease. The main components of full cruise drag of modern aircraft are friction drag, drag due-to-lift and wave drag. The impact of the first two in transonic speeds region reaches up to 50 and 40% of full drag correspondingly. This shows that friction drag reduction is the major source of aircraft lift-to-drag increase. It should be noticed that lift-to-drag increase is not only about drag reduction, but also about increase of lifting capabilities by shape improvement and search for newer layout solutions. 
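To make the two formulas above concrete, the short Python sketch below evaluates the maximal lift-to-drag estimate for plausible transport-aircraft values. The skin friction coefficient Cf ≈ 0.003 and the wetted-area ratio of about 6 are assumed purely for illustration and are not taken from the article; the aspect ratios echo the values discussed later in the text.

```python
from math import pi, sqrt

def k_max(aspect_ratio, c_f, s_wetted_ratio):
    """Maximal lift-to-drag estimate K_max = 0.5*sqrt(pi*lambda/(Cf*S_wetted)) from the text."""
    return 0.5 * sqrt(pi * aspect_ratio / (c_f * s_wetted_ratio))

# Assumed illustrative values (not from the article): equivalent skin friction
# coefficient ~0.003 and wetted-area-to-wing-area ratio ~6 for a typical airliner.
c_f, s_wet = 0.003, 6.0

for ar in (9.5, 11.45):
    print(f"aspect ratio {ar:5.2f}  ->  K_max ~ {k_max(ar, c_f, s_wet):.1f}")

# By the Breguet formula quoted above, range scales linearly with K*M/C_E and with
# ln(G1/G0), so this ~10% gain in K_max carries over directly into range, other factors fixed.
print(f"relative gain: {k_max(11.45, c_f, s_wet) / k_max(9.5, c_f, s_wet):.3f}")
```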
In the near future the development of aerodynamic layout of router aircrafts will be carried out in frames of classical layout, basing on progress in area of aerodynamics of high–speed wings, new materials, electronic and electromechanical devices and super high bypass ratio engines. The article also examines peculiarities of aircrafts of integral layouts (flying wing, elliptic fuselage) and aircraft with distributed powerplant and powerplant integrated into the wing. One of the main ways of development of any kind of transport is the increase of passenger transportation speed. One of the results of such development was creation of first generation of supersonic civil aircrafts (SST-1) in Soviet Union (Tu-144) and Europe ("Concorde") in the second half of twenty century. In order to improve the aerodynamic layout of supersonic civil aircraft TsAGI creates specially designed test facilities and develops methodology of sonic boom characteristics estimation. At higher supersonic and hypersonic speeds, the process of aerodynamic design is additionally complicated by necessity of solving problem of intensive aerodynamic heating of surface elements of flight vehicles, and by ensuring of their stability and controllability and also by need of implementing of higher volume tanks for hydrogen fuel. For the successfully solving of the enlisted tasks and for ensuring of prospective technical backlog the leading-in-time mono and multidisciplinary scientific investigations are indispensable. The main directions of aircraft classical layout development It should be admitted, that aerodynamic potential of modern supercritical wings is on the edge of limit, that's why it is needed to investigate and implement some new prospective technologies in order to move forward. Among them the following should be outlined: Adaptive wings for transonic speeds; New types of wingtips; Organization of laminar flow around empennage, engine nacelles, and later around wings (NLF, HLFC); Reduction of turbulent friction drag; Improved types of efficient high-lift devices; Active and passive flow control systems (mini and macro devices, synthetic jets, actuators etc); Active thrust vectoring control; Transition to layouts with moderate stability margin and slightly instable layouts. The problem of increasing of cruise speed (Mach number) is connected with overcoming of intensive drag rise occurring due to existence of intensive shock, closing local area of supersonic flow. Using of supercritical airfoils and wings allowed moving to higher Mach number for preset sweep angle and relative thickness of the wing. At the time, the modern methods of aerodynamic design allow move the mentioned drag rise to higher speeds using global numerical optimization of aerodynamic shape of the wing for given relative thickness and plan form. Further increase of flight Mach number is most likely possible only by using flow control methods and through affecting the shock. These could be, for example, some special actuators or vortex generators [1] provoking additional vortex producing or the tangential jet blowing on wing surface [2, 3]. Most frequently the higher speed possibilities of supercritical wings are "traded" for increase of relative thickness of the wing in order to reduce structural weight or to increase aspect ratio, which, as it is well known, leads to reduction of drag due to lift. 
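The last point, that a larger aspect ratio reduces drag due to lift, can be illustrated with the standard textbook estimate C_Di = C_L^2/(pi*lambda*e). This relation and the numbers below are not taken from the article; the lift coefficient and Oswald factor are assumptions for illustration only, while the aspect ratios match those quoted for Tu-204/Il-96 and MS-21 in the following paragraphs.

```python
from math import pi

def induced_drag_coefficient(c_l, aspect_ratio, oswald=0.8):
    """Classical estimate C_Di = C_L^2 / (pi * AR * e) for drag due to lift
    (standard textbook relation, not a formula from the article)."""
    return c_l ** 2 / (pi * aspect_ratio * oswald)

c_l = 0.5  # assumed cruise lift coefficient, for illustration only
for ar in (9.2, 10.0, 11.45):
    print(f"AR = {ar:5.2f}:  C_Di ~ {induced_drag_coefficient(c_l, ar):.5f}")
```

At fixed lift coefficient, going from an aspect ratio of about 9.2 to 11.45 lowers this drag component by roughly 20 percent, which is the effect the text refers to.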
Tu-204 and Il-96 aircraft with aspect ratio λ = 9.2 ÷ 10 demonstrate such approach of aerodynamic design, exceeding their predecessors Tu-154 and Il-86 in maximal lift-to-drag by more than 2 units. It should be noticed that supercritical wings implementation is the reason of increased nose-down pitching moment, which leads to higher trim drag. However, these losses could be lowered by some reduction of aircraft longitudinal stability and by use of modern flight control systems, ensuring flight safety.

Using of composites in wing structure opens new possibilities for aerodynamic design. On one hand the airframe weight could be reduced, on the other hand, wing aspect ratio could be increased for the same structural weight. The prediction of aircraft flight performance shows the benefit of such aspect ratio increase. That's why for the new generation Russian passenger aircraft MS-21 a record aspect ratio wing with λ = 11.45 was implemented. Aspect ratio increase consequently leads to increase of lift coefficient corresponding to maximal lift-to-drag. Wing aspect ratio increase also leads to increase of wingbox weight due to lesser chords and thicknesses. One of the possible ways of weight reduction could be the use of additional supporting elements, i.e. wing braces (see Fig. 2). This configuration has recently been intensively investigated [4,5,6,7]. Preliminary estimations performed by TsAGI's specialists have shown that with such elements in route aircraft design an optimal wing aspect ratio up to 14–15 could be achieved; however, approving such estimations requires deeper investigations.

Layout of aircraft with wing braces

It should be noticed that further increase of aspect ratio and, consequently, of wingspan is limited by the size of existing taxiways and hangars. One of possible solutions of this problem is the use of vertical or folding wingtips, which allows increasing the effective aspect ratio of the wing under wingspan limitations. The important reserve for aerodynamic layout lift-to-drag increase is the optimal positioning of engine nacelles, which is quite actual due to the tendency of increase of bypass ratio and sizes of prospective engines. It should be noticed that high bypass ratio engines have smaller fuel consumption and lower noise levels, but have a negative effect on the flow around the airframe, including takeoff and landing phases, due to limitations on span and extension distance of the root section of the slat. Besides that, large-sized engine nacelles located under the wing require longer undercarriage struts, which leads to structural weight growth. Application of optimization procedures allows to significantly decrease negative interference of nacelles. The loss of maximal lift, while using insufficiently effective high-lift devices, could be compensated, for example, by application of jet blowing in the wing-pylon connection area. Experiments on investigation of efficiency of this conception have already started at TsAGI on a large-scaled (span, chord) wing segment in the large (full) scale T-101 wind tunnel (Fig. 3).

Model of nacelle+wing segment with high-lift devices inside T-101 wind tunnel (AFLONEXT project: www.aflonext.eu)

TsAGI has developed a technical conception of route aircraft of integral layout with powerplant distributed within the wing structure (Fig. 4). The idea of a distributed power plant is fully discussed in the thesis report of Khajehzadeh [8].
Experimental investigations of the developed model have shown that such way of powerplant integration into the airframe ensures approximately 15% increase of lift-to-drag ratio, comparing to the classical layout.

Aircraft with powerplant distributed within wing structure

"Flying wing" aircraft concept

The integral layout "flying wing" (FW), or "blended wing body" (BWB), is considered to be the most aerodynamically perfect layout for long-range aircraft [9]. The flying wing concept is targeted on full elimination of the fuselage as a main part of drag. Besides that, for a classical flying wing, tail empennage is also absent. Theoretically, the lift-to-drag ratio for a flying wing could be 40% higher than that of the classical layout for the same wing aspect ratio. Besides that, the aircraft empty weight for the flying wing layout should be less due to the possibility of more uniform distribution of payload inside the wing. However, more complex problems of balancing and controllability of the flying wing inevitably lead to losses. Passenger comfort requires significant structural height of the wing, which in its turn, for standard small relative thickness, will lead to significant growth of absolute aircraft sizing. Application of these huge flying wing aircrafts could be justified only for super high capacity (1000 passengers) transportation. Such aircrafts are not examined seriously yet due to both safety reasons and difficulties of integration of such aircrafts into existing transportation flows.

Potentially, passenger aircrafts of flying wing layout possess three advantages: higher lift-to-drag ratio due to smaller relative wetted area, favorable distribution of mass load along the wingspan, and relatively small ground noise level for configurations with engines located above the airframe. However, taking a closer look, these advantages do not look that obvious. First of all, the latest aircrafts of classical layout utilize wings of increased aspect ratio due to composites implementation. Distribution of payload along the wingspan can be realized only to a limited extent, comparing to the classical layout, due to uncomfortable g-load conditions for passengers of outer wing sections during roll maneuvers. Finally, only the third advantage, concerning noise shielding by the flying wing central wing section, remains conclusive, and that's why lately there were investigations started on aeroacoustics of low-noise flying wing layouts of relatively small passenger capacity of 200–300 passengers (for example SAX-40, Fig. 5).
Today, it is not enough to just understand or have ability to explain phenomena, but the real challenge is to learn how to purposefully control them. From written above it is clear that, flow control in order to reduce drag of moving objects is one of the most important tasks of applied aerodynamics. Even small decrease of friction drag would allow reducing fuel costs significantly. The analysis of passenger aircraft drag components shows possible ways of its reduction: wing aspect ratio increase decrease of friction drag by reducing wetted area of flying vehicle, flow laminarization or application of some innovative ways of turbulent friction reduction (riblets, surface active substances, vortex destruction devices, different kinds of actuators, movable surface elements and so on). wave drag reduction Problems, related to possibilities of wing aspect ratio increase are described at chapter 2 of this article. Concerning the problem of friction drag decrease, the main question about it is if the flow around most part of wetted area of flying vehicle laminar or turbulent. At Reynolds number range from 106 to 107 or higher on significant part of surface there could be a transition mode of flow, and it is obvious that it is expedient to use some actions for delaying of process of laminar turbulent transition. Among these actions are following: suction of boundary layer through surface creation of negative pressure gradient surface cooling Such methods of laminar flow control could be successful up to Re = 25 · 106 or even higher. Still, there are a lot of questions, related to practical realization, cost and reliability of these methods. It is also should be noted, that the situation is additionally complicated by existence of numerous factors that could create disturbances, leading to flow turbulization. This could be different unfairness (roughness elements) of surface, acoustic factors, vibrations, different particles (rain, dust, insects' pollution) etc. At very high Reynolds number, the flow is usually turbulent across all the length of flying vehicle surface, and in this case the task is about lowering turbulent friction drag. Among the most well-known approaches are: creation of positive pressure gradient blowing (with small pulse) through the slot tangentially to surface distributed blowing normally to surface devices for large eddies destruction ribbed surfaces (small longitudinal flutes). As far as the flutes are oriented along the flow direction, the additional drag is minimal, but the wetted area is increased. Nevertheless, the investigations show that drag reduction is possible if the deepness and pitch of the flutes are of the same order as the size of near wall turbulent formations. In TsAGI there were first experimental investigations performed of effect of geometry of surfaces with chaotic microstructure, having special fractal hierarchy of granularity, on turbulent boundary layer characteristics. It was found that, the distinctive peculiarity of "fractal" surface is non-gauss statistics of distribution of roughness height and it is observed some good matching between fractal surface shape spectra and turbulent boundary layer. This result allows to make an assumption about existence of frequency-space mechanism of selective effect of stochastic model relief on turbulent boundary layer properties. TsAGI experiments clearly registered the effect of conditions of surface of models used on spectra and structures of turbulent boundary layer. 
In the lower frequencies area, the spectra amplitude lowers by 1.5–2 times, while at high frequencies range the spectra amplitude rises, what speaks of destruction of low-frequency (large) coherent structures by the surface with fractal microstructure. During tests at wide range of Re number there were observed a reduction of drag coefficient Cx for the model with fractal surface comparing to the corresponding value of Cx of abrasive surface with the same mean roughness (Fig. 6) [10]. Drag coefficient dependency on Re number As it was already noted, in frames of existing approaches of aerodynamic design, modern aircrafts already have nearly optimal shape, and in order to significantly improve aerodynamic characteristics it is needed to use of active or passive flow control systems [1, 3]. The following are examined: jet blowing on flap surface, tangential jet blowing right after the shockwave, and different kinds of actuators (plasma, dielectric barrier and corona discharge and also a thermal pulse devices). In order to learn about different aspects of wing laminarization, such as natural laminar flow, combined laminar flow, low-noise layout with position of super high bypass engines on upper wing surface, and also to learn about peculiarities of application of jet system of active flow control, it is proposed to build a specialized prospective technologies demonstrator aircraft. The modern problems of civil supersonic transport aircrafts Today, one of the main factors to hold the development of civil supersonic transport in Russia and abroad is the absence of conventional rules and requirements on sonic boom for civil supersonic transport (CSST). A transition from "overpressure" terminology to "loudness" terminology allows to evaluate the level of sonic boom more adequate and uniquely, and to formulate the CSST layout with low sonic boom level. At the time the process of formulating rules on sonic boom came into active stage. The specialists of TsAGI, GosNIIGA and FRI are involved into the process, including the frames of "RUMBLE" European project. Main task of this project is the formulating of proposals for prospective rules on sonic boom, regulating acceptable levels on sonic boom (threshold levels), measurements metrics, and methods of determining the compliance of CSST to these rules. The analysis of the metrics and threshold levels discussed by world scientific society [11,12,13] allowed Russian specialists to formulate preliminary requirements on sonic boom for prospective SST. The preliminary list of proposals on requirements for metrics and threshold levels is already developed. With consideration to existing Russian regulatory legal base there are formed the preliminary set of measurement equipment, the methodology of measurement of outdoor and indoor sonic boom level. The results of this work are the base for performing a flight tests with sonic boom level measurements at 2014–2018. Basing on preliminary requirements TsAGI, along with Russian research institutes and enterprises, began forming a scientific-technical backlog for creation of passenger supersonic aircrafts of new generation. The base of the conception is the ability to perform a cruise supersonic flight above populated surface with loudness L ≤ 72 dBA. At the time, the works are performed on perfection of numerical methods of sonic boom estimation for CSST at acceleration stage, accounting for real atmosphere properties, and also a secondary boom. 
The estimations of sonic boom for acceleration stage are needed for definition of flight modes with focusing occurrence, when sonic boom loudness could significantly increase, and definition of possible safety exclusion area [14]. The engines of prospective layout, capable of solving a transportation task with limitations on noise levels at airport area at take-off and landing and atmosphere pollution levels are observed as a component of CSST powerplant. CIAM, "Aviadvigatel" "Lulca design bureau" are actively involved into formulating of prospective layouts of engines for CSST powerplants. Basing on results of preliminary investigations, the PD-14C engine project developed by "Aviadvgatel" could be called "near future" prospect. One stage low pressure compressor, equipped with adjustable entrance stator is designed with m = 2.5 bypass ratio. The appearance of testbench sample of such engine is possible in 5–7 years. The noise-reduction system of prospective CSST includes shielding of engine and jet noise by airframe elements, noise absorbing covering at air intakes channels and cold ducts of engines, ejector nozzles. The special test bench investigations, maximally approximated to natural conditions are needed for noise-reduction system elements development. Solving of tasks of sonic boom and noise reduction is connected with some technical actions, which are not improving aerodynamic and structural weight perfection of layouts. Among them there are fuselage and the wing of special complicated shape, powerplant shielded by airframe elements, with engine positioning above fuselage and wing etc. Besides that, the requirements for transportation task became harder, for example, for supersonic business aviation the basic requirements creating of supersonic expansion is needed on runways with length less than 2000 m. For all CSST the transatlantic flight range (at least 7000 km) is considered to be minimal. For that case, even fixing high load ratio of fuel GT ≈ 50%, the aircraft should ensure cruise lift-to-drag ratio 15–20% higher than SST of first generation. Requirements of ensuring flight safety increase dictate the need of lowering landing approach speed, and as a consequence, increase of CSST wing aspect ratio by 30–35% comparing to CSST-1. Thus, the tasks of aerodynamics, structural strength, stability and control for CSST seem to be pretty complex. At the time, the estimations are made on possibility of creating supersonic business jet (SBJ) with cruise speed of M = 1.8 with 8 passenger capacity for business class compartment and flight range of around 7400 km, maximal takeoff weight of 55 tones and two-engine powerplant. Also the possibility is examined of creation of SBJ with transformable cabin (SBJ/CSST with cruise speed M = 1.8) with transatlantic flight range and maximal takeoff weight up to 130 tones. For "business jet" cabin option CSST/SBJ is capable of transporting 20 passengers in 1-st class cabin, including 1 VIP (with separate compartment, toilet, shower cabin and sleeping bed) on flight distance of up to 8200 km. For "passenger" cabin option, keeping takeoff mass the same, SBJ/CSST is capable of transporting up to 80 passengers at economy plus class cabin for 7400 km. The preliminary estimations are made of possibility of creating of SST with transatlantic range for 140 and 200 passengers, with takeoff weight of 170 and 256 tones correspondingly. 
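The quoted requirements (fuel load ratio GT ≈ 50% and cruise lift-to-drag 15–20% above the first-generation SST) can be tied together through the Breguet formula from the introduction. The sketch below is only an illustration with an assumed baseline lift-to-drag of 7.5 for a first-generation SST; it demonstrates that, at fixed Mach number, specific fuel consumption and fuel fraction, range grows in direct proportion to the lift-to-drag gain.

```python
from math import log

def relative_range(k, fuel_fraction):
    """Breguet range up to the common factor M/C_E: K * ln(G1/G0)."""
    return k * log(1.0 / (1.0 - fuel_fraction))

# Assumed illustrative baseline lift-to-drag for a first-generation SST.
k_sst1 = 7.5
for gain in (1.15, 1.20):
    ratio = relative_range(k_sst1 * gain, 0.50) / relative_range(k_sst1, 0.50)
    print(f"K higher by {gain - 1.0:.0%} -> range higher by {ratio - 1.0:.0%} "
          "at the same M, C_E and fuel fraction")
```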
The works are also in progress on formulating the layout of a demonstrator of supersonic transport aircraft (DSTA) with maximal takeoff weight less than 30 tonnes. The list of the main technologies which could be tested on such flying vehicle includes aerodynamic and layout solutions ensuring low sonic boom level, reasonable structural scheme and application of the newest materials, and flight safety and control solutions (Fig. 7).

The layout of supersonic business jet and hypersonic civil aircraft

On preliminary estimations, for the examined range of weights and sizes of SST of different roles, there exists potential of fulfilling the requirements on threshold noise loudness level of L ≤ 72 dBA at cruise Mach number (for example) M = 1.8 (see also [15]) (Fig. 8).

The dependency of sonic boom loudness at the beginning of cruise supersonic flight at M = 1.8 on flight weight of prospective CSST

The loudness level of sonic boom of L ≤ 65 dBA would, by preliminary estimations, allow exploitation of CSST all-day long without limitations. However, reaching such level would require significant efforts.

The peculiarities of hypersonic aircrafts

The aerodynamic design of hypersonic aircrafts is connected with a lot of peculiarities complicating the process. First of all, the intensive aerodynamic heating of elements of the flying vehicle surface should be noted. For example, at M = 6 the stagnation temperature of the incoming flow would approach 1900 K, and for M = 8 it would exceed 3000 K, which would require taking special actions on ensuring heat resistance of surface elements of the vehicle, especially of the nose part of the fuselage, leading edges of the wings, control surfaces and air intakes. A compromise should be found, allowing to ensure the thermal structural strength with acceptable lift-to-drag loss. The second problem is connected with the fact that a hypersonic ramjet with hydrogen fuel is considered as the most suitable type of engine for civil hypersonic aircraft. Such engine could provide high efficiency at hypersonic flight speeds, but requires large volume fuel tanks equipped with thermal regulation systems. For hydrogen fuel, that inevitably leads to a decrease of aircraft lift-to-drag. The other problems are defined by the peculiarities of stability and control assurance for hypersonic flight vehicles. These peculiarities are determined by movement of the pressure center and aerodynamic focus forward, comparing to flight vehicles of lower speed ranges. This happens due to significant non-linearity of dependencies of aerodynamic characteristics of flight vehicle surface elements on their incidence angle. For that reason, the most part of aerodynamic force acting on the vehicle is concentrated on its forward part, and traditional control surfaces located at the tail are found to be less effective. This leads to the necessity to perform the corresponding optimization of both airframe elements and control surfaces. It is possible also to use combined control systems, including both aerodynamic surfaces and gas jets. At high supersonic and hypersonic flight speeds it is especially important to use possibilities of aerodynamic integration of elements of the aerodynamic layout and the engine. While locating the engine air inlet in the flow areas previously decelerated by vehicle airframe elements, the own characteristics of the air inlet, the mass flow coefficient and pressure recovery ratio, are improved.
This reduces the required size of the air inlet devices and, consequently, their weight, decreasing the weight of the powerplant as a whole and increasing its fuel efficiency. Results of investigations into the rational integration of airframe and powerplant are presented, for example, in publications [16, 17].
At high supersonic and hypersonic speeds, new opportunities arise for applying non-traditional aerodynamic shapes based on the waverider and Busemann biplane concepts (Figs. 9, 10) [18]. Computational and experimental investigations show that such aerodynamic configurations, thanks to the positive effects of airframe-powerplant integration, allow a high lift-to-drag ratio to be achieved together with relatively large internal volumes.
Fig. 9. Configuration of a vehicle based on the waverider concept
Fig. 10. Configuration of a vehicle based on the Busemann biplane concept
Interest in using unconventional aerodynamic shapes such as waveriders and biplanes in the design of high-speed aircraft remains, as evidenced, for example, by recent developments [19,20,21,22].
An important aspect of the creation and operation of high-speed aerial transport is care for the environment. As for the sonic boom, research shows that as the flight Mach number increases, the sonic boom intensity decreases. Figure 11 presents examples of calculations for an aircraft weighing 150 tonnes at M = 1.5 and an altitude of 15 km, at M = 2.5 and an altitude of 20 km, and finally at M = 5 and an altitude of 30 km. The flight altitude is increased in order to preserve the magnitude of the lift as the speed of the aircraft increases. The calculations show an almost two-fold decrease in the intensity of the sonic boom, which is mainly due to the increase in flight altitude. Thus, hypersonic flight usually takes place at high altitudes, and the sonic boom level near the surface decreases.
Fig. 11. Sonic boom intensity at different flight Mach numbers
The ecological aspect also includes the problem of protecting the ozone layer. A representative distribution of ozone concentration with altitude is shown in Fig. 12. Most of the ozone layer is located in the 12–50 km altitude range; the highest ozone concentration is observed at 15–25 km at polar latitudes, at 20–25 km at middle latitudes and at 25–30 km at tropical latitudes.
Fig. 12. The dependence of ozone partial pressure on altitude
It is obvious that, to reduce the negative effect on the ozone layer, the cruise flight of a hypersonic aircraft should be performed as high as possible, and acceleration to cruise flight should be completed as quickly as possible. These factors increase the requirements on the aircraft thrust-to-weight ratio. Solving all these complicated problems requires the efforts of many TsAGI specialists together with specialists from other research institutes.
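The choice of altitude in the Fig. 11 examples, made to preserve lift, can be illustrated with a short sketch. Writing lift as L = (γ/2)·p·M²·S·CL shows that, for a fixed wing area and lift coefficient, keeping lift constant requires the product p·M² to stay roughly constant, so the static pressure must fall, and the altitude rise, as the Mach number grows. The baseline condition (M = 1.5 at 15 km) and the simplified standard-atmosphere model below are assumptions for illustration, not the article's actual calculation.

```python
# Sketch: altitude needed to keep p * M^2 (hence lift at fixed C_L and wing area)
# roughly constant as Mach number grows. Uses a simplified ISA model for 11-32 km.
import math

G, R = 9.80665, 287.05

def isa_pressure(h_m):
    """Static pressure [Pa] for 11 km <= h <= 32 km (isothermal layer, then +1 K/km)."""
    p11, t11 = 22632.0, 216.65
    if h_m <= 20000.0:
        return p11 * math.exp(-G * (h_m - 11000.0) / (R * t11))
    p20 = p11 * math.exp(-G * 9000.0 / (R * t11))
    t = t11 + 0.001 * (h_m - 20000.0)
    return p20 * (t / t11) ** (-G / (R * 0.001))

def altitude_for_constant_lift(mach, target_pM2):
    """Find the altitude (11-32 km) where p * M^2 matches the baseline value."""
    lo, hi = 11000.0, 32000.0
    for _ in range(60):                      # bisection; p falls monotonically with h
        mid = 0.5 * (lo + hi)
        if isa_pressure(mid) * mach ** 2 > target_pM2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

baseline = isa_pressure(15000.0) * 1.5 ** 2  # assumed baseline: M = 1.5 at 15 km
for M in (1.5, 2.5, 5.0):
    print(f"M = {M}: h = {altitude_for_constant_lift(M, baseline) / 1000:.1f} km")
```

With these assumptions the sketch lands near the quoted altitude progression (roughly 15, 21 and 30 km for M = 1.5, 2.5 and 5); the exact values depend on the assumed baseline and atmosphere model.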
Conclusions
Modern problems and prospects for the development of flight vehicle aerodynamics have been described. The prospective technologies to be implemented for perfecting the aerodynamic layout of passenger aircraft have been emphasized. The advantages and disadvantages of the most aerodynamically perfect integral configuration, the "flying wing" concept, have been noted. It is concluded that the application of very large flying-wing aircraft can be justified only for extremely high passenger capacity, of around 1000 people.
A brief description has been given of known methods of friction drag reduction, and also of a new method based on creating a special microstructure with fractal granularity on the streamlined surface. In tests over a wide range of Reynolds numbers, a reduction of the turbulent drag coefficient was observed for the model with the fractal surface compared with an abrasive surface of the same mean roughness.
The main problems in creating civil supersonic passenger aircraft have been described. It is noted that one of the main factors slowing the development of supersonic aerial transport is the absence of commonly accepted rules and requirements on sonic boom level. The concept of a new-generation civil supersonic transport aircraft has been examined. The basis of the concept is the ability of the aircraft to perform supersonic cruise over populated areas with a sonic boom loudness not exceeding 72 dBA; preliminary estimates indicate that such an opportunity exists.
A description has been given of the aerodynamic peculiarities of hypersonic aircraft. These peculiarities are connected mostly with the need to take into account the intensive heating of the surface elements of the flight vehicle. It is concluded that the most suitable engine option for a civil hypersonic aircraft is the hypersonic ramjet with hydrogen fuel. New possibilities are noted for implementing non-traditional configurations based on the waverider and Busemann biplane concepts.
Abbreviations
CIAM: Central Institute of Aviation Motors
GosNIIAS: State Research Institute of Civil Aviation
TsAGI: Central Aerohydrodynamic Institute
VIAM: All-Russian Scientific Research Institute of Aviation Materials
References
1. Brutyan MA (2015) Problems of gas and fluid flow control. Nauka, Moscow
2. Abramova KA, Brutyan MA, Lyapunov SV et al (2015) Investigation of buffet control on transonic airfoil by tangential jet blowing. In: 6th European Conference for Aeronautics and Space Sciences (EUCASS), Krakow, pp 1–9
3. Petrov AV (2011) Powered lift wing systems. Fizmatlit, Moscow
4. Carrier G, Atinault O, Dequand S, Hantrais-Gervois J-L, Liauzun C, Paluch B, Rodde A-M, Toussaint C (2012) Investigation of a strut-braced wing configuration for future commercial transport. In: ICAS 2012-597
5. Ko A, Mason WH, Grossman B (2003) Transonic aerodynamics of a wing pylon strut juncture. In: AIAA 2003-4062
6. Gern F, Ko A, Grossman B, Haftka R, Kapania RK, Mason WH (2005) Transport weight reduction through MDO: the strut-braced wing transonic transport. In: AIAA 2005-4667
7. Seber G, Ran H, Schetz JA, Mavris DN (2011) Multidisciplinary design optimization of a truss braced wing aircraft with upgraded aerodynamic analyses. In: AIAA 2011-3179
8. Khajehzadeh A (2018) Analysis of an over the wing based distributed propulsion system. Thesis report, Delft University of Technology
9. Carter MB, Vicroy DD, Patel D (2009) Blended-wing-body transonic aerodynamics: summary of ground tests and sample results (invited). AIAA 2009-935
10. Brutyan MA, Budaev VP, Wolkov AV (2016) Influence of surface fractal microstructure on the characteristics of a turbulent boundary layer. In: Proceedings of ICAS, Daejeon
11. Leatherwood JD, Sullivan BM (1992) Laboratory study of effects of sonic boom shaping on subjective loudness and acceptability. NASA TP 3269
12. Stevens SS (1972) Perceived level of noise by Mark VII and decibels (E). J Acoust Soc Am 51:575–601
13. Coulouvrat F (2009) The challenges of defining an acceptable sonic boom overland. AIAA paper 2009-3384
14. Chernyshev SL (2011) Sonic boom. Nauka, Moscow
15. Chernyshev SL, Kiselev APh, Vorotnikov PP (2008) Sonic boom minimization and atmospheric effects. AIAA paper 2008-2058
16. Gubanov AA, Pritulo MF, Ruch'yev VM (1994) Theoretical investigation of airframe/inlet interference and integration for hypersonic vehicles. Zeitschrift für Flugwissenschaften und Weltraumforschung 18:379–382
17. Gubanov AA (2013) Fundamental relations on airframe/propulsion aerodynamic integration for supersonic aircraft. In: 5th European Conference for Aeronautics and Space Sciences (EUCASS)
18. Gubanov AA, Gusev DYu (2014) Potential use of waverider and Busemann biplane in aerodynamic design of high speed vehicles with air-breathing engines. In: ICAS 2014-0453
19. Steelant J, Langener T (2014) Potential use of waverider and Busemann biplane in aerodynamic design of high speed vehicles with air-breathing engines. In: ICAS 2014-0428, St. Petersburg, Russia
20. Xianhong X, Yuan L, Qian Z (2017) Investigation of a wide range adaptable hypersonic dual-waverider integrative design method based on two different types of 3D inward-turning inlets. In: 21st AIAA International Space Planes and Hypersonic Systems and Technologies Conference, Xiamen, China, March 6–9
21. Li YQ, Zheng X, Teng J, You Y (2017) Dual waverider concept for inlet-airframe integration with controllable wall pressure distribution. In: 21st AIAA International Space Planes and Hypersonic Systems and Technologies Conference, Xiamen, China, March 6–9
22. Cui K, Xiao Y, Xu Y, Li G (2018) Hypersonic I-shaped aerodynamic configurations. Sci China Phys Mech Astron 61
Acknowledgements
The authors are grateful to the experts of TsAGI for their assistance in preparing the article, in particular M.A. Brutyan, A.L. Bolsunovsky, Yu.N. Chernavsky, V.G. Yudin, A.A. Gubanov and others.
The work was performed with financing from the Ministry of Industry and Trade.
All data generated or analyzed during this study are included in this published article.
Department of Aerodynamics, Central Aerohydrodynamic Institute named after Prof. N.E. Zhukovsky (TsAGI), 140180, Zhukovsky str. 1, Zhukovsky, Russia
Sergey L. Chernyshev, Sergey V. Lyapunov & Andrey V. Wolkov
The contribution of the authors to the work is equivalent and is approximately 1/3 each. All authors read and approved the final manuscript.
Correspondence to Andrey V. Wolkov. E-mail: [email protected]
Sergey L. Chernyshev, MIPT, academician, TsAGI, author of over 130 scientific publications. Area of scientific interests – flight vehicle aerodynamics, hypersonic vehicle aerodynamics, sonic boom.
Sergey V. Lyapunov, MIPT, Doctor of Science, professor, TsAGI, author of over 70 scientific publications. Area of scientific interests – flight vehicle aerodynamics, CFD methods.
Andrey V. Wolkov, MIPT, Doctor of Science, TsAGI, author of over 70 scientific publications. Area of scientific interests – flight vehicle aerodynamics, CFD methods.
Cite this article: Chernyshev, S.L., Lyapunov, S.V. & Wolkov, A.V. Modern problems of aircraft aerodynamics. Adv. Aerodyn. 1, 7 (2019). https://doi.org/10.1186/s42774-019-0007-6
Keywords: Aircraft aerodynamics; Hypersonic vehicles aerodynamics; Civil supersonic transport aircraft
ASKAP commissioning observations of the GAMA 23 field Denis A. Leahy, A. M. Hopkins, R. P. Norris, J. Marvil, J. D. Collier, E. N. Taylor, J. R. Allison, C. Anderson, M. Bell, M. Bilicki, J. Bland-Hawthorn, S. Brough, M. J. I. Brown, S. Driver, G. Gurkan, L. Harvey-Smith, I. Heywood, B. W. Holwerda, J. Liske, A. R. Lopez-Sanchez, D. McConnell, A. Moffett, M. S. Owers, K. A. Pimbblet, W. Raja, N. Seymour, M. A. Voronkov, L. Wang Journal: Publications of the Astronomical Society of Australia / Volume 36 / 2019 Published online by Cambridge University Press: 19 July 2019, e024 We have observed the G23 field of the Galaxy And Mass Assembly (GAMA) survey using the Australian Square Kilometre Array Pathfinder (ASKAP) in its commissioning phase to validate the performance of the telescope and to characterise the detected galaxy populations. This observation covers ~48 deg2 with synthesised beam of 32.7 arcsec by 17.8 arcsec at 936 MHz, and ~39 deg2 with synthesised beam of 15.8 arcsec by 12.0 arcsec at 1320 MHz. At both frequencies, the root-mean-square (r.m.s.) noise is ~0.1 mJy/beam. We combine these radio observations with the GAMA galaxy data, which includes spectroscopy of galaxies that are i-band selected with a magnitude limit of 19.2. Wide-field Infrared Survey Explorer (WISE) infrared (IR) photometry is used to determine which galaxies host an active galactic nucleus (AGN).
In properties including source counts, mass distributions, and IR versus radio luminosity relation, the ASKAP-detected radio sources behave as expected. Radio galaxies have higher stellar mass and luminosity in IR, optical, and UV than other galaxies. We apply optical and IR AGN diagnostics and find that they disagree for ~30% of the galaxies in our sample. We suggest possible causes for the disagreement. Some cases can be explained by optical extinction of the AGN, but for more than half of the cases we do not find a clear explanation. Radio sources are more likely (~6%) to have an AGN than radio quiet galaxies (~1%), but the majority of AGN are not detected in radio at this sensitivity.
Patients with laboratory evidence of West Nile virus disease without reported fever K. Landry, I. B. Rabe, S. L. Messenger, J. K. Hacker, M. L. Salas, C. Scott-Waldron, D. Haydel, E. Rider, S. Simonson, C. M. Brown, S. C. Smole, D. F. Neitzel, E. K. Schiffman, A. K. Strain, S. Vetter, M. Fischer, N. P. Lindsey Journal: Epidemiology & Infection / Volume 147 / 2019 Published online by Cambridge University Press: 17 June 2019, e219 In 2013, the national surveillance case definition for West Nile virus (WNV) disease was revised to remove fever as a criterion for neuroinvasive disease and require at most subjective fever for non-neuroinvasive disease. The aims of this project were to determine how often afebrile WNV disease occurs and assess differences among patients with and without fever. We included cases with laboratory evidence of WNV disease reported from four states in 2014. We compared demographics, clinical symptoms and laboratory evidence for patients with and without fever and stratified the analysis by neuroinvasive and non-neuroinvasive presentations. Among 956 included patients, 39 (4%) had no fever; this proportion was similar among patients with and without neuroinvasive disease symptoms. For neuroinvasive and non-neuroinvasive patients, there were no differences in age, sex, or laboratory evidence between febrile and afebrile patients, but hospitalisations were more common among patients with fever (P < 0.01). The only significant difference in symptoms was for ataxia, which was more common in neuroinvasive patients without fever (P = 0.04). Only 5% of non-neuroinvasive patients did not meet the WNV case definition due to lack of fever. The evidence presented here supports the changes made to the national case definition in 2013.
Calculation and Measurement of Integral Reflection Coefficient Versus Wavelength of "Real" Crystals on an Absolute Basis D. B. Brown, M. Fatemi, L. S. Birks Journal: Advances in X-ray Analysis / Volume 17 / 1973 Published online by Cambridge University Press: 06 March 2019, pp. 436-444 Print publication: 1973 A method for calculation of the integral reflection coefficient of crystals of intermediate perfection is introduced. This method can greatly reduce experimental effort for the selection and calibration of crystals. It also serves as a conceptual framework for studies of mosaic block structure and of crystal modification. Good agreement between calculated and experimental values of the integral reflection coefficient is shown for (a) LiF crystals of two degrees of perfection, (b) elastically bent quartz, and (c) 001, 005, 006, and 007 diffraction from KAP. Zachariasen's division of crystals into two types is extended. It is concluded that the integral reflection coefficients for 200 LiF cannot be raised to the ideally imperfect limiting values.
Pregnancy stress, healthy pregnancy and birth outcomes – the need for early preventative approaches in pregnant Australian Indigenous women: a prospective longitudinal cohort study B. L. Mah, K. G. Pringle, L. Weatherall, L. Keogh, T. Schumacher, S. Eades, A. Brown, E. R. Lumbers, C. T. Roberts, C. Diehm, R. Smith, K. M. Rae Journal: Journal of Developmental Origins of Health and Disease / Volume 10 / Issue 1 / February 2019 Published online by Cambridge University Press: 17 January 2019, pp. 31-38 Print publication: February 2019 Adverse pregnancy outcomes including prematurity and low birth weight (LBW) have been associated with life-long chronic disease risk for the infant. Stress during pregnancy increases the risk of adverse pregnancy outcomes. Many studies have reported the incidence of adverse pregnancy outcomes in Indigenous populations and a smaller number of studies have measured rates of stress and depression in these populations. This study sought to examine the potential association between stress during pregnancy and the rate of adverse pregnancy outcomes in Australian Indigenous women residing in rural and remote communities in New South Wales. This study found a higher rate of post-traumatic stress disorder, depression and anxiety symptoms during pregnancy than the general population. There was also a higher incidence of prematurity and LBW deliveries. Unfortunately, missing post-traumatic stress disorder and depressive symptomatology data impeded the examination of associations of interest. This was largely due to the highly sensitive nature of the issues under investigation, and the need to ensure adequate levels of trust between Indigenous women and research staff before disclosure and recording of sensitive research data. We were unable to demonstrate a significant association between the level of stress and the incidence of adverse pregnancy outcomes at this stage. We recommend this longitudinal study continue until complete data sets are available. Future research in this area should ensure prioritization of building trust in participants and overestimating sample size to ensure no undue pressure is placed upon an already stressed participant. 'Hiding their troubles': a qualitative exploration of suicide in Bhutanese refugees in the USA F. L. Brown, T. Mishra, R. L. Frounfelker, E. Bhargava, B. Gautam, A. Prasai, T. S. Betancourt Journal: Global Mental Health / Volume 6 / 2019 Published online by Cambridge University Press: 15 January 2019, e1 Background. Suicide is a major global health concern. Bhutanese refugees resettled in the USA are disproportionately affected by suicide, yet little research has been conducted to identify factors contributing to this vulnerability. This study aims to investigate the issue of suicide of Bhutanese refugee communities via an in-depth qualitative, social-ecological approach. Methods. Focus groups were conducted with 83 Bhutanese refugees (adults and children), to explore the perceived causes, and risk and protective factors for suicide, at individual, family, community, and societal levels. Audio recordings were translated and transcribed, and inductive thematic analysis conducted. Themes identified can be situated across all levels of the social-ecological model. Individual thoughts, feelings, and behaviors are only fully understood when considering past experiences, and stressors at other levels of an individual's social ecology. Shifting dynamics and conflict within the family are pervasive and challenging. 
Within the community, there is a high prevalence of suicide, yet major barriers to communicating with others about distress and suicidality. At the societal level, difficulties relating to acculturation, citizenship, employment and finances, language, and literacy are influential. Two themes cut across several levels of the ecosystem: loss; and isolation, exclusion, and loneliness. Conclusions. This study extends on existing research and highlights the necessity for future intervention models of suicide to move beyond an individual focus, and consider factors at all levels of refugees' social-ecology. Simply focusing treatment at the individual level is not sufficient. Researchers and practitioners should strive for community-driven, culturally relevant, socio-ecological approaches for prevention and treatment. Repeated Chlamydia trachomatis infections are associated with lower bacterial loads K. Gupta, R. K. Bakshi, B. Van Der Pol, G. Daniel, L. Brown, C. G. Press, R. Gorwitz, J. Papp, J. Y. Lee, W. M. Geisler Published online by Cambridge University Press: 04 October 2018, e18 Chlamydia trachomatis (CT) infections remain highly prevalent. CT reinfection occurs frequently within months after treatment, likely contributing to sustaining the high CT infection prevalence. Sparse studies have suggested CT reinfection is associated with a lower organism load, but it is unclear whether CT load at the time of treatment influences CT reinfection risk. In this study, women presenting for treatment of a positive CT screening test were enrolled, treated and returned for 3- and 6-month follow-up visits. CT organism loads were quantified at each visit. We evaluated for an association of CT bacterial load at initial infection with reinfection risk and investigated factors influencing the CT load at baseline and follow-up in those with CT reinfection. We found no association of initial CT load with reinfection risk. We found a significant decrease in the median log10 CT load from baseline to follow-up in those with reinfection (5.6 CT/ml vs. 4.5 CT/ml; P = 0.015). Upon stratification of reinfected subjects based upon presence or absence of a history of CT infections prior to their infection at the baseline visit, we found a significant decline in the CT load from baseline to follow-up (5.7 CT/ml vs. 4.3 CT/ml; P = 0.021) exclusively in patients with a history of CT infections prior to our study. Our findings suggest repeated CT infections may lead to possible development of partial immunity against CT. The pattern of symptom change during prolonged exposure therapy and present-centered therapy for PTSD in active duty military personnel Lily A. Brown, Joshua D. Clapp, Joshua J. Kemp, Jeffrey S. Yarvis, Katherine A. Dondanville, Brett T. Litz, Jim Mintz, John D. Roache, Stacey Young-McCaughan, Alan L. Peterson, Edna B. Foa, For the STRONG STAR Consortium Journal: Psychological Medicine , First View Published online by Cambridge University Press: 17 September 2018, pp. 1-10 Few studies have investigated the patterns of posttraumatic stress disorder (PTSD) symptom change in prolonged exposure (PE) therapy. In this study, we aimed to understand the patterns of PTSD symptom change in both PE and present-centered therapy (PCT). Participants were active duty military personnel (N = 326, 89.3% male, 61.2% white, 32.5 years old) randomized to spaced-PE (S-PE; 10 sessions over 8 weeks), PCT (10 sessions over 8 weeks), or massed-PE (M-PE; 10 sessions over 2 weeks). 
Using latent profile analysis, we determined the optimal number of PTSD symptom change classes over time and analyzed whether baseline and follow-up variables were associated with class membership. Five classes, namely rapid responder (7–17%), steep linear responder (14–22%), gradual responder (30–34%), non-responder (27–33%), and symptom exacerbation (7–13%) classes, characterized each treatment. No baseline clinical characteristics predicted class membership for S-PE and M-PE; in PCT, more negative baseline trauma cognitions predicted membership in the non-responder v. gradual responder class. Class membership was robustly associated with PTSD, trauma cognitions, and depression up to 6 months after treatment for both S-PE and M-PE but not for PCT. Distinct profiles of treatment response emerged that were similar across interventions. By and large, no baseline variables predicted responder class. Responder status was a strong predictor of future symptom severity for PE, whereas response to PCT was not as strongly associated with future symptoms. Using Sub-Sampling/Inpainting to Control the Kinetics and Observation Efficiency of Dynamic Processes in Liquids N. D. Browning, B. L. Mehdi, A. Stevens, M. E. Gehm, L. Kovarik, N. Jiang, H. Mehta, A. Liyu, S. Reehl, B. Stanfill, L. Luzzi, K. MacPhee, L. Bramer Journal: Microscopy and Microanalysis / Volume 24 / Issue S1 / August 2018 Published online by Cambridge University Press: 01 August 2018, pp. 242-243 Print publication: August 2018 Pulsatile hyperglycemia increases insulin secretion but not pancreatic β-cell mass in intrauterine growth-restricted fetal sheep B. H. Boehmer, L. D. Brown, S. R. Wesolowski, W. W. Hay, P. J. Rozance Journal: Journal of Developmental Origins of Health and Disease / Volume 9 / Issue 5 / October 2018 Published online by Cambridge University Press: 05 July 2018, pp. 492-499 Print publication: October 2018 Impaired β-cell development and insulin secretion are characteristic of intrauterine growth-restricted (IUGR) fetuses. In normally grown late gestation fetal sheep pancreatic β-cell numbers and insulin secretion are increased by 7–10 days of pulsatile hyperglycemia (PHG). Our objective was to determine if IUGR fetal sheep β-cell numbers and insulin secretion could also be increased by PHG or if IUGR fetal β-cells do not have the capacity to respond to PHG. Following chronic placental insufficiency producing IUGR in twin gestation pregnancies (n=7), fetuses were administered a PHG infusion, consisting of 60 min, high rate, pulsed infusions of dextrose three times a day with an additional continuous, low-rate infusion of dextrose to prevent a decrease in glucose concentrations between the pulses or a control saline infusion. PHG fetuses were compared with their twin IUGR fetus, which received a saline infusion for 7 days. The pulsed glucose infusion increased fetal arterial glucose concentrations an average of 83% during the infusion. Following the 7-day infusion, a square-wave fetal hyperglycemic clamp was performed in both groups to measure insulin secretion. The rate of increase in fetal insulin concentrations during the first 20 min of a square-wave hyperglycemic clamp was 44% faster in the PHG fetuses compared with saline fetuses (P<0.05). There were no differences in islet size, the insulin+ area of the pancreas and of the islets, and β-cell mass between groups (P>0.23). 
Chronic PHG increases early phase insulin secretion in response to acute hyperglycemia, indicating that IUGR fetal β-cells are functionally responsive to chronic PHG. Magnetothermodynamics: measurements of the thermodynamic properties in a relaxed magnetohydrodynamic plasma M. Kaur, L. J. Barbano, E. M. Suen-Lewis, J. E. Shrock, A. D. Light, D. A. Schaffner, M. B. Brown, S. Woodruff, T. Meyer Journal: Journal of Plasma Physics / Volume 84 / Issue 1 / February 2018 Published online by Cambridge University Press: 19 February 2018, 905840114 We have explored the thermodynamics of compressed magnetized plasmas in laboratory experiments and we call these studies 'magnetothermodynamics'. The experiments are carried out in the Swarthmore Spheromak eXperiment device. In this device, a magnetized plasma source is located at one end and at the other end, a closed conducting can is installed. We generate parcels of magnetized plasma and observe their compression against the end wall of the conducting cylinder. The plasma parameters such as plasma density, temperature and magnetic field are measured during compression using HeNe laser interferometry, ion Doppler spectroscopy and a linear ${\dot{B}}$ probe array, respectively. To identify the instances of ion heating during compression, a PV diagram is constructed using measured density, temperature and a proxy for the volume of the magnetized plasma. Different equations of state are analysed to evaluate the adiabatic nature of the compressed plasma. A three-dimensional resistive magnetohydrodynamic code (NIMROD) is employed to simulate the twisted Taylor states and shows stagnation against the end wall of the closed conducting can. The simulation results are consistent to what we observe in our experiments. Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes Gravitational Wave Astronomy I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T. Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. 
Zovaro Published online by Cambridge University Press: 20 December 2017, e069 The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor. Unusually high illness severity and short incubation periods in two foodborne outbreaks of Salmonella Heidelberg infections with potential coincident Staphylococcus aureus intoxication J. H. NAKAO, D. TALKINGTON, C. A. BOPP, J. BESSER, M. L. SANCHEZ, J. GUARISCO, S. L. DAVIDSON, C. WARNER, M. G. McINTYRE, J. P. GROUP, N. COMSTOCK, K. XAVIER, T. S. PINSENT, J. BROWN, J. M. DOUGLAS, G. A. GOMEZ, N. M. GARRETT, H. A. CARLETON, B. TOLAR, M. E. WISE Journal: Epidemiology & Infection / Volume 146 / Issue 1 / January 2018 Published online by Cambridge University Press: 06 December 2017, pp. 19-27 We describe the investigation of two temporally coincident illness clusters involving salmonella and Staphylococcus aureus in two states. Cases were defined as gastrointestinal illness following two meal events. Investigators interviewed ill persons. Stool, food and environmental samples underwent pathogen testing. Alabama: Eighty cases were identified. Median time from meal to illness was 5·8 h. Salmonella Heidelberg was identified from 27 of 28 stool specimens tested, and coagulase-positive S. aureus was isolated from three of 16 ill persons. Environmental investigation indicated that food handling deficiencies occurred. Colorado: Seven cases were identified. Median time from meal to illness was 4·5 h. Five persons were hospitalised, four of whom were admitted to the intensive care unit. Salmonella Heidelberg was identified in six of seven stool specimens and coagulase-positive S. aureus in three of six tested. No single food item was implicated in either outbreak. These two outbreaks were linked to infection with Salmonella Heidelberg, but additional factors, such as dual aetiology that included S. aureus or the dose of salmonella ingested may have contributed to the short incubation periods and high illness severity. 
The outbreaks underscore the importance of measures to prevent foodborne illness through appropriate washing, handling, preparation and storage of food. Averages and moments associated to class numbers of imaginary quadratic fields D. R. Heath-Brown, L. B. Pierce Journal: Compositio Mathematica / Volume 153 / Issue 11 / November 2017 Published online by Cambridge University Press: 14 August 2017, pp. 2287-2309 Print publication: November 2017 For any odd prime $\ell$ , let $h_{\ell }(-d)$ denote the $\ell$ -part of the class number of the imaginary quadratic field $\mathbb{Q}(\sqrt{-d})$ . Nontrivial pointwise upper bounds are known only for $\ell =3$ ; nontrivial upper bounds for averages of $h_{\ell }(-d)$ have previously been known only for $\ell =3,5$ . In this paper we prove nontrivial upper bounds for the average of $h_{\ell }(-d)$ for all primes $\ell \geqslant 7$ , as well as nontrivial upper bounds for certain higher moments for all primes $\ell \geqslant 3$ . Educational Outreach by Avocational Paleontologists and Citizen Scientist for National Fossil Day—Junior Paleontologist Educational Kits Paul R. Roth, B. Alex Kittle, Vincent L. Santucci, Russell D. Brown Journal: The Paleontological Society Special Publications / Volume 13 / 2014 Published online by Cambridge University Press: 26 July 2017, p. 155 Low-Dose and In-Painting Methods for (Near) Atomic Resolution STEM Imaging of Metal Organic Frameworks (MOFs) B. L. Mehdi, A. J. Stevens, P. Moeck, A. Dohnalkova, A. Vjunov, J. L. Fulton, D. M. Camaioni, O. K. Farha, J. T. Hupp, B. C. Gates, J. A. Lercher, N. D. Browning Journal: Microscopy and Microanalysis / Volume 23 / Issue S1 / July 2017 Print publication: July 2017 Multi-Modal Characterization of New Battery Technologies by Operando ec-STEM B. L. Mehdi, J. Chen, A. Stevens, C. Park, L. Kovarik, A. V. Liyu, W. A. Henderson, J-G. Zhang, K. T. Mueller, N. D. Browning Solar system astrometry, Gaia, and the large surveys – a huge step ahead to stellar occultations by distant small solar system bodies J. I. B. Camargo, M. V. Banda-Huarca, R. L. Ogando, J. Desmars, F. Braga-Ribas, R. Vieira-Martins, M. Assafin, B. Sicardy, D. Bérard, G. Benedetti-Rossi, L. A. N. da Costa, M. A. G. Maia, M. Carrasco-Kind, A. Drlica-Wagner Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S330 / April 2017 Print publication: April 2017 The stellar occultation technique is a powerful tool to study distant small solar system bodies. Currently, around 2 500 trans-neptunian objects (TNOs) and Centaurs are known. With the astrometry from Gaia and large surveys like the Large Synoptic Survey Telescope (LSST), accurate predictions of occultation events will be available to tens of thousands of TNOs and Centaurs and boost the knowledge of the outer solar system. The white dwarf mass-radius relation with Gaia, Hubble and FUSE Simon R. G. Joyce, Martin A. Barstow, Sarah L. Casewell, Jay B. Holberg, Howard E. Bond White dwarfs are becoming useful tools for many areas of astronomy. They can be used as accurate chronometers over Gyr timescales. They are also clues to the history of star formation in our galaxy. Many of these studies require accurate estimates of the mass of the white dwarf. The theoretical mass-radius relation is often invoked to provide these mass estimates. While the theoretical mass-radius relation is well developed, observational tests of this relation show a much larger scatter in the results than expected. 
High precision observational tests to confirm this relation are required. Gaia is providing distance measurements which will remove one of the main sources of uncertainty affecting most previous observations. We combine Gaia distances with spectra from the Hubble and FUSE satellites to make precise tests of the white dwarf mass-radius relation.
Astrometry and Spectra Classification of Near Earth Asteroids with Lijiang 2.4m Telescope X. L. Zhang, B. Yang, J. M. Bai The Lijiang 2.4m telescope of Yunnan Observatories is located at longitude E100°01′51″, latitude N26°42′32″ and height 3250 m above sea level (IAU code O44). Because of the low latitude of the site, the long-focus system and the planetary tracking mode of the telescope, high accuracy positioning and spectral classification of near Earth asteroids (NEAs), especially in the Southern Hemisphere, can be studied with the Lijiang 2.4m telescope. As a set of observational campaigns organized by the GAIA-FUN-SSO, astrometry of several near Earth asteroids including (367943) Duende and (99942) Apophis was made with the Lijiang 2.4m telescope during 2013. From December 12, 2015, spectra of three near Earth asteroids were also observed with the YFOSC terminal attached to the Lijiang 2.4m telescope. This paper will give a detailed introduction of the Lijiang 2.4m telescope and observational results of near Earth asteroids obtained with it.
The cross-national epidemiology of specific phobia in the World Mental Health Surveys K. J. Wardenaar, C. C. W. Lim, A. O. Al-Hamzawi, J. Alonso, L. H. Andrade, C. Benjet, B. Bunting, G. de Girolamo, K. Demyttenaere, S. E. Florescu, O. Gureje, T. Hisateru, C. Hu, Y. Huang, E. Karam, A. Kiejna, J. P. Lepine, F. Navarro-Mateu, M. Oakley Browne, M. Piazza, J. Posada-Villa, M. L. ten Have, Y. Torres, M. Xavier, Z. Zarkov, R. C. Kessler, K. M. Scott, P. de Jonge Journal: Psychological Medicine / Volume 47 / Issue 10 / July 2017 Published online by Cambridge University Press: 22 February 2017, pp. 1744-1760 Although specific phobia is highly prevalent, associated with impairment, and an important risk factor for the development of other mental disorders, cross-national epidemiological data are scarce, especially from low- and middle-income countries. This paper presents epidemiological data from 22 low-, lower-middle-, upper-middle- and high-income countries. Data came from 25 representative population-based surveys conducted in 22 countries (2001–2011) as part of the World Health Organization World Mental Health Surveys initiative (n = 124 902). The presence of specific phobia as defined by the Diagnostic and Statistical Manual of Mental Disorders, fourth edition was evaluated using the World Health Organization Composite International Diagnostic Interview. The cross-national lifetime and 12-month prevalence rates of specific phobia were, respectively, 7.4% and 5.5%, being higher in females (9.8 and 7.7%) than in males (4.9% and 3.3%) and higher in high- and higher-middle-income countries than in low-/lower-middle-income countries. The median age of onset was young (8 years). Of the 12-month patients, 18.7% reported severe role impairment (13.3–21.9% across income groups) and 23.1% reported any treatment (9.6–30.1% across income groups). Lifetime co-morbidity was observed in 60.5% of those with lifetime specific phobia, with the onset of specific phobia preceding the other disorder in most cases (72.6%). Interestingly, rates of impairment, treatment use and co-morbidity increased with the number of fear subtypes.
Specific phobia is common and associated with impairment in a considerable percentage of cases. Importantly, specific phobia often precedes the onset of other mental disorders, making it a possible early-life indicator of psychopathology vulnerability.